I hate statistics

A week or so back I walked into the lecture room to give a lecture on electromagnetic waves, and was promptly asked: "Marcus, how much statistics do you use in your research?" My initial reaction was to think "what has this got to do with electromagnetic waves?" and then, realizing that it clearly had nothing to do with EM waves, "what’s the ulterior motive behind this question?" But another student kindly spelled it out more transparently: "Why do we have to do the statistics paper next year?"

We require our engineering students to take a statistics paper in their fourth year. Obviously some students don’t relish this prospect.

The truth is that I don’t use much statistics at all in my work, beyond the mean, the standard deviation, and the occasional normal distribution. Once, I think, I got as far as a t-test. But that’s the nature of the work I do; it isn’t statistically taxing. What is necessary, though, is a fundamental understanding that statistics does matter.

Most physics students have some idea of this, but it’s often full of misconceptions. A common one is that the ‘error’ in a measurement equals the student-measured value minus the ‘real’ answer looked up in a databook, divided by that ‘real’ answer, times 100%. That’s not an ‘error’; that’s the percentage difference between your measurement and a textbook value. So, when I talk about uncertainty with my experimental physics class, I get them to think about what they would do if they didn’t have a textbook to look up the ‘right’ answer in – that is, if they were the very first people ever to do this experiment. That, of course, is the case for research: there is no ‘right’ answer to compare with.
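To spell that misconception out as a formula (my notation here, not something the students are given):

\[
\text{percentage difference} = \frac{x_\text{measured} - x_\text{textbook}}{x_\text{textbook}} \times 100\%
\]

It tells you how far you are from the book value, but nothing at all about how well you actually know your own measurement.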

I’ve found a task that’s worked well is to give them a fictional set of data for the acceleration due to gravity on ‘Planet Waikato’. I make it fictional so they have to lose the notion that the acceleration due to gravity equals 9.81 m s⁻², as the textbooks say it does for Earth. They only have the data set I give them to work with. Then I tell them they are building a rocket to leave Planet Waikato, and they need to know the acceleration due to gravity to within 1% uncertainty so they can select the right amount of fuel. Does the data I’ve given them allow them to know the acceleration due to gravity to within this uncertainty or not? That tends to get them thinking about how we can analyze the results of experiments, what we can say with confidence (and how much confidence), and what we can’t say with confidence. That’s basically what statistics is about.
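If you want to see the sort of calculation I have in mind, here is a minimal sketch in Python (the numbers are invented for illustration; they are not the data set I actually hand out):

```python
import numpy as np

# Invented measurements of g on 'Planet Waikato' (not the real class data set), in m/s^2
g_measurements = np.array([12.3, 12.7, 12.1, 12.9, 12.5, 12.4, 12.6, 12.2])

mean_g = g_measurements.mean()
# Standard error of the mean: sample standard deviation divided by sqrt(n)
sem_g = g_measurements.std(ddof=1) / np.sqrt(len(g_measurements))

relative_uncertainty = sem_g / mean_g

print(f"g = {mean_g:.2f} +/- {sem_g:.2f} m/s^2")
print(f"Relative uncertainty: {relative_uncertainty:.1%}")

# The rocket question: is the data good enough for the 1% requirement?
print("Good enough for the fuel calculation?", relative_uncertainty < 0.01)
```

The point isn’t the code, of course; it’s that the answer to the last line depends on the scatter in the data and on how many measurements you have, not on any textbook value.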

Just how to do a t-test, ANOVA, chi-squared test, and so on, and under what circumstances, I leave out completely. It’s something you can look up, or consult a statistician about, when you need to. The key thing is knowing that you need to.

The question, then, is: do we need an entire paper in year 4 for our engineers (but not our physicists) to instruct them in the way of statistics? Probably the best people to ask are our graduates, several years after graduation.

2 thoughts on “I hate statistics”

  • declan kruppa says:

    Hi Marcus, I studied physics in the 1980s. We were taught to record averages of measurements along with the standard error of the mean, \(\frac{\sigma}{\sqrt{n}}\). Then, when comparing two experimental results, one could judge whether a difference was statistically significant by comparing it with multiples of the standard error; it seemed straightforward.
    Now, later in life, I discover that other people (in the social and medical sciences) do things in a rather more complicated way: for instance, setting up a null hypothesis and then calculating a critical t-test value to judge the same thing. Am I missing something, or isn’t this the same thing, just with a layer of abstraction in the way that makes it less straightforward and harder to see what’s going on?

  • Marcus Wilson says:

    You ask “Isn’t this the same thing?” Well, approximately, yes. What you are formally doing when you take the standard uncertainty in your mean and compare the mean with another result (call it X) is setting up the null hypothesis “my experimental value is X”. You might not have written it down, but you were doing it implicitly. You then assumed that your mean values obey a normal distribution (which is usually OK, especially if you have taken lots of readings) and looked at whether your measured mean was more than (say) two standard uncertainties away from X, and made a judgement on that. It’s pretty much the same thing as the formal approach, although, strictly, if you are comparing two distributions a t-test is the way to do it. In the large majority of cases I would expect your conclusion as to whether two sets of measurements were ‘the same thing’ to come out the same either way.
    I don’t think it puts a layer of abstraction in the way. It recognizes the formal nature of the mathematical approach.
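    To illustrate the equivalence, here is a quick sketch in Python with made-up numbers (using SciPy’s one-sample t-test; this isn’t anything from my actual lab scripts):

    ```python
    import numpy as np
    from scipy import stats

    # Made-up repeated measurements (for illustration only)
    measurements = np.array([9.78, 9.85, 9.80, 9.83, 9.79, 9.84, 9.81, 9.82])
    X = 9.90  # the other result we are comparing against

    mean = measurements.mean()
    sem = measurements.std(ddof=1) / np.sqrt(len(measurements))

    # 'Physics lab' approach: how many standard uncertainties is the mean away from X?
    print(f"Difference = {abs(mean - X) / sem:.1f} standard uncertainties of the mean")

    # Formal approach: one-sample t-test with the null hypothesis 'the true value is X'
    t_stat, p_value = stats.ttest_1samp(measurements, popmean=X)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # The t statistic is just (mean - X)/sem -- the same number, with a sign --
    # and the t-test converts it into a p-value using the t distribution.
    ```

    Same number, two ways of dressing it up.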
