The numbers are the numbers, except when they’re not.

I’m not quite sure what to make of the new figures for COVID-19 (as we must now call the novel coronavirus – though I’m not sure the capitalisation is correct) from Hubei province:

Image from https://www.bbc.com/news/world-51495484 

The spike yesterday is a consequence of using a different way of defining a case – one based on a clinical diagnosis (i.e. someone has the symptoms) rather than a definitive test. There’s sense in that, but it does make a mess of interpreting trends in what is going on. Moreover, with only Hubei province using this new method, data from China now seem to be reported under two different standards. This can’t make life easy for researchers studying COVID-19.

Perhaps more worrying, though, from a science perspective, are the reports that two senior Hubei officials were sacked soon after the new figures came out. Now, I have not seen any statement that they were sacked because of the new figures, but I can’t help wondering whether the two events are linked. That is not how science is done. You cannot reward people because their results look good, or punish them because their results are not what you want them to be. If science is to be done correctly, and be helpful to our understanding, the reported results should be the actual results, not a gently massaged variant that looks nicer than the original.

At school (see this post) and university, however, we often teach our first-years to be dishonest. Yes, you read that right. In undergraduate science labs (I speak from the physics perspective, but I suspect it is true elsewhere too), we often teach students to make up results. The scenario is this: when we give credit for results that look like the textbook theory (and I have seen, and used, to my shame, mark schemes that do exactly that), we implicitly tell students that they will get better grades by looking up what the results ‘should be’ and recording those in preference to what they actually measured. Anyone who has done a physics degree can tell you that half (or more) of the experiments you do don’t work out the way they are ‘supposed to’. When we reward the obtaining of ‘textbook’ answers, we tell students that dishonesty earns credit. I knew this at university, and as a consequence I ‘gently shaped’ my results accordingly. Some of this was probably subconscious and subtle – double-checking points that ‘don’t look quite right’ but not double-checking points that ‘do look right’, for example – and I wouldn’t, at the time, have called it dishonest. After all, I knew the textbook physics – what was supposed to happen – and I was simply demonstrating that knowledge by getting ‘good’ results. But it wasn’t good science.

Unfortunately, this often flows on past the first degree. A PhD student’s supervisor appears ‘happy’ when the student gets results that confirm the supervisor’s new hypothesis, and the examiner comments on what a great thesis it is. The postdoc’s supervisor obtains follow-on funding on the strength of the postdoc’s promising (but ‘massaged’) results. Academic promotion is then obtained on the basis of a couple of exciting papers in high-impact journals (just don’t delve too deeply into the quality of the data). It happens – see this example of a physicist jailed for submitting false data to defraud the US government.

When I assess undergraduate practical work, I no longer mark on the basis of whether the results ‘match what the textbook says’. Instead, I assess the students’ accurate recording and interpretation of their results. It is more important to record what actually happened, completely and accurately, and to discuss what it means. Anything else is not science.

Banner image: An extract from Thomas Edison’s laboratory notebook
