Will Grant & Rod Lambert, from the Australian National Centre for the Public Awareness of Science, listed these 10 common mistakes in an article published in The Conversation. And as they say, if we're honest we've probably made at least one of them at some point. This article would probably be a really useful resource for teachers working with their students on how to assess the validity of a particular piece of information, and I've already passed it on to my first-year students.
NB I see that Ken's also posted on this over at Open Parachute, but these are points that deserve to be shared widely, so let's continue anyway 🙂
Judging a topic based on just one study. A recent example of this would be the media coverage given to claims about a bacterium being able to use arsenic instead of phosphorus in its DNA. But for an example which did real harm, consider the widespread acceptance and promotion of a claimed link between the MMR vaccine and autism – a claim that went against the existing evidence when it was first published and has now been thoroughly discredited. Using single studies, which are often 'outliers', is a very common habit among promoters of woo, for whom this comment by Grant & Lambert is particularly apt:
If you do it deliberately, it's cherry-picking. If you do it by accident, it's an example of the exception fallacy.
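The outlier problem above can be sketched numerically. In this hypothetical simulation (all numbers and function names are illustrative, not from the article), every "study" compares two groups drawn from the *same* population, so there is no real effect at all; yet roughly 5% of studies will still cross the conventional p < 0.05 threshold by chance. Those are the outliers a cherry-picker goes looking for.

```python
# Sketch: with no real effect, ~5% of studies still look 'significant' by chance.
import random
import statistics

def z_statistic(a, b):
    """Two-sample z statistic, assuming known unit variance in both groups."""
    n = len(a)
    return (statistics.mean(a) - statistics.mean(b)) / (2 / n) ** 0.5

def false_positives(n_studies=100, n_per_group=50, seed=1):
    """Simulate studies on identical populations; count |z| > 1.96 'hits'."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_studies):
        a = [rng.gauss(0, 1) for _ in range(n_per_group)]
        b = [rng.gauss(0, 1) for _ in range(n_per_group)]
        if abs(z_statistic(a, b)) > 1.96:
            hits += 1
    return hits

print(false_positives())  # typically around 5 out of 100 -- pure chance
```

Quoting any one of those "significant" studies while ignoring the other ninety-odd is exactly the cherry-picking Grant and Lambert describe.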
The second mistake on the list is forgetting that while an effect might be statistically significant, it may also be so small as to be meaningless in the real world.
And the related error: failing to look closely at what an 'effect size' actually translates into.
We might have a treatment that lowers our risk of a condition by 50%. But if the risk of developing that condition was already vanishingly low (say, a lifetime risk of 0.002%), then halving it achieves very little in absolute terms.
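The arithmetic behind that example is worth spelling out. Using the post's own figures (and nothing else), a 50% *relative* risk reduction on a 0.002% baseline works out to an *absolute* reduction of just 0.001 percentage points:

```python
# Working through the post's example: relative vs absolute risk reduction.
baseline_risk = 0.00002        # 0.002% lifetime risk, as a proportion
relative_reduction = 0.5       # the treatment "lowers risk by 50%"

treated_risk = baseline_risk * (1 - relative_reduction)
absolute_reduction = baseline_risk - treated_risk   # 0.00001, i.e. 0.001%
nnt = 1 / absolute_reduction   # number needed to treat to prevent one case

print(f"absolute risk reduction: {absolute_reduction:.5%}")
print(f"number needed to treat: {nnt:,.0f}")  # 100,000 people per case prevented
```

A headline of "cuts risk in half" and a reality of "100,000 people treated to prevent one case" describe the same numbers, which is why it pays to ask what an effect size translates into.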
Judging the extremes by the majority. Exposure to fluoride is a good example here; too much, and there's a significant risk of fractures. But too little also increases the risk of damage – the dose-response relationship here is not linear.
Being more likely to accept information that agrees with what we already know. This is one we have to guard against, all the time, because everyone's prone to it. As Steven Novella has said:
Questioning our own motives, and our own process, is critical to a skeptical and scientific outlook. We must realise that the default mode of human psychology is to grab onto comforting beliefs for purely emotional reasons, and then justify those beliefs to ourselves with post-hoc rationalisations. It takes effort to rise above this tendency, to step back from our beliefs and our emotional connection to conclusions and focus on the process. The process (i.e. science, logic, and intellectual rigor) has to be more important than the belief.
Falling for the snake oil – it's easy to be seduced by glib presentations, especially when they sound science-y at times. How else to explain the rise in popularity of the Food Babe (and the commodities she offers), for instance? Or Natural News and its ilk?
Forgetting that qualities aren't quantities and quantities aren't qualities. A new drug may hold the promise of extending life, but as I get older I find I also think about quality of life.
And forgetting that a model is never going to be a perfect representation of reality. If they were, we probably wouldn't call them models.
Context matters, of course. Grant and Lambert use the complexities around cycle helmet laws as their example (something that's also been discussed on Sciblogs in the past).
And finally, just because it's peer reviewed, that doesn't make it right. Back in 2005, John Ioannidis wrote, "[p]ublished research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment." This simply reflects how science operates. As Grant and Lambert point out:
even if we assume that the reviewers made no mistakes or that there were no biases in the publication policies (or that there wasn’t any straight out deceit), an article appearing in a peer reviewed publication just means that the research is ready to be put out to the community of relevant experts for challenging, testing, and refining.
If that subsequent challenging, testing, and refining supports the original paper, then it's on stronger ground. Which is why (coming back to the beginning again) it's not wise to rely on a single paper to support a case.