Sometimes I base these blogs on a scientific paper that's caught my eye. I'm hoping that you'll sometimes search out the original reference and read it for yourself. But when a paper is cited in support of an argument – how can you decide whether its contents stack up?
Simple is good. A paper reporting a test of a single hypothesis is less likely to throw up spurious results, the more so if the hypothesis was carefully framed and selected to start with.
Was there a large sample size? As you already know, with a small sample, an unusual or aberrant data point may skew your results.
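To see why small samples are risky, here's a quick sketch in Python (the measurements are made up purely for illustration): one aberrant reading swamps the mean of a five-point sample, but barely moves the mean of a hundred-point one.

```python
from statistics import mean, median

# hypothetical measurements clustered near 5.0, plus one aberrant reading of 50.0
small_sample = [5.1, 4.9, 5.0, 5.2, 50.0]

# the same kind of data, but ~100 readings instead of 5
large_sample = [5.0 + 0.1 * ((i % 5) - 2) for i in range(99)] + [50.0]

print(mean(small_sample))   # the outlier drags this up to 14.04
print(mean(large_sample))   # stays close to the true value of ~5.0
print(median(small_sample)) # the median (5.1) resists the outlier
```

Notice, too, that the median is far more robust than the mean here – one reason to check which summary statistic a paper actually reports.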
There's nothing wrong with a negative result (in statistical terms, one where you fail to reject the null hypothesis). What often happens is that only the positive results get published, and this does tend to skew our view of science. There are times (a lot of them, actually) when the experimental data aren't what the hypothesis predicted. That's scientific reality.
How strong is the effect? Statistical significance is not the same as practical importance: if the treatment produces only a small effect, beware of special pleading that inflates its importance.
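The distinction matters because, with a big enough sample, even a trivial effect will come out as "statistically significant". Here's a rough sketch (a two-sample z-test with a made-up standardized effect size of 0.05 – tiny by any convention): the effect never changes, but the p-value collapses as the sample grows.

```python
import math

def p_value_two_sided(z):
    # two-sided p-value for a standard normal z statistic
    return math.erfc(abs(z) / math.sqrt(2))

def z_for_effect(d, n_per_group):
    # z statistic for comparing two equal-sized groups with
    # standardized effect size d (difference in SD units)
    return d * math.sqrt(n_per_group / 2)

d = 0.05  # a tiny effect: one-twentieth of a standard deviation
for n in (20, 2_000, 20_000):
    p = p_value_two_sided(z_for_effect(d, n))
    print(f"n per group = {n:>6}: p = {p:.6f}")
```

With 20 subjects per group the tiny effect is nowhere near significant; with 20,000 per group the p-value is minuscule – yet the effect is exactly as unimportant as before. That's why it's worth asking what the effect *size* was, not just whether p < 0.05.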
Are there multiple sources of supporting evidence? In other words, is there a range of papers, in a variety of journals, confirming the results presented here? OK, that's not always going to be possible – in a field where our knowledge is rapidly expanding, someone is always going to stick their neck out with something novel. But even then, they'll have been building on previous work by other scientists. And these cutting-edge studies are always subjected to intense scrutiny, as other researchers try to replicate their results – if there's good science there, it will eventually be confirmed. And if it has no real substance, it will eventually be rejected (so-called "cold fusion" is a good example!). In other words, good science is eventually self-correcting.