The legitimate science community is increasingly concerned about the proliferation of bad science: work that is biased, not peer-reviewed, or unrepeatable.
It’s a problem because true believers insist that public policy be built on such work, pointing to pseudo-scientific papers that don’t pass the smell test.
How can you tell when a scientific paper ought to red-line your BS meter? There are many clues, and any paper with more than one of these problems should worry you. Here are some things to look for.
The authors:
Are they real researchers? Are they associated with a legitimate institution? If
the authors are clearly associated with an advocacy organization, for example,
you have some cause to at least be cautious. Is their scientific training in the subject
area of the paper? (Nothing like a “policy advisor” from an institute you’ve never heard of, writing “authoritatively” about genetic engineering or vaccination. Or a political science major professing expertise on climate change or fishery regulation.)
The publisher:
There are reliable peer-reviewed journals. Peer-reviewed science means that
before publishing your work, the publisher shows it to other experts in the
field, and the author needs to respond to their criticisms to the satisfaction
of the journal’s editors.
But now there are journals that require no peer
review and some that will print anything for an advance payment. Medical ethicist Arthur Caplan calls them “predatory
publishers.”
Peer review, or refereeing, is a powerful tool to ensure the science is good and relevant. Sometimes journals will even do “blind reviews,” sending out papers for review without telling the reviewer who wrote them, a technique to avoid favoritism.
The short version: Read the title, the abstract and, near the end, the conclusion and occasionally the discussion. Do they make sense? Are they consistent? Do they betray an obvious bias?
Be
aware that conclusions may reflect the authors’ interpretation of the science
in addition to what they actually found in their research. If the abstract is
all you’re reading, you’re not getting the whole story.
The meat:
Few lay readers get this far, but here’s where the science makes its bones. There’s often an introduction, a review of the methods used, and the results. Be wary of too-small sample sizes. Be aware of research so narrowly focused that you can’t realistically draw broad conclusions from it (although some people will). Be alert to which data are statistically significant and which aren’t. Look for red flags.
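Here’s a quick sketch of the sample-size point, in Python with made-up counts (purely hypothetical, not from any real study): the same 60-percent-versus-30-percent split that looks dramatic in 20 subjects is statistically unconvincing, while in 200 subjects it is hard to dismiss.

```python
# Hypothetical counts, invented for illustration (not from any real study):
# identical proportions, very different statistical weight.
from scipy.stats import fisher_exact

# Hypothetical: 6 of 10 treated subjects improved vs. 3 of 10 controls...
small = [[6, 4], [3, 7]]      # 60% vs. 30%, 20 subjects in all
# ...and the very same split with ten times the subjects.
large = [[60, 40], [30, 70]]  # 60% vs. 30%, 200 subjects in all

for label, table in (("n = 20", small), ("n = 200", large)):
    _, p = fisher_exact(table)  # Fisher's exact test on a 2x2 table
    print(f"{label}: p = {p:.5f}")
# The small sample yields p of roughly 0.37 (no real evidence of a
# difference), while the large sample yields p well under 0.001.
```

The lesson isn’t that small studies are worthless, just that dramatic-sounding percentages drawn from a handful of subjects deserve extra suspicion.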
As an example, the seminal
document among folks who believe that low-level electromagnetic fields cause
injury is the BioInitiative Report, a selective review of studies on
electromagnetics. It fails the real-science test on several levels.
The lead author is an
activist without scientific training. It was initially published online without
peer review. It was long and complicated, and few in the media took the time to scrutinize it. But when independent national
and international scientific organizations studied it, they tore it apart for cherry-picking
the data it cited (in many cases peer-reviewed data) and then basing
conclusions on that selective data.
The Health Council of the
Netherlands was one of its many critics, saying the project was so badly done
that it had the opposite of the desired result: “...the BioInitiative report is not an objective and balanced reflection of the current
state of scientific knowledge. Therefore, the report does not provide any
grounds for revising the current views as to the risks of exposure to electromagnetic
fields.”
The BioInitiative Report was eventually revised extensively and shortened, resolving some but not all of its issues, and was published in the peer-reviewed journal Pathophysiology.
In another example, a group
of Australian researchers set out to determine how hogs respond differently to
being fed genetically modified corn as opposed to organic corn. If you read the authors’ conclusion, you would come away believing that pigs fed GM corn got more stomach inflammation than the others.
This one is trickier than the BioInitiative Report. The conclusions partly reflect the research but leave some interesting parts out.
The first hurdle is that the lead researcher, a biochemist and university professor, also operates an anti-GMO website that is funded by anti-GMO organizations. Critics of the biotech industry regularly decry research by scientists who work for industry or whose work is partly funded by it. But yep, it works both ways.
The paper was published in a journal that specializes in organic farming issues.
And while the authors’ conclusions make a big point about the severe inflammation numbers, there is little reference to another fact: while it was true that more GM-fed pigs than organic-fed pigs had severe
inflammation, it was also true that twice as many GM-fed pigs as organic-fed pigs had no
inflammation at all.
The takeaway is that the conclusions didn’t tell the whole story.
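Here’s a similar quick sketch, again with invented numbers rather than the study’s actual data, of how quoting a single column of a results table can mislead:

```python
# Hypothetical inflammation counts, invented for illustration; these are
# NOT the actual figures from the pig study discussed above.
categories = ["none", "mild", "moderate", "severe"]
gm_fed = [16, 20, 14, 22]   # 72 hypothetical GM-fed pigs
control = [8, 30, 24, 10]   # 72 hypothetical control pigs

for name, counts in (("GM-fed", gm_fed), ("control", control)):
    total = sum(counts)
    row = ", ".join(f"{c}: {n / total:.0%}" for c, n in zip(categories, counts))
    print(f"{name:8s} {row}")
# Quoting only the "severe" column (31% vs. 14%) supports one story; the
# "none" column (22% vs. 11%) points the other way. Read the whole table.
```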
Rarely does a scientific
paper stand alone. To be taken seriously, its experiments need to be repeated,
and the repetitions need to get the same results.
You may recall the kerfuffles
over cold fusion. A couple of times now, researchers have announced they’ve
succeeded in conducting nuclear fusion on a tabletop, at near room temperature.
The problem is that other researchers have never been able to repeat the
results, leaving much of mainstream science deeply skeptical.
Clearly, few scientific
papers are perfect, but if you’re going to rely on them, it makes sense to
understand what they don’t say, what they do, and how much of it is believable.
© Jan TenBruggencate 2015