Apparently Everyone Now Agrees Science Is Badly Broken
Research is afflicted with pervasive confirmation bias that is producing false positives on a massive scale
My "Broken Science" cover article in Reason now has a lot of company. In that article I noted that "there is no one single cause for the increase in nonreproducible findings in so many fields. One key problem is that the types of research most likely to make it from lab benches into leading scientific journals are those containing flashy never-before-reported results. Such findings are often too good to check. 'All of the incentives are for researchers to write a good story—to provide journal editors with positive results, clean results, and novel results,' notes the University of Virginia psychologist Brian Nosek. 'This creates publication bias, and that is likely to be the central cause of the proliferation of false discoveries.'"
Now comes the current issue of New Scientist, which features "The Unscientific Method" by Sonia van Gilder Cooke, in which she reports …
…dubious results are alarmingly common in many fields of science. Worryingly, they seem to be especially shaky in areas that have a direct bearing on human well-being – the science underpinning everyday political, economic and healthcare decisions. No wonder the whistle-blowers are urgently trying to investigate why it's happening, how big the problem is and what can be done to fix it. In doing so, they are highlighting flaws in the way we all think, and exposing cracks in the culture of science.
Science is often thought of as a dispassionate search for the truth. But, of course, we are all only human. And most people want to climb the professional ladder. The main way to do that if you're a scientist is to get grants and publish lots of papers. The problem is that journals have a clear preference for research showing strong, positive relationships – between a particular medical treatment and improved health, for example. This means researchers often try to find those sorts of results. A few go as far as making things up. But a huge number tinker with their research in ways they think are harmless, but which can bias the outcome.
Both Cooke and I focused on how researchers all too often succumb to confirmation bias, sorting through the statistical debris of their experiments via p-hacking and HARKing (hypothesizing after the results are known) in search of some correlation they can claim is "significant."
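To see how easily p-hacking manufactures "significance" out of pure noise, consider a minimal simulation. This is an illustrative sketch in Python, not a reconstruction of any study discussed here; the sample sizes and the 20-outcome setup are assumptions chosen only to make the arithmetic vivid.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 30       # participants per group in each (null) experiment
n_outcomes = 20       # outcome variables the researcher can choose among
n_experiments = 1000  # simulated experiments, all with NO real effect

false_positives = 0
for _ in range(n_experiments):
    # Both groups are drawn from the SAME distribution: any "effect" is noise.
    treatment = rng.normal(size=(n_subjects, n_outcomes))
    control = rng.normal(size=(n_subjects, n_outcomes))
    # p-hacking: test every outcome and keep only the best p-value.
    pvals = [stats.ttest_ind(treatment[:, j], control[:, j]).pvalue
             for j in range(n_outcomes)]
    if min(pvals) < 0.05:
        false_positives += 1

print(f"Experiments yielding a 'significant' finding: "
      f"{false_positives / n_experiments:.0%}")
# A single honest test is wrong about 5% of the time; cherry-picking the
# best of 20 outcomes pushes that toward 1 - 0.95**20, roughly 64%.
```

Nothing in the simulated data is real, yet a researcher free to pick the best-looking outcome will "discover" something in nearly two-thirds of these experiments.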
Over at the journal First Things, software engineer William Wilson has another insightful article, "Scientific Regress," on how science has been undermined by careerism and the normal human penchant for confirmation bias. His critique of peer review is disturbingly correct:
If peer review is good at anything, it appears to be keeping unpopular ideas from being published. Consider the finding of another (yes, another) of these replicability studies, this time from a group of cancer researchers. In addition to reaching the now unsurprising conclusion that only a dismal 11 percent of the preclinical cancer research they examined could be validated after the fact, the authors identified another horrifying pattern: The "bad" papers that failed to replicate were, on average, cited far more often than the papers that did! As the authors put it, "some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis."
What they do not mention is that once an entire field has been created—with careers, funding, appointments, and prestige all premised upon an experimental result which was utterly false due either to fraud or to plain bad luck—pointing this fact out is not likely to be very popular. Peer review switches from merely useless to actively harmful. It may be ineffective at keeping papers with analytic or methodological flaws from being published, but it can be deadly effective at suppressing criticism of a dominant research paradigm. Even if a critic is able to get his work published, pointing out that the house you've built together is situated over a chasm will not endear him to his colleagues or, more importantly, to his mentors and patrons.
Wilson is describing what Nobel Prize–winning chemist Irving Langmuir identified in a 1953 lecture as "pathological science," or "the science of things that aren't so." As I explain in my book, The End of Doom:
To explain how researchers and whole fields of science can end up studying phenomena that don't actually exist, Stanford University biostatistician John Ioannidis fancifully describes the highly active areas of scientific investigation on Planet F345 in the Andromeda Galaxy. The Andromedean researchers are hard at work on such null fields of study as "nutribogus epidemiology, pompompomics, social psycho-junkology, and all the multifarious disciplines of brown cockroach research—brown cockroaches are considered to provide adequate models that can be readily extended to humanoids."
The problem is that the Andromedean scientists don't know that their data dredging and highly sensitive nonreplicated tests are massively producing false positives. The Andromedean researchers have every incentive—publication pressure, tenure, and funding—to find effects, the more extravagant the better. In reality, the manufactured "discoveries" are just estimates of the net bias operating in each of these "null fields."
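Ioannidis's point that a null field's "discoveries" merely measure its bias can also be made concrete with a toy simulation. Again, this is a hypothetical sketch: the bias of 0.3 and the study sizes are invented numbers standing in for the cumulative effect of flexible analysis and selective reporting.

```python
import numpy as np

rng = np.random.default_rng(1)

true_effect = 0.0   # a "null field": the phenomenon does not exist
net_bias = 0.3      # assumed systematic bias shared across the field
n_studies = 500
n_per_study = 50

# Each study measures nothing but noise plus the field's shared bias.
estimates = [rng.normal(true_effect + net_bias, 1.0, n_per_study).mean()
             for _ in range(n_studies)]

print(f"Mean published 'effect' across the field: {np.mean(estimates):.2f}")
# Prints roughly 0.30: the literature converges, but only on the bias itself.
```

The field's hundreds of studies agree with one another beautifully, which is exactly why agreement alone cannot distinguish a real phenomenon from a well-entrenched artifact.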
It's at last becoming more widely recognized that a lot of researchers have built their careers on investigating "things that aren't so" and defending what turn out to be expanding "null fields."