Apparently Everyone Now Agrees Science Is Badly Broken

Research is afflicted with pervasive confirmation bias that yields massive numbers of false positives



My "Broken Science" cover article in Reason now has a lot of company. In that article I noted that "there is no one single cause for the increase in nonreproducible findings in so many fields. One key problem is that the types of research most likely to make it from lab benches into leading scientific journals are those containing flashy never-before-reported results. Such findings are often too good to check. 'All of the incentives are for researchers to write a good story—to provide journal editors with positive results, clean results, and novel results,' notes the University of Virginia psychologist Brian Nosek. 'This creates publication bias, and that is likely to be the central cause of the proliferation of false discoveries.'"

Now comes the current issue of New Scientist, which features "The Unscientific Method" by Sonia van Gilder Cooke, in which she reports:

…dubious results are alarmingly common in many fields of science. Worryingly, they seem to be especially shaky in areas that have a direct bearing on human well-being – the science underpinning everyday political, economic and healthcare decisions. No wonder the whistle-blowers are urgently trying to investigate why it's happening, how big the problem is and what can be done to fix it. In doing so, they are highlighting flaws in the way we all think, and exposing cracks in the culture of science.

Science is often thought of as a dispassionate search for the truth. But, of course, we are all only human. And most people want to climb the professional ladder. The main way to do that if you're a scientist is to get grants and publish lots of papers. The problem is that journals have a clear preference for research showing strong, positive relationships – between a particular medical treatment and improved health, for example. This means researchers often try to find those sorts of results. A few go as far as making things up. But a huge number tinker with their research in ways they think are harmless, but which can bias the outcome.

Both Cooke and I focused on how researchers all too often succumb to confirmation bias, sorting through the statistical debris of their experiments via p-hacking and HARKing (hypothesizing after the results are known) in search of some correlation they can claim is "significant."
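To see why p-hacking so reliably manufactures "significant" findings, consider a minimal simulation (a sketch, not drawn from either article): each simulated study measures twenty unrelated outcomes in two groups drawn from the same distribution, so every true effect is zero, yet the researcher reports only the smallest p-value obtained.

```python
# Simulation: testing many outcomes and reporting only the best
# p-value (p-hacking) manufactures false positives from pure noise.
import math
import random

random.seed(42)

def p_value_two_groups(a, b):
    """Two-sided p-value for a difference in means, using a
    normal approximation (adequate for ~50 samples per group)."""
    n, m = len(a), len(b)
    mean_a, mean_b = sum(a) / n, sum(b) / m
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (m - 1)
    z = (mean_a - mean_b) / math.sqrt(var_a / n + var_b / m)
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| > z) for standard normal

def run_study(n_outcomes=20, n_per_group=50):
    """One 'study': measure n_outcomes unrelated variables in a
    treatment and a control group drawn from the SAME distribution,
    then keep only the most flattering p-value."""
    p_values = []
    for _ in range(n_outcomes):
        treatment = [random.gauss(0, 1) for _ in range(n_per_group)]
        control = [random.gauss(0, 1) for _ in range(n_per_group)]
        p_values.append(p_value_two_groups(treatment, control))
    return min(p_values)

studies = 1000
false_positive_studies = sum(run_study() < 0.05 for _ in range(studies))
print(f"Studies able to report a 'significant' effect: "
      f"{false_positive_studies / studies:.0%}")
# With 20 null outcomes per study, roughly 1 - 0.95**20, about 64%,
# of studies can report p < 0.05 for something, even though no real
# effect exists anywhere in the data.
```

With a single pre-registered outcome, only about 5 percent of null studies would clear the p < 0.05 bar; with twenty outcomes to fish through, nearly two-thirds do.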

Over at the journal First Things, software engineer William Wilson has another insightful article, "Scientific Regress," on how science has been undermined by careerism and the normal human penchant for confirmation bias. His critique of peer review is disturbingly correct:

If peer review is good at anything, it appears to be keeping unpopular ideas from being published. Consider the finding of another (yes, another) of these replicability studies, this time from a group of cancer researchers. In addition to reaching the now unsurprising conclusion that only a dismal 11 percent of the preclinical cancer research they examined could be validated after the fact, the authors identified another horrifying pattern: The "bad" papers that failed to replicate were, on average, cited far more often than the papers that did! As the authors put it, "some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis."

What they do not mention is that once an entire field has been created—with careers, funding, appointments, and prestige all premised upon an experimental result which was utterly false due either to fraud or to plain bad luck—pointing this fact out is not likely to be very popular. Peer review switches from merely useless to actively harmful. It may be ineffective at keeping papers with analytic or methodological flaws from being published, but it can be deadly effective at suppressing criticism of a dominant research paradigm. Even if a critic is able to get his work published, pointing out that the house you've built together is situated over a chasm will not endear him to his colleagues or, more importantly, to his mentors and patrons.

Wilson is describing what Nobel Prize–winning chemist Irving Langmuir identified in a 1953 lecture as "pathological science," or "the science of things that aren't so." As I explain in my book, The End of Doom:

To explain how researchers and whole fields of science can end up studying phenomena that don't actually exist, Stanford University bio-statistician John Ioannidis fancifully describes the highly active areas of scientific investigation on Planet F345 in the Andromeda Galaxy. The Andromedean researchers are hard at work on such null fields of study as "nutribogus epidemiology, pompompomics, social psycho-junkology, and all the multifarious disciplines of brown cockroach research—brown cockroaches are considered to provide adequate models that can be readily extended to humanoids."

The problem is that the Andromedean scientists don't know that their data dredging and highly sensitive nonreplicated tests are massively producing false positives. In fact, the Andromedean researchers have every incentive—publication pressure, tenure, and funding—to find effects, the more extravagant the better. But in fact, the manufactured discoveries are just estimating the net bias operating in each of these "null fields."

It's at last becoming more widely recognized that a lot of researchers have built their careers on investigating "things that aren't so" and defending what turn out to be expanding "null fields."




  1. So peer review is the equivalent of legislative committees. It functions to keep good work hidden, and put bad work forward.

    1. Monopoly. The idea of competing truths is a nice one. … talk about the marketplace of truths.

    2. This phenomenon is not new. It’s the reason why Max Planck described science advancing with the funerals of new ideas’ opponents.

  2. I hate to break it to you.

  3. My “Broken Science” cover article in Reason now has a lot of company.

    Kind of like when they had two movies about meteors crashing into Earth, or two movies about Wyatt Earp or Hercules, come out at the same time. And here, as in those cases, we all know which was clearly the superior product.

    1. Deep Impact, Tombstone and the Disney one.

      1. You are so right on one, very wrong on another and don’t really have an opinion on the third.

    2. Or the two Boston Marathon bombing movies currently being filmed.

  4. But remember, The Science Guy thinks you should be thrown in jail if you don’t think “the science is settled.” Because, well, science says so!

    1. “If God did not exist, it would be necessary to invent him.” ~ Voltaire

    2. “A great many people think they are thinking when they are merely rearranging their prejudices.”

      -William James

      Beliefs from facts are defended with facts. Beliefs from emotions are defended with emotions.

      -my paraphrase of a Spinoza quote

  5. Worryingly, they seem to be especially shaky in areas that have a direct bearing on human well-being – the science underpinning everyday political, economic and healthcare decisions

    Predictable, given that those are the areas that can be most easily used to justify power grabs and promote agendas.

    1. + politics and economics aren’t sciences.

      1. They are if you are God… Or, rather, they sure look like sciences if you think you are God.

    2. Also they are the most difficult to quantify and control at the appropriate scales.

      1. I think that is really it. They can be done scientifically, but with greater methodological difficulty than, say, physics or chemistry.

    3. Shorter Keynes: Whatever the government does is good because it stimulates the economy.

  6. “the science underpinning everyday political, economic and healthcare decisions”

    Maybe this is evidence that politics, economics, and healthcare decisions are so incredibly complex with so many variables that it’s impossible to control enough variables to come to scientifically valid conclusions.

    It’s almost like there’s some sort of knowledge problem regarding economics and politics where it’s impossible for elites and intellectuals to ever have enough information to effectively centrally plan a society. I just wish someone had pointed this out before.

    1. Economics, politics, and psychology are not hard sciences. I would rather them not be called sciences at all. Our brains are perhaps the most complex systems in the universe that we know of. It is essentially impossible to prove any definitive cause of any correlations of any human behavior with anything.

      1. It is essentially impossible to prove any definitive cause of any correlations of any human behavior with anything.

        Oh, I think I can. With ANY human behavior, if you subsidize it you get more of it and if you tax it you get less of it. Is that enough of a correlation?

      2. If you define science as something like “a method for testing ideas and gaining knowledge through empirical observation and experimentation” then I think economics, politics, and psychology certainly *could* be sciences. In practice, it’s difficult to do good experiments in those fields for a number of reasons (difficulty in creating good control groups, difficulty in reproducing an experimental setup, difficulty in identifying causal effects in a multi-causal environment, etc.). That in turn creates a knowledge problem, as Irish pointed out. And given the stakes involved in these areas, there is added incentive to fill that knowledge void with personal biases.

        1. I would say that they could be sciences, but the newly cool Bayesian confidence coefficients of most of the studies performed would be low. Whereas Newton-Einstein Gravity applied to anything smaller than a galaxy and larger than an electron can be taken at near certainty*.

          *Why stars orbit galaxies at the speed and distance they do is still being resolved, I know. And I’m not really a fan of the non-interacting dark matter. But the transition zone between 1/r^2 and 1/r in competing theories is also… unsatisfying.

      3. Well, to be fair to economics, Praxeology is as “hard” of a study as you can do on humans.

        Then again, Austrian economics is the only version of economics that could actually be called a science with a straight face. You want to avoid calling the Monetarists and Keynesians scientists, I’m right there with you.

    2. Nonsense! Why, just look at this control room for Chile’s TOP MEN.

      1. It is interesting how centralized workers’ states are.

      2. Is…is that a food-o-matic from Fallout in the background? 0.o

  7. That a lot of published research can’t be readily reproduced is definitely troubling. But I think I’m more disturbed by the apparent inability of every-day research to uncover that irreproducibility by itself.

    If someone publishes a faulty paper about some new exoplanet that isn’t remarkable in any way, I can see how it would linger in the literature because it really isn’t that consequential.

    But I would have hoped that a faulty cancer treatment that is actively being used would have been recognized by practitioners or researchers that were trying to build off of the original result.

    1. “But I think I’m more disturbed by the apparent inability of every-day research to uncover that irreproducibility by itself.”

      The problem is that the results are ‘replicated’ and extended by the same dubious methods as the original research. What happens is that an apparently ‘ground-breaking’ new phenomenon is published. As a result, a whole bunch of researchers jump in and try to do similar research. The attempted replications or extensions that succeed are also published to acclaim. The failures are left in the file drawer. And the new field is off and running.

      1. It’s just very difficult for me to understand how that happens and perpetuates. Maybe I’m trying to transfer too much from my own field. If the original evidence for dark matter or dark energy had been exaggerated or mistaken, subsequent research would have found that out. Even if no one tried to reproduce the original observations, it would eventually become apparent that the outgrowth of those observations was not internally consistent.

        I guess it’s a different way of doing things?

        1. The science is not broken; the “government” of the science is broken. There’s not so much broken science in the computer industry: the ground is littered with the corpses of agile development methods, waterfall methods, and authoring systems that died (relatively) quickly when they ran into the cold hard wall of reality and got spanked. In fields with limited access to publication through the bottlenecks of journals, and an artificially lengthened delay between the science and the marketplace smackdown (looking at you, pharma and climatology), you see parallels to the financial cronyism that allows government boondoggles to spend so much money before they are proved worthless.

  8. Spot the Not: famous scientific frauds

    1. Not long after the discovery of x-rays, a French scientist claimed to have discovered a new type of ray, the n-ray. Scientists in other countries tried to detect n-rays, but only French scientists were successful.

    2. This young German scientist had a promising career until he admitted that he had published fake results 16 times.

    3. This Russian scientist claimed to have invented a resurrection machine. In fact, the dead were merely random drunks he found passed out on the streets of St Petersburg.

    4. His perpetual motion machine drew huge crowds until it was revealed it was being secretly powered by an old man turning a crank with one hand and eating a loaf of bread with the other.

    5. This anthropologist claimed to have discovered a Stone Age tribe- in fact, he bribed villagers to pose for pictures in caves while wearing loincloths.

    6. This scientist claimed to have observed Lamarckian evolution in toads, when in fact his assistants had been secretly injecting them with ink to make them look different.

    1. Isn’t it a little mean to put a spot the not in a lesser thread than the lynx?

      1. Sometimes I can’t post there in time. Besides, it is more relevant to this thread.

        1. I know #1 was real. I think #5 was a movie plot, but I can’t be sure.

    2. #3

      1: Prosper-René Blondlot
      2: Jan Schön
      4: Charles Redheffer
      5: Tasaday
      6: Paul Kammerer

    3. These are deliberate frauds. Most of those social science results are the result of SELF deception.

      1. Blondlot’s N-rays were the result of self-deception on the part of French scientists; they saw what they wanted to see.


  10. I will soon have a published paper in my AI-related field of expertise that demonstrates pretty conclusively that the paradigm virtually every other researcher uses to measure success is invalid. The peer review was instructive, because nobody really questioned the core findings, but I was sniped at for (a) not proposing an alternative method that works better (working at all would be working better); and (b) not mentioning forcefully enough that practitioners in my field know there are problems (but they keep using the bad methods and people get published based on “improvements” that I demonstrated cannot be significant in the real world).

    My paper, when it appears next month, will probably make me a pariah in my field because it drops a crater in what everyone else in it has built careers on. My expectation is that people will read it, say a few “to be sures,” and then simply ignore it and its implications.

  11. Let’s be clear. We aren’t talking about SCIENCE science, only all those soft sciences. I’m an engineer, and I can tell you that any poorly done science would very quickly be exposed because … stuff will break. If you claim your new steel alloy is 30% stronger than regular steel, or your Bluetooth protocol is 20% faster than others, people are going to check and you’ll quickly be found out.

    I think it goes to show how complex the social sciences are, if no one recognizes that the basis of all their theories is total nonsense.

    1. I have an engineering background too, and we have it easy, really. We can fairly precisely define pass/fail criteria, populations, controls, ranges, you name it. It’s much harder in the soft sciences, as you say.

      All the more reason to be an extreme hard-ass when you are a reviewer on a psychology paper. The problem is, to be a hard-ass, you have to have a lot of time to devote to reviewing, and I’ve always wondered how much time the reviewers have.

    2. It took 25 launches before poorly done engineering was discovered on the Space Shuttle?

      1. Of course, not everything about a design can be known. And having a failure happen after only 25 uses of a design is by definition “quickly exposed”. You’ve simply proved Berserkerscientist’s point.

        Also note, it was known well before launch 25 that the SRB segment joint design was inadequate.

  12. I remember reading Junk Science Judo by Steve Milloy a long time ago. He was way ahead of this in my opinion.

    One thing he asserted that still sticks with me today is that to trust any study, the percent-change of whatever phenomenon being studied needs to be 100% or more. So, most studies I see mentioned in the mainstream press say that they noticed a 30% improvement/increase/decrease/etc in X. I immediately throw it in my mental trash.

  13. Everyone now agrees that science is broken, though Bailey still insists that vaccine proponents are 100% correct and that anyone who has any doubts about the validity of what they are saying is a menace to society.

    1. Oh bother. Vaccines and their effects are much less subjective to study than these soft-science areas like psychology. The benefits of vaccines are known to a very high certainty.

      It’s weak to say the phrase “science is broken” and then apply that to everything science has ever learned.

      1. On the contrary. The Institute of Medicine has recognized that the vaccine safety research is deficient.

        The former head of NIH, Dr. Bernadine Healy, acknowledged that the question about vaccines and autism is an open one:…..nd-autism/

        We have a senior scientist at the CDC (William Thompson) who stepped forward as a whistleblower admitting that the CDC fudged their data so as to not show vaccines as contributing to the autism epidemic. They tried to trash all of the evidence, but fortunately he saved a copy.

        There is worldwide opposition to the vaccine against HPV, due to the seriousness of the harm caused and the lack of credible evidence for its benefits.

        Of course, the flu shot is a joke.

        Vaccines are the poster child for broken science.
