Most Preclinical Life Science Research Is Irreproducible Bunk
$28 billion in research funding wasted every year.
Biomedical science is broken, according to a new study. In their article, "The Economics of Reproducibility in Preclinical Research," published in the journal PLoS Biology, a team of researchers led by Leonard Freedman of the Global Biological Standards Institute reports that more than half of preclinical research cannot be replicated by other researchers. From the abstract:
Low reproducibility rates within life science research undermine cumulative knowledge production and contribute to both delays and costs of therapeutic drug development. An analysis of past studies indicates that the cumulative (total) prevalence of irreproducible preclinical research exceeds 50%, resulting in approximately US$28,000,000,000 (US$28B)/year spent on preclinical research that is not reproducible—in the United States alone.
Back in 2012 I reported on other studies finding that the results of about 9 out of 10 landmark biomedical papers could not be reproduced:
The scientific journal Nature published a disturbing commentary claiming that in the area of preclinical research—which involves experiments done on rodents or cells in petri dishes with the goal of identifying possible targets for new treatments in people—independent researchers doing the same experiment cannot get the same result as reported in the scientific literature.
The commentary was written by Glenn Begley, former vice president for oncology research at the pharmaceutical company Amgen, and M.D. Anderson Cancer Center researcher Lee Ellis. They explain that researchers at Amgen tried to confirm academic research findings from published scientific studies in search of new targets for cancer therapeutics. Over 10 years, Amgen researchers could reproduce the results from only six out of 53 landmark papers. Begley and Ellis call this a "shocking result." It is.
And ten years ago in his groundbreaking article, "Why Most Published Research Findings Are False," Stanford University statistician John Ioannidis found:
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.
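Ioannidis's argument can be made concrete with his positive-predictive-value calculation: given a significance threshold, study power, and the pre-study odds that a probed relationship is real, one can compute the probability that a claimed finding is actually true. The sketch below is a minimal version of that calculation, ignoring the bias and multiple-teams terms in his full model; the parameter values in the example are illustrative assumptions, not figures from any particular field.

```python
def ppv(alpha=0.05, power=0.8, R=0.1):
    """Probability that a statistically significant finding is true
    (Ioannidis 2005, simplest case with no bias term).

    alpha: significance threshold (type I error rate)
    power: 1 - beta, the chance of detecting a true effect
    R:     pre-study odds that a tested relationship is real
    """
    true_positives = power * R   # true relationships correctly flagged
    false_positives = alpha      # false relationships flagged by chance
    return true_positives / (true_positives + false_positives)

# A well-powered study in a field where 1 in 11 hypotheses is true:
print(round(ppv(alpha=0.05, power=0.8, R=0.1), 3))   # → 0.615

# An underpowered study probing long-odds hypotheses:
print(round(ppv(alpha=0.05, power=0.2, R=0.05), 3))  # → 0.167
```

The second case illustrates the paper's central claim: with small studies and long pre-study odds, a "significant" result is more likely false than true, even before bias and flexible analysis make matters worse.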
Science is supposed to build and organize knowledge in the form of testable explanations and predictions. If reported research results cannot be reliably replicated, they are not science.