
How Much Scientific Research Is Actually Fraudulent?

It may be more than you think.


Fraud may be rampant in biomedical research. My 2016 article "Broken Science" pointed to a variety of factors to explain why a huge proportion of scientific studies were apparently generating false-positive results that could not be replicated by other researchers. A false positive in scientific research occurs when there is statistically significant evidence for something that isn't real (e.g., a drug appears to cure an illness when it actually does not). The factors considered included publication bias and the statistical chicanery associated with p-hacking, HARKing (hypothesizing after the results are known), and underpowered studies. My article did not address the possibility that the lack of reproducibility could be because a significant proportion of preclinical and clinical biomedical studies were actually fraudulent.

My subsequent article, "Most Scientific Findings Are False or Useless," which reported the conclusions of "Saving Science," a distressing essay by Daniel Sarewitz of Arizona State University's School for the Future of Innovation in Society, also did not consider the possibility that extensive scientific dishonesty explains the massive proliferation of false positives. In his famous 2005 article, "Why Most Published Research Findings Are False," Stanford University biostatistician John Ioannidis cited conflicts of interest as one factor driving the generation of false positives, but he, too, did not suggest that actual research fraud was a big problem.

How bad is the false-positive problem in scientific research? As I earlier reported, a 2015 editorial in The Lancet observed that "much of the scientific literature, perhaps half, may simply be untrue." A 2015 British Academy of Medical Sciences report suggested that the false discovery rate in some areas of biomedicine could be as high as 69 percent. In an email exchange with me, Ioannidis estimated that the nonreplication rates in biomedical observational and preclinical studies could be as high as 90 percent.

A couple of new Dutch studies suggest that fraud may well be responsible for a significant proportion of the false positives reported in the scientific literature. Both are preprints reporting the results of surveys of thousands of scientists in the Netherlands, aimed at probing the prevalence of questionable research practices and scientific misconduct.

Summarizing their results, an article in Science notes, "More than half of Dutch scientists regularly engage in questionable research practices, such as hiding flaws in their research design or selectively citing literature. And one in 12 [8 percent] admitted to committing a more serious form of research misconduct within the past 3 years: the fabrication or falsification of research results." Daniele Fanelli, a research ethicist at the London School of Economics, tells Science that the 51 percent figure for researchers admitting to questionable research practices "could still be an underestimate."

In June, a meta-analysis of prior studies on questionable research practices and misconduct published in the journal Science and Engineering Ethics reported that more than 15 percent of researchers had witnessed others who had committed at least one instance of research misconduct (falsification, fabrication, plagiarism), while nearly 40 percent were aware of others who had engaged in at least one questionable research practice.

In a blistering editorial earlier this week, Richard Smith, a former editor of the medical journal The BMJ, asks if it's "time to assume that health research is fraudulent until proven otherwise." Smith calls attention to British anesthetist John Carlisle's recent systematic examination of randomized controlled trials submitted to the journal Anaesthesia. Carlisle found that of the 153 studies for which individual patient data were available, 44 percent contained untrustworthy data and 26 percent were what he called "zombie" trials, whose results are animated by false data. He pointed out that many of the zombie trials came from researchers in Egypt, China, India, Iran, Japan, South Korea, and Turkey.

In an editorial, Ioannidis observes that the zombie anesthesia trials added up to "100% (7/7) in Egypt; 75% (3/4) in Iran; 54% (7/13) in India; 46% (22/48) in China; 40% (2/5) in Turkey; 25% (5/20) in South Korea; and 18% (2/11) in Japan." Taking the number of clinical trials from these countries listed with the World Health Organization's registry and extrapolating from the false trial rates identified by Carlisle, Ioannidis estimates that there are "almost 90,000 registered false trials from these countries, including some 50,000 zombies." Consequently, he concludes that "hundreds of thousands of zombie randomised trials circulate among us." Since randomized controlled trials are the gold standard for clinical research, Ioannidis adds, "One dreads to think of other study designs, for example, observational research, that are even less likely to be regulated and more likely to be sloppy than randomised trials."
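For readers who want to see the arithmetic behind that estimate, here is a minimal sketch of the extrapolation, assuming one simply multiplies each country's registered-trial count by the zombie fraction quoted above. The fractions come from Ioannidis's editorial as quoted; the registry counts below are hypothetical placeholders, not the actual World Health Organization figures he used.

```python
# Sketch of the extrapolation (illustration only): apply the per-country
# zombie-trial fractions Carlisle found to each country's count of
# registered trials and sum the results.

# Zombie-trial fractions, as quoted from Ioannidis's editorial.
zombie_fraction = {
    "Egypt": 7 / 7,
    "Iran": 3 / 4,
    "India": 7 / 13,
    "China": 22 / 48,
    "Turkey": 2 / 5,
    "South Korea": 5 / 20,
    "Japan": 2 / 11,
}

# Hypothetical registered-trial counts (placeholders, NOT the WHO registry figures).
registered_trials = {
    "Egypt": 4_000,
    "Iran": 8_000,
    "India": 12_000,
    "China": 40_000,
    "Turkey": 5_000,
    "South Korea": 10_000,
    "Japan": 15_000,
}

estimated_zombies = sum(
    zombie_fraction[country] * registered_trials[country]
    for country in zombie_fraction
)
print(f"Estimated zombie trials (placeholder counts): {estimated_zombies:,.0f}")
```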

In his BMJ editorial, Smith cites the work of Barbara K. Redman, author of Research Misconduct Policy in Biomedicine: Beyond the Bad-Apple Approach. Smith reports that, during a webinar on research fraud, Redman insisted "that it is not a problem of bad apples but bad barrels if not of rotten forests or orchards." According to Smith, Redman argues "that research misconduct is a systems problem—the system provides incentives to publish fraudulent research and does not have adequate regulatory processes." The research publication system is built on trust, and peer review is not designed to detect fraud. Journals, publishers, funders, and research institutions have little incentive to check for fraud and a strong disincentive to retract studies, since retractions can damage their reputations.

So what can be done to stem the tide of apparently fraudulent research? Ioannidis suggests that one useful step would be to require that all datasets be made available for reanalysis by other researchers. That is how Carlisle was able to identify the untrustworthy and zombie anesthesia studies. Some hard thinking also needs to be done about how to shift incentives from publishing studies to discovering true things about the world. For the time being, Smith may be right that "it may be time to move from assuming that research has been honestly conducted and reported to assuming it to be untrustworthy until there is some evidence to the contrary."

Nevertheless, I still agree with Ioannidis, who once told me, "Science is, was, and will continue to be the best thing that has happened to human beings."