
Can Most Cancer Research Be Trusted?

Addressing the problem of "academic risk" in biomedical research


When a cancer study is published in a prestigious peer-reviewed journal, the implication is that the findings are robust, replicable, and point the way toward eventual treatments. Consequently, researchers scour their colleagues' work for clues about promising avenues to explore. Doctors pore over the pages, dreaming of new therapies coming down the pike. Which makes a new finding that nine out of 10 preclinical peer-reviewed cancer research studies cannot be replicated all the more shocking and discouraging.

Last week, the scientific journal Nature published a disturbing commentary claiming that in the area of preclinical research—which involves experiments done on rodents or cells in petri dishes with the goal of identifying possible targets for new treatments in people—independent researchers repeating the same experiments often cannot get the results reported in the scientific literature.

The commentary was written by Glenn Begley, former vice president for oncology research at the pharmaceutical company Amgen, and Lee Ellis, a researcher at the M.D. Anderson Cancer Center. They explain that researchers at Amgen tried to confirm academic research findings from published scientific studies in search of new targets for cancer therapeutics. Over 10 years, Amgen researchers could reproduce the results from only six out of 53 landmark papers, a confirmation rate of roughly 11 percent. Begley and Ellis call this a "shocking result." It is.

The two note that they are not alone in finding academic biomedical research to be sketchy. Three researchers at Bayer Healthcare published an article [PDF] in the September 2011 Nature Reviews Drug Discovery in which they assert that "validation projects that were started in our company based on exciting published data have often resulted in disillusionment when key data could not be reproduced." How bad was the Bayer researchers' disillusionment with academic lab results? They report that of 67 projects analyzed, "only in 20 to 25 percent were the relevant published data completely in line with our in-house findings."

Perhaps results from high-end journals have a better record? Not so, say the Bayer scientists. "Surprisingly, even publications in prestigious journals or from several independent groups did not ensure reproducibility. Indeed, our analysis revealed that the reproducibility of published data did not significantly correlate with journal impact factors, the number of publications on the respective target or the number of independent groups that authored the publications."

So what is going wrong? Neither analysis suggests that the main problem is fraud. Instead, both conclude that the scramble for funding and fame (which are inextricably linked) has resulted in increasingly lax standards for reporting research results. For example, Begley met with the lead scientist of one promising study to discuss the problems Amgen was having in reproducing its results.

"We went through the paper line by line, figure by figure," said Begley to Reuters. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning." Sadly, Begley explains in an email that they cannot reveal which studies are flawed due to the insistence by many researchers on confidentiality agreements before they would work with the Amgen scientists. So much for transparency. 

In 2005, epidemiologist John Ioannidis explained "Why Most Published Research Findings Are False" in the online journal PLoS Medicine. In that paper, Ioannidis noted that reported findings are less likely to be true when studies are small, the postulated effect is weak, research designs and endpoints are flexible, financial and nonfinancial conflicts of interest are present, and competition in the field is fierce.
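Ioannidis's argument can be compressed into a single formula. In his notation (our summary of the PLoS Medicine paper), R is the ratio of true relationships to no relationships among those tested in a field, α is the significance threshold, and β is the type II error rate. The positive predictive value (PPV), the probability that a claimed finding is actually true, is then:

```latex
\mathrm{PPV} = \frac{(1-\beta)\,R}{R - \beta R + \alpha}
```

For illustration: in a well-powered field where half of tested hypotheses are true (β = 0.2, α = 0.05, R = 1), the PPV works out to 0.8/0.85, about 94 percent. But in exploratory work where power is low and true hypotheses are scarce (β = 0.8, R = 0.1), it falls to 0.02/0.07, under 30 percent, which is roughly the territory the Amgen and Bayer teams describe.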

The academic system encourages the publication of a lot of junk research, Begley and Ellis agree. "To obtain funding, a job, promotion or tenure, researchers need a strong publication record, often including a first-authored high-impact publication," they note. And journal editors and grant reviewers make it worse by pushing researchers to produce "a scientific finding that is simple, clear and complete—a 'perfect' story." This pressure induces some researchers to massage data to fit an underlying hypothesis or even to suppress negative data that contradict the favored hypothesis. In addition, peer review is broken. If an article is rejected by one journal, researchers very often ignore the comments of reviewers, slap on another cover letter, and submit to another journal. The publication process becomes a lottery, not a way to filter out misinformation.

Given all the brouhaha [PDF] over how financial interests are allegedly distorting pharmaceutical company research, it's more than a bit ironic that it is pharmaceutical company scientists who are calling academic researchers to account. Back in 2004, an American Medical Association report [PDF] on conflicts of interest noted that reviews comparing academic and industry research found, "Most authors have concluded that industry-funded studies published in peer-reviewed journals are of equivalent or higher quality than non-industry funded clinical trials." In an email, Begley, who was an academic researcher for 25 years before joining Amgen, agrees, "My impression, I don't have hard data, is that studies from large companies is of higher quality. Those companies are going to lay down millions of dollars if a study is positive. And they don't want to terminate a program prematurely so a negative study is more likely to be real."

These results strongly suggest that the current biomedical research and publication system is wasting scads of money and talent. What can be done to improve the situation? Perhaps, as some Nature online commenters have bitterly suggested, researchers should submit their work directly to Bayer and Amgen for peer review? In fact, some venture capital firms are already hedging against "academic risk" when investing in biomedical startups by hiring contract research organizations to vet academic science.

Barring the advent of drug company peer review, more transparency will help. Begley and Ellis recommend that preclinical researchers be required to present all findings regardless of outcome; no more picking the "best" story. Funders and reviewers must recognize that negative data can be just as informative as positive data. Universities and grant makers should recognize and reward great teaching and mentoring and rely less on publication as the chief promotion benchmark. In addition, funders should focus more attention on developing standardized tools and protocols for use in research rather than just hankering after the next big "breakthrough."

Researchers, funders, and editors should also consider the more radical proposals offered by Ioannidis and colleagues, including upfront public registries in which studies' hypotheses and protocols are outlined before the work begins. That way, if researchers later fiddle with their protocols and results, at least others in the field can find out about it. Another option would be to make peer-review comments publicly available, even for rejected studies. This would encourage researchers who want to resubmit to other journals to answer and fix the problems identified by reviewers. The most intriguing idea is to have drafts of papers deposited into a common public website where journal editors can scan through them, invite peer reviews, and make offers of publication.

The chief argument for government funding of academic biomedical research is that it will produce the basic science upon which new therapies can be developed and commercialized by pharmaceutical companies. This ambition is reflected in the slogan on the website of the National Institutes of Health (NIH), which reads "the nation's medical research agency—supporting scientific studies that turn discovery into health." These new studies give the public and policymakers cause to wonder: just how much of the NIH's $30 billion annual budget ends up producing the moral equivalent of junk science?

Ronald Bailey is Reason's science correspondent. His book Liberation Biology: The Scientific and Moral Case for the Biotech Revolution is now available from Prometheus Books.