Can Most Cancer Research Be Trusted?

Addressing the problem of "academic risk" in biomedical research

When a cancer study is published in a prestigious peer-reviewed journal, the implication is that the findings are robust, replicable, and point the way toward eventual treatments. Consequently, researchers scour their colleagues' work for clues about promising avenues to explore. Doctors pore over the pages, dreaming of new therapies coming down the pike. Which makes a new finding that nine out of 10 preclinical peer-reviewed cancer research studies cannot be replicated all the more shocking and discouraging.

Last week, the scientific journal Nature published a disturbing commentary claiming that in the area of preclinical research—which involves experiments done on rodents or cells in petri dishes with the goal of identifying possible targets for new treatments in people—independent researchers doing the same experiment cannot get the same result as reported in the scientific literature. 

The commentary was written by Glenn Begley, former vice president for oncology research at the pharmaceutical company Amgen, and Lee Ellis, a researcher at the M.D. Anderson Cancer Center. They explain that researchers at Amgen tried to confirm academic findings from published scientific studies in search of new targets for cancer therapeutics. Over 10 years, Amgen researchers could reproduce the results from only six out of 53 landmark papers. Begley and Ellis call this a “shocking result.” It is.

The two note that they are not alone in finding academic biomedical research to be sketchy. Three researchers at Bayer Healthcare published an article [PDF] in the September 2011 Nature Reviews: Drug Discovery in which they assert “validation projects that were started in our company based on exciting published data have often resulted in disillusionment when key data could not be reproduced.” How bad was the Bayer researchers’ disillusionment with academic lab results? They report that of 67 projects analyzed “only in 20 to 25 percent were the relevant published data completely in line with our in-house findings.”

Perhaps results from high-end journals have a better record? Not so, say the Bayer scientists. “Surprisingly, even publications in prestigious journals or from several independent groups did not ensure reproducibility. Indeed, our analysis revealed that the reproducibility of published data did not significantly correlate with journal impact factors, the number of publications on the respective target or the number of independent groups that authored the publications.”

So what is going wrong? Neither study suggests that the main problem is fraud. Instead they conclude that the scramble for funding and fame (which are inextricably linked) has resulted in increasingly lax standards for reporting research results. For example, Begley met with the lead scientist of one promising study to discuss the problems Amgen was having in reproducing the study’s results.

"We went through the paper line by line, figure by figure," said Begley to Reuters. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning." Sadly, Begley explains in an email that they cannot reveal which studies are flawed due to the insistence by many researchers on confidentiality agreements before they would work with the Amgen scientists. So much for transparency. 

In 2005, epidemiologist John Ioannidis explained, “Why Most Published Research Findings Are False,” in the online journal PLoS Medicine. In that study Ioannidis noted that reported studies are less likely to be true when they are small, the postulated effect is weak, research designs and endpoints are flexible, financial and nonfinancial conflicts of interest are present, and competition in the field is fierce. 

The academic system encourages the publication of a lot of junk research, Begley and Ellis agree. “To obtain funding, a job, promotion or tenure, researchers need a strong publication record, often including a first-authored high-impact publication,” they note. And journal editors and grant reviewers make it worse by pushing researchers to produce “a scientific finding that is simple, clear and complete—a ‘perfect’ story.” This pressure induces some researchers to massage data to fit an underlying hypothesis or even to suppress negative data that contradict the favored hypothesis. In addition, peer review is broken. If an article is rejected by one journal, very often researchers will ignore the comments of reviewers, slap on another cover letter, and submit to another journal. The publication process becomes a lottery, not a way to filter out misinformation.

Given all the brouhaha [PDF] over how financial interests are allegedly distorting pharmaceutical company research, it’s more than a bit ironic that it is pharmaceutical company scientists who are calling academic researchers to account. Back in 2004, an American Medical Association report [PDF] on conflicts of interest noted that reviews comparing academic and industry research found, "Most authors have concluded that industry-funded studies published in peer-reviewed journals are of equivalent or higher quality than non-industry funded clinical trials.” In an email, Begley, who was an academic researcher for 25 years before joining Amgen, agrees, “My impression, I don't have hard data, is that studies from large companies is of higher quality. Those companies are going to lay down millions of dollars if a study is positive. And they don't want to terminate a program prematurely so a negative study is more likely to be real.”

These results strongly suggest that the current biomedical research and publication system is wasting scads of money and talent. What can be done to improve the situation? Perhaps, as some Nature online commenters have bitterly suggested, researchers should submit their work directly to Bayer and Amgen for peer review? In fact, some venture capital firms are hedging against “academic risk” when investing in biomedical startups by hiring contract research organizations to vet academic science.

Barring the advent of drug company peer review, more transparency will help. Begley and Ellis recommend that preclinical researchers be required to present all findings regardless of the outcome; no more picking the “best” story. Funders and reviewers must recognize that negative data can be just as informative as positive. Universities and grant makers should recognize and reward great teaching and mentoring and rely less on publication as the chief promotion benchmark. In addition, funders should focus more attention on developing standardized tools and protocols for use in research rather than just hankering after the next big “breakthrough.”

Researchers, funders, and editors should also consider the more radical proposals offered by Ioannidis and colleagues, including upfront registries of studies in which hypotheses and protocols are outlined in public. That way, if researchers later decide to fiddle with their protocols and results, at least others in the field can find out about it. Another option would be to make peer-review comments public even for rejected studies. This would encourage researchers who want to resubmit to other journals to answer and fix problems identified by reviewers. The most intriguing idea is to have drafts of papers deposited into a common public website where journal editors can scan through them, invite peer reviews, and make offers of publication.

The chief argument for government funding of academic biomedical research is that it will produce the basic science upon which new therapies can be developed and commercialized by pharmaceutical companies. This ambition is reflected in the slogan on the website of the National Institutes of Health (NIH), which reads “the nation’s medical research agency—supporting scientific studies that turn discovery into health.” These new studies give the public and policymakers cause to wonder just how much of the NIH’s $30 billion annual budget ends up producing the moral equivalent of junk science.

Ronald Bailey is Reason's science correspondent. His book Liberation Biology: The Scientific and Moral Case for the Biotech Revolution is now available from Prometheus Books.


  • T||

    Hey, wait, didn't we just read a poll about how people no longer trust the scientific establishment? Looks like the lack of faith was justified.

  • Marshall Gill||

    Exactly. It is not a lack of faith in the scientific method, it is a lack of faith in those who call themselves "scientist" and yet do not use the scientific method.

  • T||

    I guess the Journal of Irreproducible Results will be swamped with entries soon.

  • R C Dean||

    independent researchers doing the same experiment cannot get the same result as reported in the scientific literature nine out of 10 times.

    A lot depends on what counts as "the same result", no?

    If I do a study that says substance X reduces tumors by 10% in 90% of lab rats, is that study invalidated if someone else finds that it reduces tumors by 9% in 87% of lab rats?

  • PM||

    I doubt very much that such minor, within-the-margin-of-error variations would be completely discarded. Generally scientists design their experiments within a particular confidence interval that is integral to the validity of the results. If subsequent results do not fall in the same confidence interval then they should rightly be discarded.
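
    PM's confidence-interval point can be sketched with a quick normal-approximation check. The numbers come from R C Dean's hypothetical above; the sample size of 30 rats is an assumption added here for illustration:

```python
import math

def prop_ci(p_hat, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

# Hypothetical original study: tumor reduction seen in 90% of 30 rats.
lo, hi = prop_ci(0.90, 30)
print(f"original study's 95% CI: ({lo:.2f}, {hi:.2f})")  # roughly (0.79, 1.01)
print(0.87 >= lo)  # True: the replication's 87% falls inside the interval
```

    With samples that small, the interval is wide enough that 87% versus 90% is well within sampling noise, which is PM's point: a replication only "fails" when it lands outside the original's interval.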

  • Metazoan||

    I do agree. Especially in the life sciences, where virtually nothing is always.

  • Brand||

    "Instead they conclude that the scramble for funding and fame (which are inextricably linked) has resulted in increasingly lax standards for reporting research results."

    It's disheartening that the quest for deserved fame (in the case of curing a disease, e.g. Jonas Salk) is resulting in a massive waste of epidemiologists' time.

  • Anacreon||

    More than fame, which is not really found in the sciences (can you name three famous contemporary scientists?) -- it is the rigors of the world of academia, with its "publish or die" mantra.

  • some guy||

    Most scientists don't want to be a household name. They just want their work to be required reading for students in their field. That's their definition of fame.

  • shrike||

    Instead they conclude that the scramble for funding and fame (which are inextricably linked) has resulted in increasingly lax standards for reporting research results.

    Reminds me of libertarian hack G. Edward Griffin (author of Ron Paul's favorite anti-Fed book) and the laetrile cure for cancer he tried to scam people with.

    Griffin was after a qucik buck on both his anti-Fed quackery and his cancer "cure" quackery.

  • PM||

    OMG! Did you see that red herring?!

  • shrike||

    You think laetrile is a cure for cancer?

    Please go on!

  • PM||

    There it is again! It damn near bit me! Did you see it?

  • Brutus||

    Here fishy, fishy...

  • Old Mexican||

    Re: shrike,

    Griffin was after a qucik [sic] buck on both his anti-Fed quackery and his cancer "cure" quackery.


    Because being against an inherently fraudulent institution is akin to selling snakeoil, in shrikey's book.

  • shrike||

    Who is profiting from the Fed's "fraud"? Other than taxpayers? ($75 billion last year)...

  • Old Mexican||

    Re: shrike,

    Who is profiting from the Fed's "fraud"? Other than taxpayers?


    Depends on which taxpayers you're talking about. The greater beneficiaries of credit expansion are the banks. The greatest beneficiary of bond purchasing is the government. The rest of us are the innocent bystanders who get fleeced.

  • shrike||

    Check this out, OM.

    http://boards.straightdope.com.....ay.php?f=7

    Really, we need a new LP victim.

  • Old Mexican||

    Re: shrike,

    Seriously, shrike - you linked to a popular newspaper blog as a way to submit a counterargument?

    After being amused by The Straight Dope for a couple of years, I got to read their definition of money and simply stopped. They may be spot on on most subjects but economics ain't their thing.

  • shrike||

    LMAO. Dopes didn't go with your nutty monetary policy?

    I am not surprised.

    You are a True Believer - I like that!

  • shrike||

    I like OM because he is an honest debate opponent.

    OM tries to win on merit - that is rare here. (less than five posters)

  • PM||

    Or, stated differently, OMG! Did you see that red herring?!

    The replicability of research results in peer-reviewed scientific journals and a fake cancer cure pushed by a guy who once had something to do with Ron Paul are so absurdly unrelated that there really is no proper response except to point out the non-sequitur and shake your head.

  • shrike||

    Griffin is a dingbat. His anti-Fed book was published in the early 90's and still no one takes it seriously.

    Find me ONE credible source who reviewed it. WSJ, FT, Bloomberg, any economist, etc.

    The guy is a fucking flake.

  • Brutus||

    Stay on topic, fuckstick.

  • T||

    You know, we really don't enforce that rule on anybody else, so it's not fair to enforce it on shriek.

  • Brutus||

    There's a difference between drifting off-course and deliberately driving the discussion off-course.

  • ||

    He is staying on topic. He wants to discuss those magnificent herrings. Just because that wasn't the topic of the article, or the topic the rest of us were discussing, doesn't mean he hasn't resolutely stuck to it.

  • NeonCat||

    Maybe those herrings are cancerous. It would explain the unnatural redness of them…

  • Old Mexican||

    Re: shrike,

    His anti-Fed book was published in the early 90's and still no one takes it seriously.


    No one?

  • shrike||

    It sells well, no doubt.

    Find me a review from a credible source. please?

  • ||

    Holy shit! Did you see that one? It's bigger still! Herrings... herrings everywhere.

  • Appalachian Australian||

  • ||

    Thank you for that link. I am saving it to beat the shit out of some commenters over at pjmedia, although they haven't shown up in a while. In fact, I wouldn't be surprised to find out that the shithead in the article is the commenter.

  • Hunter||

    I work in academic neuroscience and I must say that the whole business is a mess, incentive-wise, nowadays. There are many reasons for it; tight funding is one. NIH frankly doesn't provide enough money to do enough replications and maintain a solid publication record unless you are in one of the very few top-end labs. We've trained too many Ph.D.s for far too few jobs, and funding is oriented towards flash studies that may not have much to do with solving actual human problems. Not that there is anything wrong with truly basic research, but when most of the money comes from NIH the incentive to turn your work into a 'disease model' is high. That, and most scientists don't seem to understand regression to the mean or what Ioannidis called 'null fields', where whole areas of scientific study are basically reporting only positive noise. I don't think it would solve all of these problems, but the MRC/HHMI model has much to recommend it over the current NIH system, without the constant pressure to be submitting grants and sniffing the political winds. It is significant that for all the basic research done on the treatment of neuropsychiatric disease, we really haven't produced any new drugs in decades (though there have been a few successful surgical interventions). If my field could stop pretending that it had valid models of, say, depression, we might be able to get back to figuring out what depression actually is. Sadly, this paper has the ring of truth to it.

  • shrike||

    My educated guess (stress on guess) - the chemical pharma innovations are finished and they won't admit it.

    If you're not in biotech you are just a worthless laggard feeding off the Bushpigs Medicare Part D largesse.

    Bush/Dick were the biggest grifters of all time.

  • Brutus||

    Now, if Obama had come up with Medicare Part D...

  • PM||

    "Why do you want to withhold drugs from sick old people, you heartless bastard!"

  • Metazoan||

    I agree that Bushpig Medicare part D sucks, but are you seriously suggesting that there are no more small molecule pharmaceuticals to be developed?! The very idea is preposterous, given the enormous potential still lying undiscovered in the human genome/proteome.

  • shrike||

    I said chemical - not biopharma.

    Biotech has huge potential. Chemical does not.

  • Hunter||

    Big Pharma has pretty much admitted this by firing their neuropsychiatric research divisions en masse, only Lilly still has one I believe. It doesn't follow that there are no small molecules of use in the treatment of these diseases, just that the available models to test them on aren't all that useful. We really don't know much about these diseases, so we are a long way from being able to do rational drug design to treat them.

  • Metazoan||

    That's fine, I agree and defer to your experience. It just seemed like the suggestion was that traditional drugs (small molecules) had run their course, which is of course nowhere near true. I do agree that new approaches are being/will be applied.

    That said, even success in the molecular realm with regards to rational design (hard to come by as it is) is still ineffective without targeting the correct molecules. Of course, (and I know I'm rambling here), even getting the wrong molecule is still a learning experience. I don't think SSRIs are the be-all-end-all of antidepressants, yet some colleagues' work on LeuT, a bacterial homolog of the selective serotonin symporter is still very enlightening for molecular dynamics in general, and (I think) can still provide decent therapeutic avenues.
    In conclusion, you are still correct that the models do need to change quite a lot, with an understanding that rational drug design can't happen well without an actually good model.

  • Hunter||

    You are right that we probably aren't done with small molecules, though, actually, SSRI's which really were the result of rational drug design have turned out to be only marginally better than placebos (see Marcia Angell's article in the New York Review of Books for a general audience intro to these issues). The problem was and is that the 'chemical imbalance' hypothesis that drove that development has never been well established to actually occur in humans, and we've built most of our animal models on that basis. So, most of our models aren't useful for detecting new targets or mechanisms. I continue to be nagged by the idea that the 'silver bullet' model that we pharmacologists have had in mind since Fleming and Florey may be perfectly valid for infectious disease but may not work terribly well for an organ as complex as the brain (there are exceptions of course, the orexin system comes to mind). I think we have to do a whole lot more basic research unencumbered by old ideas like chemical imbalance before we can understand what our targets really are.

  • Jerryskids||

    So do you know anything about the "heart attack vaccine"?

    I generally don't trust stories like this because the press is going to hype it and in this case the fact that they are calling this a 'vaccine' demonstrates a higher level of ignorance.

  • James West||

    I do research in pulmonary arterial hypertension (PAH), and am well funded to do it (by the NIH). In my field, there are two sources for failed clinical trials, that could have been predicted to fail by anyone paying enough attention to the literature: (1) Pharma throws their drugs at PAH, completely lacking a molecular theory, in the hopes of getting lucky. These fail, or succeed with such narrow definitions of success that for all intents they fail. (2) People use wildly flawed models to predict treatments, which then fail to have any effect in humans. However, since they're the STANDARD wildly flawed models, they get taken seriously.

    I actually think that the current NIH funding paradigm is probably the best possible - pharma doesn't really do basic research, and even if they did, we'd have to pay much more for it (they'd all keep it secret...). Other countries (and internal funding within the US institutes) have less competitive funding - it goes to the same people, who then have very little motivation to do anything at all.

    In other words - like Democracy, our current funding system is the worst, except for all the others.

  • fresno dan||

    I think you give a good analysis. I would add a more prosaic point - reporting that something doesn't work is like kissing your sister - there's just no interest in publishing negative results.
    And people can publish honestly but the results are a fluke. Take an experiment where you are trying to get 10 heads in a row (in, say, 100 flips). That is a rare event, but it does happen. But unless you know how many failures surround the positive event, you have no way of knowing the frequency of the event happening...
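
    The coin-flip arithmetic is easy to check with a quick simulation (a sketch, assuming fair coins and independent flips):

```python
import random

def has_run(n_flips, run_len, rng):
    """True if run_len consecutive heads occur somewhere in n_flips tosses."""
    streak = 0
    for _ in range(n_flips):
        if rng.random() < 0.5:  # heads
            streak += 1
            if streak >= run_len:
                return True
        else:
            streak = 0
    return False

rng = random.Random(42)
trials = 20_000
hits = sum(has_run(100, 10, rng) for _ in range(trials))
# A few percent of trials contain the streak: rare, but it does happen.
print(f"estimated probability of 10 heads in a row: {hits / trials:.3f}")
```

    Run enough labs' worth of "flips" and a few will see the rare streak; without the surrounding failures on record, the streak looks like a real effect.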

  • Jerryskids||

    You mean that incentives change behavior? If you are an evil corporation looking to profit off the misery of others you want to do good science so that you can make a good product whereas if you are a government-funded researcher you are going to do whatever sort of science supports getting more government funding. Who would have thunk it?

  • James West||

    Unfortunately, mostly pharma's scientists have incentives to get -existing- drugs applied to all sorts of at best marginally related conditions, rather than target the actual molecular etiology of a specific condition. Mixed incentives exist in every field - I personally think academia gets it the closest to right of anyone.

  • ||

    The problem with much of this type of research is that it applies inductive statistics with limited sample sizes and numerous variables. A researcher can easily form twenty incorrect hypotheses. If he accepts a finding as statistically significant with 95% confidence, odds are that one of those hypotheses will make a good journal article. That's why the journal articles always conclude with language to the effect of "more research is needed" (translation: "please fund our next research proposal").
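
    The commenter's arithmetic checks out: with twenty tests of hypotheses that are all in fact false, each accepted at the 5% level, a spurious "significant" finding is more likely than not. A minimal sketch:

```python
# Chance of at least one false positive among 20 independent tests,
# each using a 5% significance threshold, when every null hypothesis is true.
alpha, n_tests = 0.05, 20
p_any = 1 - (1 - alpha) ** n_tests
print(f"P(at least one false positive) = {p_any:.2f}")  # prints 0.64
```

    This is the textbook multiple-comparisons problem; corrections such as Bonferroni (dividing alpha by the number of tests) exist precisely to offset it.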

  • Old Mexican||

    So what is going wrong? Neither study suggests that the main problem is fraud. Instead they conclude that the scramble for funding and fame (which are inextricably linked) has resulted in increasingly lax standards for reporting research results.


    DENIER!!!!

  • David_TheMan||

    If there was no FDA and you could fund your research by quickly putting out different forms of treatment in the free market, while giving full disclosure to participants in research, would this be an issue at all?

  • Robert||

    To the extent it's an issue even as things stand now, of course it would still be. Or maybe we have different ideas of what "the issue" is. AFAICT, the issue is how hard science is.

  • PM||

    OMG! ARE YOU INSANE! THINK OF TEH CHILDRENZ!

  • Greg F||

    These results strongly suggest that the current biomedical government funded research and publication system is wasting scads of money and talent.

    There, fixed it.

  • James West||

    I've got a lot of friends in pharma and, trust me, they're worse: with NIH funding, you have pressure to get -any- result, which causes sloppy science. In industry, you have pressure to get a -specific- result, which causes even sloppier science.

  • Robert||

    Look, Bailey: Any science where the signal to noise ratio is high -- that'd be biology, and even a lot of physics & chemistry where they're measuring small deviations in big things -- is hard. Really the fuck hard. The low reproducibility of published findings is not at all news or surprising, let alone shocking or disturbing. It's just a fact of life. You want to explore the frontier, you expect to find more false things than truths. I could go on & on with my experience and that of others I've been connected to, but is that 900 char. limit still in effect?

  • Robert||

    I meant low signal to noise, or high noise to signal, sorry.

  • Ron Bailey||

    Robert: Both of the studies claim to have taken the factors you cite into account. Unfortunately Nature does not allow access for nonsubscribers, but you can take a look at the study by the Bayer researchers and see what you think.

  • Ron Bailey||

    Robert: See also Hunter's comments above.

  • ||

    "Oh, but did the scientists think of this?!?" Thank God for those people, I'm sure no scientist would have ever considered one of the fundamental aspects of their field in their research.

    "Gee, what about the sun, that has an effect on climate?!?"

    "Gee, what about the margin of instrument error, did anyone think of that?!?"

    Here's to you, Mr. Internet Commentator Who Passed High School Chemistry Trying to Teach Scientific Experts How Science Works!

  • Robert||

    What did the Bayer researchers expect? Good leads from a single publication or lab? That's asking a lot!

  • James West||

    Excellent post. Also, this article has a bit of a 'the sky is blue' quality to it. Those of us who do science for a living already know that the literature is only reliable in the sense that nobody's lying, not in the sense that you should trust the conclusions of any particular paper.

  • Johnfreedoe||

    Sometimes they work together (!!!)

    http://www.houstonpress.com/20.....the-hatch/

  • ||

    Awesome. We haven't had a really good nut punch since Balko left.

  • Todd||

    The problem, along with crummy motivations for more gummint cash, is the crap protocols used in academia. I'm a career biotech guy, but my last job, I decided to give this academic lab in a Big Name School a chance. They were sloppy by industry standards, though admittedly a lot better than your typical academic lab. (They also had WAY more money than your typical academic lab as well.) Still, there's just a lot of general sloppiness. As much as I loathe the FDA, Good Laboratory Practices regulation is a good thing, if only because it forces you to make sure everything is out in the open and all eventualities are accounted for. (Whether we need a FDA to push such regs is another issue.) In academia, it's not unusual to not have a clue on a deep level what's going on. "Secret sauce" reagents, poorly documented protocols, equipment that's unreliable...you name it, it happens. Throw in the small sample size, and the poor reproducibility makes perfect sense.

  • ||

    I guess this is my 'those darn kids' speech.

    My father was a metallurgist so I grew up in a household where science was religion.
    When I went to university my profs were fanatical about procedure, method etc., you know, back in horse and buggy days.

    Over the decades I have watched standards decline and social and political activists infiltrate the scientific community. I have watched Scientific American go from a respected, peer-reviewed journal to a fucking hippie rag; it isn't worth the paper it is printed on now. My own confidence in scientific findings has gone from rock-solid to nearly non-existent. (I still trust science; the scientific method and the skepticism inherent in good science are among the most profound discoveries/creations of man. The social activists masquerading as scientists these days, not so much.)

    When I was in university if 9 out of 10 research studies of any kind were unreplicatable it would have been a huge scandal and heads, many heads, would have rolled. Now, hardly anyone understands the significance of it.

    What government creeps and progressives have done to our culture is a crime. How bad does it have to get before we do something about this blight?

  • Robert||

    They were unreplicatable, you just didn't know it. You did things over until you were satisfied; how do you know that once the results were published, they'd come out the same in someone else's hands?

  • ||

    Because we put them in someone else's hands. Having one guy/team do something over and over? We had multiple groups doing replications before we could express any confidence in our findings. Publishing was done for the purpose of having others give it a go, not to impress some bureaucrat-shit head into giving money.

    In any case, I haven't been in a lab for decades.

  • Robert||

    What kind of science was it? Maybe you were in a field with a high ratio of signal to noise. As time goes on, there's less of that as the low hanging fruit gets picked.

  • ||

    Chem, and yes that is true.
    One more vodka and I am off to bed.

  • Untermensch||

    You can date the change in SciAm to 1993. They had a change in editorial board some time that year, and the change was dramatic and almost instantaneous. They dropped many of the quirky features they'd had that gave it life and replaced them with (very) thinly veiled political grandstanding. I used to subscribe and stopped that year because it wasn't worth it. About once a year I pick it up to see if it's gotten any better, and it never has. If anything, they've tried to turn it into Reader's Digest in the past few years, with a few bullet points about the articles for their illiterate readers.

    Of course, for real fun, my collection of Scientific American from 1893 that I inherited is great fun. It beats the crap out of anything else around now.

  • Robert||

    Academic protocols aren't sloppy, just idiosyncratic. It could hardly be otherwise when you have rapid turnover in investigators (grad students, post-docs) and investigations (grant funded). GLP is ok for what it does, but that's only an environment where you know in advance all of what you want to do; basic research needs flexibility.

  • Mendelism||

    Coming at this from the other side, I'm in an academic research environment in which it is basically not allowed to try to publish anything that has even a chance at eventually being found to be a false-positive. This is actually becoming the consensus (and a very welcome one) in human genetics, which has had more than its fair share of false-positive reports through the years. There was a wasteland of literature that is now actually being "cleaned up".

    But the result is that I'm in a competition for faculty spots, or positions in the pharma/biotech industries, against a lot of folks who do not share the same standards for what constitutes a "publishable" or "reportable" finding. My resume looks much weaker in comparison, and I am starting to see that I'll need to either oversell/overinterpret the results of studies that I've done, or accept that I'm going to be working for less capable but more dishonest scientists for the rest of my life. I got into science because I thought it was close to a "meritocracy". Not so. Fucking shame.

  • ||

    It is no help at all to tell you that I have found that being true to your principles is not very lucrative. I am very fortunate that I am able to stay true and pay the bills. I have seen more than a few hot-shit, do-anything-to-win types shoot past me only to go down in flames later on. I don't know how old you are or how long you have been in your profession, but over time you will no doubt see the same.
    I sleep like a baby at night.

  • Dan||

    This is the direct result of academic institutions relying on grant money to fund their research.

    A lot of grant money is provided with the understanding that the people that provided it expect a certain result from a study. And there is tremendous pressure on the researchers to produce those results in order to keep the grant money flowing.

    It's almost if not completely impossible to be impartial and objective when your very livelihood is dependent upon producing a desired result.

  • James West||

    NIH never expects a particular result. However, I do have to say that I've got a grant expiring this year for which we found the central hypothesis wrong at worst and weak at best. Needless to say, we're not trying to renew it.

  • TiggyFooo||

    Junk Science sounds pretty cool to me dude.


  • C. S. P. Schofield||

    What I want to know, as regards cancer research, is how big an industry is the anti-smoking crusade, where does the money come from, where does it go, and how well accounted-for is it?

    Anything that doesn't match the "tobacco companies are Beelzebub" and "you can get cancer by LOOKING at a cigarette" narrative is ruthlessly dissected as regards who paid for it. It seems to me that what is sauce for the goose would make fine gravy for the gander.

  • Astra||

    I am an astrophysicist. There is increasing pressure in the physical sciences to push out papers to get more grant money, but the problem appears to be much worse in biomedical science. They publish results that would be rejected in my field as statistically insignificant. They do a terrible job of handling systematic uncertainties.

    I tend to trust results in Nature and Science less than those in the main journals for each subfield. Those two try to publish sexy results and will therefore push out papers that are on the hairy edge. Of course their batting average will be lower.

  • James West||

    I've a doctorate in physics, but switched to biomedical almost 20 years ago - I do remember being shocked by the difference in culture. In physics, we were happy to have one really carefully done paper every two years. Now, something's wrong if I don't have fifteen new publications per year. Moreover, you CAN NOT PUBLISH a carefully done study - the reviewers rapidly get bored and confused if you don't tell a simple, coherent story.

  • Hunter||

    Yeah, I was just in the lunch line with some guys from a lab I collaborate with when the comment came up: 'just because it is in Nature doesn't mean it's wrong'.

  • Danno||

    End taxpayer support for all research. Did Tesla get taxpayer money???

  • James West||

    The problem is that research without taxpayer support is slow and erratic, and the public benefits of research are sufficient that it would do substantial harm to have our research so hindered. Research -will- be done by private companies, but they won't share it with each other, which means that the public will pay for it over and over and over again. It's just cheaper to have it taxpayer supported.

  • sciencenerd||

    As a medical researcher and a Libertarian, I could identify with much that was said in this article. The lab I work in has tried to reproduce experiments done in other labs and we have been unable to do so. Suspicions of why vary from wondering whether the authors of the paper left out certain steps or details in the publication, to fears of outright fraud on the part of the research authors. It is a fact that negative results are rarely published in journals, although negative data can be quite informative too. I, personally, do not think that government funded research, be it through grants given to academia or that done by professional scientists working for the government, is inherently less truthful, reproducible or sound than that done by the private sector. Many times, the government funded research consists of pilot studies that are, at best, suggestive of methods that might work in industry. It's up to industry to optimize the methods and make them reproducible. That's what industry does.

  • sciencenerd||

    At one time, NIH research grants and intramural programs encouraged researchers to pursue basic science questions that many times did not have immediate or direct commercial applications. This is no longer true. The NIH is now encouraging all government funded research to be “translational”. That is, they are trying to have the NIH do basic research that can be picked up by private industry in order to make a commercial product or test with very little optimization. I think that this is unfortunate. Some of the most important medical research questions do not have immediate commercial applications. That does not mean that the information will not be helpful to industry down the road. Industry will not do these types of experiments because of the economics involved. This is truly a place where government funded science is important because these types of experiments are what lead to new methods, and looking at an immediate, direct commercial application should not be all that is driving the science.

  • Jameskep||

    Confirmation bias much, Mr. Bailey? I encourage readers to actually read the AMA review that Mr. Bailey cites. First, the sentence he cites is talking about industry-funded trials (which includes academia-based researchers) and not "industry research." Second, the sentence he is talking about is embedded within a section discussing bias in industry-sponsored trials. For example, evidence "support[s] the notion that conclusions are generally more positive in trials funded by the pharmaceutical industry, in part due to biased interpretation of trial results." Third, Mr. Bailey starts his writing focusing on cancer trials and then pulls the nice trick of using this example to malign the whole academic system. Again, confirmation bias much?

    We get it. You think academia is filled with perverse incentives but industry, because its incentives are purely monetary, does not have perverse incentives. That does not make sense. The truth is -- there is bias in both and both serve a great purpose. Let's focus on the flaws of both and how to improve them.

  • دردشه عراقية||

    Thanks
