Following The Money In Science—Does It Matter?
Yesterday, the Washington Post ran an interesting op/ed that questioned the recent spate of stories claiming that drug company and other industry money is corrupting scientists. Written by Harvard Med School professor Thomas Stossel and Mass General endocrinologist David Shaywitz, the op/ed argues:
There is little hard evidence showing that financial ties between university or government researchers and drug companies create health hazards for consumers. Nevertheless, these links are now widely portrayed as dangerous, corrupting the pursuit of scientific truth and threatening the public…This issue surfaced recently, and ominously, when the consumer watchdog group Public Citizen investigated the biases and financial conflicts among experts serving on the FDA's drug advisory committees. The FDA uses these committees -- staffed by university researchers and other top experts in a field -- to evaluate new drugs and make recommendations about their approval. More often than not, the FDA follows those recommendations.
Public Citizen reviewed 221 committee meetings from 2001 to 2004. The study found that although about a third of advisory committee members had ties to drug companies (FDA requires disclosure of such connections), those links had no significant impact on whether particular drugs received approval. In other words, there was no smoking gun.
Stossel and Shaywitz contentiously conclude:
Medical care available to Americans is immensely better today than when we began our careers in medicine, in large measure because physicians have far superior technology at their disposal. And while much of the knowledge underlying these developments originated in universities, it was biotechnology firms and other companies that transformed this knowledge into the new drugs and devices that have proved so useful to the public. Little of this technology -- be it vaccines for hepatitis, heart valves, or new anti-inflammatory drugs for rheumatoid arthritis -- was developed by scholars and researchers without supposed conflicts of interest. And none of it came from advocacy organizations such as Public Citizen or their boosters at JAMA.
Whole thing here.
Some of my views on the demonization of the pharmaceutical industry here.
And some of my views on shortsighted pharmaceutical industry shenanigans here.
Disclosure: Yes, yes, as you all know, I own small amounts of stock in various biomedical companies. If you think that affects the accuracy and fairness of my analysis of these issues you're entitled to your opinion, erroneous though it is.
Damn Ron,
Counterpointing the ad hominem attackers is half the fun of coming here. Why spoil our fun with pre-emptive acknowledgement?
Here are a couple choice sentences from a San Jose Mercury News article:
I must say that those two sentences stood out like two sore thumbs as I was reading the otherwise informative and unjarring article. If only the second disclaimer appeared, I don't think it would have looked so out of place. The first one is the real irritant, and the second only underscores the irritation.
This issue surfaced recently, and ominously, when the consumer watchdog group Public Citizen investigated the biases and financial conflicts among experts serving on the FDA's drug advisory committees.
True story: in college, when I was very idealistic and always ready to donate money and time to what I considered 'good causes,' I also sent in some money once for a subscription to "Public Citizen." Total number of issues I subsequently received: zero.
I do think that there is a lot of evil to be said about the pharmaceutical industry, but I'll give Big Pharma this much credit: when you give them money in exchange for something you don't actually need, by God they give you something you don't actually need in exchange for your money! Unlike certain magazines that I could mention but won't.
... you're entitled to your opinion, erroneous though it is.
Hmmmmmmmmmm.....
So oversight of the medical cartel is not influenced by who is on the medical cartel payroll. OK.
Now 'splain this. Marijuana is an anti-depressant. The medical cartel sells a lot of anti-depressants. Members of the medical cartel are some of the biggest donors to "The Partnership for a Drug Free America". Can you say rent seeking? I knew you could.
Depressing.
Ron,
You may also want to disclose that you benefit from economic growth and technological advances. It may bias your analysis. 😀
Public Citizen thinks all research should be funded by government because, unlike the pharma companies, government research is absolutely unbiased and has no political/economic agenda.
Oh right.
As a scientist, I think these worries are overblown. While subtle biases in the direction of the money will exist, in general, the following holds for virtually all scientific scenarios:
1: A scientist's actual pay or salary has virtually no correlation with specific results, especially in a direct, short-term way.
2: A scientist, at a personal level, is much more concerned with the success of his or her idea and the status it brings than any trivial difference in long-term pay he or she may receive. THIS bias is a much bigger problem.
3: Science's system of peer-review is largely self-correcting, and making a mistake is deadly for one's career. Likewise, CATCHING somebody making a mistake is a huge boost for your career. This strongly limits the extents to which bias can occur.
4: In general, psychological studies have shown that YOU are more biased than you think, but everyone else is a lot LESS biased than you think. This applies here as it does anywhere else claims of bias pop up.
Chad-
I more or less agree with you regarding bias and published original studies. I still think it's good if very significant results are confirmed by groups with distinct funding sources, but I more or less agree with you on the self-correcting features of science.
The bigger problem is probably when scientists act as consultants and render their expert opinion, issue recommendations, or summarize the broader findings of the field. If something is plausible then I could easily come up with quantitative estimates to show how plausible it is, cite supporting studies, and outline the positive things that will happen if this something really works. ("Something" could be a drug, a technology, an argument for why a substance is not harmful at a particular dose, etc.)
But lots of plausible ideas turn out to be wrong. If a scientist lends his credibility and reputation to a hypothesis in a consulting role, that could be a good way to mislead people. Especially if the estimates and plausibility arguments are delivered in marketing and lobbying language, rather than the cautious language of science ("We hypothesize that..." "It appears that..." "Data is consistent with..." "There is a correlation..."), and if the caveats are left out ("But further work is needed..." "While this is consistent with our hypothesis we cannot rule out..." "A direct measurement would require...").
There's a big difference between outlining the plausibility arguments in favor of a hypothesis and publishing actual experiments that test the hypothesis. I think the culture and processes of science are pretty good at policing us when we engage in the process of publishing experiments. Not perfect, of course, but pretty good. But scientists who step outside that process to present plausibility arguments in a consulting role may be a little more dubious.
I don't know how often scientists do the thing that I just outlined, but I think we should remember the distinction here, and remember that peer review and replication are sober processes that don't always happen in consulting relationships.
In summary: Argument ad funderam shouldn't carry much weight when evaluating peer-reviewed results that have been independently replicated. But argument ad funderam should be taken into consideration when "experts issue a report surveying the latest results." That leaves far more room for cherry-picking.
For instance, my thesis had four chapters. Three of them reported my results, complete with detailed descriptions of methods, error bars, all necessary caveats, etc.
But the first chapter was my overview of the field. It was my take on the state of the field, what you need to know to really understand my results, what the open questions are, and what the significant results are. It was a very subjective piece of writing, because I would say things like "Most people present [insert a really technical thing here] as the surprising phenomenon, but in fact [insert phenomenon here] is just a consequence of a much more general characteristic of wave propagation. The real surprise is [insert the other half of that phenomenon here]." I totally editorialized.
My statements weren't contradictory to established results, and none of my committee members objected. My thesis is a testament to the fact that I contributed something to the field, so I've earned the right to stake out some territory and weigh in on the relative significance of various results, as well as the best way to understand things. However, it would be a huge mistake for an outsider to take the opinions expressed in my thesis as the final word on what's hot and what's not in the optics of disordered media.
My published results are subject to peer review and replication. My introductory thesis chapter, while appropriate in context, is a very different beast. The same distinction should be made between an expert's peer-reviewed and replicated findings, and his recommendations based on the current state of knowledge. The latter may be subject to considerable bias deriving from any number of directions.
If it is possible for a scientific report to present an incomplete picture, is it possible for a non-scientist writing about scientific issues to present an incomplete picture? Particularly if that writer has certain political and ideological biases?
You guys can go ahead and pretend that I'm referring to the people who write for Scientific American.
I agree with you, thoreau. If you want to see a great example of what you are talking about, did you happen to see the recent report by the surgeon-general about second-hand smoke? There is so much difference between the report summary (for reporters, politicians, and the public) and the actual data that I just about flipped a lid.
I have no doubt that in this case, the line between science and political advocacy had been crossed (by our very own paid officials).
Fortunately, both sides do this so it more-or-less cancels. Also, in these cases, money is less important than one's own personal biases. The expert on global warming paid by Exxon or the Sierra Club (or even the EPA, whose workers lean way left) is far more biased because of their deep-rooted personal beliefs than a few dollars of hypothetical grants they might be able to get if they manipulated the data.
Fortunately, both sides do this so it more-or-less cancels.
Except that clashes of that sort often produce more heat than light. When biased people spew biased reports and talk past each other, people who don't understand the underlying science are more likely to take a side based on their allegiances and biases rather than the merits of the arguments put forth. Which may be inevitable anyway, but it's sad.
There are processes in this world for resolving conflicting information: Markets, where everybody has to put their money where their mouth is, are said to be good at predicting weather (e.g. prices of wheat futures influenced by weather). Peer review and publishing tend to act as filters, since the good studies are replicated and then cited, while the bad results are never replicated and everybody moves on. The blogosphere is good at fact-checking when the matter in dispute is initially obscure but the answer is easily verified: Evidence is linked to, and anybody can examine the link and compare it with the claims being made. Polls are said to give reliable answers on questions that don't have a strong emotional or ideological component (e.g. ask how many marbles are in jars).
But all of those processes involve people doing more than just reciting assertions. They either have to replicate something, provide easily verified evidence, take risks in the market, or put aside emotions. But dueling experts just have to employ rhetoric.
There may be no answer to this problem. It may be unsolvable. But the fact that "both sides do it" doesn't really improve the situation. It may not make it any worse, but it surely doesn't improve it.
Out of curiosity, what's your area of research, Chad?
So I guess no one ever withheld evidence of problems with Vioxx and still had shills in the FDA to approve and continue to support it in the face of evidence to the contrary.
There was an article in our local paper about the bottleneck that new drugs face: the unwillingness of the insurance companies to pay for "experimental" treatment, under which label they clump all new drugs.
Lack of market can be the greatest dampener of innovation there is.
Of course, they can call on their legislators and strike a deal with Medicare or Medicaid to get themselves a market...