Getting Risk Right: Understanding the Science of Elusive Health Risks, by Geoffrey Kabat, Columbia University Press, 248 pp., $35.00
Eating bacon and ham four times a week could make asthma symptoms worse. Drinking hot coffee and tea may cause cancer of the esophagus. South Africa's minister of health warns that doggy-style sex is a major cause of stroke and cancer in men. And those claims are just drawn from the health headlines this week.
The media inundate us daily with studies that seem to show modern life is increasingly risky. Most of those stories must be false, given that life expectancy has risen since 1990 from 71.8 to 76.3 years for American men and from 78.8 to 81.1 years for American women. Apparently, we are suffering through an epidemic of bad epidemiology.
When it comes to separating the wheat of good public health research from the chaff of studies that are mediocre or just plain bad, Albert Einstein College of Medicine epidemiologist Geoffrey Kabat is a national treasure. "Most research findings are false or exaggerated, and the more dramatic the result, the less likely it is to be true," he declares in his excellent new book Getting Risk Right. Kabat's earlier book, the superb Hyping Health Risks, thoroughly dismantled the prevalent medical myths that man-made chemicals, electromagnetic fields, radon, and passive smoking were significant causes of such illnesses as cancer and heart disease. His new book shows how scientific research, particularly epidemiology, so often goes wrong—and, importantly, how hard it is for it to go right.
Kabat first reminds readers that finding a correlation between phenomena X and Y does not mean that X causes Y. Nevertheless, many researchers are happy to overinterpret such findings to suggest causation. "If researchers can slip into this way of interpreting and presenting results of their studies," observes Kabat, "it becomes easier to understand how journalists, regulators, activists of various stripes, self-appointed health gurus, promoters of health-related foods and products, and the public can make the unwarranted leap that the study being reported provides evidence of a causal relationship and therefore is worthy of our interest."
From there he moves to some principles that must be kept in mind when evaluating studies. First and foremost is the toxicological maxim that the dose makes the poison. The more exposure to a toxin, the greater the harm. Potency matters greatly too. Often very sensitive assays show that two different compounds can bind to the same receptors in the body, but what really matters biologically is how avidly and how strongly one binds compared to the other.
Another principle: Do not confuse hazard, a potential source of harm, with risk, the likelihood that exposure to the hazard will cause harm. Consider bacon. The influential International Agency for Research on Cancer declared bacon a hazard for cancer last year, but the agency does not make risk assessments. Eating two slices of bacon per day is calculated to increase your lifetime risk of colorectal cancer from 5 to 6 percent. Put that way, I suspect most people would continue to enjoy cured pork products.
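The bacon numbers make the hazard/risk distinction concrete. A sketch of the arithmetic, using only the figures quoted in the review (the 5 and 6 percent lifetime risks), shows why the same finding sounds scary as a relative change and trivial as an absolute one:

```python
# Figures from the review: eating two slices of bacon per day raises
# lifetime colorectal cancer risk from 5 percent to 6 percent.
baseline_risk = 0.05  # lifetime risk without daily bacon
bacon_risk = 0.06     # lifetime risk with two slices per day

# Absolute increase: one extra case per hundred people.
absolute_increase = bacon_risk - baseline_risk

# Relative increase: the "20 percent higher risk" a headline would report.
relative_increase = (bacon_risk - baseline_risk) / baseline_risk

print(f"Absolute increase: {absolute_increase:.0%}")
print(f"Relative increase: {relative_increase:.0%}")
```

The identical data yield a 1-point absolute bump or a 20 percent relative jump; which framing a story chooses largely determines how alarming it sounds.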
Kabat also argues that an editorial bias skews the scientific literature toward publishing positive results suggesting harms. Such findings, he notes, get more attention from other researchers, regulators, journalists, and activists. Ever since Rachel Carson's 1962 book Silent Spring wrongly linked cancer to exposures to trace amounts of pesticides, the American public has been primed to blame external causes rather than personal behaviors for their health problems. Unfortunately, as Kabat notes, the existence of an alarmed and sensitized public is all too useful to scientists and regulators. He quotes an honest but incautious remark in the air pollution researcher Robert Phalen's testimony to the California Air Resources Board: "It benefits us personally to have the public be afraid, even if these risks are trivial."
Kabat suggests that the precautionary principle—"better safe than sorry"—is largely an ideological ploy to alarm the public into supporting advocates' policy preferences. He also decries "the simplistic notion that the 'consensus among scientists' is always correct." He notes that the scientific consensus once held that ulcers were caused by spicy foods and stress instead of bacteria, and that estrogen-progestin therapy protected post-menopausal women against heart disease instead of increasing their risk of breast cancer. "The history of medical science is littered with long-held dogmas that, when confronted by better evidence, turned out to be wrong," Kabat observes.
Kabat then offers two case studies in how epidemiology has been misused. The first involves cell phones. After a couple of decades of research, the bulk of epidemiological evidence has found that cell phones have not increased the incidence of brain cancer, although a recent experiment reported that exposure to cell tower radio waves for nine hours per day boosted cancer in male, but not female, rats. Despite the overwhelming evidence that cell phones are safe to use, the city council of Berkeley, California, succumbed to activist scaremongering and passed an ordinance last year requiring cell phone retailers to warn consumers not to carry their cell phones in pants or shirt pockets or tucked into bras.
Kabat's second example involves "endocrine disruption," an idea attributing ill effects to man-made substances, such as the plastic softener Bisphenol A (BPA), that supposedly mimic the behavior of hormones like estrogen and testosterone. Exposure to such substances has allegedly produced epidemics of lower sperm quality and hypospadias, in which the opening of the urethra is on the underside of the penis.
This hypothesis developed after the discovery that the daughters of women who took therapeutic doses of the synthetic estrogen DES to prevent miscarriages had a higher risk of vaginal cancer. To make a long story short, BPA also binds with estrogen receptors, but the amount of DES to which people were exposed was 100,000 times greater than average BPA exposure today. On top of that, BPA's potency is 10,000 times lower than that of DES, which means the estrogenic effect of current BPA exposures is roughly 1 billion times weaker than that of the DES exposures associated with increased risks of cancer.
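The billion-fold figure is just the two ratios multiplied together. A back-of-the-envelope check, using only the two numbers the review cites (100,000-fold exposure gap, 10,000-fold potency gap):

```python
# Ratios quoted in the review's DES vs. BPA comparison.
exposure_ratio = 100_000  # DES doses were ~100,000x average BPA exposure today
potency_ratio = 10_000    # BPA binds estrogen receptors ~10,000x more weakly

# Exposure gap and potency gap compound multiplicatively, so the
# estrogenic effect of current BPA exposure is smaller by their product.
effect_gap = exposure_ratio * potency_ratio

print(f"Effect gap: {effect_gap:,}-fold")  # 1,000,000,000-fold, i.e. one billion
```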
In the face of such minuscule exposures and weak effects, proponents of the endocrine-disruption theory throw away the maxim that the dose makes the poison. Instead, they propose the novel notion that small doses might actually have bigger effects than larger doses. They even claim to have experiments to prove this. Unfortunately, nobody outside of their insular world has been able to replicate their studies. In a 2013 review article, a group of toxicologists damningly concluded, "Taking into account the large resources spent on this topic, one should expect that, in the meantime, some endocrine disruptors that cause actual human injury or disease should have been identified." Yet "with the exception of natural or synthetic hormones, not a single, man-made chemical endocrine disruptor has been identified that poses an identifiable, measurable risk to human health."
What about falling sperm counts and the alleged increase in deformed penises? Kabat reports that the research has not actually found that sperm counts are down. A 2010 review concluded that "the epidemiologic data on this issue amassed to date clearly demonstrates that the bulk of evidence refutes claims for an increase in hypospadias rates."
Turning from the cell phone and endocrine disruption scares, Kabat shows how good epidemiology can identify the real causes of real diseases. He traces how researchers linked renal failure in Belgian women to a Chinese herbal weight loss concoction mistakenly adulterated with the Aristolochia plant, which contains a toxin peculiarly damaging to kidneys. An American researcher then connected the Belgian cases to an epidemic of renal failure among farmers in the Balkans. It turns out that the farmers ate bread made from their own wheat, which was grown in fields infested with Aristolochia.
Kabat also recounts how researchers determined that human papilloma virus (HPV) is the chief cause of cervical cancer. This process began when a physician in the 1960s figured out that a type of lymphoma afflicting African children must be associated with some infectious disease. Kabat traces the epidemiological and experimental work that led to the finding that HPV causes about 5 percent of all cancers in the world. A woman infected with HPV is at a 100- to 500-fold greater risk of getting cervical cancer than a woman without such an infection. (By comparison, a smoker is at a 20- to 50-fold greater risk for lung cancer than a nonsmoker.) Thanks to these discoveries, there is now a vaccine that can prevent this scourge. Given the risks, any parent who does not have his or her children vaccinated against HPV is a fool.
"As we have seen," concludes Kabat, "the landscape in which health risks are studied and in which findings are disseminated is pervaded by false claims, oversold results, biases operating at the level of observational studies as well as psychological and cognitive biases, and professional and political agendas." Getting Risk Right is a potent antidote to the toxic misinformation polluting our public health discourse.