What the Doctor Orders

By treating risky behavior like a communicable disease, the public health establishment invites government to meddle in our private lives.


In the introduction to the first major American book on public health, U.S. Army surgeon John S. Billings explained the field's concerns: "Whatever can cause, or help to cause, discomfort, pain, sickness, death, vice, or crime–and whatever has a tendency to avert, destroy, or diminish such causes–are matters of interest to the sanitarian; and the powers of science and the arts, great as they are, are taxed to the uttermost to afford even an approximate solution to the problems with which he is concerned."

Despite this ambitious mandate–and the book's impressive length (nearly 1,500 pages in two volumes)–A Treatise on Hygiene and Public Health had little to say about the issues that occupy today's public health professionals. There were no sections on smoking, alcoholism, drug abuse, obesity, vehicular accidents, mental illness, suicide, homicide, domestic violence, or unwanted pregnancy. Published in 1879, the book was instead concerned with such things as compiling vital statistics, preventing the spread of disease, abating public nuisances, and assuring wholesome food, clean drinking water, and sanitary living conditions.

A century later, public health textbooks discuss the control of communicable diseases mainly as history. The field's present and future lie elsewhere. "The entire spectrum of 'social ailments,' such as drug abuse, venereal disease, mental illness, suicide, and accidents, includes problems appropriate to public health activity," explains Jack Smolensky in Principles of Community Health (1977). "The greatest potential for improving the health of the American people is to be found in what they do and don't do to and for themselves." Similarly, Introduction to Public Health (1978), by Daniel M. Wilner, Rosabelle Price Walkley, and Edward J. O'Neill, notes that the field, which once "had much narrower interests," now "includes the social and behavioral aspects of life–endangered by contemporary stresses, addictive diseases, and emotional instability."

The extent of the shift can be sensed by perusing a few issues of the American Public Health Association's journal. In 1911, when the journal was first published, typical articles included "Modern Methods of Controlling the Spread of Asiatic Cholera," "Sanitation of Bakeries and Restaurant Kitchens," and "The Need of Exact Accounting for Still-Births." Issues published in 1995 offered "Menthol vs. Nonmenthol Cigarettes: Effects on Smoking Behavior," "Correlates of College Student Binge Drinking," and "Violence by Male Partners Against Women During the Childbearing Year: A Contextual Analysis." The journal also covers strictly medical issues, of course, and even runs articles on traditional public health topics such as vaccination, nutrition, and infant mortality. But the amount of space taken up by studies of social problems and behavioral issues is striking.

In a sense, the change in focus is understandable. After all, Americans are not dying the way they once did. The chapter on infant mortality in A Treatise on Hygiene and Public Health reports that during the late 1860s and early 1870s, two-fifths to one-half of children in major American cities died before reaching the age of 5. The major killers included measles, scarlet fever, smallpox, diphtheria, whooping cough, bronchitis, pneumonia, tuberculosis, and "diarrheal diseases." Largely because of such afflictions, life expectancy at birth was only 49 in 1900, compared to roughly 75 today, while the annual death rate was 17 per 1,000, compared to about half that today. Beginning in the 1870s, the discovery that infectious diseases were caused by specific microorganisms made it possible to control them through vaccination, antibiotics, better sanitation, water purification, and elimination of carriers such as rats and mosquitoes. At the same time, improvements in nutrition and living conditions increased resistance to infection. Although it is difficult to separate the effects of public health programs from the effects of rising affluence and changing patterns of work, there's no question that disease-control efforts have had an important impact on the length and quality of life.

Americans no longer live in terror of smallpox or cholera. Despite occasional outbreaks of infectious diseases such as rabies and tuberculosis, the fear of epidemics that was once an accepted part of life is virtually unknown. The one major exception is AIDS, which is not readily transmitted and remains largely confined to a few high-risk groups. For the most part, Americans die of things you can't catch: cancer, heart disease, trauma. Accordingly, the public health establishment is focusing on those causes and the factors underlying them. Having vanquished most true epidemics, it has turned its attention to metaphorical "epidemics" such as smoking, obesity, and suicide. Along the way, the public health establishment has become the most influential lobby for ever-increasing government control over Americans' personal choices.

By the late 1930s, with the importance of infectious diseases declining, public health specialists started taking an interest in chronic conditions. "The public health establishment requires an issue that has salience in the public press, outside of the scientific community, in order to maintain support," says Barbara Rosenkrantz, professor emeritus of the history of science at Harvard. "Public health only becomes interesting when there are real problems."

The interest in chronic diseases led to lifestyle-oriented medical research that began in earnest after World War II. The Framingham heart study, a large-scale, long-term project that has helped identify risk factors for heart disease, began in 1948 and produced its first report in the early 1950s. The first influential studies linking cigarette smoking to lung cancer appeared about the same time. Philip Cole, a professor of epidemiology at the University of Alabama, Birmingham, argues that concern about smoking and other causes of non-infectious disease is "very much within the classical tradition of public health, even though it does not speak to the issue of contagion." He distinguishes this interest, manifested in the work of researchers and the recommendations of physicians, from "a continued effort on the part of government to usurp control of individuals' lives," a trend that worries him.

But the concerns of public health practitioners have a way of influencing public policy. Surgeons general of the U.S. Public Health Service have become official nags, urging us to shape up so we can reach the health goals they have set for the nation. The wide domain of public health allows them to champion whatever causes interest them. C. Everett Koop said we should achieve "a smoke-free society" by the year 2000. Antonia Novello condemned liquor and beer advertising that she found distasteful. Joycelyn Elders pontificated about gun control and masturbation. The circumstances of Elders's departure and the battle over her successor show that both liberals and conservatives take the top public health job quite seriously.

The key event that elevated the surgeon general's prestige and visibility occurred in 1964, when Luther M. Terry released the report of his Advisory Committee on Smoking and Health. The document, which declared that "cigarette smoking is a health hazard of sufficient importance in the United States to warrant appropriate remedial action," heralded the decline of the U.S. tobacco industry and the beginning of the contemporary anti-smoking movement. It helped put risky behavior at the top of the public health agenda.

The involvement of physicians in the auto-safety movement of the late 1960s and early '70s also helped legitimize the expansion of public health. "The automobile is the etiological agent in an epidemic accounting for some 50,000 deaths and 4 million injuries each year," wrote Seymour Charles of Physicians for Automotive Safety and John States of the American Association of Automotive Medicine in the July 4, 1966, issue of the Journal of the American Medical Association. They cited drunk-driving laws, seat-belt requirements, and other mandated design changes as examples of "preventive medicine." Today, advocates of a public health approach to violence cite the medicalization of traffic accidents as a precedent.

The establishment of Medicare and Medicaid in 1965 reinforced the argument that government should take an interest in the personal habits of its citizens because risky behavior might affect the public treasury. In a 1976 essay commissioned by Time, Dr. John H. Knowles, president of the Rockefeller Foundation, reviewed the rise of taxpayer-funded health insurance and declared that "the cost of sloth, gluttony, alcoholic overuse, reckless driving, sexual intemperance, and smoking is now a national, not an individual responsibility." Writing in Daedalus the following year, he said, "I believe that the idea of a 'right' to health should be replaced by the idea of an individual moral obligation to preserve one's own health–a public duty if you will."

In 1979, the surgeon general released Healthy People, a report that broke new ground by setting specific goals for reductions in mortality. "We are killing ourselves by our own careless habits," wrote Joseph Califano, then secretary of health, education, and welfare, calling for "a second public health revolution" (the first being the triumph over infectious diseases). Healthy People, which estimated that "perhaps as much as half of U.S. mortality in 1976 was due to unhealthy behavior or lifestyle," advised Americans to quit smoking, drink less, exercise more, fasten their seat belts, stop driving so fast, and cut down on fat, salt, and sugar. It also recommended motorcycle-helmet laws and gun control to improve public health.

Healthy People drew on a "national prevention strategy" developed by the U.S. Center for Disease Control (now the Centers for Disease Control and Prevention), an agency whose evolution reflects the expanding interests of public health. Established during World War II as a unit of the U.S. Public Health Service charged with malaria control in war areas, it became the Communicable Disease Center in 1946. By the end of the 1950s the CDC had acquired exclusive federal authority over communicable diseases. In the 1960s it took on a wide range of projects, including family planning and overseas smallpox control. In the early to mid-1970s it absorbed the Public Health Service's nutrition program, the National Institute of Occupational Safety and Health, and the National Clearinghouse on Smoking and Health. It also took over federal programs dealing with lead poisoning, urban rat control, and water fluoridation. In the late 1970s the CDC drew up a list of its main priorities, the most serious health problems facing the country. The list included smoking, alcohol abuse, unwanted pregnancies, car accidents, workplace injuries, environmental hazards, social disorders, suicide, homicide, mental illness, and stress. Today only one of the CDC's seven "centers" deals with the agency's original task, control of infectious diseases.

A cynic would view the CDC's growth as a classic example of bureaucratic survival: If the problem you were charged with solving starts to get better, find new problems that will continue to justify your budget. More generally, it is easy to dismiss public health's ever-expanding agenda as a bid for funding, power, and status. Yet the field's practitioners argue, with evident sincerity, that they are simply adapting to changing patterns of morbidity and mortality. Without speculating about motivation, we can ask whether it makes sense to apply the methods of disease control to problems that are not caused by germs. If it doesn't, much of what is done in the name of public health today is seriously misguided.

The question is especially important because public health generally implies government action. That used to mean keeping statistics, imposing quarantines, requiring vaccination of children, providing purified water, building sewer systems, inspecting restaurants, regulating emissions from factories, and reviewing drugs for safety. Nowadays it means, among other things, raising alcohol taxes, restricting cigarette ads, banning guns, arresting marijuana growers, and forcing people to buckle their seat belts. These measures are attempts to control illness and injury by controlling behavior thought to be associated with them. The idea is straightforward: Less drinking means less cirrhosis of the liver, less smoking means less lung cancer, less gun ownership means less suicide.

Whether or not these expectations are justified, treating behavior as if it were a communicable disease is problematic. First of all, behavior cannot be transmitted to other people against their will. Second, people do not choose to be sick, but they do choose to engage in risky behavior. The choice implies that the behavior, unlike a viral or bacterial infection, has value. It also implies that attempts to control the behavior will be resisted.

Resistance to public health measures is not new. Such interventions were often criticized, sometimes justifiably, as inappropriate exercises of government power. But in the past, public health officials could argue that they were protecting people from external threats: carriers of contagious diseases, fumes from the local glue factory, contaminated water, food poisoning, dangerous quack remedies. By contrast, the new enemies of public health come from within; the aim is to protect people from themselves rather than each other. The implications of this distinction can be better understood by considering a few of the "epidemics" that have taken the place of smallpox and cholera.

Smoking. In the public health literature, smoking is not an activity or even a habit. It is "the greatest community health hazard," "the single most important preventable cause of death," "the plague of our time," "the global tobacco epidemic." The disease metaphor has been used so much that it is now taken literally. The foreword to the 1988 surgeon general's report on nicotine addiction informs us, "Tobacco use is a disorder which can be remedied through medical attention." This definition was not always so casually accepted. In the 1977 monograph, Tobacco Use as a Mental Disorder, published by the National Institute on Drug Abuse, Jerome H. Jaffe noted that "a behavior that merely predisposes to other medical illnesses is not necessarily, in and of itself, a disease or disorder….We certainly would not want to consider skiing as a mental disorder, although it clearly raises the likelihood of developing several well-defined orthopedic disorders. Risk taking, per se, is not a mental disorder."

Yet today public health professionals consider smoking itself a disease, something inherently undesirable that happens to unwilling victims. "Free will is not within the power of most smokers," writes former CDC Director William Foege in "The Growing Brown Plague," a 1990 editorial in the Journal of the American Medical Association. If it were, they certainly would choose not to smoke. As Scott Ballin, chairman of the Coalition on Smoking or Health, explains, "The product has no potential benefits….It's addictive, so people don't have the choice to smoke or not to smoke."

These statements are part of a catechism intended to explain why so many people continue to smoke, when clearly they shouldn't. That catechism does not admit the possibility that smoking might offer some people benefits that in their minds outweigh its hazards. This blindness is inherent in the public health perspective, which seeks collective prescriptions that do not take account of individual tastes and preferences. It recognizes one supreme value–health–that cannot be trumped by other considerations.

Having promoted smoking from risk factor to disease, the public health establishment now targets alleged risk factors for smoking, most notably cigarette advertising. "If exposure to cigarette advertising is a risk factor for disease," writes Rep. Henry Waxman (D-Calif.) in a 1991 JAMA editorial, "it is incumbent on the public and elected officials to deal with it as we would the vector of any other pathogen." In other words, banning cigarette ads is like draining the swamps where the mosquitoes that carry malaria breed. That seems to be the assumption underlying the Clinton administration's proposed restrictions on tobacco advertising.

The alarm about the danger posed by cigarette advertising is based largely on well-publicized studies in medical journals that prove less than the researchers' conclusions and accompanying editorials imply. A typical example is the 1991 JAMA study cited by the Clinton administration. The researchers reported that 6-year-olds were as likely to match Joe Camel with a pack of cigarettes as they were to match the Disney Channel logo with Mickey Mouse. "Given the serious consequences of smoking," they wrote, "the exposure of children to environmental tobacco advertising may represent an important health risk…." But recognizing Joe Camel is not tantamount to smoking, any more than recognizing the logos for Ford and Chevrolet (which most of the kids in the study did) is tantamount to driving.

The same issue of JAMA carried an article reporting that Camel's market share among smokers under the age of 18 increased from 0.5 percent in 1988 to nearly 33 percent in 1991. The authors attributed the change to the Joe Camel campaign and concluded that "a total ban of tobacco advertising and promotions…can be based on sound scientific reasoning." Yet during the period covered by the study, smoking among minors actually fell. So while Joe Camel may have had something to do with the shift in brand preferences (a shift that also occurred in other age groups, though less dramatically), he cannot be blamed for convincing more kids to smoke.

Jean J. Boddewyn, a marketing professor at Baruch College who is skeptical of the alleged link between tobacco advertising and consumption levels, has argued that medical journals are not an appropriate venue for such research. Writing in the December 1993 issue of the Journal of Advertising, he suggests that medical editors and reviewers lack expertise in the area and are too quick to publish articles that reflect badly on the tobacco industry. "How would the [Journal of Advertising's] reputation fare," he wonders, "if it published an article on the health consequences of smoking, after asking only advertising specialists to review it?" Boddewyn also complains that articles on tobacco advertising in medical journals rarely refer to relevant sources outside the public health literature.

Alcohol Abuse. Like smoking, alcohol abuse is considered a disease within the public health field. Community Health (1978), by C.L. Anderson, Richard F. Morton, and Lawrence W. Green, calls alcoholism "an inborn defect of metabolism," while Introduction to Public Health defines it as "the progressive chronic illness characterized by habitual heavy drinking that interferes with numerous…aspects of an individual's life." This view of alcoholism remains controversial, but it has a long pedigree, dating back to Benjamin Rush in the 18th century. As Introduction to Public Health concedes, however, "alcohol in moderation appears to do the body no permanent harm."

Nevertheless, heavy taxation of alcoholic beverages is a standard public health prescription for alcohol abuse. Restrictions on sales and advertising are also popular. Advocates of such measures hope to reduce overall consumption of alcohol and thereby reduce alcohol abuse. "The cost of alcohol should be greatly increased by taxation," Community Health recommends. "There is an excellent correlation between a low relative price, high consumption, and high cirrhotic mortality in international comparisons." In fact, it is not clear that reducing overall consumption would have much of an impact on alcohol abuse. It is precisely the people who have the biggest problems with alcohol who would be most resistant to changes in price or availability. Furthermore, societies or ethnic groups with relatively low drinking rates may have relatively high rates of abuse, and vice versa.

Assuming a tax hike did reduce alcohol problems, it would not necessarily be justified even on utilitarian grounds. Since moderate drinkers far outnumber alcoholics, their foregone pleasure might well outweigh the benefits of less alcohol abuse. As in the case of smoking, the public health paradigm simply ignores this issue.

Drug Abuse. In the early 1970s a debate raged in academic journals about the merits of studying illegal drug use as if it were an epidemic. Critics of this approach noted that illegal drug use is not caught like a virus; it is volitional behavior. Proponents of the disease model said this didn't matter.

Community Health explains: "If drug abuse is seen as a practice that is transmitted from one person to another, it may be considered for operational purposes as a contagious illness. This approach makes it possible to apply to its study the methods and terminology used in the epidemiology of infectious diseases. In the epidemiological model the infectious agent is heroin, the host and reservoir are both man, and the vector is the drug-using peer….The disease presents all the well-known characteristics of epidemics, including rapid spread, clear geographic bounds, and certain age groups and strata of the population being more affected than others."

But these criteria apply to many phenomena that we do not treat as epidemics, including clothing fashions, recreational trends, rumors, jokes, and political ideas. Clearly, not everything that spreads from person to person is an epidemic.

Presumably, an epidemic has to be something bad. Oddly, however, bad is defined not by the individuals involved but by the epidemiologist. A happy, productive, well-adjusted user of illegal drugs is still sick, still part of an epidemic, even though he doesn't realize it. Alternatively, as former drug czar William Bennett has argued, the moderate drug user is an asymptomatic carrier–a Typhoid Mary–spreading misery to others by setting a bad example, even though he feels fine. (Indeed, by doing well while doing drugs, he is a more serious threat in this regard than the addict in the gutter.) Either way, the user has to be isolated and cured, whether he likes it or not.

To be fair, it should be noted that many public health specialists do not toe the official government line, which defines any use of illegal drugs as abuse. They often acknowledge that prohibition has side effects and that the distinctions made by the law do not necessarily correspond with the objective hazards of various drugs. Defining drug use as a public health problem, rather than a crime, they tend to support "harm reduction" measures such as legal availability of syringes and needles, the use of medical marijuana, and "treatment" as an alternative to prison.

Still, the medical model has coercive implications, especially if "denial" is understood as one component of the "disease." Smolensky says law enforcement agencies should "ignore the user, since his is a medical problem which requires prevention, therapy, or rehabilitation, and is not a criminal act." This raises the question of what happens when the user resists "prevention, therapy, or rehabilitation."

Obesity. If smoking, alcohol abuse, and illegal drug use can be diseases, surely obesity can. It carries substantial health risks, and people who are fat generally don't want to be. They find it difficult to lose weight, and when they do succeed they often relapse. When deprived of food, they suffer strong cravings and other withdrawal symptoms.

Recently, the "epidemic of obesity" has been trumpeted repeatedly on the front page of The New York Times. The first story, which appeared in July 1994, was prompted by a study from the National Center for Health Statistics that found the share of American adults who are overweight increased from a quarter to a third between 1980 and 1991. "The government is not doing enough," complained Assistant Secretary of Health Philip R. Lee. "We don't have a coherent, across-the-board policy." The second story, published last September, reported on a New England Journal of Medicine study that found gaining as little as 11 to 18 pounds was associated with a higher risk of heart disease–or, as the headline on the jump page put it, "Even Moderate Weight Gains Can Be Deadly." The study attributed 300,000 deaths a year to obesity, including a third of cancer deaths and most deaths from cardiovascular disease. The lead researcher, JoAnn E. Manson, said, "It won't be long before obesity surpasses cigarette smoking as a cause of death in this country."

If, as Assistant Secretary Lee recommends, the government decides to do more about obesity–the second most important preventable cause of death in this country (soon to be the first)–what would "a coherent, across-the-board policy" look like? As early as June 1975, in its Forward Plan for Health, the U.S. Public Health Service was suggesting "strong regulations to control the advertisement of food products, especially those of high sugar content or little nutritional value." But surely we can do better than that. A tax on high-fat foods would help cover the cost of obesity-related illness and disability, while deterring overconsumption of ice cream and steak. Of course, such a tax would be paid by the lean as well as the overweight. It might be more fair and efficient to tax people for every pound over their ideal weight. Such a market-based system would make the obese realize the costs they impose on society and give them an incentive to slim down. Last year I suggested this plan in National Review, and the magazine received a couple of letters from readers who took it seriously. Fortunately, they were outraged; but I can't shake the thought that somewhere in Washington a public health bureaucrat read the item and said, "Hmmm…."

Violence. In Sentinel for Health: A History of the Centers for Disease Control, Elizabeth W. Etheridge notes that CDC Director William Foege encountered resistance when he pushed the boundaries of public health in the late 1970s. "Of all the areas," she writes, "violence was the most controversial and the one the public health community found hardest to accept." The opposition to this idea is not surprising. Even people who are willing to redefine risky habits as diseases may be troubled by the denial of individual responsibility implicit in treating assault and murder like an outbreak of influenza.

Focusing on guns is one way of obscuring the moral issues raised by the public health approach to violence. As Chicago pediatrician Katherine Cristoffel, founder of the HELP (Handgun Epidemic Lowering Plan) Network, explained in a 1994 American Medical News article: "Gun violence should be treated like polio and tuberculosis and every other epidemic. Guns are a virus that must be eradicated." She drew a parallel with the campaign against smoking: "It is possible to ban guns. There's a precedent in cigarette smoking. Before the surgeon general's report, it was a moral issue, a personal rights issue. But once it was declared a public health issue, there was a dramatic change….Get rid of cigarettes, get rid of secondhand smoke, and you get rid of lung disease. It's the same with guns. Get rid of the guns, get rid of the bullets, and you get rid of the deaths."

This is not merely the opinion of a few wild-eyed activists. The public health establishment has consistently endorsed stricter gun control, treating firearms as a "risk factor," a "pathogen," a "social ill" to be minimized or eliminated. According to Healthy People, the 1979 surgeon general's report, "Measures that could reduce risk of firearm deaths and injuries range from encouraging safer storage and use to a ban on private ownership. Evidence from England suggests that prohibiting possession of handguns would reduce the number of deaths and injuries, particularly those unrelated to criminal assaults." In 1979 the CDC endorsed the goal of reducing the number of privately owned handguns, with an initial target of a 25 percent decrease by 2000. In 1992 C. Everett Koop declared violence "a public health emergency" and, in response, endorsed a national licensing system for gun owners.

Public health research on firearms, much of it funded by the CDC, has attracted a great deal of publicity, generating many of the factoids that supporters of gun control are fond of citing. Consider the claim that "a gun in the home is 43 times as likely to kill a family member as to be used in self-defense." This is based on a 1986 study by Arthur L. Kellermann and Donald T. Reay published in the New England Journal of Medicine. Examining gunshot deaths in King County, Washington, from 1978 to 1983, Kellermann and Reay found that, of 398 people killed in the home where the gun was kept, only two were intruders shot while trying to get in. "We noted 43 suicides, criminal homicides, or accidental gunshot deaths involving a gun kept in the home for every case of homicide for self-protection," they wrote. It's not a good idea, they suggested, to keep a gun at home.

But since Kellermann and Reay considered only cases resulting in death, which surveys indicate are a tiny percentage of defensive gun uses, this conclusion does not follow at all. "Mortality studies such as ours do not include cases in which burglars or intruders are wounded or frightened away by the use or display of a firearm," they conceded. "Cases in which would-be intruders may have purposely avoided a house known to be armed are also not identified." By leaving out such cases, Kellermann and Reay excluded almost all of the lives saved, injuries avoided, and property protected by keeping a gun in the home.

In contrast with the criminological literature, where scholars on both sides of the issue carry on a lively debate, studies published in the New England Journal of Medicine, JAMA, and other medical or public health journals almost invariably condemn gun ownership and advocate stricter gun control. As with the studies of tobacco advertising, the public health researchers rarely cite scholars from other disciplines, preferring to stay within a field where almost everyone agrees that guns are bad.

The public health research on gun ownership, like the research on other kinds of "unhealthy" behavior, is driven by the expectation that people will change their ways once they realize the risks they are taking. Healthy People notes that "formidable obstacles" stand in the way of improved public health. "Prominent among them are individual attitudes toward the changes necessary for better health," it says. "Though opinion polls note greater interest in healthier lifestyles, many people remain apathetic and unmotivated….Some consider activities to promote health moralistic rather than scientific; still others are wary of measures which they feel may infringe on personal liberties. However, the scientific basis for suggested measures has grown so compelling, it is likely that such biases will begin to shift." (Emphasis added.) In other words, only those ignorant of the scientific evidence could possibly oppose the public health agenda.

This assumption is central to the public health mentality. Back in 1879, John S. Billings stated it quite candidly: "By some writers, as Wilhelm von Humboldt and John Stuart Mill, it is denied that the State should directly attempt to improve the physical welfare of its citizens, on the ground that such interference will probably do more harm than good. But all admit that the State should extend special protection to those who are incapable of judging their own best interests, or of taking care of themselves, such as the insane, persons of feeble intellect, or children; and we have seen that in sanitary matters the public at large are thus incompetent."

Billings was defending traditional public health measures aimed at preventing the spread of infectious diseases and controlling health hazards such as rotting animal carcasses. It is reasonable to expect that such measures will be welcomed by the intended beneficiaries, once they understand the aim. The same cannot be said of public health's new targets. Even when they know about the relevant hazards (and assuming the information is accurate), many people will continue to smoke, drink, take illegal drugs, eat fatty foods, buy guns, speed, eschew seat belts and motorcycle helmets, and otherwise behave in ways frowned upon by the public health establishment. This is not because they misunderstand; it's because, for the sake of pleasure, utility, or convenience, they are prepared to accept the risks. When public health experts assume that these decisions are wrong, they do indeed treat adults like incompetent children.

One such expert, writing in the New England Journal of Medicine 20 years ago, declared, "It is a crime to commit suicide quickly. However, to kill oneself slowly by means of an unhealthy life style is readily condoned and even encouraged." The article prompted a response from Robert F. Meenan, a professor at the University of California School of Medicine in San Francisco, who observed: "Health professionals are trained to supply the individual with medical facts and opinions. However, they have no personal attributes, knowledge, or training that qualifies them to dictate the preferences of others. Nevertheless, doctors generally assume that the high priority that they place on health should be shared by others. They find it hard to accept that some people may opt for a brief, intense existence full of unhealthy practices. Such individuals are pejoratively labeled 'noncompliant' and pressures are applied on them to reorder their priorities."

More than 75 years ago, H.L. Mencken complained about this tendency to impose a moral value–the paramount importance of health–in the guise of medical science. "Hygiene is the corruption of medicine by morality," he wrote in 1919. "It is impossible to find a hygienist who does not debase his theory of the healthful with a theory of the virtuous." The public health establishment seeks government power to impose its vision of virtue on the rest of America.

And public health doctrine admits no limits. Principles of Community Health tells us that "the most widely accepted definition of individual health is that of the World Health Organization: 'Health is a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity.'" A government empowered to maximize "health" is a totalitarian government.

In response to such concerns, the public health establishment argues that government intervention is justified because individual decisions about risk affect other people. "Motorcyclists often contend that helmet laws infringe on personal liberties," notes Healthy People, "and opponents of mandatory laws argue that since other people usually are not endangered, the individual motorcyclist should be allowed personal responsibility for risk. But the high cost of disabling and fatal injuries, the burden on families, and the demands on medical care resources are borne by society as a whole." This familiar line of reasoning implies that all resources–including not just taxpayer-funded welfare and health care but private savings, insurance coverage, and charity–are part of a common pool owned by "society as a whole" and guarded by the government. Similarly, "social cost" calculations for tobacco and alcohol count medical expenses, regardless of who pays them or under what circumstances, and "lost productivity," as if every individual owes a full lifetime of income (at the highest possible wage?) to "society as a whole."

As Faith T. Fitzgerald, a professor at the University of California, Davis, Medical Center, writes in the New England Journal of Medicine: "Both health care providers and the commonweal now have a vested interest in certain forms of behavior, previously considered a person's private business, if the behavior impairs a person's 'health.' Certain failures of self-care have become, in a sense, crimes against society, because society has to pay for their consequences….In effect, we have said that people owe it to society to stop misbehaving, and we use illness as evidence of misbehavior."

Most public health practitioners would presumably recoil at the full implications of the argument that government should override individual decisions affecting health because such decisions have an impact on "society as a whole." They are no doubt surprised and offended to be called "health fascists," when their goal is to extend and improve people's lives. But some defenders of the public health movement recognize that its aims are fundamentally collectivist and cannot be reconciled with the American tradition of limited government. In 1975 Dan E. Beauchamp, then an assistant professor of public health at the University of North Carolina and currently a professor in the School of Public Health at the State University of New York at Albany, presented a paper at the annual meeting of the American Public Health Association in which he argued that "the radical individualism inherent in the market model" is the biggest obstacle to improving public health.

"The historic dream of public health that preventable death and disability ought to be minimized is a dream of social justice," Beauchamp said. "We are far from recognizing the principle that death and disability are collective problems and that all persons are entitled to health protection." He rejected "the ultimately arbitrary distinction between voluntary and involuntary hazards" and complained that "the primary duty to avert disease and injury still rests with the individual." He called upon public health practitioners to challenge "the powerful sway market-justice holds over our imagination, granting fundamental freedom to all individuals to be left alone." Of all the risk factors for disease and injury, freedom may be the most important.