Who Caused Three Mile Island?

After five major investigations, the real reason for the Three Mile Island accident has remained a mystery. REASON's reporter has found the missing link.


Three Mile Island, March 28, 1979. Because of maintenance procedures elsewhere in the building, several indicators on the main control panel of the TMI nuclear power plant are temporarily inoperative. In compliance with an order from the Nuclear Regulatory Commission, identifying cardboard tags hang from the inactive indicators. These tags are large enough to obscure several other parts of the control panel.

It is 4:00 A.M., and an indicator on the control panel shows an unexpected pressure transient. The plant operators check the auxiliary feedwater valves. Although a control panel light indicates that they are closed, that light is covered by one of the maintenance caution tags hanging from another control. The operators react to the pressure transient on the assumption that the valves are open. As a result, there is an increase in heat, which causes activation of the emergency core cooling system, which functions perfectly.

But now comes a red light. There is a universal code of green for safe and red for dangerous. The operators conclude from the red light that the water level in the reactor is dangerously high. In fact, a red light here means that the water level is what it should be. (On this night it may even be a bit low.) The operators shut off what they believe to be excess water, thus stopping the emergency cooling system from functioning. The reactor is damaged, with serious consequences for the entire area for which it has been supplying electricity. Even more serious consequences ensue in the panic afterward, when government agencies close or delay the opening of nearly two dozen additional nuclear power plants.

About a month after the nuclear power plant accident at Three Mile Island receded from newspaper headlines, I received a clipping from the New York Times, sent by a friend who knew I often missed reading the paper when busy with other things. Back when TMI was in the headlines I had been setting up an experiment in my laboratory and so missed most of the running account.

But as I read the clipping's fairly clear description of what had happened, a phrase formed in my mind: What an incredible chain of human errors! Then I realized that I had come across that phrase before. After a bit of looking, I found it in energy analyst Petr Beckmann's description of the Brown's Ferry accident and fire of 1975.

The Brown's Ferry incident did not mean, by itself, that anything was necessarily wrong with the way nuclear power plants were set up. Good human factors design can reduce the probability of human error by a factor of a hundred or so, but it cannot eliminate human error altogether. One accident caused by human error proves nothing. But TMI was now the second "incredible chain of human errors" at a nuclear power plant, and so it made sense to ask what went wrong.

It was too late to save the victims of Three Mile Island (see "The Victims of Power Plant Accidents," below). But finding out why the manufacturer of the TMI plant ignored design principles established decades earlier might prevent similar tragedies in the future and spare the future victims of interference with technologies yet to be invented. I became curious and began to dig. I anticipated having to go back a couple of decades, but my search for the pieces of the puzzle took me back almost a century.

The road to Three Mile Island began 80 years ago, decades before anyone thought of using nuclear energy to make power generation cleaner and safer, and years before the very source of that energy was first identified. Historians and philosophers could probably trace the causal chain even further into the past, but eight decades is close to the limit of how far a social scientist can go without risking conjecture and speculation. And so, for us, the story will start during a period of history known, with unintended irony, as the Era of Progress.

During the Progressive Era, several states enacted legislation compelling insurance companies to base their rates on actuarial experience (that is, on statistics showing the probability of the event being insured against). Within two decades, such legislation was in force in every state in the union. This legislation, ostensibly motivated by concern for the solvency of underwriting companies, had the effect of prohibiting American companies from writing insurance for new industries: with a new type of risk, there is no prior experience from which to derive actuarial data.

Nobody objected seriously to this legislation. Entrepreneurs in new industries could still obtain insurance from foreign companies, which were not subject to state-level American legislation. American insurance companies did not object, since it was now easier to sell policies as "a safe investment." Foreign companies writing insurance for unprecedented risks were happy to see their American competitors eliminated from the American market. Insurance regulation became a textbook example of progressive government action in the public interest.

Both before and after the enactment of this legislation, the casualty-insurance industry had a great deal to do with the safety of industrial operations in the United States. Casualty insurance companies employed safety consultants who helped them set rates and provided safety-improvement advice to the insured industries. If a company's safety consultant underestimated the likelihood of accidents in policyholders' plants, claims would exceed the income from premiums, and the company would lose money. If the safety expert overestimated the risk, the rates offered by his company would be higher than those of competing insurance companies, and his company would lose business.
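
For readers who want the arithmetic spelled out, here is a minimal sketch of the underwriter's dilemma. Every figure in it is an illustrative assumption, not a number from the article.

```python
# Minimal sketch of casualty rate setting. All figures are made up for illustration.

avg_claim = 500_000        # average cost of one accident claim, in dollars (assumed)
loading = 1.25             # expenses and margin loaded onto expected claims (assumed)
true_risk = 0.02           # actual accident probability per plant per year (assumed)

def premium(estimated_risk: float) -> float:
    """Annual premium implied by the safety consultant's risk estimate."""
    return estimated_risk * avg_claim * loading

for estimate in (0.01, 0.02, 0.04):
    charged = premium(estimate)
    needed = premium(true_risk)
    if charged < needed:
        verdict = "claims will outrun premiums"     # company loses money
    elif charged > needed:
        verdict = "rates undercut by competitors"   # company loses business
    else:
        verdict = "break even"
    print(f"estimated risk {estimate:.2f}: premium ${charged:,.0f} -> {verdict}")
```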

The safety consultant was also expected to suggest ways to make the policyholders' operations safer. It was in everyone's interest to follow the safety inspector's advice: the policyholders benefited from lower rates, and the company writing the insurance benefited from both fewer claims and the competitive advantage of being able to offer the lower premiums.

Since the management of a plant had a powerful financial incentive for reducing the safety inspector's estimate of the risk of operations in that plant, risk reduction became a cooperative enterprise. I have gone through a dozen file cabinets full of clippings and reports at the library of the College of Insurance, which trains safety inspectors, without finding any instances of a private insurance company's safety inspector leaving a policyholder's plant with the impression that vital safety information had been concealed during his visit. We will see later how different is the usual experience of government regulatory inspectors trying to perform a nominally identical function.

Progressive Era insurance legislation had only a minor impact on the activities of insurance companies with respect to industrial safety. Policy premiums were based primarily on actuarial experience in an entire class of plants, rather than on the actual conditions in any single policyholder's enterprise. Thus, casualty rate setting, or "underwriting," became a profession separate from that of the safety inspector/consultant. In most states, however, casualty insurance companies retained the authority to vary their premiums depending on the actual safety practices of their policyholders. Even in those states in which premium variations were most restricted, insurance companies still provided industrial safety services in order to reduce the size and number of claims, and they could refuse to insure an enterprise that did not cooperate with their safety personnel.

So insurance companies remained very concerned with industrial safety. Individual companies maintained laboratories to test and improve the safety of equipment used by their policyholders. In addition, the insurance industry established the Underwriters Laboratories to test mass-produced equipment used in homes and offices and to perform fundamental research on issues related to safety.

In the early 1950s, this research established that about 80 percent of all industrial accidents were due, not to defects or malfunctions in industrial equipment, but to error and confusion on the part of human beings operating it. This finding spurred a fundamental shift in the aims and methods of research on the role of people in industrial control systems. There had been a few studies dealing with human error, but they had been primarily concerned with estimating the likelihood of various errors and their impact on overall plant reliability. After 1955 or so, with the realization that such a large proportion of industrial accidents were attributable to human errors, human factors research zeroed in on the causes of those errors and how to eliminate them.

Before the end of the '50s, some principles of good human factors design had been established, and insurance company safety consultants were bringing the new discipline of engineering psychology to bear on the design of industrial plant equipment. By 1965, hardly any new equipment could be put into operation unless it conformed to the standards for safe human factors design established by Underwriters Laboratories. The exception was equipment for the new and burgeoning nuclear power industry.

When the use of nuclear reactors for the production of energy first became commercially feasible in the mid-1950s, the prospective operators of nuclear power plants came face-to-face with a problem created half a century earlier: no one had any previous experience with casualty claims arising from the operation of nuclear power-generating plants, and so American insurance companies were prevented by law from insuring them. The British companies that traditionally provided unprecedented-risk insurance in North America still had not fully recovered from the losses of World War II and were in no position to insure so large a new industry. Elsewhere in the world, except in Canada and West Germany, nuclear power plants were a government monopoly, protected from liability under the doctrine of sovereign immunity (see box, "Sovereign Immunity and Dams That Go Boom in the Night," below).

American power companies, boxed in by half-century-old insurance legislation, asked the federal government for help. The government, which for reasons of its own wanted the nuclear industry to develop rapidly, was quite willing to oblige. Its solution was not to attempt to undo the earlier legislation. Instead, it would organize a consortium of insurance companies to provide the new industry with a limited amount of coverage, beyond which the government itself would act as an "insurer of last resort." Under this arrangement, the insurance industry consortium would have no role in supervising the safety of nuclear power plants. This was to be the exclusive concern of the Atomic Energy Commission and later the Nuclear Regulatory Commission. West Germany and Canada soon copied the American arrangement.

The Price-Anderson Act, which effected the government insurance scheme, had a very serious result. It cut all links between the nuclear power industry and new safety research findings developed in the laboratories of the insurance underwriters. The Nuclear Regulatory Commission, of course, imposed safety standards on the growing industry. But in 1979, its safety regulations were still based on safety research no more recent than 1954.

Part of the explanation is that the Atomic Energy Commission wore two hats. In addition to monitoring the fledgling nuclear power industry, it was involved in military nuclear projects. Reactor design activity was no exception to the AEC's tendency to shroud all nuclear developments in a fog of secret classifications. Some of the manufacturers of nuclear power reactors had highly competent engineering psychologists working for their other divisions, but the AEC insisted on keeping nuclear reactor work secret and isolated. By 1970, no new design for a toaster or blender at General Electric could get off the drawing board without being examined by an expert in human factors. Yet the same company was designing, manufacturing, and delivering nuclear reactors that had never been seen, much less examined, by an engineering psychologist.

Engineering psychologists not involved in the design of reactor controls—and until 1980 that meant every single engineering psychologist—had no idea what was going on. In retrospect, it should have been clear that something was wrong. In an exhaustive search of the engineering psychology literature up to 1978, I failed to turn up a single article on the design of nuclear power plant control rooms.

There were only two even remotely related studies. One, a 1975 article, examined the control of zero-energy reactors, the type used in privately insured industry for the commercial production of radioisotopes. The other, a 1976 master's thesis in nuclear engineering at Iowa State University, dealt with human error rates in commercial light-water reactors. This thesis was concerned only with estimating the likelihood of various errors, as though the author had never been told that human errors can be virtually eliminated by proper human factors design—a typical piece of 1940s' reliability engineering, accepted by a nuclear engineering school in 1976. And that was all. Not a single publication on how nuclear power plant control rooms ought to be designed, when the design of control rooms for the chemical industry, for example, was the subject of 18 articles between 1969 and 1978.

If anyone thought about it at all, it must have seemed an effect of the usual AEC secrecy rather than of an actual absence of work. In hindsight, a silly assumption: there were dozens of declassified articles on human factors in the design of controls for naval warships and military aircraft, for which secrecy must have been as tight as for civilian power reactors. But, as usually happens when no one gets paid to think about something, nobody did.

It was only after the loss of the Three Mile Island plant in 1979 that engineering psychologists asked what the hell was going on in nuclear power plant control rooms. What they saw made them shiver.

One of the basic principles of human factors design is compatibility of control movements: a control lever should move up to move things up, down to move things down. Although nuclear plant control panels have a great deal of vertical mounting space available, most of the levers for control rods in the plant's core (the control rods are for stopping the nuclear chain reaction) have been mounted on horizontal surfaces. Spatially compatible movement has thus been made impossible. In some nuclear power plants, the rods are moved up and down with identical movements of two identical, identically mounted and often adjacent levers.

Another principle is that while identical controls, such as typewriter keys, improve the speed of operation, they invariably increase the likelihood of error (have you ever seen a long typewritten report without typos?). When the probability of setting off the wrong control must be minimized, the controls should be activated by as many mutually exclusive movements as possible, so that the operator can "catch the error before it is made" when the control does not move as intended.

A switch, for example, may be activated by being pushed, pulled, toggled up, toggled down, toggled right, toggled left, twisted clockwise, or twisted counterclockwise—and an operator can catch his error if he attempts any one of those motions on a switch activated by another type of motion. Operation will be slower than with identical pushbuttons, and in some situations engineering psychologists have to calculate in detail the speed-accuracy trade-offs for the operator's task. In nuclear power plants, however, accidents develop slowly (never less than several minutes, usually several hours), while activating the wrong control can cause serious and sometimes irreversible damage. In such contexts it is crucial that the different controls be activated differently.
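
A rough back-of-the-envelope model shows why distinct activation motions catch so many wrong-control errors before they happen. The error rates below are assumptions chosen only for illustration.

```python
# Illustrative sketch: probability that a wrong-control activation goes undetected.
# Both rates below are assumptions for illustration, not measured values.

p_reach_wrong = 0.01        # operator reaches for the wrong control (assumed)
p_notice_mismatch = 0.95    # operator notices the control won't move as intended (assumed)

# Identical controls: a wrong reach goes straight through to a wrong activation.
p_undetected_identical = p_reach_wrong

# Distinct activation motions: the wrong control usually refuses the intended
# motion, so most wrong reaches are caught before anything is activated.
p_undetected_distinct = p_reach_wrong * (1 - p_notice_mismatch)

print(f"identical controls: {p_undetected_identical:.4f}")   # 0.0100
print(f"distinct motions:   {p_undetected_distinct:.4f}")    # 0.0005
```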

For the same reason, gauges for different variables should be as distinctive as possible, so that the operator will know at a glance which gauge he or she is looking at. In most nuclear power plants, however, the operator is confronted by a sea of identical controls, distinguished from one another only by discreet, uniformly colored labels that cannot be read except at close range.

Another principle is avoiding sensory overload. It has been estimated that in one fairly frequent abnormal condition, nuclear plant operators would have to deal with 52 different alarms (bells, beeps, sirens, flashing lights, etc.), as well as information from 35 different analog gauges and 14 different digital indicators: more than a hundred separate signals competing for attention at once. Under that kind of sensory overload, most people would have trouble keeping their sanity, let alone dealing effectively with the problem at hand.

Still another fundamental principle is that all displays ought to be readable from the operator's actual position when manipulating the corresponding controls. In some of today's nuclear power plant control rooms, the displays showing the status of various valves in the plant are located at the opposite end of the control room from the switches that control those valves. So the operator is forced to run back and forth across the control room.

In other control rooms, this principle is violated by the sheer physical size of the control room panels. In the Babcock and Wilcox control room at TMI, some displays were hidden by the control desk and could be read only by leaning over it. Other displays could only be read by walking around the operator's console to the other side, while still others were so high up on the wall that they could not be read without climbing on a ladder. The story of how that type of control room came about is an interesting lesson in the effects of political control over industrial operations.

Babcock and Wilcox, the company that built the Three Mile Island reactor, had learned early in its existence that its power plants depended on a host of licenses and permits granted by political officials at the state and local level. These officials usually did not know how nuclear reactors worked and could not be bothered to learn, but they could be depended on to have a "gut feeling" about whether the plant was or was not safe—a "gut feeling," the company discovered, based primarily on how the plant looked. And so, according to a senior engineering manager at Babcock and Wilcox, the company put on its payroll a commercial artist, whose job it was to make sure that state and local officials would be favorably impressed by the appearance of its power plants.

This man was something of a surrealist, strongly influenced by old Frankenstein movies. His designs featured enormous panel-filling expanses of blank space around each gauge, as well as giant handles of the type used to open and close large switches and valves in electrical equipment contemporary with those Frankenstein movies. In nuclear power plants those enormous handles actually operated tiny relay switches a few millimeters across, but they took up enough space to create panels up to 46 feet wide and 23 feet high. The ladders came in when it was discovered that the humans who operate these controls were not built to the dimensions of King Kong.

Relationships with state-level public utility commissions have also taken their toll in another way. The rates electric companies are permitted to charge are usually based on a "fair return on investment," so that the power companies have every incentive to make their plants as expensive as possible—quite the opposite of the usual situation in industry. The public utility commissioners, on the other hand, have the job of keeping the costs of construction down, even though they have no idea which cost trade-offs have the least effect on safety.

To take just one example, most nuclear power plants are designed with twin control rooms, a sensible precaution in case one control room becomes inoperative as a result of fire, accident, or, as actually happened at TMI, radioactive contamination. The usefulness of many such twin control rooms has been compromised, however, by state commission orders that power companies save ratepayers' money by cutting the cables to both control rooms together, to the same length. When equal-length cables to twin control rooms come out of a common conduit, the result is mirror-image control rooms—the one layout known to be most likely to produce human error.

Does it seem unbelievable that the engineers who designed the nuclear plants currently in operation would lack the knowledge to spot human factors hazards as obvious as those catalogued above? The reason it happens is that the training of engineers is controlled by the same factors as the subsequent employment of their skills—engineering schools teach whatever prospective employers want their employees to know. Mechanical engineers, for example, usually design machines for private, privately insured industries. At the Massachusetts Institute of Technology, whose curricula tend to set the standards for engineering schools around the world, mechanical engineers have had to take a course in man-machine systems, as a precondition for receiving the bachelor's degree, since the mid-1960s. At the same school it is still possible to get a doctorate in nuclear engineering without taking a single course in human factors or engineering psychology.

All this, of course, is likely to change now. The Nuclear Regulatory Commission has advertised openings for engineering psychologists on its staff. Human factors courses are being incorporated into nuclear engineering curricula. Future nuclear power plant control rooms will have to comply with some sort of human factors standards. There is, however, more to safety than just human factors, and more to human factors than just a set of standards to be met in the design of controls and control rooms. Failure to keep up with contemporary safety research is only one of the dangers of transferring responsibility for industrial safety from private casualty underwriters to a government agency.

Other dangers arise from the fact that the scope of action of a government agency in an otherwise non-totalitarian society must be limited to punishing proven violations of objectively defined rules. Unlike the agent of a private insurance company, who can reward safety practices with lower premiums, a government inspector, who can only punish infractions, will command neither respect nor cooperation from a profit-oriented management. Indeed, his relationship to nuclear power plant managers is inherently adversarial. A private insurance company inspector/consultant is free to act on well-founded suspicions; if a policyholder considers the action arbitrary, he is free to take his business elsewhere. Yet in the cooperative atmosphere that exists between private insurers and their clients, such suspicions are rare. A government inspector, on the other hand, can suspect all he wishes but, outside a totalitarian society, cannot act on anything he cannot prove. And so instead of being corrected, safety problems are hidden.

The following is an excerpt from an NRC inspector's summary of conditions at the Zion nuclear power plant near Waukegan, Illinois, in 1979:

Safety is substantially worse because of poor attitude and marginal management. Inadequate management controls. Management lacks ability to discipline employees for operator errors and carelessness. Personnel selection and discipline may be adversely affected by union relations. Poor management attitude and follow-ups. … Attitude regarding safety is poor.

If an insurance company's safety consultant walked away from a policyholder's plant with a similar impression, the policyholder would soon be faced with a hefty increase in premiums. But the NRC, quite appropriately for a government agency, can do nothing without being able to prove specific violations. At the same time, the Price-Anderson Act makes sure the insurance premium remains the same as that of all other nuclear plants of similar size, regardless of anything the management does or does not do about safety.

Another danger inherent in government safety regulation stems from the practice of "freezing" a government-approved equipment design. In order to provide an objectively provable definition of safety violations, a government agency works by approving a certain design and then levying fines for any unauthorized modification of certified equipment. This means that once a design is certified, design errors cannot be corrected without a lengthy recertification process, during which the regulated industry is forced to operate faulty equipment.

It also means that the operators—the only people who get to know all the faults and virtues of the system by having to work with it every day—are not permitted to correct even the most obvious hazards without consulting a representative of the regulatory agency first. In one case, reactor operators, worried about the dangerous similarity of levers for raising and lowering reactor control rods, substituted handles of proper human factors design—one showing Heineken and the other Michelob. These handles, of course, had originally been manufactured to minimize operator errors in dispensing different brands of draft beer. The management, worried about a possible fine, told the operators to restore the equipment to its old, unsafe but certified condition. Fortunately, an NRC inspector was persuaded to approve the modification.

Problems created by freezing certified designs exist wherever the government presumes to know more about the safety of equipment than the people who actually work with it. According to a recent report in The New York Times, the Air Line Pilots Association has complained to the Federal Aviation Administration about a stall-warning alarm "louder than a supersonic aircraft." The FAA has ignored the complaints and threatened to fine any airline that modifies its aircraft without authorization.

Again, we see a very common design fault instituted and perpetuated by a government agency that is responsible for safety. Engineers without adequate training in human factors often warn of abnormal conditions with loud auditory alarms. Yet, as engineering psychologists know (and every engineer should), any loud noise interferes with the operator's decision-making capability. And an abnormal situation is precisely the situation in which the operator most needs to have his wits about him.

It may sound plausible enough that the various deficiencies in the safety practices of the nuclear power industry are the results of the government takeover of safety regulation in that industry. But isn't nuclear power a uniquely dangerous and complex industry, so that no comparison with the experience of any other field could possibly be relevant? Not at all. Many processes used in the chemical industry are more complex than nuclear power reactors. Some of the reactions used in those processes are much more sensitive to operator error; if an error is made, an explosive situation can develop a hundred times faster than in a nuclear power plant.

Moreover, the consequences of an accident in a chemical plant are likely to be far more grave. It is biochemical processes that regulate activities within and among human cells; a single molecule of the wrong chemical may be enough to cause mutation or cancer. A radioactive nuclide, on the other hand, can only cause biological damage indirectly: with some small probability a radioactive particle may turn a normal molecule into a mutagenic or carcinogenic one, but most radioactive particles will pass through a cell without doing any damage whatever. So chemical plant accidents are capable of causing disasters on a scale way beyond anything a nuclear plant is likely to cause.

Yet there are few demonstrations against the construction of chemical plants. Nobody has ever suggested that private insurance companies cannot provide chemical plants with adequate liability coverage. Does the explanation have anything to do with the nonexistence of a Chemical Regulatory Commission?

Nor have chemical companies ignored the riskiness of their enterprise. They have always carried large amounts of liability insurance and so have always been on the lookout for ways to improve the safety of their operations and thus reduce their premiums. In 1973, six years before the TMI accident finally convinced the Nuclear Regulatory Commission that human factors had to be taken into account in designing nuclear power plant control rooms, three engineers in the safety and human factors department at Eli Lilly and Company were able to write a historical account of "The Marriage of Human Factors and Safety in Industry."

Their company, a typical chemical concern, had been using human factors in safety engineering since 1960—nearly two decades before TMI. Those were two decades during which Eli Lilly managers were doing their best to reduce insurance premiums by cooperating with safety consultants sent by their insurance company, while nuclear power company executives were doing their best to reduce the likelihood of fines by keeping AEC and NRC inspectors as much in the dark as possible; two decades in which the workers at Eli Lilly were helped and encouraged to make their work stations safer, while workers at nuclear power plants were forbidden to change anything without permission from Washington.

No wonder nobody ever bothers to demonstrate at the site of a pharmaceutical plant, no matter how lethal the stuff inside its pipes may be. The likelihood of that stuff getting out has been made as small as feasible, and the limits of the feasible are constantly pushed back by the development and application of new technologies. And if the public doesn't know these facts directly, because it doesn't much worry about chemical plants, it does have a deep-seated sense that the people in charge of a chemical plant's safety are a rather more competent lot than the folks who brought it the US Postal Service.

The effects of NRC regulation and of the Price-Anderson Act are particularly ironic because they affect a technology that is, inherently and objectively, far safer than any other technology for producing power on a similar scale (for a complete argument see Petr Beckmann, The Health Hazards of Not Going Nuclear). The most dangerous nuclear plant ever built will not kill as many people in everyday operation as the very safest coal-fired generator, nor threaten as many lives as a similarly scaled, similarly situated hydroelectric dam. Yet the combined effect of the Price-Anderson Act and the actions of the Atomic Energy/Nuclear Regulatory Commission has so lowered public confidence in the safety of nuclear technology that the proportion of electricity generated by nuclear plants is now falling rather than rising, and people are dying as a result (see box, "The Victims of Power Plant Accidents"). State intervention has perverted the safest large-scale energy technology ever devised by the human mind into a symbol of danger. Let us learn.

Adam Reed is a member of the graduate faculty at the New School for Social Research. He has degrees in electrical engineering and experimental psychology and works as an independent industrial consultant and a researcher in cognitive psychology.


SOVEREIGN IMMUNITY AND DAMS THAT GO BOOM IN THE NIGHT

Under the doctrine of "sovereign immunity," a government cannot be sued for civil damages unless it makes itself liable through special legislation. Most governments let themselves be sued for small-scale damages, but not for major catastrophes. Sovereign immunity extends to nuclear power plants in countries where their operation is a government monopoly—that is, everywhere except the United States, Canada, and West Germany.

In the United States, sovereign immunity is the reason why hydroelectric dams built and operated by the Army Corps of Engineers are not insured, and if a dam breaks the victims are left "holding the bag." During the past decade, the United States was fortunate enough to avoid the massive casualties (over 3,000 deaths per dam break) of the largest dam losses in India and Italy; but the number of major dam breaks, and dam casualties, is nevertheless high.

The largest dam break, that of the Van Norman Dam during the San Fernando Valley earthquake of 1971, resulted in no casualties because the reservoir of the dam happened to be empty. It is estimated that, had it been full, the dam would have killed between 50,000 and 100,000 people. Other US dam breaks of the past decade include Rapid City, South Dakota, 1972: over 200 dead; Buffalo Creek, West Virginia, 1972: 125 dead; Newfound, North Carolina, 1976: 4 dead; Teton, Idaho, 1976: 14 dead; Toccoa, Georgia, 1977: 35 dead.

All death figures above are for direct casualties only; secondary casualties are not included. In the case of the Teton Dam, property damage was estimated between $400 million and $1 billion; Congress ultimately appropriated $200 million for "disaster relief."

According to a recent estimate in Science, one existing California dam could kill 250,000 people, with a failure probability between 1 in 100 and 1 in 5,000 per year. It is customary not to mention the dam's exact location so as not to tempt terrorists.
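
A quick calculation using only the figures quoted above shows what that estimate implies in expected deaths per year:

```python
# Expected annual fatalities implied by the Science estimate quoted above.
people_at_risk = 250_000
p_low, p_high = 1 / 5_000, 1 / 100          # failure probability per year

print(f"{people_at_risk * p_low:.0f} to {people_at_risk * p_high:.0f} "
      "expected deaths per year from this one dam")
# -> 50 to 2500 expected deaths per year
```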


THE VICTIMS OF POWER PLANT ACCIDENTS

Some people are fond of saying that "nobody died at Three Mile Island." However true this might be with respect to the direct effects of the TMI accident, every accident that disables a power plant can kill indirectly, by reducing reserve generating capacity and by replacing a clean energy source with a "dirty" one such as coal.

Any reduction in reserve generating capacity increases the probability that electricity will become unavailable at a time of peak demand, such as the height of a heat wave. This will cause considerable excess mortality from heat prostration, especially among the aged and the already sick. The exact figures depend on the age distribution in the population, the duration of the power failure, ambient heat and humidity, and so on.

The replacement of a nuclear generator by a coal-fired one brings in all the various sources of excess mortality in the coal cycle, including mining (for example, mine accidents and black lung mortality), transportation, and wastes (the cumulative carcinogenicity of waste due to the production of equal amounts of electricity is about a thousand times higher for coal than for nuclear). The volume of wastes from fossil fuel operations is too high to permit effective sequestration, and so coal waste is usually dealt with by dispersion into the environment. Thus, the replacement of TMI with power from coal-fired plants may be expected to cost approximately 62 lives per year of operation (see Beckmann, The Health Hazards of Not Going Nuclear, pref. to revised edition).
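
Beckmann's derivation is not reproduced here, but an estimate of this kind is typically built up as a simple product of the output replaced and the difference in mortality per unit of electricity. The sketch below uses rough, clearly labeled assumptions and happens to land near the cited figure.

```python
# Sketch of how a figure like "62 lives per year" is typically assembled.
# Plant size and capacity factor are rough values for a TMI-sized unit (assumptions);
# the mortality rates per gigawatt-year are illustrative assumptions only.

plant_output_gw = 0.9            # roughly a TMI-sized unit (assumption)
capacity_factor = 0.7            # fraction of the year at full output (assumption)
coal_deaths_per_gw_year = 100    # whole coal cycle: mining, transport, emissions (assumed)
nuclear_deaths_per_gw_year = 1   # whole nuclear cycle (assumed)

gw_years = plant_output_gw * capacity_factor
excess_deaths = gw_years * (coal_deaths_per_gw_year - nuclear_deaths_per_gw_year)
print(f"{excess_deaths:.0f} excess deaths per year of coal-for-nuclear substitution")
# ~62 with these inputs, in the same ballpark as the figure cited from Beckmann.
```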

And there are additional casualties from the political process. In the panic that followed TMI, government agencies closed or delayed the opening of nearly two dozen nuclear plants, with similar secondary excess mortality in each case.