Science & Technology

The End of Humanity: Nukes, Nanotech, or God-Like Artificial Intelligences?

Closing dispatch from the Oxford Catastrophic Risks Conference


Oxford, England—The Global Catastrophic Risks conference sponsored by the Future of Humanity Institute concluded on Sunday. Participants were treated to a series of presentations describing how billions of people could potentially bite the dust over the next century. The possible megadeath tolls of both natural and biotech pandemics were considered. The chances that asteroids, comets, or gamma ray bursts from a nearby supernova could wipe out humanity were calculated. The old neo-Malthusian threats of overpopulation, resource depletion, and famine were trotted out. But these risks to future human well-being paled in comparison to one main menace—malicious human ingenuity.

Human ingenuity forged the still massive arsenals of nuclear weapons held by the United States and Russia. And, as conference participants argued, human ingenuity is on track to craft nanotech fabricators that could make essentially any product, including weapons of mass destruction, at essentially no cost, not to mention a self-improving artificial intelligence possessing god-like powers to pursue its own goals.

First, let's consider the nuclear threat. Joseph Cirincione of the Ploughshares Fund pointed out the good news that the world's nuclear arsenals have been cut by more than half since the height of the Cold War, from 65,000 warheads down to 26,000. However, the U.S. retains 10,685 nuclear bombs and Russia is estimated to have around 14,000. Of those, 4,275 in the U.S. and 5,192 in Russia are active. Both countries maintain 2,000 weapons on hair-trigger alert, ready for launch in 15 minutes or so. Cirincione offered a couple of scary scenarios, including one in which all 12 missiles from a single Russian submarine, carrying a total of 48 warheads with about 5 megatons of destructive power, are launched without authorization. Such an attack would kill 7 million Americans immediately. A retaliatory American attack aimed at several hundred Russian military assets would kill between 8 and 12 million Russians.

With regard to the possibility of an accidental nuclear war, Cirincione pointed to the near miss that occurred in 1995, when Norway launched a weather satellite and Russian military officials mistook it for a submarine-launched ballistic missile aimed at producing an electromagnetic pulse to disable a Russian military response. Russian nuclear defense officials opened the Russian "football" in front of President Boris Yeltsin, urging him to order an immediate strike against the West. Fortunately, Yeltsin held off, arguing that it must be a mistake.

A global nuclear war scenario in which most of both Russian and American arsenals were fired off would result in 105 to 230 million immediate American deaths and 28 to 56 million immediate Russian deaths. One of the effects of such an attack would be a rapid cooling of global temperatures as sunlight was blocked by dust and smoke. Cirincione argued that even a nuclear war limited just to bitter enemies India and Pakistan could produce enough dust and smoke to lower global temperatures by one half to two degrees Celsius, plunging the world back to the Little Ice Age.

The good news is that Cirincione sees an opening for negotiations to eliminate nuclear weapons entirely. He pointed to an initiative by the "Four Horsemen of the Un-Apocalypse," former Secretaries of State Henry Kissinger and George Shultz, former Sen. Sam Nunn (D-Ga.), and former Secretary of Defense William Perry, that aims to eliminate nuclear weapons completely. In fact, both of the presumptive major party presidential candidates, Sen. John McCain (R-Ariz.) and Sen. Barack Obama (D-Ill.), have explicitly endorsed the idea of global nuclear disarmament. Cirincione argued that such a commitment by the declared nuclear powers would help persuade countries like Iran that they do not need to become nuclear powers themselves.

Cirincione danced around the question of what to do about Iran's pursuit of nuclear weapons, pointing out that its nuclear facilities are hardened, dispersed, and defended. He asserted that the U.S. has 5-day and 10-day plans for taking out Iran's nuclear facilities, but noted that such plans would not end the matter. Iran has moves too, including trying to block oil shipments through the Strait of Hormuz, revving up terrorist attacks in Iraq, and even aiding terrorist attacks in the U.S. Cirincione claimed that the Iranians are still five to ten years away from building a nuclear bomb. On a side note, he admitted that he initially did not believe the Syrians had constructed a nuclear weapons facility, but he is now convinced that they did. The Syrians hid it in a desert gully, disguising it as an ancient Byzantine building.

Terrorism expert Gary Ackerman of the University of Maryland and William Potter of the Monterey Institute of International Studies evaluated the risks of two types of nuclear terrorism: the theft of fissile material to construct a crude bomb, and the theft of an intact nuclear weapon. They set aside two lower-consequence attacks: the dispersal of radiological material by means of a conventional explosion and the sabotage of nuclear facilities. Could a non-state actor, i.e., a terrorist group, actually build a nuclear bomb? Potter cited an article by Peter Zimmerman estimating that a team of 19 terrorists (the same number that pulled off the September 11 atrocities) could build such a bomb for around $6 million. Their most challenging task would be acquiring 40 kilograms of highly enriched uranium (HEU). There are 1,700 tons of HEU in the world, including 50 tons stored at civilian sites. Potter acknowledged that intact weapons are probably more secure than fissile material.

Ackerman noted that only a small subset of terrorists has the motivation to use nuclear terrorism. "So far as we know only Jihadists want these weapons," said Ackerman. Specifically, Al Qaeda has made ten different attempts to get hold of fissile material, and Ackerman told me that it has been defrauded several times by would-be vendors of nuclear materials. Just before the September 11 atrocities, two Pakistani nuclear experts visited Osama bin Laden in Afghanistan, apparently to advise Al Qaeda on nuclear matters. One possibility is that if Pakistan becomes more unstable, intact weapons could fall into terrorist hands. Still, the good news is that intercepted fissile material smugglers have actually been carrying very small amounts. Less reassuringly, Potter noted that prison sentences for smugglers dealing in weapons-grade nuclear material have been shorter than those meted out for drunk driving.

One cautionary case: two groups broke into the Pelindaba nuclear facility in South Africa in November 2007 and seized its control room. The intruders were briefly arrested and then released without further consequence. Both Ackerman and Potter agreed that it is in no state's interest to supply terrorists with nuclear bombs or fissile material; such a transfer could easily be traced back to the supplier, which would then suffer the consequences. Ackerman cited one expert estimate that there is a 50 percent chance of a nuclear terrorist attack within the next ten years.

While nuclear war and nuclear terrorism would be catastrophic, the presenters acknowledged that neither constitutes an existential risk, that is, a risk that could cause the extinction of humanity. But the next two risks, self-improving artificial intelligence and nanotechnology, would.

The artificial intelligence explosion?

Singularity Institute for Artificial Intelligence research fellow Eliezer Yudkowsky began his presentation with a diagram of the space of possible minds, in which human minds occupy only a small dot. His point was that two artificial intelligences (AIs) could be far more different from one another than we are from chimpanzees. Yudkowsky then described the relatively slow processing speed of the human brain, the difficulty of reprogramming ourselves, and other limitations. An AI could run a million times faster, meaning that it could get a year's worth of thinking done in about 31 seconds. An "intelligence explosion" would result because an AI would have access to its own source code and could rapidly modify and optimize itself. Indeed, it would be hard to build an AI that did not want to improve itself in order to better achieve its goals.
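For what it's worth, the arithmetic behind that figure is simple. Here is a quick back-of-envelope check; the millionfold speed-up is just the illustrative number Yudkowsky used, not a prediction:

```python
# Back-of-envelope check of the "year of thinking in about 31 seconds" figure.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # roughly 3.15e7 seconds
SPEEDUP = 1_000_000                     # the illustrative millionfold speed-up

subjective_year = SECONDS_PER_YEAR / SPEEDUP
print(f"One subjective year would pass in about {subjective_year:.1f} seconds")  # ~31.5
```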

Can an intelligence explosion be avoided? No. But AI is unique among these threats in that it is also a "global catastrophic opportunity": success in creating a friendly AI would give humanity access to vast intelligence that could be used to mitigate the other risks. Picking a friendly AI out of the space of all possible minds, however, is a hard and unsolved problem. According to Yudkowsky, what makes a superintelligent AI unique as a global catastrophic risk is this: there is no final battle, because an unfriendly AI would simply kill off humanity; there is nowhere to hide, because the AI could find you wherever you are; and there is no learning curve, since we get only one chance to produce a friendly AI. But will it happen at all? Yudkowsky pointed out that there is no way to control the proliferation of the "raw materials," i.e., computers, so the creation of an AI is essentially inevitable. In fact, Yudkowsky believes that current computers are sufficient to instantiate an AI; researchers just don't know how to do it yet.

What can we do? "You cannot throw money or regulations at this problem for an easy solution," insisted Yudkowsky. His chief (and somewhat self-serving) recommendation is to support a lot of mathematical research on how to create a friendly AI. Former Sun Microsystems chief scientist Bill Joy has, of course, proposed another solution: relinquishment; that is, humanity would have to agree to some kind of compact never to try to build an AI. "Success mitigates lots of risks," said Yudkowsky. "Failure kills you immediately." As a side note, bioethicist James Hughes, head of the Institute for Ethics and Emerging Technologies, mused about how much longer it would be before we see Sarah Connor Brigades gunning down AI researchers to prevent the Terminator future. (Note to self: perhaps reconsider covering future Singularity Institute conferences.)

The menace of molecular manufacturing?

Next up were Michael Treder and Chris Phoenix from the Center for Responsible Nanotechnology. They cannily opened with a series of quotations claiming that science would never be able to solve this or that problem. Two of my favorites: "Pasteur's theory of germs is a ridiculous fiction," from Pierre Pachet in 1872, and "Space travel is utter bilge," from Astronomer Royal Richard Woolley in 1956. The point, of course, is that arguments that molecular manufacturing is impossible are likely to suffer the same predictive failures. Their vision of molecular manufacturing involves using trillions of very small machines to make something larger. They envision desktop nanofactories into which people feed simple raw inputs and get out nearly any product they desire. The proliferation of such nanofactories would end scarcity forever. "We can't expect to have only positive outcomes without mitigating negative outcomes," cautioned Treder.

What kind of negative outcomes? Nanofactories could produce not only hugely beneficial products such as water filters, solar cells, and houses, but also weapons of any sort, and such nanofabricated weapons would be vastly more powerful than today's. Because these weapons would be so powerful, there would be a strong incentive for a first strike. In addition, an age of nanotech abundance would eliminate the majority of jobs, possibly leading to massive social disruptions, and social disruption creates an opening for a charismatic personality to take power. "Nanotechnology could lead to some form of world dictatorship," said Treder. "There is a global catastrophic risk that we could all be enslaved."

On the other hand, individuals with access to nanofactories could wield great destructive power. Phoenix and Treder's chief advice is more research into how to handle nanotech when it becomes a reality in the next couple of decades. In particular, Phoenix thinks it is urgent to study whether offense or defense would be the best response. To Phoenix, offense looks a lot easier—there are a lot more ways to destroy things than to defend them. If that's true, it would narrow our future policy options.

This conclusion left me musing on British historian Arnold Toynbee's observation: "The human race's prospects of survival were considerably better when we were defenseless against tigers than they are today when we have become defenseless against ourselves." I don't think that's right, but it's worth thinking about.

Ronald Bailey is Reason's science correspondent. His book Liberation Biology: The Scientific and Moral Case for the Biotech Revolution is now available from Prometheus Books.

Disclosure: The Future of Humanity Institute is covering my travel expenses for the conference; no restrictions or conditions were placed on my reporting.