In Frank Herbert's Dune books, humanity has long banned the creation of "thinking machines." Ten thousand years before the events of the novels, humanity's ancestors destroyed all such computers in a movement called the Butlerian Jihad, because they felt the machines controlled them. The penalty for violating the Orange Catholic Bible's commandment "Thou shalt not make a machine in the likeness of a human mind" is immediate death.
Should humanity sanction the creation of intelligent machines? That's the pressing issue at the heart of Oxford philosopher Nick Bostrom's fascinating new book, Superintelligence: Paths, Dangers, Strategies (Oxford University Press). Bostrom cogently argues that the prospect of superintelligent machines is "the most important and most daunting challenge humanity has ever faced." If we fail to meet this challenge, he concludes, malevolent or indifferent artificial intelligence (A.I.) will likely destroy us all.
Since the invention of the electronic computer in the mid-20th century, theorists have speculated about how to make a machine as intelligent as a human being. In 1950, for example, the computing pioneer Alan Turing suggested creating a machine simulating a child's mind that could be educated to adult-level intelligence. In 1965, the mathematician I.J. Good observed that technology arises from the application of intelligence. When intelligence applies technology to improving intelligence, he argued, the result would be a positive feedback loop, an intelligence explosion, in which self-improving intelligence would bootstrap its way to superintelligence. He concluded that "the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control." How to maintain that control is the issue Bostrom tackles.
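Good's feedback loop can be caricatured in a few lines of code. The model below is a toy illustration of this reviewer's own devising, not anything from Good or Bostrom: assume each round of self-improvement multiplies capability by a factor that itself grows with current capability, so the gains compound faster than ordinary exponential growth.

```python
# Toy model (illustrative only): each self-improvement step multiplies
# capability by a factor that grows with current capability, so the
# growth rate itself accelerates -- a caricature of Good's
# "intelligence explosion."
def intelligence_trajectory(start=1.0, step_gain=0.1, generations=10):
    level, history = start, [start]
    for _ in range(generations):
        level *= 1 + step_gain * level  # smarter systems make bigger improvements
        history.append(level)
    return history

trajectory = intelligence_trajectory()
growth_factors = [b / a for a, b in zip(trajectory, trajectory[1:])]
# Unlike plain exponential growth, the per-step growth factor keeps rising.
```

The point of the sketch is only that when the size of each improvement depends on the current level, growth is super-exponential: the per-step multiplier itself increases every generation.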
About 10 percent of A.I. researchers believe the first machine with human-level intelligence will arrive in the next 10 years. Nearly all think it will be accomplished by century's end. Since the new A.I. will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds. The resulting entity, Bostrom asserts, will be "smart in the sense that an average human being is smart compared with a beetle or a worm." At computer processing speeds a million-fold faster than human brains, Machine Intelligence Research Institute maven Eliezer Yudkowsky notes, an A.I. could do a year's worth of thinking every 31 seconds.
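Yudkowsky's figure is simple arithmetic. As a quick check, assuming (as he does) a million-fold processing-speed advantage:

```python
# Sanity check on the "year's worth of thinking every ~31 seconds" figure.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # 31,536,000 seconds
SPEEDUP = 1_000_000                     # assumed million-fold speed advantage

wall_clock = SECONDS_PER_YEAR / SPEEDUP
print(f"One subjective year every {wall_clock:.1f} wall-clock seconds")
# Prints roughly 31.5 seconds, matching the figure quoted above.
```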
Bostrom charts various pathways toward achieving superintelligence. One approach involves using brain/computer interfaces to augment human intelligence with machine intelligence. Bostrom more or less dismisses this cyborgization pathway as too clunky and too limited, although he acknowledges that making people smarter could help speed the development of true superintelligence in machines. Bostrom's dismissal may be too hasty, as technological advances could in time overcome his reasons for skepticism.
In any case, for Bostrom there are two main plausible pathways to superintelligence: whole brain emulation and machine A.I. Whole brain emulation involves deconstructing an actual human brain down to the synaptic level and then digitally instantiating the three-dimensional neuronal network of trillions of connections in a computer. The aim is to make a digital reproduction of the original intellect, with memory and personality intact. Bostrom explores one pathway in which an emulation is uploaded into a sufficiently powerful computer such that the new digital intellect embarks on a process of recursively bootstrapping its way to superintelligence.
In the other pathway, researchers combine advances in software and hardware to directly create a superintelligent machine. One proposal is to create a "seed A.I.," somewhat like Turing's child machine, which would understand its own workings well enough to improve its algorithms and computational structures, enabling it to enhance its cognition to achieve superintelligence. A superintelligent A.I. would be able to solve scientific mysteries, abate scarcity by generating a bio-nano-infotech cornucopia, inaugurate cheap space exploration, and even end aging and death. It could do all that, but Bostrom fears it will much more likely regard us as nuisances that must be swept away as it implements its values and achieves its own goals. And even if it doesn't target us directly, it could simply make the Earth uninhabitable as it pursues its ends, say by tiling the planet over with solar panels or nuclear power plants.
Bostrom argues that it is important to figure out how to control an A.I. before turning it on, because it will resist attempts to change its final goals once it begins operating. In that case, we'll get only one chance to give the A.I. the right values and aims. Broadly speaking, Bostrom looks at two ways developers might try to protect humanity from a malevolent superintelligence: capability control methods and motivation selection.
An example of the first approach would be to try to confine the A.I. to a "box" from which it has no direct access to the outside world. Its handlers would then treat it as an oracle, posing questions to it such as how we might exceed the speed of light or cure cancer. But Bostrom thinks the A.I. would eventually get out of the box, noting that "Human beings are not secure systems, especially not when pitched against a superintelligent schemer and persuader."
Alternatively, developers might try to specify the A.I.'s goals before it is switched on, or set up a system whereby it discovers an appropriate set of values. Similarly, a superintelligence that began as an emulated brain would presumably have the values and goals of the original intellect. (Choose wisely which brains to disassemble and reconstitute digitally.) As Bostrom notes, trying to specify a final goal in advance could go badly wrong. For example, if the developers instill the value that the A.I. is supposed to maximize human pleasure, the machine might optimize this objective by creating vats filled with trillions of human dopamine circuits continually dosed with bliss-inducing chemicals.
Rather than directly specifying a final goal, Bostrom suggests that developers might instead instruct the new A.I. to "achieve that which we would have wished the A.I. to achieve if we had thought long and hard about it." This is a rudimentary version of Yudkowsky's idea of coherent extrapolated volition, in which a seed A.I. is given the goal of trying to figure out what humanity, considered as a whole, would really want it to do. Bostrom thinks something like this might be what we need to prod a superintelligent A.I. into ushering in a human-friendly utopia.
In the meantime, Bostrom thinks it safer if research on implementing superintelligent A.I. advances slowly. "Superintelligence is a challenge for which we are not ready now and will not be ready for a long time," he asserts. He is especially worried that people will ignore the existential risks of superintelligent A.I. and favor its fast development in the hope that they will benefit from the cornucopian economy and indefinite lifespans that could follow an intelligence explosion. He argues for establishing a worldwide A.I. research collaboration to prevent a frontrunner nation or group from trying to rush ahead of its rivals. And he urges researchers and their backers to commit to the common good principle: "Superintelligence should be developed only for the benefit of all humanity and in the service of widely shared ethical ideals." A nice sentiment, but given current international and commercial rivalries, the universal adoption of this principle seems unlikely.
In the Dune series, humanity was able to overthrow the oppressive thinking machines. But Bostrom is most likely right that once a superintelligent A.I. is conjured into existence, it will be impossible for us to turn it off or change its goals. He makes a strong case that working to ensure the survival of humanity after the coming intelligence explosion is, as he writes, "the essential task of our age."