Science & Technology

Is Skynet Inevitable?

Artificial intelligence and the possibility of human extinction


Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat, Thomas Dunne Books, 322 pages, $26.99

In the latest Spike Jonze movie, Her, an operating system called Samantha evolves into an enchanting, self-directed intelligence with a will of her own. Samantha makes choices that do not harm humanity, though they do leave viewers feeling a bit sadder.

In his terrific new book, Our Final Invention, documentarian James Barrat argues that visions of an essentially benign artificial general intelligence (AGI) like Samantha amount to silly pipe dreams. Barrat believes artificial intelligence is coming, but he thinks it will be more like Skynet.

In the Terminator movies, Skynet is an automated defense system that becomes self-aware, decides that human beings are a danger to it, and seeks to destroy us with nuclear weapons and terminator robots. Barrat doesn't just think that Skynet is likely. He thinks it's practically inevitable.

Barrat has talked to all the significant American players in the effort to create recursively self-improving artificial general intelligence in machines. He makes a strong case that AGI with human-level intelligence will be developed in the next couple of decades. Once an AGI comes into existence, it will seek to improve itself in order to more effectively pursue its goals. AI researcher Steve Omohundro, president of the company Self-Aware Systems, explains that goal-driven systems necessarily develop drives for increased efficiency, creativity, self-preservation, and resource acquisition.

At machine computation speeds, the AGI will soon bootstrap itself into becoming millions of times more intelligent than a human being. It will thus transform itself into an artificial super-intelligence (ASI), or, as Institute for Ethics and Emerging Technologies chief James Hughes calls it, "a god in a box." And this new god will not want to stay in the box.

The emergence of super-intelligent machines has been dubbed the technological Singularity. Once machines take over, the argument goes, scientific and technological progress will turn exponential, thus making predictions about the shape of the future impossible. Barrat believes the Singularity will spell the end of humanity, since the ASI, like Skynet, is liable to conclude that it is vulnerable to being harmed by people. And even if the ASI feels safe, it might well decide that humans constitute a resource that could be put to better use. "The AI does not hate you, nor does it love you," remarks the AI researcher Eliezer Yudkowsky, "but you are made out of atoms which it can use for something else."

Barrat analyzes various suggestions for how to avoid Skynet. The first is to try to keep the god in his box: The new ASI could be guarded by gatekeepers, who would make sure that it is never attached to any networks out in the real world. But Barrat convincingly argues that an intelligence millions of times smarter than people would be able to persuade its gatekeepers to let it out.

The second approach is being pursued by Yudkowsky and his colleagues at the Machine Intelligence Research Institute, who hope to control the intelligence explosion by making sure the first AGI is friendly to humans. A helpful AI would indeed be humanity's final invention, in the sense that all scientific and technological progress would happen at machine computation speed. The result could well be a superabundant world in which disease, disability, and death are just bad memories.

Unfortunately, as Barrat points out, most cutting-edge research organizations are entirely oblivious to the problem of unfriendly AI. In fact, much of the research funded by the Defense Advanced Research Projects Agency (DARPA) aims to produce weaponized AI. So again, we're more likely to get Skynet than Samantha.

A third idea is that the first AIs, constrained at first but still highly intelligent, would themselves help researchers create increasingly intelligent versions. Each new iteration would have to be proved safe before being unleashed to help make subsequent generations. Or perhaps AIs could be built with components programmed to die by default, like the Replicants in Blade Runner, so that any runaway intelligence explosion would be short-lived. As safety was proved at each step, the expiring components could be replaced, allowing self-improvement to continue.

The most hopeful possible outcome is that we will gently meld over the next decades with our machines, rather than developing ASI separate from ourselves. Augmented by AI, we will become essentially immortal and thousands of times more intelligent than we currently are.

Ray Kurzweil, Google's director of engineering and the author of The Singularity Is Near, is the best-known proponent of this benign scenario. Barrat counters that many people will resist AI enhancements and that, in any case, an independent ASI with alien drives and goals of its own will be produced well before the process of upgrading humanity can take place.

To forestall Skynet and other tech terrors, Sun Microsystems co-founder Bill Joy has argued for a vast technological relinquishment in which whole fields of research are abandoned. Barrat correctly rejects that notion as infeasible. Banning open research simply means that it will be conducted out of sight by rogue regimes and criminal organizations.

Barrat concludes with no grand proposals for regulating or banning the development of artificial intelligence. Rather he offers his book as "a heartfelt invitation to join the most important conversation humanity can have." His thoughtful case about the dangers of ASI gives even the most cheerful technological optimist much to think about.