Will Our Robot Overlords Be Friendly?

Notes from the Singularity Summit in New York City.

NEW YORK—The singularity draws nigh. A happy band of technophiles, futurists, transhumanists, and, yes, singularitarians gathered in New York City this past weekend to talk about prospects for life before and after the technological creation of smarter-than-human intelligence. The phenomenon gets its name from science fiction writer Vernor Vinge, who analogized a future full of super-smart artificial intelligences (AIs) to the way black holes work. Black holes are singularities, surrounded by event horizons past which outside observers simply cannot see. Self-improving super-smart AIs will so radically speed up the pace of technological change that it is simply impossible to describe what the future will look like afterward.

But that doesn't stop people from trying to peek beyond the event horizon into the post-singularity future. Convened by the Singularity Institute for Artificial Intelligence (SIAI) for the first time on the East Coast, this fourth annual meeting attracted about 900 participants. The SIAI was created to address the urgent problem of how to create super-smart AIs that are friendly to human beings. The worry is that, unless we are very careful, AIs might evolve value systems that treat us as annoying organic matter that should be more usefully turned into computronium. As the Singularity Institute's Anna Salamon explained in her opening presentation at the summit, smarter intelligences might choose to get rid of us because our matter is not optimally arranged to achieve their goals.

One way to wind up with human-friendly AIs is to build them on top of uploaded human brains. Anders Sandberg, a research fellow at the Future of Humanity Institute at Oxford University, offered a technical roadmap for whole brain emulation. "If artificial intelligence does not get here first, we're going to end up with uploaded humans," maintained Sandberg. He argued that it is possible to foresee how developments in neuroscience, software, and hardware are leading toward the emulation of specific people's brains, and he believes that emulating a human brain is only 20 years away. Sandberg did observe that we do not know whether a one-to-one emulation would produce a mind. A huge advantage of an uploaded mind is that it would no longer be constrained by the speed at which organic brains can process information. He offhandedly noted that he would not be the first volunteer for an emulation experiment.

Randal Koene, the director of the Department of Neuroengineering at the Fatronik-Tecnalia Foundation in Spain, argued that now is the time to pursue mind uploading. Radically increasing human longevity solves a few problems, Koene noted, but it doesn't deal with our dependence on changeable environments and scarce resources. Nor does it deal with our intellectual limitations, death by traumatic accident, or disease. Koene's "inescapable conclusion" was that we must free the mind from its single fragile substrate. How might you copy a brain? Perhaps one could use a more advanced version of the knife-edge scanning microscope that currently enables the reconstruction of mouse brain architecture. This is what is known among uploading cognoscenti as destructive scanning, and it had better work, because the old organic brain is sliced up in order to produce the emulation. To solve the problem of maintaining a sense of continuity, Koene suggested that one pathway might be to use molecular nanotechnology to replace parts of the brain bit by bit over time.

Philosopher David Chalmers, who directs the Centre for Consciousness at the Australian National University, argued that personal identity would be maintained if the functional organization of the upload were the same as the original's. Gradual uploading might also be a way to maintain personal identity. Chalmers also speculated about reconstructive uploading, in which a super-smart AI would scour the world for information about a person: articles, video, audio, blog posts, restaurant receipts, whatever. The AI would then instantiate that information in the appropriate substrate. "Is it me?" asked Chalmers. Maybe. On the optimistic view, being reconstructed from the informational debris you left behind would be like waking up from a long nap.

Inventor Ray Kurzweil, author of The Singularity Is Near, envisions an intimate integration between humans and their neural prosthetics. Over time, more and more of the neural processing that makes us who we are will be located outside our bodies and brains, so that "uploading" will take place gradually. Our uploaded minds will function much faster and more precisely than our "meat" minds do today. In a sense, we will become the singularity as the artificial parts of our intelligences become ascendant.

If the AIs aren't made of people, though, guaranteeing that they will be human-friendly is much more difficult. Chalmers suggested that perhaps we could launch a super-smart self-improving AI inside a leak-proof computing environment and see how it evolves. If it turns out to be nice, then we let it out, the singularity takes off, and we're all happy. Kurzweil objected that a leak-proof singularity is impossible. In order to determine whether the AI was friendly, we would have to look inside its computing environment, and, since it is so much smarter than we are, it would necessarily become aware of us and then manipulate us into letting it out. In other words, it could pretend to be friendly and then zap us once it's loose in our world.

Given these dangers, why is everyone so excited about the singularity? Peter Thiel, co-founder of PayPal, venture capitalist, and supporter of the Singularity Institute, began his talk on the economics of the singularity by asking the audience to vote on which of seven scenarios they are most worried about. (See Reason's interview with Thiel here.) The totals below are my estimates from watching the audience as they raised their hands:

A. The singularity happens and robots kill us all, the Skynet scenario (5 percent)
B. Biotech terrorism using something more virulent than smallpox and Ebola combined (30 percent)
C. Nanotech grey goo escapes and eats up all organic matter (5 percent)
D. Israel and Iran engage in a thermonuclear war that goes global (25 percent)
E. A one-world totalitarian state arises (10 percent)
F. Runaway global warming (5 percent)
G. The singularity takes too long to happen (30 percent)

Thiel argued that the last scenario, the singularity taking too long to happen, is the one that worries him. "The good singularity is the most important social, political, economic, and technological issue confronting us," declared Thiel. Why? Because without rapid technological progress, economic growth in already developed countries like the U.S., Western Europe, and Japan is not going to be enough to address looming needs. Without fast economic growth producing more wealth, Americans might be driven to saving 40 percent of their incomes and retiring at age 80.
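To get a rough feel for the arithmetic behind that claim, consider a toy retirement calculation. The simplifying assumptions here are mine, not Thiel's: a worker saves a fixed fraction of a constant income at a constant real rate of return, then spends at the pre-retirement level for a fixed number of retired years. The sketch below solves for the required savings rate.

```python
# Toy retirement arithmetic (illustrative only; my assumptions, not Thiel's).
# A worker saves a fraction s of income for W working years at real return r,
# then spends (1 - s) of income for R retired years. Setting savings equal to
# retirement needs,
#   s * FV_annuity(r, W) = (1 - s) * PV_annuity(r, R),
# and solving for s gives the required savings rate s = PV / (FV + PV).

def annuity_fv(r: float, n: int) -> float:
    """Future value of 1 unit saved per year for n years at rate r."""
    return n if r == 0 else ((1 + r) ** n - 1) / r

def annuity_pv(r: float, n: int) -> float:
    """Present value of 1 unit spent per year for n years at rate r."""
    return n if r == 0 else (1 - (1 + r) ** -n) / r

def required_savings_rate(r: float, working_years: int, retired_years: int) -> float:
    fv = annuity_fv(r, working_years)
    pv = annuity_pv(r, retired_years)
    return pv / (fv + pv)

# Stagnation scenario: zero real growth, 40 working years, 25 retired years.
print(f"{required_savings_rate(0.00, 40, 25):.0%}")  # ~38% of income
# Healthy growth: 5% real returns over the same career.
print(f"{required_savings_rate(0.05, 40, 25):.0%}")  # ~10% of income
```

Under zero real growth, the required savings rate lands in the neighborhood Thiel warned about; a few points of sustained real return collapse the burden to roughly a tenth of income.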

So what if Thiel's fears are realized and the singularity takes a while to arrive? If you want to be alive when it gets here, Aubrey de Grey, a biomedical gerontologist and chief science officer of the SENS Foundation, outlined his proposals for anti-aging research. In his view, progress in regenerative medicine could achieve longevity escape velocity, in which researchers develop anti-aging therapies faster than a person approaches death from aging.
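The idea has a simple quantitative core. In the toy model below, which is my own illustration rather than de Grey's math, each calendar year burns one year of remaining life expectancy while new therapies add some number of years back; escape velocity is just the regime where the annual gain exceeds one.

```python
# Toy model of longevity escape velocity (my illustration, not de Grey's math).
# Each calendar year consumes 1 year of remaining life expectancy, while new
# therapies add `annual_gain` years back. If annual_gain > 1, remaining
# expectancy grows instead of shrinking: that is escape velocity.

def years_until_death(remaining: float, annual_gain: float, horizon: int = 200) -> float | None:
    """Years lived until expectancy hits zero, or None if still alive at the horizon."""
    for year in range(1, horizon + 1):
        remaining += annual_gain - 1.0
        if remaining <= 0:
            return year
    return None

print(years_until_death(30, 0.5))  # therapies too slow: dead in 60 years
print(years_until_death(30, 1.2))  # past escape velocity: None (alive at horizon)
```

The interesting boundary is an annual gain hovering near one, which is exactly the race between therapy development and aging that de Grey described.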

Singularity Institute fellow Anna Salamon argued during the summit that an intelligence explosion, or singularity, can't be controlled once it starts. She likened the situation to trying to put a leash on an exploding atomic bomb. Creating a super-smart friendly AI is the "hardest goal humans have ever tried."

Ronald Bailey is Reason magazine's science correspondent. His book Liberation Biology: The Scientific and Moral Case for the Biotech Revolution is now available from Prometheus Books.