Culture

Superhuman Imagination

Vernor Vinge on science fiction, the Singularity, and the state


A few decades ago, the most popular science fiction epics were works like Isaac Asimov's Foundation trilogy or Frank Herbert's Dune series—stories that were set thousands or even tens of thousands of years in the future but involved human beings more or less like us and societies more or less like our own, but with more advanced technology. Today, by contrast, many of the genre's top writers are unwilling to speculate more than 20 years ahead. The acceleration of technological advance, they argue, has begun to make traditional visions of far-future humanity look increasingly myopic and parochial.

One increasingly popular vision of that rapidly accelerating progress is called the Technological Singularity (or, sometimes, just the Singularity)—a concept evoked not just in science fiction novels by the likes of Charles Stross and Bruce Sterling but in works of speculative nonfiction, such as the futurist Ray Kurzweil's popular 2005 book The Singularity Is Near. No name is linked more tightly to the idea of the Singularity than that of Vernor Vinge, 63, who for four decades has written stories about the ways humanity and its technologies are building a future that may be impossible for us even to imagine. "It seems plausible," Vinge says, "that with technology we can, in the fairly near future, create or become creatures who surpass humans in every intellectual and creative dimension. Events beyond such a singular event are as unimaginable to us as opera is to a flatworm."

Vinge, who was also one of the first science fiction writers to conceive of cyberspace, formalized these ideas in an essay written for NASA in 1993 and published later that year in the Whole Earth Review. The article noted several trends that together or separately might lead to the Singularity: artificial intelligence, which could lead to "computers that are 'awake' and superhumanly intelligent"; computer networks that achieve such intelligence; human-computer interfaces that "become so intimate that users may reasonably be considered superhumanly intelligent"; and biological improvements to the human intellect. "Within thirty years, we will have the technological means to create superhuman intelligence," Vinge predicted, adding somewhat ominously that "shortly after, the human era will be ended."

A number of Vinge's novels, including Marooned in Realtime (1986) and A Fire Upon the Deep (1992), have dealt obliquely with the concept, typically by telling the stories of human beings who have escaped, one way or another, the Singularity's explosive transformations. But in his most recent book, last year's Rainbows End (just out in paperback from Tor), Vinge comes toe to toe with imminent change. Humanity, within a couple of decades of the present day, is on the brink of something transformational—or else on the brink of destruction—in large part because almost everyone is connected, usually via wearable computers, to tomorrow's Internet. The result is a struggle between those who would hobble our independence to make us safer and those who are willing to risk skirting the edge of destruction to see where the Singularity takes us. It's just the quality of speculation you'd expect from an author whose previous novel, A Deepness in the Sky (1999), won not only a Hugo Award but also a Prometheus Award for the best libertarian novel of the year.

Vinge is a mathematician and computer scientist as well as a novelist; he is now retired from his faculty position at San Diego State University to a life of writing, lecturing, and consulting. Contributing Editor Mike Godwin interviewed him, in part via email, in the weeks following his appearance last year as a guest of honor at the 16th annual Computers, Freedom & Privacy conference in Washington, D.C. At that conference Vinge spoke about both the Singularity and a "convergence" of technological trends that threaten to drastically limit individual freedom.

Reason: In your speech you foresaw efforts to build ubiquitous monitoring or government controls into our information technology. What's more, you suggested that this wasn't deliberate—that the trend is happening regardless of, or in spite of, the conscious choices we're making about our information technology.

Vernor Vinge: I see an implacable government interest here, and also the convergence of diverse nongovernmental interests—writers unions, Hollywood, "temperance" organizations of all flavors, all with their own stake in exploiting technology to make people "do the right thing."

Reason: Do you believe this pervasive monitoring and/or control might stall the Singularity?

Vinge: I think that if the Singularity can happen, it will. There are lots of very bad things that could happen in this century. The Technological Singularity may be the most likely of the noncatastrophes.
Except for their power to blow up the world, I think governments would have a very hard time blocking the Singularity. The possibility of governments perverting the Singularity is somewhat more plausible to me. (Who wrote the story with the newspaper headline "Today Parliament Met and Abolished the People"?) In A Deepness in the Sky the Singularity didn't happen, but not because of governments. On the other hand, A Deepness in the Sky showed how government could use technology to create a whole new level of tyranny.

But in my speech, I also wanted to raise the possibility that these abuses may turn out to be irrelevant. There is a national interest, and not just in America, in providing the illusion of freedom for the millions of people who need to be happy and creative to make the economy go. Those people are more diverse and distributed and resourceful and even coordinated than any government.

That's a power we already have in free markets. Computer networks, supporting data and social networks, give this trend an enormous boost. In the end that illusion of freedom may have to be more like the real thing than any society has ever achieved in the past, something that could satisfy a new kind of populism, a populism powered by deep knowledge, self-interest so broad as to reasonably be called tolerance, and an automatic, preternatural vigilance.

Reason: It's now more than 20 years after you first started writing about the Singularity and more than a dozen since you presented your ideas in a paper about it. Are we still on track?

Vinge: I think so. In 1993 I said I'd be surprised if the Technological Singularity happened before 2005—I'll stand by that!—or after 2030. It's also possible the Singularity won't happen at all.

Reason: What kinds of things might prevent the Singularity from happening?

Vinge: First, physical disasters—the usual range of ugly, violent threats to civilization—could intervene. I think the most likely existential threat is simply nuclear warfare between nation-states, most especially MAD [mutual assured destruction] strategy wars. We are so distracted and (properly!) terrified about nuclear terrorism these days that we tend to ignore the narrow passage of 1970 to 1990, when tens of thousands of nukes might have been used in a span of days, perhaps without any conscious political trigger. A return to MAD is very plausible, and when combined with the various likely natural hardships (climate, fresh water issues), it's a civilization killer. Perhaps a human race killer. If the nation-states don't blow us all up (and again, if there is no Singularity), then eventually terrorism becomes an existential threat, the reductio ad absurdum example being when technology puts the ability to devastate continents at the fingertips of anyone having a bad hair day.

A second possibility, less likely in my opinion, is that maybe it will turn out that whatever the hardware power of our new computers, we simply never figure out how to connect the parts. In one form or another, I think this is where most thoughtful skepticism resides.

Still less likely, but possible: Maybe we have drastically misestimated the raw hardware power of what we carry between our ears. Hans Moravec had a very nice estimate of our "raw hardware power" in his 1988 book Mind Children. His estimate puts us humans way ahead of contemporary machines (and makes predictions of contemporary failures of A.I. [artificial intelligence] quite plausible). On the other hand, his estimate, together with the possibility that Moore's Law [the observation that the computing power of new microprocessors doubles every 18 months to two years] continues for a decade or two, makes it plausible that very interesting A.I. developments might occur before 2030.

On the other hand, if one looks inside an individual neuron, one could argue that it is much more computationally competent than a microprocessor. This is without invoking quantum or mystical hocus-pocus. There was a researcher at the Thinking Machines meeting in 1992 who saw the possibility of computation taking place via the neuron's microtubules.

Certainly if one looks at all the stuff going on inside any cell, there is very significant computation. The question is, how much of that is needed to support the brain's ensemble behavior? The high-end estimates of neuron computational competence could push the Moore's Law crossover point significantly further into the future.
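To make the arithmetic behind that crossover concrete, here is a minimal sketch of the calculation, using purely illustrative figures rather than anything from Vinge or Moravec: under steady Moore's Law doubling, every extra factor of 1,000 in the brain's estimated processing power pushes the crossover date back by roughly 20 years.

```python
# Illustrative sketch only: how a Moore's Law "crossover" estimate works.
# The specific figures below are assumptions chosen for the example,
# not numbers from Vinge or Moravec.

import math

def crossover_year(start_year, machine_ops, brain_ops, doubling_years):
    """Year when machine performance, doubling every `doubling_years`,
    first matches the assumed brain performance."""
    doublings_needed = math.log2(brain_ops / machine_ops)
    return start_year + doublings_needed * doubling_years

# Assumed: a 2007-era machine at ~1e11 ops/sec, a low-end brain estimate
# of ~1e14 ops/sec, and a two-year doubling time.
print(round(crossover_year(2007, 1e11, 1e14, 2)))  # ~2027

# A high-end estimate of per-neuron computation (say ~1e17 ops/sec for the
# whole brain) pushes the crossover decades further out.
print(round(crossover_year(2007, 1e11, 1e17, 2)))  # ~2047
```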

Reason: What has changed since 1993? What has turned out different or slower or faster than you expected?

Vinge: In the 1993 essay, I categorized approaches to the Singularity into four groups. As time passes, some of these paths seem more likely than others, though that could change again and again, and the ultimate outcome will probably be some combination of approaches. The most intriguing trend over the last few years has been the interactions between people, networks, computers, and databases, perhaps an ensemble critter very much like [biophysicist and science writer] Gregory Stock's "metaman" [Stock's name for a "superorganism" consisting of humanity plus its technology].

At the same time, we are at the beginning of an era where computer power and storage are plausibly comparable to those of some animals. Before, this coming parity was simply a talking point. But now, questions about awareness in machines—and in people—should be subject to new insights. Thus, I'd expect these sorts of discussions to become increasingly substantive.

Reason: I increasingly encounter the assumption that consciousness is going to emerge from the growth of processing power and networks.

Vinge: I don't want to make the assumption that it would just naturally happen. Mucho human genius will likely be necessary to "connect the parts" for the first time, even if the connections are simply the setup for intellectual growth. However, hardware parity between humans and computers is at least a reason for being more optimistic about the possibility of success.

Reason: Some philosophers have suggested that you need to have an actual body, with feelings, to have an emergent consciousness. Is this view just meat-space parochialism?

Vinge: Possibly. But in fact, this has been a selling point for mobile-robot research. Maybe emotions arise most naturally out of specializing broadly defined goals to threats and promises in the complexity of the real world. Of course, this is not to imply that the computer itself must be in the mobile [robot].

Reason: You speculated about the Singularity in your novel Marooned in Realtime, the sequel to The Peace War (1984). Did it inform your writing of The Peace War as well?

Vinge: It's informed most of my writing back to my short story "Bookworm, Run!" [1965], though I didn't use the term Singularity until a panel discussion at AAAI '82 [the 1982 conference of the American Association for Artificial Intelligence]. In the case of The Peace War, I think it's there, but far offstage.

Reason: What are the sources of your vision of the Singularity?

Vinge: It's a truism that science fiction is always about the present. That is, the stories are simply a reflection of the concerns of the era in which they are written. That's a good insight, but imprecise: Science fiction is almost always a reflection of the author's present. Looking back, I see how I was immersed in stories that pointed in this direction, including stories by Olaf Stapledon, Poul Anderson, and John W. Campbell Jr. Entire generations of science fiction writers had enchanted me with visions of how different the future could be. Many of these writers had speculated on the consequences of superintelligence. The notion that those consequences might be in the near future was often missing, but by the time of my childhood it was obvious to anyone of overweening optimism.

When I was a 10-year-old reading science fiction, there had already been essays by people like [scientist and sociologist] J.D. Bernal and [engineer and computing pioneer] Vannevar Bush. And through the late 1940s and 1950s there was [statistics and probability theorist] I.J. Good. His 1965 paper (which I didn't find until probably the 1990s) "Speculations Concerning the First Ultraintelligent Machine" includes almost exactly the notion of the Technological Singularity, though without using that terminology.

Reason: What is the role of science fiction in helping us cope with a transformation you believe many of us will live to see?

Vinge: I think science fiction can have all the power of conventional literature, but with the added potential for providing us with vivid, emotionally grounded insights into the future and into alternative scenarios. Speaking grandiosely, science fiction might be taken as having the role for humanity that sleep dreaming has for the individual. Sleep dreams are mostly nonsense, but sometimes we wake up with the stark realization that we have underestimated a possibility or a goodness or a threat.

Reason: Who are the most inspiring writers you've been reading lately?

Vinge: In connection with these topics, David Brin, Greg Egan, Karl Schroeder, Bruce Sterling, Charles Stross—and I fear I am missing others. In nonfiction, Ray Kurzweil and Hans Moravec. Hans is awesome, a more radical explorer than just about any science fiction writer.

Reason: You dedicate Rainbows End "to the Internet-based cognitive tools that are changing our lives—Wikipedia, Google, eBay, and the others of their kind, now and in the future." What's the story behind this dedication?

Vinge: I regard the current Internet as a test bed for the cognitive coordination of people and databases and computers. Tools such as Google, eBay, and Wikipedia are—I hope—harbingers of much more spectacular developments.

Reason: I notice that people of every age group are now reflexively using search engines—not just to answer questions but to find people, form groups of like-minded folks, and so on. It now feels like such a reflex that, on the rare occasions when I'm not connected to the Internet, I feel sort of hobbled or cut off.

Vinge: Me too!

Reason: In Marooned in Realtime and in Rainbows End you have couples that get together, break up, and then live long enough and change enough that they find each other again. On one hand, that doesn't seem itself too terribly science fictional, but on the other hand, until recently most of us didn't live long enough for such change and rapprochement to happen. It seems to me that learning how to live a long time is going to be a major task for us as individuals and as a society.

Vinge: Yes, and I hope it will be mainly a happy task!

Reason: It seemed like an interesting metaphor about how other long-term relationships in our lives can change or go sour.

Vinge: It would be interesting to see, if people could live in energetic good health for an additional few decades, how many would go back and undo mistakes that in past eras might only be the subject of pointless regret.

Reason: There doesn't seem to be much of a guarantee in your works, or in Kurzweil's, that the Singularity is going to be very pleasant or happy-making.

Vinge: Speaking for myself, that's true. No guarantees. But if the Singularity were in prospect for 1,000 years from now, I think that many, including the likes of Ben Franklin, would regard it as the meliorist outcome of all the human striving down the centuries. It's the possibility that it could happen in the next 20 years that's scary!

Contributing Editor Mike Godwin is a research fellow at Yale University and a research scientist for the PORTIA Project.
