A few decades ago, the most popular science fiction epics were works like Isaac Asimov’s Foundation trilogy or Frank Herbert’s Dune series—stories set thousands or even tens of thousands of years in the future but involving human beings more or less like us and societies more or less like our own, albeit with more advanced technology. Today, by contrast, many of the genre’s top writers are unwilling to speculate more than 20 years ahead. The acceleration of technological advance, they argue, has begun to make traditional visions of far-future humanity look increasingly myopic and parochial.
One increasingly popular vision of that rapidly accelerating progress is called the Technological Singularity (or, sometimes, just the Singularity)—a concept evoked not just in science fiction novels by the likes of Charles Stross and Bruce Sterling but in works of speculative nonfiction, such as the futurist Ray Kurzweil’s popular 2005 book The Singularity Is Near. No name is linked more tightly to the idea of the Singularity than that of Vernor Vinge, 63, who for four decades has written stories about the ways humanity and its technologies are building a future that may be impossible for us even to imagine. “It seems plausible,” Vinge says, “that with technology we can, in the fairly near future, create or become creatures who surpass humans in every intellectual and creative dimension. Events beyond such a singular event are as unimaginable to us as opera is to a flatworm.”
Vinge, who was also one of the first science fiction writers to conceive of cyberspace, formalized these ideas in an essay written for NASA in 1993 and published later that year in the Whole Earth Review. The article noted several trends that together or separately might lead to the Singularity: artificial intelligence, which could lead to “computers that are ‘awake’ and superhumanly intelligent”; computer networks that achieve such intelligence; human-computer interfaces that “become so intimate that users may reasonably be considered superhumanly intelligent”; and biological improvements to the human intellect. “Within thirty years, we will have the technological means to create superhuman intelligence,” Vinge predicted, adding somewhat ominously that “shortly after, the human era will be ended.”
A number of Vinge’s novels, including Marooned in Realtime (1986) and A Fire Upon the Deep (1992), have dealt obliquely with the concept, typically by telling the stories of human beings who have escaped, one way or another, the Singularity’s explosive transformations. But in his most recent book, last year’s Rainbows End (just out in paperback from Tor), Vinge comes toe to toe with imminent change. Humanity, within a couple of decades of the present day, is on the brink of something transformational—or else on the brink of destruction—in large part because almost everyone is connected, usually via wearable computers, to tomorrow’s Internet. The result is a struggle between those who would hobble our independence to make us safer and those who are willing to risk skirting the edge of destruction to see where the Singularity takes us. It’s just the quality of speculation you’d expect from an author whose previous novel, A Deepness in the Sky (1999), won not only a Hugo Award but also a Prometheus Award for the best libertarian novel of the year.
Vinge is a mathematician and computer scientist as well as a novelist; he is now retired from his faculty position at San Diego State University to a life of writing, lecturing, and consulting. Contributing Editor Mike Godwin interviewed him, in part via email, in the weeks following his appearance last year as a guest of honor at the 16th annual Computers, Freedom & Privacy conference in Washington, D.C. At that conference Vinge spoke about both the Singularity and a “convergence” of technological trends that threaten to drastically limit individual freedom.
Reason: In your speech you foresaw efforts to build ubiquitous monitoring or government controls into our information technology. What’s more, you suggested that this wasn’t deliberate—that the trend is happening regardless of, or in spite of, the conscious choices we’re making about our information technology.
Vernor Vinge: I see an implacable government interest here, and also the convergence of diverse nongovernmental interests—writers unions, Hollywood, “temperance” organizations of all flavors, all with their own stake in exploiting technology to make people “do the right thing.”
Reason: Do you believe this pervasive monitoring and/or control might stall the Singularity?
Vinge: I think that if the Singularity can happen, it will. There are lots of very bad things that could happen in this century. The Technological Singularity may be the most likely of the noncatastrophes.
Except for their power to blow up the world, I think governments would have a very hard time blocking the Singularity. The possibility of governments perverting the Singularity is somewhat more plausible to me. (Who wrote the story with the newspaper headline “Today Parliament Met and Abolished the People”?) In A Deepness in the Sky the Singularity didn’t happen, but not because of governments. On the other hand, A Deepness in the Sky showed how government could use technology to create a whole new level of tyranny.
But in my speech, I also wanted to raise the possibility that these abuses may turn out to be irrelevant. There is a national interest, and not just in America, in providing the illusion of freedom for the millions of people who need to be happy and creative to make the economy go. Those people are more diverse and distributed and resourceful and even coordinated than any government.
That’s a power we already have in free markets. Computer networks, supporting data and social networks, give this trend an enormous boost. In the end that illusion of freedom may have to be more like the real thing than any society has ever achieved in the past, something that could satisfy a new kind of populism, a populism powered by deep knowledge, self-interest so broad as to reasonably be called tolerance, and an automatic, preternatural vigilance.
Reason: It’s now more than 20 years after you first started writing about the Singularity and more than a dozen since you presented your ideas in a paper about it. Are we still on track?
Vinge: I think so. In 1993 I said I’d be surprised if the Technological Singularity happened before 2005—I’ll stand by that!—or after 2030. It’s also possible the Singularity won’t happen at all.
Reason: What kinds of things might prevent the Singularity from happening?
Vinge: First, physical disasters—the usual range of ugly, violent threats to civilization—could intervene. I think the most likely existential threat is simply nuclear warfare between nation-states, most especially MAD [mutual assured destruction] strategy wars. We are so distracted and (properly!) terrified about nuclear terrorism these days that we tend to ignore the narrow passage of 1970 to 1990, when tens of thousands of nukes might have been used in a span of days, perhaps without any conscious political trigger. A return to MAD is very plausible, and when combined with the various likely natural hardships (climate, fresh water issues), it’s a civilization killer. Perhaps a human race killer. If the nation-states don’t blow us all up (and again, if there is no Singularity), then eventually terrorism becomes an existential threat, the reductio ad absurdum example being when technology puts the ability to devastate continents at the fingertips of anyone having a bad hair day.