Science Fiction: A Conversation with Author Vernor Vinge


Vernor Vinge is a former San Diego State University math professor and a Hugo Award-winning science fiction novelist. In his 1993 essay "The Coming Technological Singularity," Vinge wrote, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

We sat down with Vinge to learn more about his influences, his novels and the coming singularity.

Vinge's latest novel, The Children of the Sky, will be released in October 2011.

Produced by Paul Feine, Alex Manning and Zach Weissmueller.

Approximately 7 minutes.




  1. Within thirty years, we will have the technological means to create superhuman intelligence

    That’s a rather bold statement considering we haven’t even gotten remotely close yet to anything resembling an AI.

    1. All we need is fusion. With fusion, we’ll get AI.

      Actually, I think the nut will be cracked to some extent in this century, but I’m dubious that it will be in the first half.

      1. What does fusion have to do with artificial intelligence? In fact, it is much more likely that if we do achieve AI, that is what will give us fusion, not the other way around.

        1. Fusion is twenty years away.
          True AI is twenty years away.
          Therefore, we’ll have one when we have the other. QED.

          Also, flying cars.

          1. I assume you are referring to so-called “cold” fusion?

            1. Hell, I would even go for hot fusion that is commercially viable for generating electricity.

          2. Flying cars are five years away.
            Flying away from us.

            1. Cars already exist. So do airplanes. So do helicopters. How would the libertarian dream (or cliché) of a flying car be an improvement over a simple helicopter, which can and does go places that a flying car never could or will?

              1. The idea is that your average person could afford a flying car. A helicopter… not so much.

          3. fusion is much closer than anyone is expecting.

            There’s a surprising amount of small-scale hot fusion research out there by small startup groups. It’s becoming rapidly clear that large-scale tokamaks were basically just a huge gov’t-sponsored boondoggle.

      2. All we need is fusion. With fusion, we’ll get AI.

        …then you get the women

      3. All we need is fusion. With fusion, we’ll get AI.

        Will humans be used with a special kind of Fusion to generate power for AI?

        Anyone else totally lose their suspension of disbelief when this bullshit was revealed in The Matrix?

        1. Yes. IIRC, I groaned loudly in the theater at this.

      4. AI is a matter of developing / encountering an effective algorithm or set of algorithms — no more than that. Consequently, arrival will be a surprise to everyone, with the possible exception of the initial developer / encountering individual(s). Though I suspect they’ll be just as surprised as the rest of us will be.

        There’s no observable “almost there” curve in this; either you have AI, with free will and imagination and inference and induction, or you don’t. It’s a binary change in state, not a progression up a ramp. This algorithm produces AI; this one does not.

        Further, once AI is discovered, invented or encountered, we will find that it can be implemented on much lesser hardware than the rank and file have been led to believe; this is because an answer, an apt induction, or an insightful inference is just as correct — or not — when delivered at a slower rate as when produced in time comparable with how fast we do it.

        Likewise, when AI exceeds the rates at which we can do things, it is no less AI… just faster. All this does is impact the usability of the information — not its relationship to intelligence.

        Vinge’s prediction of 30 years is no more accurate than those which say “never” or those that say, as here, “latter half of the century” or “in a thousand years.” Or “tomorrow.”

        It’ll happen when it happens. And just like human intelligence, throwing a baseball, or juggling, we won’t have to understand the details of it implicitly or explicitly to invent or discover it, to use it, or to be affected by it. We might not even have to be trying; it may arise as a consequence of some other effort.

        Computing is replete with examples of sonorous claims that “that’s too hard to solve,” followed (usually elsewhere) by the discovery that not only is the issue solvable, but sometimes even trivially so. It is quite common to encounter attempts to solve problems with “ivory tower” academic approaches that are almost entirely ineffective; walking robots are a good example. That problem was first solved — and very well — with about 10K of Z80 assembler and a data technique called fuzzy logic. It turns out that differential equations weren’t the right path at all, or at least not with a practical amount of computing power.

        Chess; walking; speech recognition; speech synthesis; computer vision; (not vision interpretation… just vision issues like segmentation, motion detection, etc.); context sensitive / associative / auto-structured memory; expert systems; all were thought too hard to solve at one point. All fell to one or more algorithms. Traditional objections of the “NP complete” (or not) form almost always fall to a different approach, rather than proof that the claim was wrong.

        I’ll give you a simple example. Initially, chess was deemed hard to solve because the game offers an astronomical number of possible moves from move one (the late game is easier, as it turns out). So the thinking was, “we can’t figure all those out in realtime, ergo we can’t solve the problem.” But, as it turns out, many (most?) of those moves are really, really bad moves, obviously and trivially determined to be so, and this means that most move chains don’t have to be further analyzed. This led directly to solutions — that is, high-performance chess-playing algorithms.
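The pruning idea described here (abandoning a line of play as soon as it is provably worse than an option already in hand) is what alpha-beta search formalizes on top of minimax. A minimal sketch in Python; the toy game tree and its leaf values are invented purely for illustration, and real chess engines add move ordering, transposition tables, and much else:

```python
# Minimax with alpha-beta pruning over a toy game tree.
# Leaves are static evaluations; internal nodes are lists of children.
# The "pruned" counter records how many sibling subtrees were never examined.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf"), stats=None):
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    best = float("-inf") if maximizing else float("inf")
    for i, child in enumerate(node):
        val = alphabeta(child, not maximizing, alpha, beta, stats)
        if maximizing:
            best = max(best, val)
            alpha = max(alpha, best)
        else:
            best = min(best, val)
            beta = min(beta, best)
        if beta <= alpha:                   # remaining siblings cannot affect the result
            if stats is not None:
                stats["pruned"] += len(node) - i - 1
            break
    return best

# A small hand-made tree (hypothetical values, not a real chess position).
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
stats = {"pruned": 0}
print(alphabeta(tree, True, stats=stats))   # best achievable value: 5
print(stats["pruned"])                      # 2 subtrees were cut off unexamined
```

Even on this tiny tree, two branches are discarded without evaluation; on a real chess tree the fraction pruned is what makes deep search tractable.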

        Basically, things look really hard… until they aren’t any longer. And that’s what you’ll see with AI, and most other computing challenges of note. AI, however, will be a game changer for many obvious reasons.

        Take care you know your pundit when you pick one. For instance, Minsky, at MIT, is known to have pulled a complete fail with neural net analysis; he wrote (co-authored with Seymour Papert) an entire book (“Perceptrons”) calling them out as unable to do this, that, and the other thing… and was proved wrong in exactly the usual way — the approach they envisioned wasn’t the one that worked. So when he says AI is “too hard” and “we were on the wrong track,” you might want to take that with a grain (or a shaker) of salt.

        The bottom line today is that we don’t know what “I” is; to go from this state to authoritative declarations of how difficult it will (or won’t) be to develop in another form is ludicrous. It’ll come when it’s discovered or invented, and the one thing I *am* willing to say is we won’t see it coming.

    2. That’s a rather bold statement considering we haven’t even gotten remotely close yet to anything resembling an AI.

      Cars that can drive themselves seem like pretty good AI.

      Also Google and search engines can identify and sort information pretty damn well.

      I do like the car driving as the best example. For like 10 years there was a competition to build a car that could drive across several miles of dirt road, and for like 9 years all cars failed…until the 10th year, in which two cars completed it.

      Both winners said it was the advance of computing power that allowed them to complete the course…not some new insight into AI…or some super AI software…just raw processing power.

      The implication being that the only thing needed for AI is not some new insight or super-smart programmer…all that is needed is for processors to get faster.

      1. I’m super pro-robo-car myself, but general AI and robo-car AI are two completely different beasts. Building a general AI system is, I’d think, at least 40 years away from today. We are so far from a solution; merely stitching together specific-purpose AI algorithms is not going to get us there.

  2. Episiarch must be assimilated.

    1. I don’t think standard USB inputs would suffice for an assimilated Episiarch.

      1. The Borg are wireless now. It’s all done through the cloud.

  3. Two things. One, I can’t wait for the new book to come out and Two:

    I owe robc an apology. I reread Deepness and it is far more explicitly libertarian than I recalled. I can’t remember which sci-fi thread I showed my ass on about that, but I’ll admit it in the Vinge thread.

    1. It’s kind of hard to find, but try looking up his short story “The Ungoverned,” which takes a look at very high-tech anarchy. It’s set in between the novels “The Peace War” and “Marooned in Realtime”.

    2. No problem, I didn’t feel insulted.

      OT: GOP women have one useful purpose: they can run a county clerk’s office. After 20+ years of them running it, it’s a smooth, quick, and painless process now.

      1. Hey, it didn’t hurt my feelings re-reading it, either.

    3. Related to Deepness and the singularity, AI and the singularity are some of the “failed dreams”.

      1. AI and the singularity are some of the “failed dreams”.

        You’re writing off something that can still occur. Great predictive power.

        “I think there is a world market for maybe five computers.”
        –Thomas Watson, chairman of IBM

        1. Ummm…Deepness is set millennia from now, hence them being called the “failed dreams”.

          1. sorry, bonking on that you were referring to the book.

            /must be in the unthinking depths

        2. And also, one of the main alien characters agreed with you, he thought humans were silly for thinking these things were impossible because they had failed to achieve them.

  4. The basis for a lot of the “singularity” theories rests upon the idea behind Moore’s law, the “law” stating that the number of transistors that can fit on an integrated circuit doubles every two years.

    The problem is that this law is reaching its limits, and if anything it’s beginning to plateau. This should add a healthy dose of skepticism regarding any prediction of “when” the singularity will be reached, considering that one of the fundamental laws supporting the prediction has its own current limitations.
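The doubling claim is just compound growth: a count after n years of `count(0) * 2**(n/2)`. A back-of-the-envelope sketch, anchored (as is conventional) to the Intel 4004's 2,300 transistors in 1971:

```python
# Transistor count under a strict "doubling every two years" reading of
# Moore's law. The anchor point is the Intel 4004 (2,300 transistors, 1971);
# everything else follows from the exponent.

base_year, base_count = 1971, 2300

def moores_law(year):
    return base_count * 2 ** ((year - base_year) / 2)

for year in (1971, 1991, 2011):
    print(year, f"{moores_law(year):,.0f}")
# 1971 -> 2,300
# 1991 -> 2,355,200
# 2011 -> 2,411,724,800
```

The 2011 figure (roughly 2.4 billion) is in the right ballpark for high-end chips of that era, which is exactly why extrapolations built on the law looked persuasive; the question raised above is whether the exponent holds.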

    1. The problem with the law is that there are a lot of curves that appear exponential at first but are totally not: for example, certain sigmoidal curves. Or, more ominously, the bell curve.

    2. Well, sort of. That apparently exponential trend is where you start, but you need another ingredient.

      The basis for the singularity is the idea that we achieve super-human intelligence (either by way of intelligence enhancement, or AI, or uploading, or whatever) and that this allows the technology curve to continue upward until things cease to be comprehensible to our weak little un-enhanced brains.

      Without some kind of super-human intelligence, things will, by definition, remain comprehensible to humans, even if they get pretty strange. Which pretty much implies that the exponential must turn over into a logistic curve or something similar.
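That turnover is easy to see numerically: a logistic curve with the same initial growth rate as an exponential tracks it almost exactly early on, then saturates at its carrying capacity. A quick sketch; the rate r, capacity K, and starting value x0 are arbitrary illustration values:

```python
import math

# Exponential vs. logistic growth with the same initial rate r.
# Early on the two curves are nearly indistinguishable; the logistic
# then flattens toward the carrying capacity K while the exponential diverges.

r, K, x0 = 0.5, 100.0, 1.0

def exponential(t):
    return x0 * math.exp(r * t)

def logistic(t):
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

for t in (0, 2, 4, 10, 20, 30):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

At t = 2 the two values differ by a few percent; by t = 30 the exponential has passed a million while the logistic sits just under 100. An observer living on the early part of the curve cannot tell which one they are on, which is the whole difficulty with Moore's-law extrapolation.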

      1. It’s sigmoids all the way up.

  5. Vinge: “Within thirty years, we will have the technological means to create superhuman intelligence”

    Episiarch: “That’s a rather bold statement considering we haven’t even gotten remotely close yet to anything resembling an AI.”

    It’s not that bold a claim, considering the extremely low threshold of normal human intelligence. To square the IQ (or better yet, intellectual productivity) of someone inhabiting our current idiocracy is no complex feat.

    1. It wasn’t bold then; it just turned out to be wrong. You would be bold to try to say it’s still happening.

  6. This is conjecture, and possibly right. However, there may be no free will in a computer, and we would have to program them to want to survive. Hopefully no human will be that stupid.

    1. there may be no free will in a computer

      If a computer chooses to not calculate, has it still made a choice?

    2. That’s the concept of the singularity: once AI is achieved, no one can predict what will happen next. They could program themselves.

    3. I’m much smarter than a stupid human. I say we program the robots to want to destroy things. Especially anyone or anything trapping liquidity. Then we can sit back and get rich rebuilding everything!

      1. WIRED quoted some Krugman article about how interstellar empires are economically impossible. It showed that a) Krugman is not a physicist and b) Krugman is not very good at thinking outside the box.

      2. “The Midas Plague” IIRC.

    4. define “free will” and explain why it is impossible in a computer.

      1. explain why it is impossible in a computer.

        Cuz human free will is cheaper…and no one wants that at any price anyway.

        Who the fuck wants a servant to tell you no? And even if you did, free will is not even required to produce such a result. Just write “no” on a Post-it note and read it to yourself.

      2. define “free will”

        The ability to desire something you cannot have or is impossible.

  7. Interesting. Always thought the ‘e’ in his name was silent.

    I thought his last name was the same as Ving Rhames’s first name.

    1. Damning with faint praise award of the day.

    2. it’s like MiNGe.

  8. Vinge has always been a proponent of an early singularity. He talks about it a bit in the foreword to Marooned in Realtime and other places, where he says that part of the point of the Peace in The Peace War was to delay the singularity so he could write a near-future novel.

    Later he wrote (in the intro to his 2002(?) collection, the name of which escapes me just now) that he started to wonder if he’d made a mistake in believing in an early singularity, and that train of thought led to his inventing the zones of thought, which are so important in A Fire Upon the Deep.

    1. I remember when I first started reading “A Fire Upon the Deep” & it took me a while to get the concept figured out. I really liked it: a lot of sci-fi novels are blurbed as presenting a “new” way of seeing the universe/galaxy/whatever, but “Fire” was one of the few that really delivered on the “different” promise.

  9. “there are makers and there are breakers, and breakers break for all sorts of reasons, INCLUDING simply wanting to break things”

    I am very disappointed that the Washington Monument has not come down.

    An earthquake and a hurricane!!!
    Nature is not even on my side when I want it to fuck shit up.

  10. All this talk of makers and breakers is making me think of Objectivism. Has this guy ever read Atlas Shrugged?

  11. Who gives a crap about the singularity? I watched Back To The Future II this last weekend. We only have four years in which to develop fusion for use in cars and have them flying, just to keep on schedule. Where’s that damn flux capacitor?

