Will Our Robot Overlords Be Friendly?

Notes from the Singularity Summit in New York City.

NEW YORK—The singularity grows nigh. A happy band of technophiles, futurists, transhumanists, and, yes, singularitarians gathered in New York City this past weekend to talk about prospects for life before and after the technological creation of smarter-than-human intelligence. The phenomenon gets its name from science fiction writer Vernor Vinge, who analogized a future full of super-smart artificial intelligences (AIs) to the way black holes work. Black holes are singularities—surrounded by event horizons past which outside observers simply cannot see. Self-improving super-smart AIs will so radically speed up the pace of technological change that it is simply impossible to describe what the future would look like afterwards.

But that doesn't stop people from trying to peek beyond the event horizon into the post-singularity future. Convened by the Singularity Institute for Artificial Intelligence (SIAI) for the first time on the East Coast, this fourth annual meeting attracted about 900 participants. The SIAI was created to address the urgent problem of how to create super-smart AIs that are friendly to human beings. The worry is that, unless we are very careful, AIs might evolve value systems that treat us as annoying organic matter that should be more usefully turned into computronium. As the Singularity Institute's Anna Salamon explained in her opening presentation at the summit, smarter intelligences might choose to get rid of us because our matter is not optimally arranged to achieve their goals.

One way to wind up with human-friendly AIs is to build them on top of uploaded human brains. Anders Sandberg, a research fellow at the Future of Humanity Institute at Oxford University, offered a technical roadmap for whole brain emulation. "If artificial intelligence does not get here first, we're going to end up with uploaded humans," maintained Sandberg. He argued that it is possible to foresee how developments in neuroscience, software, and hardware are leading toward the emulation of specific people's brains. Sandberg believes that emulating a human brain is only 20 years away. He did observe that we do not know whether a one-to-one emulation would produce a mind. A huge advantage of an uploaded mind is that it would no longer be constrained by the speed at which organic brains can process information. He offhandedly noted that he would not be the first volunteer for an emulation experiment.

Randal Koene, the director of the Department of Neuroengineering at Fatronik-Tecnalia Foundation in Spain, argued that the time is now to go after mind uploading. Koene argued that radically increasing human longevity solves a few problems, but doesn't deal with our dependence on changeable environments and scarce resources. Nor does it deal with our intellectual limitations, death by traumatic accidents, or disease. Koene's "inescapable conclusion" was that we must free the mind from its single fragile substrate. How might you copy a brain? Perhaps one could use a more advanced version of the knife edge scanning microscope that currently enables the reconstruction of mouse brain architecture. This is what is known among uploading cognoscenti as destructive scanning. It had better work because the old organic brain is all sliced up in order to produce the emulation. To solve the problem of maintaining a sense of continuity, Koene suggested that one pathway might be to use molecular nanotechnology to replace parts of the brain bit by bit over time.

Philosopher David Chalmers, who directs the Centre for Consciousness at the Australian National University, argued that personal identity would be maintained if the functional organization of the upload was the same as the original. In addition, gradual uploading might also be a way to maintain personal identity. Chalmers also speculated about reconstructive uploading in which a super-smart AI would scour the world for information about a person, say articles, video, audio, blog posts, restaurant receipts, whatever. The AI would then instantiate that information in the appropriate substrate. "Is it me?" asked Chalmers. Maybe. On the optimistic view, being reconstructed from the informational debris you left behind would be like waking up from a long nap.

Inventor Ray Kurzweil, author of The Singularity is Near, envisions an intimate integration between humans and their neural prosthetics. Over time, more and more of the neural processing that makes us who we are will be located outside our bodies and brains so that "uploading" will take place gradually. Our uploaded minds will function much faster and more precisely than our "meat" minds do today. In a sense, we will become the singularity as the artificial parts of our intelligences become ascendant.

If the AIs aren't made of people, though, guaranteeing that they will be human-friendly is much more difficult. Chalmers suggested that perhaps we could launch a super-smart self-improving AI inside a leak-proof computing environment and see how it evolves. If it turns out to be nice, then we let it out and the singularity takes off and we're all happy. Kurzweil objected that a leak-proof singularity is impossible. In order to determine whether or not the AI was friendly we would have to look inside its computing environment and, since it is so much smarter than we are, it would necessarily become aware of us and then manipulate us into letting it out. In other words, it could pretend to be friendly and then zap us once it's present in our world.

Given these dangers, why is everyone so excited about the singularity? Peter Thiel, co-founder of PayPal, venture capitalist, and supporter of the Singularity Institute, began his talk on the economics of the singularity by asking the audience to vote on which of seven scenarios they are most worried about. (See Reason's interview with Thiel here.) The totals below are my estimates from watching the audience as they raised their hands:

A. Singularity happens and robots kill us all, the Skynet scenario (5 percent)
B. Biotech terrorism using something more virulent than smallpox and Ebola combined (30 percent)
C. Nanotech grey goo escapes and eats up all organic matter (5 percent)
D. Israel and Iran engage in a thermonuclear war that goes global (25 percent)
E. A one-world totalitarian state arises (10 percent)
F. Runaway global warming (5 percent)
G. The singularity takes too long to happen (30 percent)

Thiel argued that the last one—that the singularity is going to take too long to happen—is what worries him. "The good singularity is the most important social, political, economic, and technological issue confronting us," declared Thiel. Why? Because without rapid technological progress, economic growth in already developed countries like the U.S., Western Europe, and Japan is not going to be enough to address looming needs. Without fast economic growth producing more wealth, Americans might be driven to saving 40 percent of their incomes and retiring at age 80.

So what if Thiel's fears are realized and the singularity takes a while to arrive? If you want to be alive when it gets here, Aubrey de Grey, a biomedical gerontologist and chief science officer of the SENS Foundation, outlined his proposals for anti-aging research. In his view, progress in regenerative medicine could achieve longevity escape velocity in which researchers develop anti-aging therapies faster than a person approaches death from aging.
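De Grey's "longevity escape velocity" boils down to a simple condition: therapies must add more than one year of remaining life expectancy per calendar year. A minimal sketch of that condition (all numbers hypothetical, not from the summit):

```python
# Toy model of "longevity escape velocity" (all parameters hypothetical).
# Each calendar year a person ages one year, but new therapies hand back
# `gain` years of remaining life expectancy. The net change per year is
# gain - 1: if gain > 1, remaining expectancy never reaches zero.

def years_until_death(remaining, gain, horizon=200):
    """Return the year aging wins, or None if escape velocity holds."""
    for year in range(horizon):
        remaining += gain - 1.0  # net change in remaining life expectancy
        if remaining <= 0:
            return year + 1
    return None  # still going after `horizon` years

print(years_until_death(remaining=30, gain=0.5))  # therapies too slow
print(years_until_death(remaining=30, gain=1.2))  # escape velocity
```

The point of the toy model is that the threshold is sharp: a therapy rate just under one year per year only delays death, while anything over it postpones it indefinitely.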

Singularity Institute fellow Anna Salamon argued during the summit that an intelligence explosion, or singularity, can't be controlled once it starts. She likened the situation to trying to put a leash on an exploding atomic bomb. Creating a super-smart friendly AI is the "hardest goal humans have ever tried."

Ronald Bailey is Reason magazine's science correspondent. His book Liberation Biology: The Scientific and Moral Case for the Biotech Revolution is now available from Prometheus Books.

Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Report abuses.

  • The Man||

    "hardest goal humans have ever tried."

    Perhaps composing a literate sentence is equally difficult.

  • Craig||

    Chalmers suggested that perhaps we could launch a super-smart self-improving AI inside a leak-proof computing environment and see how it evolves.

    How can we be sure the environment is leak-proof? The self-improving AI will be a lot smarter than the people who designed the cage.

  • Kevin||

    "hardest goal humans have ever tried."

    Perhaps composing a literate sentence is equally difficult.

    Perhaps "The Man" is a douchebag who quibbles with minor details of sentence structure because to compensate for the fact that his penis is so small.

  • Kevin||

    Anyone who quibbles with my failed syntax is also a douchebag.

  • Art-P.O.G.||

    At the moment of the singularity, all extant preview buttons will be visible.

  • The Man||

    Kevin, I will admit to being a douchebag with an enormous penis.

    But more substantively, AI advocates have been promising/threatening for decades that super-sophisticated AI is just around the corner. A bit more horsepower, a bit of algorithm tweaking, and we'll be there. Does anyone remember Adaline, fuzzy logic, neural networks, Prolog, genetic algorithms? All useful technologies, and all were going to usher in superduper AI any day now. After more than 40 years, what do we have? No autonomous, intelligent agent capable of understanding real-world speech, making real-world decisions on real-world information, or maintaining itself in the real world (Roomba notwithstanding).

    In 40 years we have seen technology, approaches and algorithms come and go but there has not been any progress toward thinking machines. We did not know and we do not know how to do this. Will we, in the future, have a better understanding of the technological, cognitive, neurological and physiological issues involved to allow us to "upload a human brain?" Maybe, but right now we have no idea how and the last 40 years do not indicate that we will be able to do this in 20 years or 40 years or 60 years. Maybe some Einstein will stumble across an understanding of human cognition but we can't predict that.

    If Moore's law continues to hold then 20 years from now the performance of digital technology will have increased by a factor of 2000. Is there any physical law that says that increasing the speed or the density of PN junctions (or their future analogs) by this amount will result in a smart machine? In Computer Science classes in the late '60s and early '70s it was commonplace to hear that memory cost a "buck a byte." My latest cell phone cost less than a hundred bucks and came with a micro-SD card with a billion bytes. How smart is my phone? I can make phone calls, listen to MP3s, FM radio stations, take pictures and videos, I can even talk to it but most of the time it doesn't understand me. The phone is a useful gadget but stupid.
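As a sanity check on the commenter's factor of 2000: using the textbook Moore's law doubling periods of 18 and 24 months (my figures, not anything from the thread), 20 years of doubling gives roughly a 10,000x and 1,000x increase respectively, which brackets that estimate:

```python
# Growth factor after 20 years of Moore's-law-style doubling.
years = 20
for doubling_period in (1.5, 2.0):  # doubling every 18 or 24 months
    factor = 2 ** (years / doubling_period)
    print(f"doubling every {doubling_period} years -> {factor:,.0f}x")
```

Either way, the comment's argument is untouched: nothing about a 1,000x or 10,000x raw speedup, by itself, implies intelligence.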

    The AI community has always been enthusiastic but they've never delivered anything even cockroach smart. So, when an AI researcher tells me, breathlessly and ungrammatically, that there's going to be cake tomorrow, I stop listening. She can call me when she has HAL's great grandfather in her lab. Or better yet, get him to call me.

  • The Man||

    My mistake. She's not an AI researcher; she's something else ... but I don't know what. Maybe a singularist?

  • Diesel jeans||

    Your article is very interesting, I have introduced a lot of friends look at this article, the content of the articles there will be a lot of attractive
    people to appreciate, I have to thank you such an article.

  • FrBunny||

    This is going on Reason's critical praise page.

  • Bill||

    All your spam are belong to us!

  • Van Rijn||

    A.I.

    The fusion power of the computing world- forever just around the corner.

    Just build the sexbot already, and stop wasting time on 3.14 in the sky.

  • Van Rijn||

    I have introduced a lot of friends look at this article, the content of the articles there will be a lot of attractive people to appreciate,

    Engrish.com is over that way, buddy.

  • ||

    Is it just me, or does this whole singularity thing seem a bit, well, overly dramatic?

  • Untermensch||

    It's not just you. I find it amusing that many of the anti-religious libs here seem to accept the singularity as, well, an article of faith.

  • ||

    Koene's "inescapable conclusion" was that we must free the mind from its single fragile substrate.

    200 quatloos on the newcomers.

  • ||

    Why worry about whether robots will be human-friendly? Humans are not human-friendly. We kill humans, enslave humans, and otherwise treat humans as disposable waste all the time to fit our current religious and political needs. And why do we need robots anyway? Just because we can build them does not mean we should or have to.

  • Philip K. Dick||

    And you all thought I was just schizophrenic and a drug addict. Asses . . .

  • PKD||

    It's nice to see someone beat me to the PKD reference.

  • Mad Max||

    ‘Self-improving super-smart AIs will so radically speed up the pace of technological change that it is simply impossible to describe what the future would look like afterwards.’

    So, in the future, we will be unable to predict the future through merely scientific means? I preferred the old system, under which scientists could infallibly predict the future.

    ‘The worry is that, unless we are very careful, AIs might evolve value systems that treat us as annoying organic matter that should be more usefully turned into computronium.’

    ‘Philosopher David Chalmers, who directs the Centre for Consciousness at the Australian National University, argued that personal identity would be maintained if the functional organization of the upload was the same as the original. In addition, gradual uploading might also be a way to maintain personal identity. Chalmers also speculated about reconstructive uploading in which a super-smart AI would scour the world for information about a person, say articles, video, audio, blog posts, restaurant receipts, whatever. The AI would then instantiate that information in the appropriate substrate. "Is it me?" asked Chalmers. Maybe. On the optimistic view, being reconstructed from the informational debris you left behind would be like waking up from a long nap.’

    Or, to paraphrase Saint Paul:

    ‘In a moment, in the twinkling of an eye, at the Singularity: for the AI shall upload, and the dead shall be raised without any bugs, and we shall be changed into super-robots.’

    ‘Thiel argued that the last one—that the singularity is going to take too long to happen—is what worries him. "The good singularity is the most important social, political, economic, and technological issue confronting us," declared Thiel. Why? Because without rapid technological progress, economic growth in already developed countries like the U.S., Western Europe, and Japan is not going to be enough to address looming needs.’

    Be reassured – to paraphrase the famous rabbi Moses Maimonides:

    ‘I believe with a full heart in the coming of the Singularity, and even though it may tarry, I will wait for it on any day that it may come.’

  • Mad Max||

    ‘The worry is that, unless we are very careful, AIs might evolve value systems that treat us as annoying organic matter that should be more usefully turned into computronium.’

    Many present-day humans regard some of their fellow-humans as 'annoying organic matter' without any rights; why expect higher standards of the super-evolved robots?

    On the up side, maybe the super-evolved robots will turn on each other as humans have turned on each other.

    Maybe cynical robots will reflect that 'robots are a wolf to robots.'

  • Shane||

    He offhandedly noted that he would not be the first volunteer for an emulation experiment.

    No one wants to be the one to wake up and realize that they are the virtual brain in a jar...

  • Anders Sandberg||

    I don't mind being a virtual brain in a jar. But I had better be a *functional* brain in a jar, one that I own and that is cared for by people I trust. I don't want to be my own buggy beta version.

  • ||

    One thing I think will help the singularity move along: baby boomers, the first of whom I think are entering their 60s. They are rich, selfish, and they don't want to die.

    A couple of other thoughts...

    Once you are uploaded, what would it take to get stoned? Would it be just an algorithm? Would there be an LSD algorithm? A heroin one?

    Also, once you get uploaded, can you get downloaded? I don't hear much talk of this issue. But I suppose, would you want to? It would sure make World of Warcraft fun.

  • ||

    Because there will be more anonymous names that may or may not be girl players to flirt/cyber with?

    I just want my very own Twiki.

  • Zeb||

    The way our minds work (particularly in terms of emotions and the more visceral urges) is so tightly linked to our embodied experience that I have to wonder if being uploaded will leave anything recognizable of the being one was before.

    I tend to think that we are all a lot less rational than we like to think. Uploading may well change a lot more than the longevity and speed of our minds. But who knows if this is a good or bad thing?

  • Van Rijn||

    Your mind isn't getting uploaded anywhere. This idea that it's just a bunch of discrete bits that can be copied and somehow run like a program is silly geek fantasy. The brain is not digital. If consciousness involves quantum effects as some feel it might, all bets are off. It's nigh impossible to get past even the basic fact that computers process discretely while the brain processes in a continuous manner.

  • Anders Sandberg||

    The fact that brains are not digital is not an argument against emulation: there are plenty of continuous systems that can be emulated at any desirable precision. In addition, there is plenty of noise in biological systems, hiding their continuous nature. The real issue might be whether there is a scale separation: if activity on smaller scales always matters to activity on larger scales (as in turbulent fluids), then emulation may not be possible. But if there is a separation (for example, at the neural compartment level), then emulation may be possible.

    Quantum effects would be a serious problem if they actually matter. However, this is very much a minority view among neuroscientists - there are no signs that they do influence neural activity in any biologically meaningful way.
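Sandberg's claim that continuous systems can be emulated at any desired precision can be illustrated with a toy example (the leaky-integrator equation and every number here are my illustration, not anything presented at the summit). A continuous decay dV/dt = -V/tau is approximated by discrete Euler steps, and the error against the exact solution shrinks with the step size:

```python
import math

# Euler emulation of the continuous system dV/dt = -V / tau,
# a crude stand-in for membrane-potential decay in a neuron.
def emulate(v0, tau, t_end, dt):
    v = v0
    for _ in range(round(t_end / dt)):
        v += dt * (-v / tau)  # discrete update approximating the ODE
    return v

exact = math.exp(-1.0)  # closed form V(t) = v0 * exp(-t/tau) at t = tau = 1
for dt in (0.1, 0.01, 0.001):
    err = abs(emulate(1.0, tau=1.0, t_end=1.0, dt=dt) - exact)
    print(f"dt={dt}: error {err:.5f}")
```

Shrinking the step shrinks the error roughly in proportion (Euler is first-order), which is the sense in which "any desirable precision" is available, at the cost of more computation per simulated second.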

  • Anders Sandberg||

    A proper brain emulation would necessarily involve a body simulation, since that is how we interact with the world. Seeing is not just a projection of visual data onto the visual cortex, but active saccades of the eyes around the scene. Similarly the brain involves plenty of neurochemistry that modulates not just emotions but all kinds of activity. Leaving out the body simulation or neurochemistry will at best produce a deranged and sensory deprived mind, and perhaps no mind at all.

    Brain emulations will not be more rational than we are, but they do have practical advantages if they are feasible.

  • PersistentVegetativeStatesman||

    I just want a sexbot who will do housekeeping and lawn care.

  • ||

    The singularity will never happen and is a silly concept. These guys nailed down exactly why:

    http://www.somethingawful.com/.....nology.php

  • ||

    Yeah yeah, Nerd Rapture. There are only two things you need to know about the future:

    (1) We're never getting off this planet.
    (2) Life will continue pretty much as it always has, but with cooler gadgets.

  • Chad||

    I find this crowd of people to be wildly optimistic.

    Science and technology are progressing, but at a more-or-less constant rate. Yet the money and resources we are putting into them are increasing exponentially. The experiments that led to the discovery of the electron would have cost about $30,000 in today's dollars. We are spending north of $20,000,000,000 in an attempt to discover the Higgs boson.

    It seems clear to me that we have two counteracting trends: science is getting more difficult and the discoveries smaller, but we have more resources to throw at it. The first trend is bound to continue, and likely get worse. I seriously doubt the latter trend will be able to keep up.

  • Van Rijn||

    I read a while ago that to prove some aspects of string theory you need a particle accelerator with the same diameter as the Milky Way.

    Hey, it's stimulus.

  • Throckmorton J. Scribblemonger||

    Wasn't Higg's Bosun in a Melville novel?

  • The Man||

    No. Higgs Boson was a character in the classic Firesign Theater album "We're All Bosons on This Bus."

  • Paul||

    Sandberg believes that emulating a human brain is only 20 years away

    Yeah. Haven't they been saying roughly the same thing since the sixties?

  • Anders Sandberg||

    Well, I haven't been saying that, since I wasn't around in the sixties :-) I think the classic "20 years away" prediction has been AI.

    Seriously, my "prediction" is far more involved and fuzzy, essentially giving different dates depending on what level of resolution is needed, how Moore's law develops, and some big unknowns in how neuroscience develops. For those who want the details, google "Whole brain emulation roadmap".

  • ||

    FU Reason, you G-D anti-singularity Bastards!

  • Aurini||

    I'd just like to commend Bailey on a detailed and brief description of the event. I'm a minor contributor to the transhumanist movement, and there's nothing here that I would criticize (though I disagree with Sandberg's assessment of 20 years, it's still accurate reporting on Bailey's part). I can't think of the last time I saw a reporter report honestly on something.

  • ||

    Ok, just 3 things we need to know. Is your name Michael Anissimov? Do you know Michael Anissimov? Were you involved in any way with the conspiracy to block Libertarians from posting on this thread? Well, were you?

  • Jeff||

    Why would the singularity require the AI to be smarter than humans? The true requisite is that it can improve itself--it will then eventually surpass humans regardless of how intelligent it started out. The AI's being smarter than humans would only prove:
    a.) that it is possible for something (us) to create something smarter (the AI) than itself
    and
    b.) that humans are not the cap of possible intelligence.

    And we all already know b isn't true........

    So--- why the obsession with starting with an AI smarter than ourselves? Just build a self-improving one and we're all set.
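Jeff's point, that self-improvement rather than a head start is what matters, can be put as a toy recursion (every number here is invented): a system starting far below "human level" that improves its own ability by a fixed fraction per generation crosses any fixed threshold in finitely many generations.

```python
# Toy recursion for Jeff's argument (all numbers invented): a system that
# improves itself by 5% per generation passes any fixed level eventually,
# no matter how far below that level it starts.
def generations_to_surpass(start, target, rate=1.05):
    level, gens = start, 0
    while level < target:
        level *= rate  # each generation redesigns itself slightly better
        gens += 1
    return gens

print(generations_to_surpass(start=1.0, target=100.0))
```

The catch, of course, is the constant `rate`: the argument assumes each generation keeps finding improvements at the same proportional rate, and if that rate decays toward zero the level can stall below the threshold.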

  • under25wantstoknow||

    Am I the only person on here who is more concerned about the "people in developed nations will need to save 40% of their income and work til they're 80" than about the onset of a robot-instigated genocide?
