
I, For One, Welcome Our New Robot Overlords


The Future of Humanity Institute (FHI) at Oxford University is holding its Winter Intelligence Conference in a couple of weeks to try to figure out how to prevent a future robot uprising that would destroy humanity.

In a recent article, fellows from the Center for the Study of Existential Risk (CSER),* Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge, and Jaan Tallinn, co-founder of Skype, laid out what they see as the dangers that will come with the rise of general artificial intelligences that can write their own software:

It would be comforting to think that any intelligence that surpassed our own capabilities would be like us, in important respects – just a lot cleverer. But here, too, the pessimists see bad news: they point out that almost all the things we humans value (love, happiness, even survival) are important to us because we have a particular evolutionary history – a history we share with higher animals, but not with computer programs, such as artificial intelligences.

By default, then, we seem to have no reason to think that intelligent machines would share our values. The good news is that we probably have no reason to think they would be hostile, as such: hostility, too, is an animal emotion.

The bad news is that they might simply be indifferent to us – they might care about us as much as we care about the bugs on the windscreen….

By now you see where this is going, according to this pessimistic view. The concern is that by creating computers that are as intelligent as humans (at least in domains that matter to technological progress), we risk yielding control over the planet to intelligences that are simply indifferent to us, and to things that we consider valuable – things such as life and a sustainable environment.

If that sounds far-fetched, the pessimists say, just ask gorillas how it feels to compete for resources with the most intelligent species – the reason they are going extinct is not (on the whole) because humans are actively hostile towards them, but because we control the environment in ways that are detrimental to their continuing survival.

The conference will feature leading researchers in the field of artificial intelligence and some of the deepest thinkers about the ethical, economic, and existential implications of the development of super-intelligent machines.

You doubt that indifferent robot overlords will be a problem? Keep in mind that just last week my Reason colleague J.D. Tuccille warned that we should "Forget Drones, Beware of Killer Robots." The folks meeting at Oxford argue that that is just the beginning.

Back in 2008, I covered the FHI's conference on catastrophic risks to humanity. As background, see my reporting: "The End of Humanity: Nukes, Nanotech, or God-Like Artificial Intelligences"; "Will Humanity Survive the 21st Century?"; and "TEOTWAWKI!"

*Correction: In my initial post I wrote that the conference was being jointly held by the FHI and CSER. In fact, the conference is entirely run by the FHI. I apologize for any confusion that I may have caused.