Policy

How To Thwart A Robot Apocalypse: Oxford Professor Nick Bostrom on the Dangers of Superintelligent Machines

"If we one day develop machines with general intelligence that surpasses ours, they would be in a very powerful position," says Nick Bostrom, Oxford professor and founding director of the Future of Humanity Institute.

Bostrom sat down with Reason science correspondent Ron Bailey to discuss his latest book, Superintelligence: Paths, Dangers, Strategies, in which he examines the risks humanity will face once artificial intelligence (AI) is created. Bostrom worries that, once computer intelligence exceeds our own, machines will be beyond our control and will seek to shape the future according to their own plans. If designers fail to set an AI's goals properly, a superintelligent machine may see humans as an obstacle to completing those goals, leading to our annihilation.

Click above to watch, or click the link below for more information and downloadable versions.