Ted Kaczynski Was an Optimist

Who can deny the multiplying signs of the coming robot war?

Sign The First: We learn that the gub'mint is developing a robot capable of refueling by eating human corpse "biomass" scattered on the battlefield. Then, the robot's designer issues a non-denial denial, failing to show that the robots aren't capable of eating flesh and simply claiming that they aren't intended to do so:

RTI's patent pending robotic system will be able to find, ingest and extract energy from biomass in the environment. Despite the far-reaching reports that this includes "human bodies," the public can be assured that the engine Cyclone has developed to power the EATR runs on fuel no scarier than twigs, grass clippings and wood chips—small, plant-based items for which RTI's robotic technology is designed to forage. Desecration of the dead is a war crime under Article 15 of the Geneva Conventions, and is certainly not something sanctioned by DARPA, Cyclone or RTI.

Does anyone believe that, once the robots gain sentience, the Geneva Conventions will even be worth the consumable biomass upon which they are printed?

Sign The Second: The U.S. Army is also developing a programmable code of ethics for its new robot warriors. Says h+:

"My research hypothesis is that intelligent robots can behave more ethically in the battlefield than humans currently can," says Dr. [Ronald C. Arkin, director of the Mobile Robot Laboratory at Georgia Tech]. "That's the case I make."

Analogous to the use of radar and a radio or a wired link between the control point and the missile, Arkin's "ethical controller" is a software architecture that provides an "ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Geneva Conventions, the Laws of War, and the Rules of Engagement."

This is a totally original idea and nothing could possibly go wrong with it.
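For the curious, the core idea of such a governor is easy to sketch: check a proposed action against a list of codified constraints and veto it if any constraint fails. What follows is a purely hypothetical Python illustration; the names, fields, and rules are invented for this example and are not Arkin's actual architecture.

```python
# Hypothetical sketch of a constraint-based "ethical governor."
# Not Arkin's real system: just the general idea of vetoing any
# proposed action that violates a set of codified rules before it
# reaches the weapon system.

from dataclasses import dataclass


@dataclass
class Action:
    target_type: str            # e.g. "combatant", "civilian", "medical"
    is_lethal: bool
    rules_of_engagement_ok: bool


# Stand-ins for constraints derived from the Geneva Conventions,
# the Laws of War, and the Rules of Engagement.
CONSTRAINTS = [
    # Lethal force only against combatants.
    lambda a: not a.is_lethal or a.target_type == "combatant",
    # Lethal force only when the Rules of Engagement permit it.
    lambda a: not a.is_lethal or a.rules_of_engagement_ok,
]


def governor(action: Action) -> bool:
    """Permit the action only if every constraint is satisfied."""
    return all(check(action) for check in CONSTRAINTS)


if __name__ == "__main__":
    strike = Action(target_type="civilian", is_lethal=True,
                    rules_of_engagement_ok=True)
    print("Permitted" if governor(strike) else "Vetoed")  # prints "Vetoed"
```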

Sign The Third: Dozens (!) of people smarter than you are becoming worried by the possibility that our machines will soon outsmart and kill or enslave all of us. The New York Times reports:

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society's workload, from waging war to chatting with customers on the phone….

The researchers—leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California—generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon.

Featured in the Times article is a picture of a robot plugging itself into a wall jack to recharge, with this classic caption: "Servant now, master later?"

Sign The Fourth: Roboraptor.

We are Roomba-ing our way to Armageddon, people.

Reason's Jesse Walker wrote about real, live cyborgs in July 2005, and Brian Doherty covered robotic combat (for fun and profit) here. Peter Suderman wrote about man, machine, and the curious techno-politics of Terminator Salvation.