If remotely guided drones don’t have you worried, what about completely autonomous weapons? Losing Humanity: The Case Against Killer Robots, a report co-published in November by Human Rights Watch and Harvard Law School’s International Human Rights Clinic, warns that “some military and robotics experts have predicted that ‘killer robots’—fully autonomous weapons that could select and engage targets without human intervention—could be developed within 20 to 30 years.”
Although the technology of “fully autonomous weapons” is highly speculative, the report says military researchers are already moving beyond drones to develop weapons that select their own targets. As an example of a deployed system with some autonomy, the authors cite the U.S. Navy’s MK 15 Phalanx Close-In Weapons System, which can shoot down incoming missiles and aircraft without human oversight. Israel’s missile-intercepting Iron Dome, which played a major role during the 2012 Gaza conflict, is also largely automatic. South Korea deploys a robotic sentry system along the demilitarized zone separating it from North Korea, although that system requires human permission before firing its weapons.
Losing Humanity warns that while we may soon be able to give weapons the ability to hunt and kill, we do not know how to give them moral judgment. It argues that “the development of autonomous technology should be halted before it reaches the point where humans fall completely out of the loop.” But international efforts to stop (rather than merely delay) the development of new weapons do not have an impressive track record. Robot overlords may or may not be part of our future, but semi-autonomous killer robots look increasingly likely.