
Autonomous Warbots Could Be More Moral Than Human Soldiers

A preemptive ban risks being a tragic moral failure rather than an ethical triumph.



Should lethal autonomous war robots be banned? On Monday the United Nations will convene a meeting in Geneva to consider this question.

A group of 116 artificial intelligence and robotics luminaries, including Tesla's Elon Musk and DeepMind's Mustafa Suleyman, sent the U.N. an open letter in August urging such a ban. This week, groups of artificial intelligence researchers from Canada and Australia joined the chorus. "Lethal autonomous weapons systems that remove meaningful human control from determining the legitimacy of targets and deploying lethal force sit on the wrong side of a clear moral line," the Canadian researchers wrote.

Don't be so hasty.

In my 2015 article "Let Slip the Robots of War," I cited law professors Kenneth Anderson of American University and Matthew Waxman of Columbia, who insightfully pointed out that an outright ban "trades whatever risks autonomous weapon systems might pose in war for the real, if less visible, risk of failing to develop forms of automation that might make the use of force more precise and less harmful for civilians caught near it."

I further argued:

While human soldiers are moral agents possessed of consciences, they are also flawed people engaged in the most intense and unforgiving forms of aggression. Under the pressure of battle, fear, panic, rage, and vengeance can overwhelm the moral sensibilities of soldiers; the result, all too often, is an atrocity.

Now consider warbots. Since self-preservation would not be their foremost drive, they would refrain from firing in uncertain situations. Not burdened with emotions, autonomous weapons would avoid the moral snares of anger and frustration. They could objectively weigh information and avoid confirmation bias when making targeting and firing decisions. They could also evaluate information much faster and from more sources than human soldiers before responding with lethal force. And battlefield robots could impartially monitor and report the ethical behavior of all parties on the battlefield.

I concluded:

Treaties banning some extremely indiscriminate weapons—poison gas, landmines, cluster bombs—have had some success. But autonomous weapon systems would not necessarily be like those crude weapons; they could be far more discriminating and precise in their target selection and engagement than even human soldiers. A preemptive ban risks being a tragic moral failure rather than an ethical triumph.

That's still true. Let's hope that the U.N. negotiators don't make that tragic mistake next week.