Autonomous Warbots Could Be More Moral Than Human Soldiers
A preemptive ban risks being a tragic moral failure rather than an ethical triumph.

Should lethal autonomous war robots be banned? On Monday the United Nations will convene a meeting in Geneva to consider this question.
A group of 116 artificial intelligence and robotics tech luminaries, including Tesla's Elon Musk and DeepMind's Mustafa Suleyman, sent the U.N. an open letter in August urging such a ban. This week a group of artificial intelligence researchers from Canada and Australia joined the chorus. "Lethal autonomous weapons systems that remove meaningful human control from determining the legitimacy of targets and deploying lethal force sit on the wrong side of a clear moral line," the Canadians said.
Don't be so hasty.
In my 2015 article "Let Slip the Robots of War," I cited law professors Kenneth Anderson of American University and Matthew Waxman of Columbia, who insightfully pointed out that an outright ban "trades whatever risks autonomous weapon systems might pose in war for the real, if less visible, risk of failing to develop forms of automation that might make the use of force more precise and less harmful for civilians caught near it."
I further argued:
While human soldiers are moral agents possessed of consciences, they are also flawed people engaged in the most intense and unforgiving forms of aggression. Under the pressure of battle, fear, panic, rage, and vengeance can overwhelm the moral sensibilities of soldiers; the result, all too often, is an atrocity.
Now consider warbots. Since self-preservation would not be their foremost drive, they would refrain from firing in uncertain situations. Not burdened with emotions, autonomous weapons would avoid the moral snares of anger and frustration. They could objectively weigh information and avoid confirmation bias when making targeting and firing decisions. They could also evaluate information much faster and from more sources than human soldiers before responding with lethal force. And battlefield robots could impartially monitor and report the ethical behavior of all parties on the battlefield.
I concluded:
Treaties banning some extremely indiscriminate weapons—poison gas, landmines, cluster bombs—have had some success. But autonomous weapon systems would not necessarily be like those crude weapons; they could be far more discriminating and precise in their target selection and engagement than even human soldiers. A preemptive ban risks being a tragic moral failure rather than an ethical triumph.
That's still true. Let's hope that the U.N. negotiators don't make that tragic mistake next week.
The real question is, can cop bots be programmed to exploit children and minorities?
Actually, this is the real question. Sure, "warbots" can be programmed to minimize casualties and monitor everybody's ethics and all those cool responses. They can just as easily be programmed to find the closest large crowd of civilians and open fire.
Picture North Korean warbots.
That's easy. They already parade hundreds of thousands of them every year.
Wouldn't it be simpler just to ban war? Or ban using human soldiers?
From now on all wars will be fought on the moon by robot soldiers.
Nah, in simulations.
The simulation says that your home was bombed last night - report to the nearest disintegration booth citizen. For the greater good.
https://www.youtube.com/watch?v=f_XPgrdX7gk
YouTube is asking for age confirmation.
For a Star Trek episode?
Sure, a bot could execute the war more humanely than human soldiers ever could, but they remove the one major check that holds back all warmongers. How to convince thousands of people to go kill for you.
We've seen how military drones have lowered the barriers for violence for the US to the extent that we use them to kill people on a whim nowadays. Just because a super-advanced bot would be good enough to only kill its target with no collateral doesn't have any effect on the moral choice of selecting that target in the first place.
a bot could execute the war more humanely than human soldiers ever could, but they remove the one major check that holds back all warmongers. How to convince thousands of people to go kill for you.
Very well said and compelling. But does this carry less weight when we acknowledge that states often are able to rally millions for terrible wars?
For instance, would robots have raped Nanking any harder than the Japanese soldiers did?
I think you can look at it as a dilemma. With or without death-bots, Japan, Germany, etc. would have waged their horrible wars of conquest, complete with all kinds of atrocities. But with death bots, a stable, safe country that isn't looking to take over the world with military force will be more likely to get involved in any foreign conflict, because of what Agammamon says.
It sounds horrible to say, but it does also feel like the less risk there is for our people at war, the easier it is for us to keep going to war. We are going on 17 years of war abroad, and it's barely thought of anymore. The consequences of it are too distant to feel on a day to day basis.
I feel like robots like this will further reduce the cost of war to us, making it increasingly easy to kill people abroad on whim.
But with death bots, a stable, safe country that isn't looking to take over the world with military force will be more likely to get involved in any foreign conflict
Good point Zeb, that's not even up for debate, just look at our willingness to fly some drones over to the other side of the world, with only minor consideration. Compared to the hesitation, debate, caution that happens if it is still men in cockpits.
Dilemma indeed. You could even see a 1984 war scenario play out. Perpetual war but no casualties and no real outcomes. But each side gets to deliver the news to their populace that they destroyed 20,000 of the enemy this week and are on the verge of winning the war.
Ever heard of the Banana Wars? The Spanish-American and Philippine Wars? Panama, Grenada?
America has been convincing its people to go to war in faraway places ever since we grew large and strong enough not to have to worry about getting casually squashed by one of the European powers for going too far. And if you think the American people have actually matured enough to resist that call to arms in the absence of casualty-reducing automation, I would refer you to Niger, Afghanistan proper (e.g., the roughly 10k troops we still have there on the ground), the Yemeni raid, and every other casualty-producing incident that got attention for the 5 seconds in between Trumpweets and not a second more.
Technology changes what war looks like, but it doesn't change what it *is*, or how we feel about it, very often.
ever hear of missiles?
ohhh I like this one.
A quick consideration leaves me feeling there is no obvious answer and it really should come down to pros v cons.
One major risk I didn't see in the article is that a robot soldier could be hacked.
I'ma mull this over whilst I *cough* work...
Real world soldiers can be hacked too. In fact, most wars require you to hack people in order to get them to fight in the first place.
What do you think all this 'king and country' meme bullshit is? Its people exploiting vulnerabilities in your mental architecture.
Indeed, I don't disagree. But the 'hacking' of humans, i.e., brainwashing, instilling nationalist fervor, etc., seems more resource-demanding and less absolute than full control over a robot, no?
And human hacking can wear off. How much "king and country" support is there when the body bags have been coming home for a couple of years with no end in sight?
And the hacking of humans generally doesn't happen in the midst of battle in any significant numbers. Robots hacked en masse would turn the tide easily.
In some ways - it's easier in others, as people don't get patches for million-year-old security vulnerabilities.
Bowe Bergdahl was framed imo.
HAHA they would be hacked and subverted so quickly it's not even funny. I mean seriously, what government would you entrust these to? On the other hand, I wonder what it would be like to be a robot's sex slave.
If you can't figure out why the other side wants to kill you then robots will only make the problem worse.
Once robots start making robots, your petty mortal rules will no longer matter.
When self-driving buses go AI, will they commit terrorist attacks?
You can call me Al.
Maybe, but they will be more humane terrorist attacks
So self-driving buses are like the IRA?
Hmm, Bailey is an Irish name. Isn't it?
War bots could be programmed to speak the native tongue of the land in which they are fighting. Just a thought.
How many times have battlefield situations escalated because only one guy in the squad is acting as translator, using broken sentences to issue orders to civilians in a warzone or demand the surrender of enemy combatants?
You will pants head dance puppy, or color!
Bans always work, right? I mean, all they need to do is say magic words and everyone stops using them. It's like with drugs and 'illegal guns'.
"Should lethal autonomous war robots be banned? On Monday the United Nations will convene a meeting in Geneva to consider this question."
Awww! They're so cute when they dress up and pretend they actually matter.
What if they had a war and nobody came?
"Since self-preservation would not be their foremost drive, they would refrain from firing in uncertain situations. Not burdened with emotions, autonomous weapons would avoid the moral snares of anger and frustration. They could objectively weigh information and avoid confirmation bias when making targeting and firing decisions. They could also evaluate information much faster and from more sources than human soldiers before responding with lethal force. And battlefield robots could impartially monitor and report the ethical behavior of all parties on the battlefield."
So, IOW, they:
-will not avoid enemy fire or maneuver as dedicatedly as a human soldier
-will continue to attempt to solve problems at the same speed regardless of previous failures or urgency
-will have a supernatural algorithm to weigh relative values at speed that the Soviet Union's economists would've killed for
And
-would be hated and mistrusted by any human soldiers on the field, who would then take pains to keep said bots out of their line of sight when interacting with civilians and render them useless
Or did you think that 3.8 billion years of evolution gave us "emotion", "fear", "frustration", "confusion" and "loyalty" for no reason?
Who says they wouldn't care about self preservation?
Level that village, just to be sure.
In which case self-preservation would be one of their "foremost drives", wouldn't it?
So, like, true economic warfare in the sense that dollars will be shooting at other dollars?
Cool story.
I for one can't wait for our autonomous warbot overlords, I mean it couldn't be worse than it is now, could it?
Robot soldiers, controlled by Skynet.
What could possibly go wrong?
Too many sequels.
Another perfect job for Democracy. Let's all vote to decide whether to allow weapons technology that only a few of us are wealthy enough to benefit from.
Populate the world with libertarian-oriented terminators. They ignore humans unless someone violates the nonaggression principle...
"You are in violation of this jurisdiction's lawn sanctity ordinances. You have 20 seconds to comply."