Killer Robots: Protectors of Human Rights?
Why a ban on the development of lethal autonomous weapons is premature

"States should adopt an international, legally binding instrument that prohibits the development, production, and use of fully autonomous weapons," declared Human Rights Watch (HRW) and the International Human Rights Clinic (IHRC) in an April statement. The two groups issued a report titled "Killer Robots and the Concept of Meaningful Human Control," as experts in weapons and international human rights were meeting in Geneva to consider what should be done about lethal autonomous weapon systems (LAWS). It was the third such meeting, conducted under the auspices of the Convention on Conventional Weapons.
What is a lethal autonomous weapons system? That depends on whom you ask, but the U.S. definition provides a good starting point: "A weapon system that, once activated, can select and engage targets without further intervention by a human operator." Experts typically distinguish among technologies where there is a "human in the loop" (semi-autonomous systems, in which a person controls the technology as it operates), a "human on the loop" (human-supervised autonomous systems, in which a person can intervene and alter or terminate operations), and a "human out of the loop" (fully autonomous systems that operate independently).
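For readers who think in code, the distinction comes down to where the final veto sits. The sketch below is purely illustrative (every name and function in it is hypothetical, not drawn from any real weapon system), but it captures the three arrangements:

```python
from enum import Enum, auto

class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = auto()      # semi-autonomous: a person approves each engagement
    HUMAN_ON_THE_LOOP = auto()      # supervised: a person may veto an engagement
    HUMAN_OUT_OF_THE_LOOP = auto()  # fully autonomous: no further human intervention

def may_fire(level, operator_approves, operator_vetoes):
    """Return True if the system may engage, given where the human sits."""
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        # Semi-autonomous: the human makes the positive decision to engage.
        return operator_approves()
    if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
        # Supervised: the machine decides, but a watching human may still veto.
        return not operator_vetoes()
    # Fully autonomous: the machine's own decision is final.
    return True

# Example: a supervised system engages only because no human vetoed it.
print(may_fire(AutonomyLevel.HUMAN_ON_THE_LOOP,
               operator_approves=lambda: False,
               operator_vetoes=lambda: False))  # True
```

The ban HRW and IHRC seek is aimed squarely at the last of these arrangements.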
The authors of that April statement want to ban fully autonomous systems, because they believe a requirement to maintain human control over the use of weapons is needed to "protect the dignity of human life, facilitate compliance with international humanitarian and human rights law, and promote accountability for unlawful acts."
HRW and IHRC argue that killer robots would necessarily "deprive people of their inherent dignity." The core argument is that inanimate machines cannot understand the value of an individual life or the significance of its loss, whereas soldiers can weigh "ethical and unquantifiable factors" when making such decisions. In addition, the groups believe that LAWS could not comply with the requirements of international human rights law, specifically the obligations to use force proportionally and to distinguish civilians from combatants. They further claim that killer robots, unlike soldiers and their commanders, could not be held accountable and punished for illegal acts.
Yet killer robots might actually protect human rights during combat better than soldiers using conventional weapons do now, argues Temple University law professor Duncan Hollis in a January 2016 article in the Temple International and Comparative Law Journal.
Hollis notes that under international human rights law, states must conduct a legal review to ensure that any armament, including an autonomous lethal weapon, is not unlawful per se: it must not be indiscriminate, and it must not employ disproportionate force. To be lawful, a weapon must be capable of distinguishing between civilians and combatants, and it must not by its very nature cause unnecessary suffering or superfluous injury. A weapon is also unlawful if its deleterious effects cannot be controlled.
Considerations like these have persuaded most governments to sign treaties outlawing such indiscriminate, needlessly cruel, and uncontrollable weapons as antipersonnel land mines and chemical and biological agents. If killer robots could discriminate between combatants and civilians better than human soldiers do, and reduce the suffering of people caught up in battle, then they would not be illegal per se.
Could killer robots meet these international human rights standards? Ronald Arkin, a roboticist at the Georgia Institute of Technology, thinks they could. In fact, Arkin argues in the journal Communications of the Association for Computing Machinery, LAWS could have significant ethical advantages over human combatants. For example, killer robots do not need to protect themselves, and so could refrain from striking when in doubt about whether a target is a civilian or a combatant. Warbots, he contends, could assume "far more risk on behalf of noncombatants than human war-fighters are capable of, to assess hostility and hostile intent, while assuming a 'First do no harm' rather than 'Shoot first and ask questions later' stance."
LAWS, Arkin suggests, would also employ superior sensor arrays, enabling them to make better battlefield observations. They would not make errors based on emotions—unlike soldiers, who experience fear, fatigue, and anger. They could integrate and evaluate far more information faster in real time than could human soldiers. And they could objectively monitor the ethical behavior of all parties on the battlefield and report any infractions.
Under Additional Protocol I to the Geneva Conventions, the principle of proportionality prohibits "an attack which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated." Human soldiers may take actions in which they knowingly risk, but do not intend, harm to noncombatants. To meet the requirement of proportionality, autonomous weapons could be designed to be conservative in their targeting choices: When in doubt, don't attack.
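What might such a conservative targeting rule look like? The sketch below is a deliberately crude illustration, not anyone's actual targeting software: it assumes hypothetical numeric estimates of combatant likelihood, civilian harm, and military advantage, whereas the real legal test of proportionality is qualitative and contested.

```python
from dataclasses import dataclass

@dataclass
class TargetAssessment:
    p_combatant: float             # estimated probability the target is a combatant
    expected_civilian_harm: float  # expected incidental harm to civilians
    military_advantage: float      # anticipated concrete and direct military advantage

# Hypothetical threshold; a real value would be set by legal and command review.
MIN_COMBATANT_CONFIDENCE = 0.95

def may_engage(t: TargetAssessment) -> bool:
    """Refuse to fire whenever distinction or proportionality is in doubt."""
    if t.p_combatant < MIN_COMBATANT_CONFIDENCE:
        return False  # distinction: when in doubt, don't attack
    if t.expected_civilian_harm >= t.military_advantage:
        return False  # proportionality: incidental harm must not be excessive
    return True

print(may_engage(TargetAssessment(0.99, 0.1, 5.0)))  # True
print(may_engage(TargetAssessment(0.80, 0.0, 5.0)))  # False: identity in doubt
```

The point is simply that "when in doubt, don't attack" is the kind of rule that can be made explicit and auditable in software, in a way it never can be inside a frightened soldier's head.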
In justifying their call for a ban, HRW and IHRC argue that soulless warbots cannot be held responsible for their actions, creating a morally unbridgeable "accountability gap." Under the current laws of warfare, a commander is held responsible for an unreasonable failure to prevent a subordinate's violations of international human rights laws. The organizations that oppose the deployment of autonomous weapons argue that a commander or operator of a LAWS "could not be held directly liable for a fully autonomous weapon's unlawful actions because the robot would have operated independently." Hollis counters that since states and the people who represent them are supposed to be held accountable when the armed forces they command commit war crimes, they could similarly be held accountable for unleashing robots that violate human rights.
Peter Margulies, a professor of law at Roger Williams University, makes a similar argument. Holding commanders responsible for the actions of lethal autonomous weapons systems, he writes in the Research Handbook on Remote Warfare, is "a logical refinement of current law, since it imposes liability on an individual with power and access to information who benefits most concretely from the [system's] capabilities in war-fighting."
To augment command responsibility for warbots' possible human rights infractions, Margulies suggests that states and militaries create a separate lethal autonomous weapons command. The officers who head this dedicated command would be required to have a deep understanding of the limitations of the killer robots they have the authority to deploy. In ambiguous situations, a killer robot should also be able to request that its human commanders review its proposed targets.
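Such an escalation rule is easy to state precisely. The sketch below, again entirely hypothetical in its names and thresholds, shows how a warbot might hold fire on an ambiguous target and kick the decision up to Margulies' proposed command:

```python
def decide(p_combatant, civilian_harm, military_advantage, commander_review,
           confidence_threshold=0.95):
    """Route ambiguous targets to human review instead of deciding alone.

    `commander_review` stands in for Margulies' dedicated LAWS command:
    a callable that returns True only when a qualified officer approves
    the proposed target. All names and thresholds here are hypothetical.
    """
    if p_combatant >= confidence_threshold and civilian_harm < military_advantage:
        return True   # unambiguously lawful target: engage
    if p_combatant >= 0.5:
        # Ambiguous case: hold fire and ask the human command to review.
        return commander_review(p_combatant, civilian_harm, military_advantage)
    return False      # likely a civilian: abort without escalating

# Example: an ambiguous target is escalated, and the commander declines it.
print(decide(0.7, 2.0, 5.0, commander_review=lambda *a: False))  # False
```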
"The status quo is unacceptable with respect to noncombatant deaths," Arkin argues cogently. "It may be possible to save noncombatant lives through the use of this technology—if done correctly—and these efforts should not be prematurely terminated by a preemptive ban."
This article originally appeared in print under the headline "Killer Robots: Protectors of Human Rights?"