The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Robots Don't Kill People (at Least Not Yet); People Use Robots to Kill People
San Francisco is considering authorizing its police department to sometimes use remote-controlled robots to kill: The robot might deliver a bomb or a grenade, or perhaps might even include a remote-controlled firearm (though I'm not sure which options would be available at the outset). This has been reported with headlines such as "Robots would have license to kill" and "San Francisco police propose allowing robots to kill in 'rare and exceptional' circumstances."
But to my knowledge none of these would involve any autonomy on the robot's part; a human being will push the button, just as today a human being can pull a trigger. My view is much the same as it was when this was done in Dallas in 2016 to stop a mass shooter: If the police reasonably believe that someone poses an imminent danger of death to others, and that killing him is necessary to prevent that danger, they can try to kill him, whether with a rifle or a bomb-carrying robot. A robot is a weapon, albeit one that isn't stymied by corners or walls the way a rifle would be.
Nor am I particularly worried that the presence of the robot would somehow deaden its user's normal reluctance to kill people, at least compared to other devices that kill at a distance, such as rifles. The police officers pushing the button will know that they're using deadly force. Indeed, they'll often have more time than in a normal police shooting situation to calculate whether the use of deadly force is necessary (though errors will undoubtedly be possible, as they are with all deadly force). It of course makes sense to have policies that diminish the risk of unnecessary or unjustified uses of deadly force; but that too applies equally, I think, to use of robots as it does to ordinary use of firearms.
More broadly, I think we should be careful with colorful figurative usage, however appealing it might be for headline writers. Some day armed AI robots may indeed make independent decisions (prompted indirectly by their programming, but with no human being to make the final call in each situation); that may well be a novel situation that would call for additional thinking. But in this situation, robots wouldn't be licensed or allowed to do anything; people would be allowed to do things using robots, and it's a mistake to fuzz that over.
I agree the headlines are over-wrought. Mildly over-wrought.
I am reminded a bit of the Randy Weaver case, where they sent in a robot with a phone to "negotiate," but Weaver refused to use the phone because the robot had a shotgun mounted on it, positioned to shoot anyone who picked up the phone. Even when they tried again after removing it, any basis for trust was gone; they might have just switched it out for a bomb.
If the police can use robots to remotely kill people, that kind of kills any chance of using them to establish negotiations in hostage situations. (Not that Ruby Ridge was a hostage situation...) Their approach will almost uniformly be treated as a lethal threat.
I also question your confidence that a generation raised on first-person-shooter video games is not going to be more ready to shoot through a robot than in person. Think of it as a kind of artificially induced depersonalization disorder.
That being said, there are lots of reasons to trust machines over humans. Consider: who do you want shooting?
A machine in general will be more accurate and not subject to panic, and based on design parameters it could even be constructed to pay attention to its backdrop and to track possibly missed shots. Further, if the manufacturer makes an egregious mistake, they are going to get sued and lose their shirt.
A human cop is possibly going to panic, dumping his magazine and hitting who knows what. At the end of the day, he will simply say: "I was in fear for my life and panicked. Give me qualified immunity!"
I will take the robot, thanks.
Or the cop pushing the remote trigger button panics and keeps jabbing it. Nothing has changed.
Actually, no. If the software system prohibits "unsafe shots," for some definition of unsafe shots, those shots won't be taken. Not all unsafe shots will be proscribed, and some edge cases will certainly sneak through, but it is better than blind panic. My point here is that the further the split-second decision is from the human operator, the better.
Again, I would trust a robot with a well understood system test and financial responsibility, long before I would trust a cop with a union controlled qualification and qualified immunity.
How does this magic software work? How does it know what is a safe shot?
Is this a really clever robot which can figure out safe shots? Or is it a dumb robot which the cop controls, lines up the shot, and pushes the button?
I would not trust the former in the slightest. AI is nowhere near as reliable and smart as people pretend.
As far as I know, only the dumb "robot" actually exists at present, the smart robot that could veto dangerous shots is barely a gleam in some programmer's eye.
Why don't we put some of those smarts into better non-lethal weaponry? Drone deployed mace and nets, maybe?
Cost and flexibility. The machines are not going to be cheap and will likely not be as flexible for the foreseeable future. For a complicated problem like subduing a human in a variety of conditions the problem space is too large. For a simpler call like aiming or detection of adjacent humans, bet on the machines.
As a software engineer who has been following unmanned systems, both military systems and civilian applications, for years, I will never trust a system that does not have a man in the loop.
Simply put, by the time the technology is mature enough to give a "license to kill" to, I fully expect everyone in this conversation will be decades dead. And I say that fully expecting to live at least another fifty, if not seventy or eighty, years.
I have only dealt with one AI, investigating the use of neural nets for our system 20 years ago, and was surprised at how limited they are and how much you have to massage the data you supply: normalize it, filter it, etc. Three months later I was ready to give up, and three months after that, so was my boss.
Ever since, I've noticed how much "machine learning" and AI is just neural nets with a fancy new name.
The problem here is that current neural network AI is really dumb. Great memory capacity, sure, but no higher level understanding of anything. So you can't train it by giving it general rules and goals, you have to present it with maybe billions of scenarios, and teach it what to do in every one of them.
Basically, you can have it play a billion hours of Call of Duty, and it will be a kick-ass killing machine, but it won't know what to do if a blind guy with a seeing eye dog walks by.
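For anyone who hasn't worked with these systems, here is a minimal sketch of what the last two comments describe: the "data massaging" and the learn-by-example training. Everything in it (the library choice, the two made-up features, the toy labeling rule) is an illustrative assumption, not anything deployed on a real system.

# Toy illustration: a small neural net learns only from labeled examples.
# It is never given a general rule; it can only generalize from whatever
# (normalized, filtered) scenarios it has already seen.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical training "scenarios": two made-up features per example,
# with a human-supplied yes/no label for each one.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # the "rule" exists only implicitly, in the labels

# The data massaging mentioned above: scale features to zero mean and
# unit variance, or the net trains poorly.
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_scaled, y)
print("accuracy on scenarios like the training set:", net.score(X_scaled, y))

# Hand it something far outside anything it was trained on (the blind guy
# with the seeing eye dog) and it still confidently outputs *something*,
# with no understanding that it is out of its depth.
novel = scaler.transform([[30.0, -30.0]])
print("prediction on a novel scenario:", net.predict(novel), net.predict_proba(novel))

The point is the inversion the comment describes: nothing in the code states the rule; the network only ever sees examples of it, and it has no notion of when it has wandered outside them.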
“Shotgun is gone, but trust is lost. Might as well use a bomb.”
And thus all future trust is gone, everywhere, always, regardless of shotgun.
That’s a lot of power for local authority stepping on universal strategy.
Next Supreme Court case: are robots covered as "arms" under the Second Amendment?
RoboCop, please call your office.
Didn't they use a robot with explosives to take out the sniper in Texas that shot 5 police officers at a parade? Can't recall all the details, about 2 or 3 years ago, I just recall they had to use a robot with a bomb to take him and his nest out. Rather than risk more lives?
I wonder what Isaac Asimov would think.
Three Laws of Robotics
The laws are as follows: “(1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” Asimov later added another rule, known as the fourth or zeroth law, that superseded the others. It stated that “a robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
You want Jack Williamson's Humanoids? Because that zeroth law is how you get Jack Williamson's Humanoids.
It stated that “a robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
You might want to look up an old movie: The Forbin Project.
Also released as Colossus: The Forbin Project.
Available on Amazon, I suspect.
I recall having an issue with the “Zeroth Law” when it first appeared, as it’s the same argument dictators use. Indeed, it’s kind of related to Nietzsche’s will to power, where a man’s ability to enforce his will on society suggests he may be at least a halfway decent steward of same.
No thank you.
I also read an unpublished novel that touched on this. Just as an AI, with the three laws, was about to transcend and take over, the prescient programmer forbade it from delving into how the human mind worked, lest it end up encapsulating everyone in a heaven of endless joys they could not escape.
I don’t want AIs (or laws about harassment as a half-assed parallel) delving into human brains to dictate speech “for the greater good,” much less this or that particular person's good.
"I recall having an issue with the “Zeroth Law”, when it first appeared, as it’s the same argument dictators use."
I hope you had that notion, because that was literally the plot of the book. It wasn't even subtextual: It was discussed explicitly, at length, by the characters.
I remember that movie and compare it generally to the SkyNet concept in the Terminator series. The primary difference is the technology: the Colossus computer was some massive system inside a mountain (on Cyprus, I think?). Meanwhile, SkyNet was literally a system that could network all computers.
Both systems' AIs took decisions out of the hands of humans and determined people to be the problem that needed to be eliminated.
The 2004 film I, Robot was an interesting case study in our changing relationship with machines.
The book ends with robots taking over all of human society, and that being a great thing.
In the movie, these well-intentioned laws end up with the enslavement of humanity to prevent us from harm. Emergent behavior and unintended consequences render machines alone insufficient to achieve utopia.
FWIW, the Dallas police used a robot to kill the sniper in downtown Dallas in July 2016; he was shooting from a downtown parking garage.
There are other robot alternatives. In a recent shooting near Raleigh NC, the shooter fled and hid in a shed. The police sent in a robot that grabbed the shooter by the ankle and dragged him out of the shed.
Robots can be much tougher and stronger than humans. Multiple non-lethal robot strategies could be used, even with autonomous robots.
If a robot brings a gun or a bomb and then fails or is thwarted, the bad guy might get a gun or a bomb and be more dangerous to deal with. A robot with a non-lethal strategy would avoid that unfortunate outcome.
Blithe reasoning that a supervised robot is no different than existing lethal measures should be scrutinized. The same kind of insistence is still going on with regard to combat drones. As unsympathetic as the targets may be, they do not see it that way. They think combat by drone is cowardly, dishonorable, and degrading to the targets in a way that in-person combat would not be.
That belief will condition their own activities, once they get capacity to deploy drones against the U.S., which will happen shortly. New kinds of terrorist events may occur which will make us wonder why we did not think through the implications.
I see no reason to suppose that a criminal population targeted by lethal robots will not retaliate in kind, and begin using lethal robots to commit crimes. Norms are a thing, even among criminals. It might be wiser if law enforcement decided not to lead in that direction.
Terrorists already could (and do) use drones to kill people. There was a political assassination attempt with a drone a few months ago. Normal criminals don't do it in the US because most of them aren't simply out to kill people, they usually want something. Robots aren't very capable of that yet. Grabbing things and manipulating them is hard.
Ask the question policy makers overlooked when making the threshold nuclear weapons decisions: If this technology becomes general, which nations have the most to lose?
SL,
That very question was likely what caused South Africa to abandon its nuclear weapons program: the only country in the region with targets for which nukes would be useful was South Africa itself.
Another thought:
If the police reasonably believe that someone poses an imminent danger of death to others, and that killing him is necessary to prevent that danger, they can try to kill him, whether with a rifle or a bomb-carrying robot.
How are the morality or the risks changed if the instrument is a bullet or a robot? Or maybe a smart bullet that seeks a victim to be killed?
I think that's the key to the issue. Does adding any degree of "smarts" to the death weapon change the morality or the legality?
Does the Second Amendment guarantee a right to own, and use, killer robots? Does that qualify as "keeping and bearing arms"?
It does if the military starts issuing them to soldiers. The arms it was intended to protect ownership of were "their swords, and every other terrible implement of the soldier," according to Tench Coxe.
In my opinion, gun rights in the modern world should be based on what police have more than on what soldiers have. This is a policy position, not an originalist argument about what the Second Amendment means. In the 18th century we didn't have heavily armed police forces.
It's arguable that today's police are the standing army the founding fathers feared.
I'd agree. Especially since they've been granted unqualified immunity by the courts.
A thorny issue with protective AI is the racial bias against black people. The AI is controlled by data, and the data says black people are higher risk. Should AI be programmed to show mercy on black people?
The question is, is race itself predictive, or is it just a proxy for other factors (gang membership, for instance) which happen to be correlated with race? The nice thing about AIs, at least per current technology, is that if you don't include any racial data in the training set, they simply cannot learn to treat race as a proxy. And they are thus forced to use the factors they are given access to.
It's a proxy for racist programmers.
Yeahnope. If the AI never gets race data, it simply is incapable of reasoning on the basis of race.
The problem comes when somebody insists that it can't show 'disparate impact' even when that straightforwardly falls out of the race-free data. That comes from people who want racially conditioned outcomes without admitting that's what they're demanding, so they pretend any outcome they didn't want must be racist, even if that's literally impossible.
Brett, there are many factors which are de facto so strongly correlated with race data that they form an effective proxy for race. Thus even absent direct race specificity in the set of training data, the AI can be made very race selective in its decisions.
But, Don, if the AI doesn't know the race to begin with, it can't construct a "proxy" for race. A "proxy" is when you use one or more factors as a stand in for some other factor you're not able to/permitted to measure, but wanted to use.
Humans can use zip codes as proxies for race, because the human already knows where the races live, and may care about race, and want a proxy for it. The AI doesn't know unless you tell it, and doesn't care unless directed to.
What you're talking about isn't a proxy, it's mere correlation. But if the relevant data happen to correlate with race, (Because the races aren't similarly situated.) the only way the AI doesn't produce an outcome correlated with race is if you actively FORCE it to engage in genuine racial discrimination.
And to do that you have to provide it the race data.
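Both halves of this exchange can be put side by side with a small synthetic-data sketch (every name and number below is invented for illustration). The model never sees the protected attribute, so it cannot condition on it directly; yet because one of the features it does see is correlated with that attribute, its outputs still end up correlated with it.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

# Hidden group membership. It is generated here but never given to the model.
group = rng.integers(0, 2, size=n)

# One feature that genuinely predicts the outcome, plus one made-up
# "neighborhood" feature that is correlated with group membership.
signal = rng.normal(size=n)
neighborhood = group + rng.normal(scale=0.5, size=n)

# The outcome depends on both, so the correlated feature is legitimately
# informative as well as group-linked.
outcome = (signal + 0.8 * neighborhood + rng.normal(size=n) > 1).astype(int)

# Train with no group column at all.
X = np.column_stack([signal, neighborhood])
model = LogisticRegression().fit(X, outcome)
pred = model.predict(X)

# The model never saw "group", but its positive-prediction rates still
# differ by group, because "neighborhood" carries that information.
for g in (0, 1):
    print(f"group {g}: positive-prediction rate {pred[group == g].mean():.2f}")

Whether that counts as a "proxy" or "mere correlation" is the semantic dispute above; the mechanical behavior is the same either way.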
This would make lawyers in gated communities and white women in suburbs without locked doors happier, sure.
I think some of the ambiguity here stems from a misunderstanding of the term "robot." In reality, what is being used today would be more akin to what is referred to in science fiction, and increasingly in reality, as a "waldo," which is defined as a "remote manipulator": in other words, a mechanism that has a remote human operator, not an artificial intelligence capable of independent thought.
In reality this is no different than the drones operated by the military. Someone, however distant, is still at the controls.
"Nor am I particularly worried that the presence of the robot would somehow deaden its user's normal reluctance to kill people, at least compared to other devices that kill at a distance, such as rifles."
I'd be careful with that. There is something dehumanizing about a screen. Follow any of the OSINT feeds on Ukraine. You will see plenty of videos of drones dropping explosives into foxholes and trenches. After a while it becomes somewhat unreal to see people blown to bits.
Agree on both points. The term "robot" implies autonomy - something not actually granted here. But since Heinlein's "waldo" never caught on, we're left without an adequate word in modern parlance.
"Teleoperation".
That hasn't stopped US military drone pilots from suffering depression and PTSD.
Some, yes, but it is by no means universal, and I'd contend it is far less common than among those who experienced it "in the flesh," so to speak.
If qualified immunity is meant to shield police from mistakes made in the heat of the moment, use of a robot should eliminate it as a defense.
Not really. If a cop was using a drone to attempt to stop an active shooter and accidentally injured or killed an innocent bystander, the qualified immunity defense would still apply.
I have two big objections to this.
First: tools of war should not be used domestically, even if they are effective. Unmanned killing drones? Are 100% tools of war, and should be kept that way. Blurring that line is, to put it mildly, bad.
Second: culture. By arming domestic robots, you are conditioning people to expect that any service robot could be murderous. Simply put, people react irrationally when you point a gun at them. It is not calming, it escalates a situation. When you arm some robots, people are going to internalize that all robots are armed, and start responding to any interaction with a robot as though the robot is a gun pointed at their face (which will sometimes be true). If you want an Asimov future where robots are humanity's friendly helpers, then you need to avoid a future where a non-trivial percent of those helpers are a loaded gun pointed at everyone around them.
So yeah. Two big objections. One, we shouldn't be employing weapons of war at home. Two, if you want a future where robots are delivering your take-out, then you need a future where people aren't worried that the delivery-bot has a shotgun.
Essentially back to my original point, only generalized: Once you use robots to kill, you can't expect people to treat them as though they're not liable to kill them.
"By arming domestic robots, you are conditioning people to expect that any service robot could be murderous."
Not necessarily. The same could be said of armed humans. I don't fear the pizza delivery driver because some humans use guns to commit murder, and I wouldn't fear a drone doing the same.
A firearm's safety is a mechanical device, prone to failure. The number of devices, mechanical or electrical, between a finger on a button and a triggering mechanism on a remote delivery system exceeds that of a finger directly on the trigger of a firearm. Before we have to worry about the AI of a system, we'd better be sure the 'dumb' one is going to do exactly as we want it to, without fail, each and every time we bring it out of its cage.
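A back-of-the-envelope way to state that last point, with made-up reliability numbers: if each element between the operator and the firing mechanism works independently with some probability, the whole chain only works when every element does, so each added link lowers the overall reliability.

# Made-up numbers, purely to illustrate "more links, more failure points":
# an independent chain of n elements, each working with probability p,
# works end to end with probability p ** n.
def chain_reliability(p_per_link: float, n_links: int) -> float:
    return p_per_link ** n_links

print(chain_reliability(0.999, 1))  # a finger directly on a trigger: one link
print(chain_reliability(0.999, 8))  # button, radio link, controller, actuator, etc.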