Robots Don't Kill People (at Least Not Yet); People Use Robots to Kill People
San Francisco is considering authorizing its police department to sometimes use remote-controlled robots to kill: The robot might deliver a bomb or a grenade, or might even carry a remote-controlled firearm (though I'm not sure which options would be available at the outset). This has been reported with headlines such as "Robots would have license to kill" and "San Francisco police propose allowing robots to kill in 'rare and exceptional' circumstances."
But to my knowledge none of these options would involve any autonomy on the robot's part; a human being would push the button, just as today a human being pulls the trigger. My view is much the same as it was when this was done in Dallas in 2016 to stop a mass shooter: If the police reasonably believe that someone poses an imminent danger of death to others, and that killing him is necessary to prevent that danger, they can try to kill him, whether with a rifle or with a bomb-carrying robot. A robot is a weapon, albeit one that isn't stymied by corners or walls the way a rifle would be.
Nor am I particularly worried that the presence of the robot would somehow deaden its user's normal reluctance to kill people, at least compared to other devices that kill at a distance, such as rifles. The police officers pushing the button will know that they're using deadly force. Indeed, they'll often have more time than in a normal police shooting situation to decide whether deadly force is necessary (though errors will undoubtedly be possible, as they are with all uses of deadly force). It of course makes sense to have policies that diminish the risk of unnecessary or unjustified uses of deadly force; but that, I think, applies as much to the use of robots as it does to the ordinary use of firearms.
More broadly, I think we should be careful with colorful figurative usage, however appealing it might be to headline writers. Some day armed AI robots may indeed make independent decisions (prompted indirectly by their programming, but with no human being making the final call in each situation); that may well be a novel situation that would call for additional thinking. But in this situation, robots wouldn't be licensed or allowed to do anything; people would be allowed to do things using robots, and it's a mistake to fuzz that over.