AI Warfare Is Boring but Deadly
Bureaucrats in cubicles will kill more people than Terminator robots will.


Everyone knows what the AI apocalypse is supposed to look like. The movies WarGames and The Terminator each feature a superintelligent computer taking control of weapons in a bid to end humankind. Fortunately, that scenario is unlikely for now. U.S. nuclear missiles, which run on decades-old technology, require a human being with a physical key to launch.
But AI is already killing people around the world in more boring ways. The U.S. and Israeli militaries have been using AI systems to sift through intelligence and plan airstrikes, according to Bloomberg News, The Guardian, and +972 Magazine.
This type of software has allowed commanders to find and list targets far faster than human staff could on their own. The attacks are then carried out by human pilots, either in manned aircraft or at the controls of remotely piloted drones. "The machine did it coldly. And that made it easier," an Israeli intelligence officer said, according to The Guardian.
Going further, Turkish, Russian, and Ukrainian weapons manufacturers claim to have built "autonomous" drones that can strike targets even if their connection to the remote pilot is lost or jammed. Experts, however, are skeptical about whether these drones have made truly autonomous kills.
In war as in peace, AI is a tool that empowers human beings to do what they want more efficiently. Human leaders will make decisions about war and peace the same way they always have. For the foreseeable future, most weapons will require a flesh-and-blood fighter to pull a trigger or press a button. AI allows the people in the middle—staff officers and intelligence analysts in windowless rooms—to mark their enemies for death with less effort, less time, and less thought.
"That Terminator image of the killer robot obscures all of the already-existing ways that data-driven warfighting and other areas of data-driven policing, profiling, border control, and so forth are already posing serious threats," says Lucy Suchman, a retired professor of anthropology and a member of the International Committee for Robot Arms Control.
Suchman argues that it's most helpful to understand AI as a "stereotyping machine" that runs on top of older surveillance networks. "Enabled by the availability of massive amounts of data and computing power," she says, these machines can learn to pick out the sorts of patterns and people that governments are interested in. Think Minority Report rather than The Terminator.
Even if human beings review AI decisions, the speed of automated targeting leaves "less and less room for judgment," Suchman says. "It's a really bad idea to take an area of human practice that is fraught with all sorts of problems and try to automate that."
AI can also be used to close in on targets that have already been chosen by human beings. For example, Turkey's Kargu-2 attack drone can hunt down a target even after the drone has lost its connection to its operator, according to a United Nations report on a 2021 battle in Libya involving the Kargu-2.
The usefulness of "autonomous" weapons is "really, really situational," says Zachary Kallenborn, a policy fellow at George Mason University who specializes in drone warfare. For instance, a ship's missile defense system might have to shoot down dozens of incoming rockets with little danger of hitting anything else. While an AI-controlled gun would be useful in that situation, Kallenborn argues, unleashing autonomous weapons on "human beings in an urban setting is a terrible idea," due to the difficulties distinguishing between friendly troops, enemy fighters, and bystanders.
The scenario that really keeps Kallenborn up at night is the "drone swarm," a network of autonomous weapons giving each other instructions, because an error could cascade across dozens or hundreds of killing machines.
Several human rights groups, including Suchman's committee, are pushing for a treaty banning or regulating autonomous weapons. So is the Chinese government. While Washington and Moscow have been reluctant to submit to international control, they have imposed internal limits on AI weapons.
The U.S. Department of Defense has issued regulations requiring human supervision of autonomous weapons. More quietly, Russia seems to have turned off its Lancet-2 drones' AI capabilities, according to an analysis cited by the military-focused online magazine Breaking Defense.
The same impulse that drove the development of AI warfare seems to be driving the limits on it too: human leaders' thirst for control.
Military commanders "want to very carefully manage how much violence you inflict," says Kallenborn, "because ultimately you're only doing so to support larger political goals."