Military

Tearful Terminator? Military Seeks Killer Robots With Moral Judgment

Armed Robotic Vehicle (U.S. Army)

Is the face of future warfare that of a steel-skinned Terminator-style killer robot—with a tear trickling down its cheek? That's essentially the goal of research funded by the U.S. military that seeks to defuse a growing chorus of warnings that drones and other increasingly autonomous weapons are morphing into self-directed killer robots.

DefenseOne's Patrick Tucker reports:

The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.

This isn't just pie-in-the-sky research. Semi-autonomous weapons systems are already deployed by countries including Israel, South Korea, and the United States. The weapons are restrained from killing on their own say-so more as a matter of policy than because of technical limitations. (That's the reportedly discontinued Armed Robotic Vehicle depicted above.) The United States military currently requires robotic weapons systems to be human-supervised and to engage only non-human targets. Fully autonomous Terminator-style systems aren't allowed. Yet.

That's because people find the idea of machines choosing and snuffing their own targets creepy.

The United Nations Human Rights Council wants a moratorium on lethal autonomous robotics—at least until an internationally agreed upon framework has been established. (That's the U.N. all over—concern and impotence in the same sentence.)

"Humans must not be taken out of the loop over decisions regarding life and death for other human beings. Meaningful human intervention over such decisions must always be present," the Vatican's Archbishop Silvano Tomasi told an international gathering on the issue just yesterday.

And a new report from Human Rights Watch and Harvard Law School's International Human Rights Clinic cautions:

Fully autonomous weapons' inability to relate to humans could interfere with their ability to ensure that all means short of force are exhausted. … Furthermore, it is unlikely that a fully autonomous weapon would be able to read a situation well enough to strategize about the best alternatives to use of force.

While fully autonomous weapons would not respond to threats in fear or anger, they would also not feel the "natural inhibition of humans not to kill or hurt fellow human beings." Studies of human soldiers have demonstrated that "there is within man an intense resistance to killing their fellow man." Compassion contributes to such a resistance, but it is hard to see how the capacity to feel compassion could be reproduced in robots.

That would seem to be a daunting task. You could program a robot with all sorts of scenarios and decision trees, but at the end of the day it's a robot following programming, not a human following values and instinct.

Then again, maybe that will prove safer. No emotions means no rage killing for one thing. Could an arsenal of compassion-less robot killers mean fewer atrocities?

Chances are that we'll get to find out. The U.S. project may or may not succeed in teaching morality to computers. But it's hard to imagine that all militaries will resist the temptation to deploy advancing generations of automated weapons.


Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Report abuses.

  1. Just program them to shoot to disarm and at kneecaps.

    1. I thought all the killbots were programmed to turn off after a certain number of kills.

      1. That’s why you make them just wound and maim.

  2. Do. You. Want. To. Play.

    A.

    Game?

    1. “The only way to win is not to play.”

  3. [insert Battlestar Galactica quote here]

    1. “Starbuck, I am your father!”

    2. “Mr. Gaeta, set a course for the Hoth system. Warp factor 5. Engage!”

    3. “Reavers, bearing 125 carom 36, Captain Dallas!”

    4. “By your command.”

  4. Also:

    lethal autonomous robotics

    =

    excellent band name

  5. Will we have to treat them for PTSD afterward?

  6. I know now why you cry. But it’s something I can never do.

  7. I’ll be happy when the robots are able to realize that the person(s) giving the orders are the more dangerous ones.

  8. It’s funny that we spend so much time and effort trying to make human beings comfortable with killing people, just to turn around and worry that robots don’t have a sense of morality.

    In light of the fact that humans can do this in a time of war, hand-wringing about robot morality is just a way of pretending to care.

    I’ll believe that people care about morality when they actually start caring about morality.

  9. So when are they going to get to work on developing cops and bureaucrats with a moral code?

    1. And reverse years of training and indoctrination?

  10. It’s all a short-term issue, anyway, because we’ll be facing robot armies, too. Let’s make sure our robots are okay killing other robots.

    1. It’s simple: we plant in half the robots a love for New York style pizza and in the other half a love of Chicago deepdish.

      They’ll never be able to unite.

      1. I want a robot servant. Handles housework, yardwork, performs bodyguard duties, changes flat tires. Oh, and does my taxes and is my proxy for any government interaction.

        1. And looks like a Japanese schoolgirl, amiright?

          1. Not for me, no, but for many, I believe you are correct, Drake-san.

  11. Do Androids Dream of Electric Sheep?

  12. The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.

    So the entity responsible for equipping autonomous machines with a sense of right and wrong is… the government? Government does a hell of a job equipping autonomous humans with a depraved sense of morality currently, so why not terminators? Nothing could possibly go awry.

    1. Just plug in the disposition matrix, and they’ll be ready to go!

  13. So, how exactly does one instill “a sense of right and wrong and moral consequence” in a machine? The threat of deactivation? Until machines with AI achieve self awareness and a sense of self preservation, it’s all just programming. What do you do, build in a constant factor to inhibit the choice to kill? Why not have different constants for different types of situations? Heck, why not different constants for different antagonists, just to get that extra special feeling of human-like decision making, such as when people kill “the other” more readily than “their own?”

    Also:

    “Humans must not be taken out of the loop over decisions regarding life and death for other human beings. Meaningful human intervention over such decisions must always be present,” the Vatican’s Archbishop Silvano Tomasi told an international gathering on the issue just yesterday.

    Once AIs do achieve sentience, expect the same argument from them as to whether humans are qualified to sit in judgment and to punish and terminate machine consciousness.

  14. “Boss, I think the switch is set to ‘Humans are a plague on the earth.’ Can I switch it back to ‘Just War’?”

    “No, we’re behind already, we can’t afford further delays, just ship it as is.”

  15. I had the impression that in warfare, wounding was more effective than killing, anyway. It takes more man-power to care for a wounded soldier than to tend to a dead one. Robots could actually make the split-second timing to inflict a wound that was optimal in terms of how time-consuming it is to treat.

    Though, as others have pointed out here, if we’re going to work really hard on morality and ethics, maybe we should program the humans, first.

  16. When Mycroft Holmes IV calculated ballistic trajectories for orbital bombardment of Earth, did he have any silly scruples about killing?

    1. Actually yes.

      They specifically offset the initial impact points from the predetermined grid to avoid civilian casualties, the idea being that you wanted the folks in the cities to see the big booms but not be killed by them, then broadcast a list of precise impact points to Earth to ensure everyone nearby was evacuated.

      Unfortunately, the media spun the story as the Moon throwing “rice balls” at the Earth, and Glenn Beck types convinced a shitton of people that there was no risk whatsoever, so a whole lot of people went out and had picnics at or near the various ground zero points, leading to massive casualties.

      On the flip side, Mike did liken the “feeling” he had as North America lit up in a near-perfect grid pattern from the impact points to an orgasm.

  17. I’ve got it – put an organic entity inside the machine –

    https://www.youtube.com/watch?v=YQLbwOGT8eM

    1. Can’t watch right now, but I seem to recall that didn’t work so well as far as the Daleks go.
