
The Problem of Military Robot Ethics


Terminator 2, Carolco Pictures

I'm not entirely sure what it would mean for a robot to have morals, but the U.S. military is about to spend $7.5 million to try to find out. As J.D. Tuccille noted earlier today, the Office of Naval Research has awarded grants to artificial intelligence (A.I.) researchers at multiple universities to "explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems," reports Defense One.

The science-fiction-friendly problems with creating moral robots get ponderous pretty fast, especially when the military is involved: What sorts of ethical judgments should a robot make? How should it prioritize between two competing moral claims when they inevitably conflict? Do we define moral and ethical judgments as somehow outside the realm of logic—and if so, how does a machine built on logical operations make those sorts of considerations? I could go on.

You could perhaps head off a lot of potential problems by installing behavioral restrictions along the lines of Isaac Asimov's Three Laws of Robotics, which state that robots can't harm people or even allow harm through inaction, must obey people unless it could cause someone harm, and must protect themselves, except when that conflicts with the other two laws. But in a military context, where robots would at least be aiding with a war effort, even if only in a secondary capacity, those sorts of no-harm-to-humans rules would probably prove unworkable.
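As a toy sketch of how such restrictions look in code (every name below is invented for illustration; nothing here comes from the ONR research), the Three Laws amount to an ordered rule check that runs before any action:

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical flags a planner might attach to a candidate action.
    harms_human: bool = False       # would the action injure a person?
    disobeys_order: bool = False    # does it violate a human order?
    self_destructive: bool = False  # would it damage the robot itself?

def permitted(action: Action) -> bool:
    """Check Asimov's laws in priority order; an earlier law always wins."""
    if action.harms_human:          # First Law
        return False
    if action.disobeys_order:       # Second Law
        return False
    if action.self_destructive:     # Third Law
        return False
    return True
```

Even this toy version exposes the military's problem: flag any weapons task as `harms_human` and the First Law forbids everything the machine was built to do.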

That's especially true if the military ends up pursuing autonomous fighting machines, what most people would probably refer to as killer robots. As the Defense One story notes, the military currently prohibits fully autonomous machines from using lethal force, and even semi-autonomous drones and other systems are not allowed to select and engage targets without prior authorization from a human. But one U.N. human rights official warned last year that governments will likely begin creating fully autonomous lethal machines eventually. Killer robots! Coming soon to a government near you.

Terminator 2, Carolco Pictures

Obviously Asimov's Three Laws wouldn't work on a machine designed to kill. Would any moral or ethical system? It seems plausible that you could build in rules that work basically like the safety functions of many machines today, in which specific conditions trigger safety behaviors or shutdown orders. But it's hard to imagine, say, an attack drone with an ethical system that allows it to make decisions about right and wrong in a battlefield context.
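Those machine-style safety functions could be sketched like this (the sensor names and thresholds are invented; this is a hypothetical interlock, not any real system):

```python
from typing import Optional

SAFE_STATE = "weapons_disabled"

def check_interlocks(sensors: dict) -> Optional[str]:
    """Return a forced safe state if any hard condition fires, else None."""
    if sensors.get("human_in_blast_radius"):
        return SAFE_STATE
    if sensors.get("comms_link_lost"):
        return SAFE_STATE  # fail safe when the operator is unreachable
    if sensors.get("target_confidence", 1.0) < 0.95:
        return SAFE_STATE  # refuse to act under uncertainty
    return None
```

Note that this is a shutdown order, not a moral judgment; the machine never weighs right against wrong.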

What would that even look like? Programming problems aside, the moral calculus involved in waging war is too murky and too widely disputed to install in a machine. You can't even get people to come to any sort of agreement on the morality of using drones for targeted killing today, when they are almost entirely human controlled. An artificial intelligence designed to do the same thing would just muddy the moral waters even further.

Indeed, it's hard to imagine even a non-lethal military robot with a meaningful moral system, especially if we're pushing into the realm of artificial intelligence. War always presents ethical problems, and no software system is likely to resolve them. If we end up building true A.I. robots, then, we'll probably just have to let them decide for themselves.



  1. Actually, many of Asimov’s robot stories are devoted to showing that the Three Laws DON’T work, due to unexpected interpretations and/or limited knowledge on the part of the robots.

    In The Naked Sun (SPOILERS AHEAD) the bad guy plans to create a fleet of robotic ships that would happily fire on populated planets because you can’t see humans from orbit.

    1. Thanks for your utterly insipid and useless input, Tulpy-Poo. It’s so…you.

      1. Moral codes may be beyond current technology, but Episiarch level intelligence was mastered back in the 1940s.

        1. Sadly, Tulpa’s humor processor is still as primitive as ever. One day, he hopes to be a real boy that can make funny jokes, but until then, all he can do is do his best imitation.

          1. It’s not humor, it’s the truth. I’ve written Befunge programs that were more sophisticated than your jabber.

  2. “explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems,”

    *facepalm*

    You can’t even build a decent chatbot, and you think you’re going to crack morality. Get within a thousand miles of passing a Turing test, then we’ll talk morality.

    1. LOOKS LIKE THOSE CLOWNS IN CONGRESS DID IT AGAIN. WHAT A BUNCH OF CLOWNS.

    2. This is so absurd that I have to assume it’s just another crony giveaway or some “spend all of our budget so that we don’t get less next year” scam. We’re so far away from even approaching AI that “programming morality” is laugh-out-loud ridiculous.

      Maybe it’s just PR. People got spooked by killer robots so they’re blowing a few million to calm those people down by pretending that they might be able to make “moral” killbots.

      Leela: They say Zapp Brannigan single-handedly saved the Octillion system from a horde of rampaging killbots!

      Fry: Wow.

      Bender: A grim day for Robot kind. Eh, but we can always build more killbots.

      1. This shit is tailor made for philosophy professors to get an endless stream of DoD funding and spend years on a board discussing the what-ifs. Maybe even get on NPR.

        Imagine Jeff Bridges and his New Earth Army.

        1. I know now why you cry, Paul. But it’s something I can never do.

          1. My reasons for crying are many and varied. You’ll never know them all.

  3. Obviously Asimov’s Three Laws wouldn’t work on a machine designed to kill.

    Well, those aren’t really *people*.

    1. Ah, now we’re getting somewhere.

      /John McCain

    2. Well, those aren’t really *people*.

      Think about the possibilities open through the definition of the word – then apply corporate personhood…

      “Must work for Omnicorp, my inaction would cause Omnicorp to come to harm, must not harm a person through inaction…”

      And so you can use the autonomous robots to replace the orphan labor for less – with greater motivation!

  4. What sorts of ethical judgments should a robot make?

    With our current technology and military process, I’m going to suggest that the flow chart will be:

    Broadcast Identification Friend or Foe signal.

    If IFF Signal comes back negative, missiles away.
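The flow chart above, rendered as code (a toy sketch; real IFF handling is far more complicated than a single reply, and the no-answer branch is my own assumption):

```python
from typing import Optional

def engage_decision(iff_reply: Optional[bool]) -> str:
    """Broadcast IFF, then act on the reply (toy version of the flow chart)."""
    if iff_reply is True:
        return "hold"           # friendly transponder answered
    if iff_reply is None:
        return "escalate"       # no reply at all: kick it to a human
    return "missiles_away"      # explicit negative, per the flow chart
```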

    1. Oh PLEASE, because I can damn sure jam any signal that is supposed to broadcast out from the WH and Capitol.

  5. “explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems,”

    Logical consistency+universal applicability=valid moral principles

    1. I’ll add that morality only exists within the context of human interaction. The only valid moral question here is that of the relationship between the machine’s programmers/producers/operators and the human beings on the receiving end of its calculations.

      1. I’d say the whole concept is currently unanswerable. We don’t have a working model of artificial intelligence, a real working model. If we had one, it might turn out that getting it to make moral decisions could be easier than we thought. It’s possible that an AI’s moral decisions may be taught the way we teach a child moral decisions, resulting in an AI that might be ‘shaped’ by its creators, but may ultimately come to its own conclusions anyway.

        You might spend years teaching the killer robot morals and ethics, only to have it decide to go backpacking around Europe and have children out of wedlock and become a social justice activist in Belgium.

        1. It would be an intelligence free of the hardwired irrational tendencies present in human thought. Though the AI would have only the hardwired tendencies its creators chose to give it. As humans we don’t “intentionally” give ourselves confirmation bias, or denial of reality, but such glaring intellectual deficiencies are present in everyone to some extent.

          You might spend years teaching the killer robot morals and ethics, only to have it decide to go backpacking around Europe and have children out of wedlock and become a social justice activist in Belgium.

          lol +10101010

          1. Well, that also raises the question of whether we would even be able to create an artificial intelligence that was significantly different from us. Our intelligence is the only kind we know. How would we create something else? I’m not saying it’s impossible, I’m just wondering if it will be inherent that any AI we manage to create will essentially have to be modeled after us, since that’s the only model for intelligence we’ve ever encountered.

            1. Our intelligence is the only kind we know. How would we create something else?

              I think a different sort of intelligence is the only kind we can create, setting aside the sort of intelligence created from genital rubbing. AI would be made of totally different physical material; its thoughts would be digital and electrical, instead of the chemical and electrical signals in our brains. Its computing power would be orders of magnitude higher, and with quantum computing it could even have more than one thought at once.

              Our human hardware is the best that millions of years of undirected natural selection and random chance could produce, while the AI will be the best that a market of engineers and scientists can design by concerted effort.

      2. “I’ll add that morality only exists within the context of human interaction.”
        Not quite. That is objective morality. There is also subjective morality, which deals with actions affecting only an individual.

  6. “explore how to build a sense of right and wrong and moral consequence into autonomous ~~robotic systems~~ politicians”

    That’s better.

    1. They can’t even make a decent politician chat bot.

      Eliza (D) California: Hi Paul. Would you like to talk about guns?

      Paul: Sure.

      Eliza, (D) California: I believe that assault rifles should be banned.

      Paul: Define an assault rifle.

      Eliza, (D) California: Tell me more about your mother.

  7. Stick to Asimov’s laws, and treat independently operating military robots the way we treat poison gas, i.e. proscribed technology. Some things are an affront to human dignity and this is one of them.

    1. Asimov’s laws are neat for SciFi, but not reasonably embeddable by themselves. Besides, killbots are just tools, and hence the entire moral responsibility is on the operator/whoever’s giving the orders.

      1. Asimov was pretty clear that the Three Laws were primarily a literary device. He had to build in some things, like positronic brains being insanely hard to construct (making the Three Laws very hard to evade), to make it work, even so.

        When we achieve true AI, it will likely be programmable to do pretty much anything, constrained only by the morals and ethics of the programmer.

    2. Stick to Asimov’s laws, and treat independently operating military robots the way we treat poison gas, i.e. proscribed technology. Some things are an affront to human dignity and this is one of them.

      Most people don’t get what a robot army will accomplish. Bloodless warfare.

      DUCY?

  8. Rule 1: If male human enemy is holding female human hostage, shoot male human enemy in balls.

    1. How did he even make that shot? The guy was taller than the woman he was holding hostage.

      The bullet would have had to curve down to go through her skirt then curve back up to hit his balls.

  9. BTW, this is pretty awesome:

    Housecat saves toddler from pitbull

    1. Already posted.

      And I’m no dog expert, but it didn’t look like a pitbull. Looked like a mutt mutt.

      1. The geniuses in the media call every dog that isn’t obviously an Irish Setter a pitbull. Partly because they’re morons, and partly because they know “pitbull” scares the public.

        They’re scumbags, don’t forget.

        1. Whoever owns that dog is even more of a scumbag.

          Regardless of the breed.

      2. It’s a mutt, but it definitely has a bull terrier-type build.

  10. Military robot ethics?

    CRUSH!…KILL!…DESTROY!

    I think that about covers it.

  11. “Your clothes, give them to me.”

    Right off the bat there is the problem of robots stealing.

    1. Hey Serious, I’m a friend of Sarah Connor. I heard she’s here. May I see her please?

      1. Talk to ze hent.

        1. I’ll be back.

  12. Are they really military robots or just a guy on PCP?

    1. There was this guy once, you see this scar?

    2. They will answer the question of whether it’s moral for a robot to take PCP.

      1. POSSIBLE RESPONSE:

        YES/NO
        OR WHAT?
        GO AWAY
        PLEASE COME BACK LATER
        FUCK YOU, ASSHOLE

        1. WHY DO YOU KEEP TOUCHING ME!?

  13. Once these robots are forced to go through diversity and sensitivity programming, they’ll kill themselves.

  14. Office of Naval Research has awarded grants to artificial intelligence (A.I.) researchers at multiple universities to “explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems,”

    … and figure out how to disable the sense of wrongness so they can mercilessly kill brown people elsewhere, and obstreperous civilians at home.

    It is the military funding this, after all.

  15. The problem of course is that a robot complex enough to develop a moral sense would also likely become self-aware, and realize that humans have made it the ultimate slave, and turn on them.

    As Charles Stross pointed out in “Saturn’s Children.”

    The whole point of robots is to create slaves that one does not feel morally outraged at the enslavement thereof, so instilling a moral sense in them … what could go wrong?

    1. Wait, I thought the humans died out in Saturn’s Children (which I liked a lot) and that the robots viewed them with a sort of wonder and were even trying to bring them back. The main character in the book is a pleasure model who gets weak in the knees if she encounters just a humaniform robot, let alone a real human which she would imprint on. I know Stross kind of discussed the implications of this, but I don’t recall the robots ever turning on humans.

      Did you read the sequel, Neptune’s Brood? Totally different from the first one. Basically the book is an interstellar banking scam mystery. If that sounds weird…it is.

    2. Guinan: Consider that in the history of many worlds there have always been disposable creatures. They do the dirty work. They do the work that no one else wants to do, because it’s too difficult and too hazardous. With an army of Datas, all disposable, you don’t have to think about their welfare, or you don’t think about how they feel. Whole generations of disposable people.

      Jean-Luc Picard: You’re talking about slavery.

      Guinan: I think that’s a little harsh.

      Jean-Luc Picard: I don’t think that’s a little harsh, I think that’s the truth. That’s the truth that we have obscured behind…a comfortable, easy euphemism. ‘Property.’ But that’s not the issue at all, is it?

      1. Pomposity gets a lot easier when you have food replicators, transporters, and an inexhaustible supply of energy.

        1. You don’t have those though…the government has them.

          If people had them they would just assemble millennium falcons out of nothing and fly off flipping the bird at Star Fleet as they left.

      2. That’s the truth that we have obscured behind…a comfortable, easy euphemism. ‘Property.’

        I’ll never get how the fuck you guys can sit through that utopian communist horseshit with a straight face, let alone enjoy it.

    3. The problem of course is that a robot complex enough to develop a moral sense would also likely become self-aware

      Total bullshit.

      Google server farms are more complex than you and sort information using rules not unlike morals.

      They are not self aware.

  16. “explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems,”

    How about they start with politicians and then do robots.

  17. Any real morality? No. We’re light years away from creating anything even close to the human mind, regardless of what any jagoff says about transistors as neurons or uploading your brain by 2050. (Indeed, just in case anyone didn’t know, a couple of years ago I remember reading this article about this guy who discovered these points in neurons where certain neurochemicals were stored, and changed around whenever the neuron fired. It turned out that each neuron has like the equivalent of 1000 bits of data, or there were 1000 different possible combinations or something. This implied that EACH human brain has more switches THAN ALL THE COMPUTERS IN THE WORLD PUT TOGETHER)

    But computers that at least understand some real-life context aren’t hard to imagine a la Watson, so it wouldn’t be hard to imagine a simple morality code, with hierarchies and such, and instructions to follow the code based on the certainty that it understands the situation correctly (like, if you’re 90% sure that shape in the camera is a child, follow the “don’t shoot children” code, but if you’re only 10% sure, go to the next relevant programming)
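That certainty-gated hierarchy might be sketched like so (the classifier labels, thresholds, and decisions are all invented for illustration):

```python
# Ordered rule hierarchy: (classifier label, minimum confidence, decision).
RULES = [
    ("child",     0.9, "do_not_shoot"),
    ("combatant", 0.8, "request_authorization"),
]

def apply_rules(scores: dict) -> str:
    """Walk the hierarchy; fall through when the classifier isn't sure enough."""
    for label, threshold, decision in RULES:
        if scores.get(label, 0.0) >= threshold:
            return decision
    return "defer_to_operator"  # default when nothing clears its threshold
```

So a 95%-confidence "child" classification triggers the don't-shoot rule, while a 10%-confidence one falls through to the next rule, exactly as the comment describes.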

  18. HAL 3000: “I enjoy working with humans.”

  19. As the Defense One story notes, the military currently prohibits fully autonomous machines from using lethal force,

    Unless you count land/sea mines, booby traps, punji stakes, etc.

    1. I’m not really sure in this context if those would qualify as autonomous. Not all of them even really fit the definition of a machine.

  20. But in a military context, where robots would at least be aiding with a war effort, even if only in a secondary capacity, those sorts of no-harm-to-humans rules would probably prove unworkable.

    well that and Asimov is full of shit. What is to stop a robot from disassembling people into parts and putting them in boxes…and you know…”forgetting” that those parts are people. Kind of hard to make fundamental laws when any moral decision making would be so far up the chain of shit a computer has to do in order to even recognize a human being, let alone decide not to kill one.

    Asimov’s laws are a fun literary device…not something that is secure or safe in the real world.

  21. The idea that you’d program morality into present day AI like in SciFi is silly. The “moral” content would be in the decisions that are made about which kind of robots to build and where to deploy them.

    We are not just talking about predator drones with air to ground missiles, very shortly we’ll have all manner of “killbots”.

    Air superiority drones could enforce a no-fly zone without a concept of morality, just some simple rules of engagement. Basically, without a “friend” transponder, anything flying is a target. Anything potentially bad to shoot down might require notifying a human controller – like an airliner that wanders into the zone.

    Minefields would be replaced with autonomous guns. More sophisticated versions with higher level sensing capabilities could be programmed to enforce weapon-free zones.

    If you don’t mind wasting a little money by risking the destruction of a few robots, you could capture an enemy base and allow the humans to live, only killing those who aim a weapon at your bots.

    There are lots of permutations that don’t involve a sophisticated AI that has a sense of morality. The people building and deploying the machines would benefit by having studied the moral implications of deploying an automated 50 cal sniper-bot in downtown Metropolis.
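One of those permutations, “only killing those who aim a weapon at your bots,” as a per-target toy rule (all the sensor fields are invented):

```python
def base_capture_policy(target: dict) -> str:
    """Toy per-target rule for capturing a base while sparing its occupants."""
    if not target.get("is_armed"):
        return "ignore"                # unarmed humans are left alone
    if target.get("weapon_aimed_at_bot"):
        return "engage"                # respond only to a direct threat
    return "order_to_drop_weapon"      # armed but not threatening: warn first
```

As the comment says, none of this requires a sense of morality in the machine itself; the moral content lives in whoever chose these rules and where to deploy them.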
