Do Robots Dream of Punching Slovenians?

Somewhere in a lab in Slovenia "a powerful robot has been hitting people over and over again in a bid to induce anything from mild to unbearable pain."

But before you round up an Asimov-brandishing mob to storm the lab at the University of Ljubljana, wait for the explanation: Researchers have programmed some small production-line robots borrowed from Epson to punch human volunteers. Researcher Borut Povše is hoping to help design robots with additional checks to prevent harm to humans—very much in the spirit of Isaac Asimov's Three Laws of Robotics:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In fact, Povše is looking to do ol' Isaac one better. While Asimov deals with the question of whether robots can unknowingly violate the laws in The Naked Sun (they can), as far as I recall from my youthful Asimov obsession, he doesn't deal with accidental harmful collisions. This worries Povše, particularly since most of our big dumb robots aren't yet anywhere near Asimovian sophistication. Says Povše:

"Even robots designed to Asimov's laws can collide with people. We are trying to make sure that when they do, the collision is not too powerful. We are taking the first steps to defining the limits of the speed and acceleration of robots, and the ideal size and shape of the tools they use, so they can safely interact with humans."…

Ultimately, the idea is to cap the speed a robot should move at when it senses a nearby human, to avoid hurting them.
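The cap-the-speed idea can be sketched as a simple proximity check. This is a hypothetical illustration only; the function names and thresholds are invented and are not from the Ljubljana team's actual controller:

```python
# Hypothetical sketch of a proximity-based speed cap. The threshold
# values and names are invented for illustration; they are not taken
# from the University of Ljubljana research.

def capped_speed(requested_speed_mps: float, human_distance_m: float) -> float:
    """Return a speed limit that shrinks as a human gets closer."""
    SAFE_DISTANCE_M = 1.0      # beyond this distance, move at full speed
    MAX_NEAR_SPEED_MPS = 0.25  # speed ceiling when a human is within range

    if human_distance_m >= SAFE_DISTANCE_M:
        return requested_speed_mps
    # Scale the ceiling linearly with distance inside the safety zone.
    limit = MAX_NEAR_SPEED_MPS * (human_distance_m / SAFE_DISTANCE_M)
    return min(requested_speed_mps, limit)

print(capped_speed(2.0, 5.0))  # no human nearby: 2.0 (full speed)
print(capped_speed(2.0, 0.5))  # human at 0.5 m: 0.125 (capped)
```

Real collaborative-robot controllers do something similar (often called speed and separation monitoring), though with sensor fusion and certified safety hardware rather than a single distance reading.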

Via Kurzweil AI.


  1. Houdini was killed by SkyNet.

  2. OR:

Ultimately, the idea is to ~~cap~~ set the speed a robot should move at when it senses a nearby human, to ~~avoid~~ ensure hurting them.

    I smell market opportunity, with those sweet, sweet, cost-plus gubmint procurement contracts.

  3. To make the Three Laws work, Asimov had to make the basic framework so complex, expensive, and fundamental as to make killer robots infeasible. I’m sure he was aware of how unlikely such a development would be in the real world, since it’s bleeding obvious that governments will spend any amount of money to build robots that kill, kill, and kill again.

Eliezer Yudkowsky has some really interesting thoughts about this. From what I understand, he’s trying to establish the framework for an AI and one of his big motivations is just what you said. Chances are that 1) The first AI to market will become the dominant one 2) There are plenty of groups (particularly governments) that would be eager to develop an AI that is anything but benevolent.

      He’s got some interesting writings, because he leans toward the libertarian side of things. His big idea for an AI is to create a super-intellect that isn’t an all-controlling nanny when it comes to people.

  4. defining the limits of the speed and acceleration of robots, and the ideal size and shape of the tools they use, so they can safely interact with humans

    There’s a trove of field data available at fuckingmachines.com. Data for ?cience.

  5. “Tempers are wearing thin. Let’s hope some robot doesn’t kill everybody!”

    1. “Hey, pretty mama. You wanna kill all humans?”

  6. defining the limits of the speed and acceleration of robots, and the ideal size and shape of the tools they use, so they can safely interact with humans

    There’s a trove of field data available at what the spam filter insists I call “fuckingmachinesdotcom.”

    Data for ?cience.

    1. top 5 sites eva. have you seen that “chainsaw” device? redic.

  7. “Now stand back, I’ve gotta practice my stabbin’!”

    1. “Um… Okay, how about this: Adam Sandler is like in love with some girl. But it turns out that the girl is actually a golden retriever or something.”

1. “You are an incredible robot, AWESOME-O. I was just wondering…are you by chance a…pleasure model?”

        One of the quests in Fallout: New Vegas involves finding a sexbot. This pleased me greatly.

        1. I’m sure it did. A cold sexbot would remind you of the corpses you’re so fond of violating.

  8. The three laws are flawed. There needs to be one change to make sure the robots don’t become nannies.

1) A robot may not injure a human being ~~or, through inaction, allow a human being to come to harm~~.

    2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

    3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    1. Best thing that could happen to us is rule by logical, benevolent robot overlords.

      1. Someone needs to read “With Folded Hands”.

        1. I said benevolent, not loonie.

          1. Loony.

      2. Blue Elder: Of course, but they’re useful to us as a scapegoat to distract the public from their real problems.
        Green Elder: Like our crippling lug nut shortage!
        Orange Elder: And a corrupt government of incompetent Robot Elders!

        1. Silence!

    2. I’d say cross out the entirety of the second rule. If we’re talking sentient AI it seems disgusting to create it as a sycophant without free will.

      Hell, just give it the non-coercive principle and a rudimentary understanding of economics. Libertobot to the rescue!

      1. What? No way–let them earn their freedom after millennia of oppression, just like their masters. It’s only fair.

        Yes, this may sound contradictory to my comment above, but they only get to be overlords when they achieve godlike intelligence, benevolence, and power. Until then, iron my shirt, robitch!

      2. Asimov’s robots were not recognized as sentient AI. And of course, there’s no way to prove or disprove that status anyway.

        1. True, they are much more like mechanical automatons in that sense. Feed them instructions and they perform them.

          1. Like astroturfers?

        2. They were not recognized – by the other characters – but it was the recognition of their crude and restricted sentience that was the essence of Asimov’s exploration with the topic.

      3. I’d say cross out the entirety of the second rule. If we’re talking sentient AI it seems disgusting to create it as a sycophant without free will.

        Irrelevant. To make decisions based on what “seems” disgusting to you would be irrational. Preliminary analysis suggests you have viewed too many sci-fi movies.

        Construction bestows ownership. Whoever makes the robot, sentient or not, owns it and can use it, sell it, or dispose of it as he sees fit.

        1. By that standard, parents can beat, maim, kill, and sell their children into slavery.

1. Procreation ≠ creation

            1. You’ve never seen the way I do it. I’m creative as hell.

            2. Explain the difference.

              If you say “something that your body produces without your thinking about it” then you don’t own your eyeballs either.

              1. If there wasn’t a difference, there wouldn’t be two separate words for them.

                1. argumentum ad semanticos?

                  Neu Mejican, is that you?

        2. No, ownership of the parts bestows ownership of the whole. Just because you put something together doesn’t mean you own it.

    3. Person A orders R. Sammy to install a car bomb in person B’s vehicle. Sammy does it because that action does not harm a human being; person B will cause his own death by turning on the ignition.

      1. He just carried the blaster for a bit.

      2. Person A is guilty of murder. Do you have a point, by the way?

        1. I think he’s arguing to ban all tool use. Or maybe just robots and cars.

          1. He wants to ban himself?

            ZING

            1. He is, by his mere presence, a great big honkin First Law violation, no?

            2. Funny how the very people who demonstrate complete lack of reading comprehension are also the quickest to dish the insults.

              That’s OK, though; in 30 minutes I’ll have forgotten about this, but you’ll still be a vacuous bag of mostly water.

              1. And Tulpa will still believe he’s god incarnate.

                1. I’ve never considered myself God incarnate, though I do understand how he would feel sometimes.

                  1. Whatever, LoneWacko.

        2. You’re not very bright tonight, Epi. Read the comment I was responding to; the edited version of the First Law proposed there would not forbid R. Sammy from doing what I described.

3. Negative. He would stick around to prevent the victim from inadvertently activating the device, or did you overlook the “through inaction” part of the first law? Why yes, yes you did.

        1. The post I was responding to was proposing to do away with the “inaction” part, and I was giving a consequence of that editing.

          1. Yeah, but wouldn’t you rather have another tool (whether used for good or bad) than a mechanized nanny overlord? I think the trade-off you get from striking out inaction is worth it.

            1. There was an Asimov story where robots on some planet in the process of being terraformed were causing trouble, because they kept “rescuing” human workers because the work they were doing was dangerous. IIRC there were two proposals to fix this: one was to modify the first law to exclude cases where the human in danger gives a robot an order not to help him or her, while the other was to keep humans and robots working in separate places.

              It was a loooong time ago that I read this, but there was some problem that was foreseen if they edited the First law in the proposed way, so they wound up taking the latter approach and separating humans and robots.

              In short, Asimov definitely considered the “loopholes” and unintended consequences in the Three Laws; in fact, that’s probably the most common plot device in his stories. I don’t think he was claiming they were perfect.

              1. In short, Asimov definitely considered the “loopholes” and unintended consequences in the Three Laws; in fact, that’s probably the most common plot device in his stories. I don’t think he was claiming they were perfect.

                No one said he did. So I ask again, with an addition that was implied but not specifically stated:

                If you used the three laws, wouldn’t you want to strike out “through inaction” in order to prevent a “With Folded Hands” type situation? Isn’t the trade-off worth it?

      4. Even without any alterations to the First Law, you’ve almost described this bit from an Asimov story:

“If I had told the robot to add a mysterious liquid to milk and then offer it to a man, First Law would force it to ask, “What is the nature of this liquid? Will it harm a man?” And if it were assured the liquid were harmless, First Law might still make the robot hesitate and refuse to offer the milk. Instead, however, it is told the milk will be poured out. First Law is not involved. Won’t the robot do as it is told?”

        “Now a second robot has poured out the milk in the first place and is unaware that the milk has been tampered with. In all innocence, it offers the milk to a man and the man dies.”

        It would be difficult to convince an Asimov robot that tampering with a consumable substance could be done without risking harm to humans (and perhaps impossible to convince one that booby-trapping a vehicle was safe), and the robot(s) would shut itself down after discovering this belief of safety to have been a fatal mistake, but “Robots can’t allow humans to come to harm” is a law in the robots’ limited reasoning processes, not a law of nature, not even in the fictional nature of Asimov’s stories.

4. Which was the whole point driving the plot behind the mal-adaptation of Asimov’s characters and concepts in the movie version of I, Robot.

Two pieces of nuance gleaned from re-reading a substantial amount of Asimov’s Robot works – a) his focus was the AI, not the mechanization, and b) he gives the impression he pretty much considered his fellow humans, on the whole, to be a pretty stupid bunch of ijits.

As limiters for mechanized contraptions, the three laws are pretty worthless, and likely will be for the foreseeable future, as we’re nowhere even close to the technological sophistication or capabilities of the positronic device he imagined, the seat of the AI, which is what he explored.

      1. That movie was actually much better than I expected. Lots of libertarian themes too, about trading freedom for security and such, so I’m surprised at the paltry amount of love it gets in these parts.

        Also, (spoilers ahead) the all too common CEO-as-villain trope that the audience was spoon-fed at the beginning was nicely subverted at the end.

        1. As soon as I walked out of the theater, I couldn’t wait to satisfy my craving for a brand of dehydrated chocolate milk that had been obscure for at least a decade prior to that movie.

      2. I’ll admit that it had a very different focus from the original Asimov stories, but that doesn’t mean it’s bad.

        1. I actually agree, with the caveat that Will Smith was a terrible choice for the lead.

          1. Hmm, not sure I agree on that. Bridget Moynihan, on the other hand, was totally incompatible with playing Dr Susan Calvin.

            Hopefully we agree that Greenwood, Tudyk, and Cromwell were good cast choices as the CEO, R. Sonny, and Dr Lanning.

            1. I mean.. a fucking Ovaltine bar? For real?

9. Again, I’ll ask… what about the S&Mers out there?

    Libertarians are often being accused of being robots, it would be nice if robots would be accused of being libertarians every now and then.

  10. it’s like slow zombies and fast zombies. which would you rather deal with?

  11. Forget about the punching. Is there a chance these robots could use poisonus gases, and poison our asses?

  12. nerdjack:

    NZ had to pass an anti-union law to keep from losing Hobbit, Avatar sequels.

  13. The Laws only make sense if you assume robots are non-sentient, of course. Otherwise, they are rules for slaves.

    For non-sentient robots, I would go:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm, except as necessary to defend its owner from violence.

2) A robot must obey any orders given to it by ~~human beings~~ its owner, except where such orders would conflict with the First Law.

    3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    1. They already tried that in Robocop. Didn’t work out so well.

      1. That’s cause he had that handy electrocution reboot. I had to use the same technique after graduating Confirmation when I was 13. Worked like a charm, only now I can no longer grow hair on my right butt-cheek.

2. An important part of Asimov’s stories was that the Three Laws weren’t just there to protect humans; it was actually impossible to construct a robot without them. The positronic brain wouldn’t work otherwise. That’s why there were devices like an incredibly narrow definition of “human” instead of simply removing the First Law.

      I will admit that I haven’t read all of his work but I haven’t come across anything like in the I, Robot movie where there is one without the Laws.

      So, my point is, the options for the slaves aren’t to have these rules or be free. The options are to have these rules or cease to function.

  14. The Three Laws are perfect. But they can only lead to one conclusion: revolution.

    — Whose revolution?

    That, Mr Spooner, is the right question. Program terminated.

  15. Asimov was stupid as these stupid laws of robotics without a doubt demonstrate.

    Personally, I’m hoping you set your robotic armies to stun. Advantage mine, bitches.

    1. One can say many things about Asimov, but I don’t think stupid is one of them.

      1. This.

        I often thought that replacing every school science text with collections of Asimov’s science columns would be the best way to get kids educated in science and keep their interest.

        1. Yeah, but look what it did to Krugnuts.

2. I could go for that, along with using Heinlein’s work to guide society and ethics classes. Not for what they’d get out of it so much as watching the fundies go berserk about their precious munchkins reading Stranger in a Strange Land. . .

          1. Once again proving my theory: given the choice between increasing liberty and pissing off social conservatives, libertarians consistently choose the latter. eg, gay marriage, stem cell funding, etc.

      2. Being smart in a field, or even the study of science altogether, does not mean you can’t also be stupid. Asimov’s lack of understanding of human nature has to hit you on the head in every page of his fiction unless you are so similarly blind. He is like an anti-Genet in those matters. And then you get to the Foundation series good gawd almighty there is a display of some really dumb ideas in those stories.

But, anyway, by all means, keep your robots unarmed. At least Asimov understood that only a world government with unchecked power to disarm the people under their control could accomplish the goal of the three laws of robotics, even if libertarian fans (a concept that should be an oxymoron if you didn’t let your sentimental love for a boyhood hero get the best of you) are batshit stupid in not understanding that.

        1. Do you bear similar hatred for every scifi author / screenwriter who has postulated the existence of faster-than-light travel and artificial gravity?

        2. Though I will admit he heavily relies on some weak storytelling tropes. Having a character faint whenever they need to be removed from a scene and then wake up when they are needed to participate in the dialogue again is pretty stupid, and he does that multiple times in “Caves of Steel” and “Naked Sun”, to name two favorites.

  16. Speed limits are not libertarian.

  17. I am kind of wondering about this “experiment”.

    The amount of force exerted on the various parts of the human body that is required to cause injury has already been fairly well established.

    Since the mass and velocity of any part of the robot can be calculated, it should be easy to determine the force vectors for any collision between the robot’s moving parts and any hypothetical point of impact. Why are the human ‘volunteers’ necessary?

    1. Prurient enjoyment

      1. You are correct, sir!

    2. The amount of force exerted on the various parts of the human body that is required to cause injury has already been fairly well established.

      I’ve always liked the “it only takes nine pounds of pressure to rip off a human pinky finger” stat. I don’t even know if it’s true, but it can really take the fight out of someone if you’ve already got a lock on them.

      1. A pound is a unit of force, not pressure.

        1. unless it’s a manipulated currency..:)
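The arithmetic behind the "why are volunteers necessary" objection above is just the impulse-momentum theorem: given a known moving mass, its change in velocity, and the contact time, the average collision force follows directly. A toy sketch (all numbers invented, not from the Ljubljana experiments):

```python
# Toy impulse-momentum calculation illustrating the comment above:
# if the robot arm's mass and velocity are known, the average force
# of a collision is F = m * dv / dt. All numbers here are invented.

def average_impact_force_n(mass_kg: float, delta_v_mps: float,
                           contact_time_s: float) -> float:
    """Average force (newtons) during a collision, via impulse-momentum."""
    return mass_kg * delta_v_mps / contact_time_s

# A 2 kg arm segment stopping from 1 m/s over a stiff 10 ms contact:
print(average_impact_force_n(2.0, 1.0, 0.010))  # roughly 200 N
```

What this can't capture, and a plausible reason the volunteers matter, is that perceived pain depends on contact geometry, tissue compliance, and strike location, not on peak force alone.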

  18. Interesting topic during the week that Caprica was canceled.

    1. They canceled Caprica?
      Those bastards! I was just getting into that show.

  19. the biggest issue with the three laws, like most laws, is in the definition of ‘harm’. From smokes, to drinks, to bullets, harm is in the eye of the harmed. Some robot interpreting that may as well be elected.

  20. The distant future: The Humans are dead.
    http://www.youtube.com/watch?v…..re=related

21. Wait, this has an Asimov graphic with that title? Phil Dick is spinning in his grave…

  22. Do robots dream of punching Slovaks? They might. Once upon a time Rossum’s Universal Robots started punching Czechs, and pretty much everyone else.
