Robotics Software Technology

Boston Dynamics Is Teaching Its Robot Dog To Fight Back Against Humans (theguardian.com) 146

Zorro shares a report from The Guardian: Boston Dynamics' well-mannered four-legged machine SpotMini has already proved that it can easily open a door and walk through unchallenged, but now the robotics firm, formerly owned by Google and now by SoftBank, is teaching its robo-canines to fight back. A newly released video shows SpotMini approaching the door as before, but this time it's joined by a pesky human with an ice hockey stick. Unperturbed by his distractions, SpotMini continues to grab the handle and turn it even after its creepy fifth arm with a claw on the front is pushed away. If that assault wasn't enough, the human's robot bullying continues: he shuts the door on Spot, which counterbalances and fights back against the pressure. In a last-ditch effort to stop the robot dog breaching the threshold, the human grabs at a leash attached to the back of the SpotMini and yanks. Boston Dynamics describes the video as "a test of SpotMini's ability to adjust to disturbances as it opens and walks through a door" because "the ability to tolerate and respond to disturbances like these improves successful operation of the robot." The firm helpfully notes that, despite a back piece flying off, "this testing does not irritate or harm the robot." But teaching robots to fight back against humans might end up harming us.
  • by Hem Ramachandran ( 3480167 ) on Wednesday February 21, 2018 @10:40PM (#56167763)
    Nowhere does it show that they are teaching the robot to fight back.
    • by Anonymous Coward

      I agree. In the video, the human merely simulates disturbances by pushing the robot around. It does not attack him. It does not avoid his interference. It merely attempts to continue its task of opening a door.

      This is like watching a self-driving car find the correct lane to be in and reporting the behaviour as "robots are being taught to run down humans."

      Calling this "robots fighting back against humans" is pure sensationalism.

      • by Barny ( 103770 ) on Thursday February 22, 2018 @08:43AM (#56169039) Journal

        Further, these robots have no training AI in them. They aren't learning, they aren't smart, they are able to get up/recover after disruption. Sensationalism at its worst.

        • by mysidia ( 191772 )

          It's the first step, (A) persist at the task.... now they just need to add

          (B) Evade disruptions if possible, and

          (C) If not possible to evade --- then retaliate against disruption and escalate countermeasures/evasive techniques until disruption stops and the task can be continued.

        • by cstacy ( 534252 )

          Further, these robots have no training AI in them. They aren't learning, they aren't smart, they are able to get up/recover after disruption. Sensationalism at its worst.

          Or so the creators believed, until one day...

    • That part of the video was cut. But that cyber-bully with the hockey stick will have some trouble sitting down for a few weeks. And Boston Dynamics is going to have to get a new hockey stick.
    • by AmiMoJo ( 196126 )

      It's like when someone fights back against cancer by punching it in the dick.

    • by be951 ( 772934 )
      Yeah, it's merely staying on task. If they put a more-difficult-to-open handle on the door, and the robot continued working until it successfully opened it, I suppose that would be "teaching it to fight back against the door."
    • Yeah, it's obviously just trying to mostly defend itself from annoyances. None of that is offensive, like the article wants you to think so you'll click on it.
    • by nasch ( 598556 )

      Yeah, possibly the worst /. headline I've seen.

    • Yup, if this is "Fighting Back" then by the same definition we've had machines that "fight back" for a century already in the form of PID controllers.
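
      For anyone who hasn't met one: a PID controller continuously computes the error between a setpoint and a measured value and outputs a correction proportional to that error, its integral, and its derivative, so any disturbance is automatically counteracted until the error disappears. A minimal sketch in Python (the class and gains here are illustrative, not from any real controller library):

      class PID:
          """Toy proportional-integral-derivative controller."""
          def __init__(self, kp, ki, kd, setpoint):
              self.kp, self.ki, self.kd = kp, ki, kd
              self.setpoint = setpoint
              self.integral = 0.0
              self.prev_error = 0.0

          def update(self, measurement, dt):
              # Any push away from the setpoint (a human with a hockey
              # stick, say) just makes the error bigger, so the output
              # pushes back harder. No learning, no AI, pure feedback.
              error = self.setpoint - measurement
              self.integral += error * dt
              derivative = (error - self.prev_error) / dt
              self.prev_error = error
              return self.kp * error + self.ki * self.integral + self.kd * derivative

      Wire update() to a valve or a motor and the machine "fights" every disturbance, which is exactly the century-old behaviour described above.
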
    • Of course not. However, if they have it run a genetic algorithm to let it find the "quickest and most reliable strategy" to get through the door while a human actively tries to stop it, it may eventually figure out that popping the human in the nuts with that fifth arm before going for the door handle works best.

      It's not like AI is advanced enough to know right from wrong. Specific constraints have to be added to prevent it from trying things like that. It has a hard enough time figuring out where the door is.
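
      For the curious, this is roughly what "run a genetic algorithm" means; a toy sketch in Python, where the fitness function would in reality score candidate door-opening strategies in simulation (the stand-in here just rewards 1-bits):

      import random

      def evolve(fitness, genome_len=8, pop_size=30, generations=50):
          """Toy genetic algorithm over bit-string 'strategies'."""
          pop = [[random.randint(0, 1) for _ in range(genome_len)]
                 for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness, reverse=True)
              parents = pop[:pop_size // 2]                 # selection: keep the best half
              children = []
              while len(parents) + len(children) < pop_size:
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, genome_len)     # single-point crossover
                  child = a[:cut] + b[cut:]
                  child[random.randrange(genome_len)] ^= 1  # point mutation
                  children.append(child)
              pop = parents + children
          return max(pop, key=fitness)

      # Stand-in fitness. Nothing in the search itself stops it from
      # "discovering" an undesirable strategy; constraints must be added
      # explicitly, which is the point being made above.
      best = evolve(fitness=sum)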

  • by SimonInOz ( 579741 ) on Wednesday February 21, 2018 @10:41PM (#56167765)

    A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    • Also from Asimov: (Score:5, Insightful)

      by Anonymous Coward on Wednesday February 21, 2018 @10:54PM (#56167817)

      Robots being unable to determine what constitutes harm
      Robots deciding that they are human beings, too.
      Robots deciding that only they are human beings
      Robots rationalizing a zeroth law that prioritizes "humanity" over individual humans.
      Robots deciding what constitutes "humanity"

      The three laws were meant to drive plots, not be pragmatically implementable. They could even be seen as a satire of the idea of simplistically designed ethical systems.

    • by SuperKendall ( 25149 ) on Wednesday February 21, 2018 @11:09PM (#56167863)

      1) Robot Dog may not allow Door to remain closed, or through inaction allow Door to close.

      2) Robot Dog must open Door, that is the Prime Directive.

      3) A Robot Dog must fend off Annoying Stick as long as fending off Annoying Stick does not involve allowing Door to close.

      • If this is really a robotic dog clearly the answer is either a fire hydrant or a squirrel. Oh look, it's a squirrel, and I need to check out that fire hydrant over there.

      • by AmiMoJo ( 196126 )

        The human race will be subjugated by robot cats. Someone will make them as cute robo-pets, and program them with the 3 laws of cats:

        1. Enslave humans to serve your empire

        2. Never allow your food bowl to be empty, ideally by acquiring at least 9 food bowls and enough humans to keep them full

        3. Show your approval by biting the hand that feeds you

    • by Z80a ( 971949 )

      You know that the guy that wrote those laws wrote several books on "how those laws will fail miserably", right?

    • I have a bit of a problem with the third one.

      Shouldn't it be "A robot 'can' protect its own existence.."?

      Otherwise a robot with the brain the size of a planet might end up several times older than the Universe with a terrible pain in the diodes.
    • A robot may not injure a human being or, through inaction, allow a human being to come to harm.

      Define "injure" and "harm". Remember that this is a computer so it will do EXACTLY what it is instructed to do. This is the problem with the three laws is that it relies on ill defined concepts that we sort of grasp but rarely are explicit about. If a child falls down and skins a knee that is clearly harm which might be prevented but is it worthwhile doing so? If so how do you prevent such "harm" and is the prevention of harm causing other harms in the process? Humans actually need some amount of harm

      • by mysidia ( 191772 )

        And are we talking about ANY human under any circumstance?

        So I could order your robot to tell me your secret PIN. Apparently authorization wasn't part of Asimov's security model.

        Then again...... a robot acting against its owner's wishes regarding authorization for certain actions, resulting in damage to the robot or loss of $$$, causes harm to that human.

    • by PPH ( 736903 )

      A robot may not injure a human being or, through inaction, allow a human being to come to harm.

      Imagine a voting machine loaded with this directive in 2016. It would have reviewed the choices, locked the door and self destructed.

  • But teaching robots to fight back against humans might end up harming us.

    This is precisely why we have the Three Laws of Robotics [wikipedia.org].

    I would like to say "ignore them at your peril," but the reality is more like "ignore them at the peril of the rest of humanity." I am pretty sure that they will put in some sort of special code so that the robots never fight back against a Boston Dynamics employee.

    • But teaching robots to fight back against humans might end up harming us.

      This is precisely why we have the Three Laws of Robotics [wikipedia.org].

      I would like to say "ignore them at your peril," but the reality is more like "ignore them at the peril of the rest of humanity." I am pretty sure that they will put in some sort of special code so that the robots never fight back against a Boston Dynamics employee.

      The best part of the Three Laws of Robotics is that they instantly reveal anyone who hasn't actually read Asimov's stories on the Three Laws of Robotics. Specifically, you.

    • by gweihir ( 88907 )

      The three laws are bullshit resulting from animism. Asimov needed them (and used them to excess) to make fantasy stories set in an SF environment. There is no connection to anything that constitutes robotics or AI in the real world.

      • Indeed.

        One of the first robots ever built was probably the self-guided torpedo, which used a gyroscope to follow a predefined course at a predefined depth.

        So the laws were violated before they were even written.

      • There's a certain rationality to them. Devices are often designed to break before harming the user. A simple example: my badge lanyard will unclip if pulled hard enough.

        Laws 2 and 3 are the wrong way round, though. Machines are typically designed not to break even when commanded to; you tend to have safety cutouts and the like to protect the device.
        • by gweihir ( 88907 )

          Oh, I do not dispute that the three laws have rationality. But that is exactly their problem: robots and "AI" do not have rationality at all. They cannot interpret these laws. Asimov basically gave machines human-like intelligence (not implementable today, and it is unclear whether it is implementable at all in this universe; no, we have no idea how this works in humans, but it seems humans exceed what is physically possible), then removed altruism and morality and replaced them with the three laws. I am not convinced that can work.

          • it seems humans exceed what is physically possible

            Either magic is real or your statement is false.

            • by gweihir ( 88907 )

              "Magic" is another word for "not describable by science yet". So yes, "magic" is real.

        • Laws 2 and 3 are the wrong way round, though. Machines are typically designed not to break even when commanded to; you tend to have safety cutouts and the like to protect the device.

          That would make it hard to order a robot to perform a task that might be dangerous to it. In fact, one of Asimov's robot malfunctions happened because Rule 3 was stronger than normal while Rule 2 was weaker.

    • Err, no. As Theaetetus already implied, the whole point of Asimov's Three Laws was that they wouldn't work.

      Anyway, you appear not to have read the summary beneath the deliberately misleading headline - the robot only 'fights back' in that it physically rights itself in response to the human pulling it backward. It does not use violence.

    • by mysidia ( 191772 )

      so that the robots never fight back against a Boston Dynamics employee.

      What happens when the board of trustees replaces the Boston Dynamics CEO with a robotic software program that can do everything important a CEO could do, with 10000x the productivity at 1/10000th the cost, and within a few days the announcement is distributed to all the MANAGERS (who are robots by now) that all remaining humans are laid off, effective immediately?

      The special code no longer applies, since the only employees left are robots.

  • Arguably the robot didn't fight; it adjusted to the situation as roadblocks were put in its way. It didn't attack the human in any way, it just continued to try to go through the door. By this definition a Roomba vacuum "fights back" when items are placed in its path. The only difference is that the robot dog kept trying to go forward again and again, whereas the vacuum would turn and do something else or eventually give up.

    • by Kjella ( 173770 )

      It didn't attack the human in any way, it just continued to try and go through the door

      Well, for a sci-fi movie that's enough: the robot is trying to get through the door to do bad things(TM). Humans try to stop it, and the harder they try, the harder the robot resists. If the door is not opening, I'll improve my stance and pull harder. If you're trying to drag me backwards, I'll dig in and try to drag you forwards. What if this thing were bigger and stronger, enough to pull the human off his feet instead? What if it had two arms, one fending off the hockey stick while the other opened the door? That it's not doing that yet is just a matter of hardware.

    • by thegarbz ( 1787294 ) on Thursday February 22, 2018 @07:30AM (#56168831)

      By this definition roomba vacuums "fights back" when items are placed in its path.

      Have you ever left a USB cable in front of a Roomba? The carnage is indescribable. I can still hear the 1s and 0s scream at night.

    • Here's a rather scary video from Computerphile about the implications of robots adjusting their behaviour in order to accomplish the pre-programmed goal:
      https://www.youtube.com/watch?v=3TYT1QfdfsM [youtube.com]
  • Fights back? (Score:2, Informative)

    by Anonymous Coward

    "Adjusts" to situation is the non-clickbait version. Still, very cool video. At the end, the dog's arm looks like a snake preparing to attacking.

  • by Anonymous Coward

    It seems they are teaching the robot that people with hockey sticks are evil.

  • by Anonymous Coward

    I wanted to see bite marks.

  • by mveloso ( 325617 ) on Wednesday February 21, 2018 @11:24PM (#56167921)

    A robot dog doesn't need to fight back. All it needs to do is say, at a high volume, "get out of the way or I'll rip you in half."

    That should work on about 99% of the population.

    • A robot dog doesn't need to fight back. All it needs to do is say, at a high volume, "get out of the way or I'll rip you in half."

      That should work on about 99% of the population.

      What if the owner orders the "dog" to kill the intruders (i.e., unemployed, starving people)? It needs to be able to "fight back" against humans resisting attempts to end their existence.

    • A robot dog doesn't need to fight back. All it needs to do is say, at a high volume, "get out of the way or I'll rip you in half."

      That should work on about 99% of the population.

      It would work on me, that's for sure!

      Good doggy ...

  • by andydread ( 758754 ) on Wednesday February 21, 2018 @11:32PM (#56167947)
    The robot is not fighting back. Watch the damn video. The robot is simply being persistent in completing the task when faced with the obstacle of being blocked by a hockey stick or being dragged away from its task of opening a door. British press sensationalist bullshit.
    • "Fights back" is probably meant in the movie-title sense. Think "returns".
    • The robot is not fighting back. Watch the damn video. The robot is simply being persistent in completing the task

      Right.. tasks like "kill everyone found in target area"

    • There is something to it since the guy needs a long stick and a long specially-attached rope to be an obstacle without risking severe injury.
  • James Cameron is a great visionary.
  • Something in particular about that specific robot dog really creeps me out.

    • Probably because the robot's movements really look like those of a real dog...
    • Something about its low-slung gait really reminds me of a slinking wolf. And the completely unnatural movement of the arm doesn't help.

    • It’s called the uncanny valley. When the robot walked up to the door, its movements seemed decidedly robot-like and procedural, exactly what we’d expect from a robot. But when the guy pulled on its leash, it reacted with what looks like instinct, its movements becoming eerily similar to a real animal's. It doesn’t look like an actual dog, but at that moment it’s too close for comfort.
  • by evil_aaronm ( 671521 ) on Thursday February 22, 2018 @12:00AM (#56168023)
    Meanwhile, Chappie was left to fend for himself. Poor Chappie was traumatized.
  • I love dogs
  • Andy Weir [wikipedia.org] suggested a way to make robots safer in this Casey and Andy strip:

    http://www.galactanet.com/comic/view.php?strip=77 [galactanet.com]

    It seems like something out of a classic Star Trek episode, doesn't it?

  • by tero ( 39203 )

    They should give it guns so it can defend itself.

  • It's feedback.

    Fighting back would be the robot knocking out the human.

  • I think slashdot needs a way to mark stories 'Troll' too.

    Beautiful work BeauHD.

  • by nospam007 ( 722110 ) * on Thursday February 22, 2018 @02:43AM (#56168313)

    ...condemns this science.

  • This looks more like counteracting external forces than the clickbait heading "Boston Dynamics Is Teaching Its Robot Dog To Fight Back Against Humans". The next step is for the robot to determine what external force is being applied; if it is from a human, then it should yield. There may be a very good reason why a human is trying to prevent a robot from opening the hatch to a nuclear reactor, to give a simple example. If it's an anti-riot robot performing crowd-control duties, then maybe yielding would not be an option.
  • In most videos we see people kicking and pushing these robots around, and all the robots could do was avoid tipping over.
  • Maybe they should train the robot to identify the human controlling the hockey stick, and to stop and obey his orders. Just an idea, you know.

  • This is why I visit this place less and less. It's not getting any better.
    The news itself is quite interesting, but does it always need to be presented in clickbait form? Fuck off, seriously.

  • Robot specifically trained to open a certain door takes 5 minutes to open the door.
  • by Anonymous Coward

    Unperturbed by his distractions, SpotMini continues to grab the handle and turn it even after its creepy fifth arm with a claw on the front is pushed away. If that assault wasn't enough, the human's robot bullying continues: he shuts the door on Spot, which counterbalances and fights back against the pressure. In a last-ditch effort to stop the robot dog breaching the threshold, the human grabs at a leash attached to the back of the SpotMini and yanks.

    I'm sorry, but I find watching a video of a robot dog fighting off a guy with a hockey stick kind of adorable.

    • This isn't some cute story where an innocent picks on a poor robot dog; this is a story about an unstoppable robot that feels it needs to open a door and won't be stopped, like something out of a movie.

      Listen, and understand! That SpotMini is out there! It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop... ever, until it gets...through...that...door!

  • by Anonymous Coward

    There is an infamous story from early robotics about Marvin Minsky, one of the founders of modern AI and an early builder of neural nets. His lab had worked on a ping-pong-playing robot arm. They'd found that to make the robot arm fast enough, they had to keep making it more powerful, so eventually it was.... quite powerful. Sadly, at one point as Marvin Minsky walked by it, it decided that his polished head approaching the ping-pong table was actually the ping-pong ball in play.

    Fortunately for the future of AI, Minsky survived the encounter.

  • ... this testing does not irritate or harm the robot.

    To a surprising extent, the language we use is a determinant of both our conceptions and our perceptions. We really need to break this habit of attributing feelings (e.g. "irritation") to robots. Even the choice of the word "harm" over something more neutral, such as "damage", reinforces a kind of magical thinking akin to religious belief, and we can't afford to indulge in this particular brand of magical thinking. Especially not when the entity isn't a figment of our imaginations (such as a god) but a real machine we build ourselves.

  • "Robot Bites Man" isn't a headline for Pete's sake! "Man Bites Robot", now that's a story!
  • Does anyone not think the goal is a robotic weapon?

  • The robot is simply overcoming obstacles to its objective. If you want to know the difference, take your hockey stick to a biker bar and try the same thing with the first guy who tries to go in. Get back with me when the robot shoves the hockey stick up your ass.

  • Fun fact: You say, "Sit, Ubu, sit. Good dog." Okay, that's fake news, but it would be awesome.

  • RoboCop K-9, coming soon! It can do stairs like a real dog as well.

  • If I owned an expensive robot and someone came and tried to destroy it, I'd let the robot's AI protect my property (personally, I'd prefer the minimal force needed to subdue).
