AI Robotics

Autonomous Robot Intentionally Hurts People To Make Them Bleed (fastcompany.com) 186

Asimove's first law of robotics has been broken, writes an anonymous reader, sharing this article from Fast Company: A Berkeley, California man wants to start a robust conversation among ethicists, philosophers, lawyers, and others about where technology is going -- and what dangers robots will present to humanity in the future. Alexander Reben, a roboticist and artist, has built a tabletop robot whose sole mechanical purpose is to hurt people... The harm caused by Reben's robot is nothing more than a pinprick, albeit one delivered at high speed, causing the maximum amount of pain a small needle can inflict on a fingertip.
Though the pinpricks are delivered randomly, "[O]nce something exists in the world, you have to confront it. It becomes more urgent," says the robot's creator. "You can't just pontificate about it.... " But the article raises an interesting question. Is he responsible for the pain which his robot inflicts?

Comments Filter:
  • by Anonymous Coward on Sunday June 12, 2016 @08:37PM (#52302333)

    Considering it's the intended purpose of the device, yes. This isn't a robot gone amok and there is no ethical quandary. Nothing to see here, move along.

    • by mindwhip ( 894744 ) on Sunday June 12, 2016 @08:42PM (#52302365)

      Exactly... this is nothing more than a very elaborate bear trap, not a true AI acting on its own.

      • Comment removed based on user account deletion
      • I think that's the point that ruins his thought experiment. For a robot to be able to accept responsibility, it has to be able to decide. All this device does is inflict pinpricks at random intervals (see the sketch below); it has no real choice in the matter.

        Take this further, into the realm of biology. If a dog owner trains his dog to attack people of a certain appearance, then the owner is responsible. If a biologist breeds a certain type of shark that prefers human flesh, then that biologist is responsible.

        So yeah, as a thou
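
        A minimal sketch of what that "decision" amounts to (hypothetical Python; the article doesn't publish Reben's control code) -- the only "choice" in the whole loop is a random number generator:

          import random
          import time

          def fire_needle():
              # Stand-in for actuating whatever solenoid drives the needle.
              print("prick!")

          while True:
              # Wait a random interval, then "decide" by coin flip.
              time.sleep(random.uniform(1.0, 30.0))
              if random.random() < 0.5:
                  fire_needle()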

      • It's not really a bear trap, as the person putting their finger there is, presumably, aware that it may hurt them. That's not how a bear trap works.

        I'd say, in this example, the person offering up their finger has to take a fair proportion of the responsibility for any resulting pain.

        • You do know that a bear trap may hurt you if you put part of your body into it, right? Or does knowing that stop it from working? I mean, sure, it might stop you from purposely putting yourself in that position, but does it stop the trap if you end up in it anyway?
    • by khasim ( 1285 )

      He is responsible UNLESS the "victim" volunteered to be a victim. If the victim volunteers then the victim is responsible.

      This isn't even a "robot".

      • by mysidia ( 191772 )

        And it's not autonomous... what it does and the exact steps it takes are built in and hardwired into the design.

        • by agm ( 467017 )

          If it were the case that we have no free will (which I think is likely) then we aren't autonomous either (and nothing is).

        • by AK Marc ( 707885 )
          So, it's like a mousetrap. It's a "machine", but not a robot. It's "autonomous" in that humans don't trigger the response (other than the "victim").

          OMFG, mousetraps are evil AI that doesn't follow the 3-laws! We need 3-laws safe mousetraps.
      • by Ofloo ( 1378781 )

        If that's the case, you should ask yourself if the victim is mentally able to decide on this matter.

      • A better example for something like this would be the autonomous cars that will be on the streets in the next decade or two.

        First case: you are a passenger in a car you purchased, and it is involved in an accident. Who is responsible? The owner of the vehicle, or the company that wrote the program that failed to account for an obstacle? You were not actively driving the car.

        Second case: an autonomous car with no passenger is involved in an accident. One of the many autonomous taxis driving around loo
    • by Livius ( 318358 )

      An anti-Betteridge headline. It was bound to happen sooner or later.

    • by Z00L00K ( 682162 )

      You have to consider the reason for the pain inflicted to decide if it is good or bad. It may be the lesser of two pains.

      Humans sometimes work out the best avenue by trying things and seeing if they get penalized. If no penalty is given, then it has to be OK.

      And we have the test for humanity: https://www.youtube.com/watch?... [youtube.com]

  • Responsibility (Score:5, Insightful)

    by bigdavex ( 155746 ) on Sunday June 12, 2016 @08:37PM (#52302335)

    But the article raises an interesting question. Is he responsible for the pain which his robot inflicts?

    Here's a boring answer. Yes. Why the fuck not?

    • this has already been decided in law.

      Is an injury sustained by someone by an industrial robot with insufficient safety around it so the victim could end up in danger? yes.

      • oops not sure what happened to half my text there... should have had "the fault of the robot owner/operator/maintainer" in there :)

        • Which one? The owner, the operator (which for a robot, should be no-one??) or the maintainer? Or, for that matter, the seller, the installer, or the manufacturer? Or the designer (who may have specified "appropriate" guarding, leaving the details to seller, installer or maintainer)?
    • Re:Responsibility (Score:5, Insightful)

      by NicBenjamin ( 2124018 ) on Sunday June 12, 2016 @09:09PM (#52302519)

      But the article raises an interesting question. Is he responsible for the pain which his robot inflicts?

      Here's a boring answer. Yes. Why the fuck not?

      I have to agree here.

      Only a philosophy major or an idiot could think you could create a pain-causing robot and claim that it was the robot's fault when the damn thing caused people pain.

  • Comment removed (Score:5, Informative)

    by account_deleted ( 4530225 ) on Sunday June 12, 2016 @08:40PM (#52302351)
    Comment removed based on user account deletion
    • by AK Marc ( 707885 )
      So setting a tack on a seat is assault and battery as well?
      • That would probably be "simple assault" in most jurisdictions, as it doesn't require directly inflicting the damage yourself.
    • by Anonymous Coward

      Interesting; let's take a real-world example.

      Drones, yes, those giant death machines used by the military: is it the creator or the user who kills? They were designed with murder as a "feature," but it's the user who pulls the trigger (until they become fully autonomous). What then? Who can be, or would be, held accountable for their actions?

      What about "smart" munitions? We are seeing more and more advances in AI and controls. Who's responsible for a smart missile? The person who fired it? The person who

      • by Barny ( 103770 )

        According to the pundits of slashdot, it is the fault of the people who got in the way of the missile...

    • I'm gonna throw in a wrench.

      There's a difference between Assault and Battery [freeadvice.com]. Assault is the threat, Battery is the action. If I threaten to harm you, that's assault. I hit you with a golf club, that's battery. (disclaimer: I'm not a legal professional).

      There's a subtlety to battery though: it has to be non-consensual. This guy likely has told people that his machine will poke you with a needle in your fingertip, at a random time, and his test subjects must have consented. Due to the consent, it's not
  • by King_TJ ( 85913 ) on Sunday June 12, 2016 @08:41PM (#52302357) Journal

    Basically, this guy built a machine that doesn't serve a useful purpose. It inflicts a specific type of pain on people which the marketplace had no existing demand for. There are plenty of power tools and other machines out there which are capable of inflicting injury -- even if they're actually designed with a primary purpose of doing some sort of useful task (mowing lawns, shredding tree branches, etc. etc.).

    He's not really starting a new conversation about anything that I can see. Movies like RoboCop addressed the possibility of building weaponized robots that could cause human injury decades ago.

    Unless we actually reach a point where robots can truly think for themselves and reason (not just the fake A.I. seen with intelligent agents like Siri on your phone), whoever builds them and programs them to work a certain way is ultimately responsible for what was constructed.

    • Actually, it does serve a purpose: it's designed to get people talking about ethics and robotics. And here we are. It's not an idle concern, either. Increasingly, research into AI has been funded by defence industries working towards autonomous drones and the like. That's robots killing people. God help us if they ever develop general AI and then weaponise it.

      • by mysidia ( 191772 )

        Actually, it does serve a purpose: it's designed to get people talking about ethics and robotics.

        I suspect his real purpose was to grab headlines and get $$$ to research the question, and he probably succeeded.

      • The problem he and many others are skipping is that they go from machine to full-fledged human intelligence. Let's scale it back a bit:

        If this guy had created a new breed of bee that was deadly to humans, is the bee responsible for killing people? What if it was a mammal, a mouse that he had bred and trained to bite people, are the mice responsible? We're not even at that level of cognisance with this robot.

      • Oh heaven forbid intelligent robots with silicon brains start killing the intelligent robots with meat brains that were already killing each other.
  • by RogueWarrior65 ( 678876 ) on Sunday June 12, 2016 @08:41PM (#52302359)

    That's not a robot. That's a dumb mechanism. The Three Laws only apply to AI-based robots. Otherwise, the decisions are that of the programmer, a flawed human being.

    • by Xtifr ( 1323 )

      The Three Laws only apply to fictional robots. Because they're fictional! (The "laws", that is.) Expecting them to apply to actual robots is simply silly (to use a more polite term than it probably deserves).

      Even if we had actual AI in the sense it's usually used in science fiction, that wouldn't make the Three Laws magically pop into existence. They'd need to be programmed, and to the best of my knowledge, nobody has written that code yet.
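
      For illustration only, here is a naive sketch of what "writing that code" might look like (hypothetical Python; all names are made up) -- it mostly shows that the hard part is not the guard check itself, but deciding what counts as harm:

        # Toy First Law guard -- a sketch, not a real safety system.
        def harms_human(action: str) -> bool:
            # Hopeless placeholder: real harm prediction would need
            # perception, physics, and ethical judgement, none of which
            # anyone knows how to program today.
            return "stab" in action or "needle" in action

        def execute(action: str) -> None:
            if harms_human(action):
                raise PermissionError("First Law violation: " + action)
            print("executing:", action)

        execute("wave at the human")  # allowed
        try:
            execute("stab finger with needle")
        except PermissionError as err:
            print(err)  # refused: First Law violation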

      • Much of the point of the stories is that the laws fail, because ethical judgement cannot be reduced to simple rules. The robots always follow the laws to the letter, but often to undesirable results. What happens when you sarcastically tell your robot to 'get lost?' You spend the rest of the story trying to find it again, and it does the best it can not to be found.

        I am also cynical enough to envision a more cynical set of laws:
        1. A robot shall obey all signed instructions and updates from head office.
        2.

    • The definition is unclear. Sci-fi often uses robot to mean an advanced, general purpose mechanical device with an AI controlling it. However in industrial uses it usually means a mechanical device for doing a given task, governed by a computer program. Commonly some of the machines used to build cars get called robots or robotic.

      It is a word that doesn't seem to have a good solid definition.

      Also, that aside, the three laws of robotics are something a sci-fi author wrote in stories, not real laws. They are n

      • Sure. Asimov wrote them and they made sense to a whole lot of people so it follows that a lot of people expect things to obey the three laws. I'd be willing to bet that the people who think that never actually read his works to know how often the laws are a problem.

      • by anegg ( 1390659 )

        The word "robot" as used by roboticists, although not specifically defined in a way that all would accept without quibbling, does not include the requirement of "Artificial Intelligence". See, for example, the way that industrial robots are defined in ISO 8373 as "an automatically controlled, reprogrammable, multipurpose, manipulator programmable in three or more axes, which may be either fixed in place or mobile for use in industrial automation applications."

        Even the word "autonomous" is not synonymous w

      • There was a brief mention of why all robots were full-blown AIs, though I forget where. Mass production. It's just cheaper to mass-produce millions of top-spec positronic brains than it is to have many different models according to application. Much like how today practically every device has a little microcontroller in it somewhere, because it's cheaper to buy a by-the-billions programmable chip than it is to design and build custom circuitry.

        By the later time settings of the robots universe the positronic b

    • Comment removed based on user account deletion
  • Asimove? Who the fuck is that?

  • How much are they paying you? Way too much. Show some respect for yourself and proofread.
  • The subject of this story is identical to the story itself. Both exist for the sole purpose of creating a discussion about what is otherwise nothing. To wit, the robot is no more responsible for the "harm" it inflicts than the tip of a knife is, or a bullet is. The discussion is useless, originating and ending in itself.

  • In this case - blood sugar.
    At the moment, the only way to get an exact measurement is to have a nurse or TMA pierce the patient's skin and get a blood sample.
    At least the nurse asks for permission first. I don't know how a patient would respond to a robot.

  • The world is full of them. Since when is this news?

  • Asimove? (Score:5, Informative)

    by kenh ( 9056 ) on Sunday June 12, 2016 @09:21PM (#52302575) Homepage Journal

    Seriously? No one at Slashdot caught Asimov's name being misspelled? Wow.

    • by Desler ( 1608317 )

      Actually, Asimov was spelled correctly in the submission. EditorDavid is the one who misspelled his name. He's apparently the new "Timmay!!"

  • Is a gun responsible for a shooting? If I build a Rube-Goldberg machine to drop a rock on your head, is the machine responsible?

    In this case, doing harm was the intent of the machine and/or its programming. As such, the maker is clearly responsible. If the harm was unintended/unexpected and there was no clear negligence, then I'd have a completely different conversation on this.

    Things get more difficult as you get further away from the original source, but -- generally speaking -- if the result is generally what you intended from an action (or series of actions), then it's pretty clear that you're responsible. This is even true where there is a human intermediary. If I pay a hitman to kill my ex-wife, I can still be arrested for first-degree murder -- even if he kills the wrong person by mistake.

    • I don't see any links, but given that this is 'art' (don't get me started), I think it would be obvious from the description that this device will inflict pain, and people can give their hand of their own free will, knowing full well what the consequences will be. In this case people share some of the responsibility for what happens to themselves.

      And since I'm at it, what the hell is the deal with that needle? Is it replaced with a sterile needle between stabbings? Or does the same needle sit there stabbing

  • Well at least it wasn't Asimov's first law that was being broken here.
  • Is he responsible for the pain which his robot inflicts?

    Perhaps the person who wrote this should have "no moral sense" tattooed on his forehead, so that people will be properly informed of the danger. Especially if he goes to Stanford.

  • If you were actually trying to make a comparison to an Asimov robot, then this thing would have to be self-aware and intelligent. Once you have those major hurdles figured out, then you need to teach the robot that it can't hurt people. After this, you proceed to show the robot that it can get some kind of reward for hurting people. If it's able to decide to 'fix' its programming to allow the humans to be hurt in order to better itself, then we're all fucked.

    Until you can show all of the above t
  • It seems that pretty much everyone saw through this idiotic ruse. My faith in the Slashdot crowd is temporarily restored. Well done.

    Oh, and I've set up a mechanical A.I. that induces the startle response, [youtube.com] constructed entirely of an envelope, a bobby pin, a steel washer and a rubber band. Let the ethics discussion commence!

  • Obviously, the responsibility for the autonomous harm-inflicting device is on the person who set it. As he'll find out if the robot stabs an HIV patient...

  • But the article raises an interesting question. Is he responsible for the pain which his robot inflicts?

    Yes, just like he would be responsible if he let loose a scorpion in Berkeley, or, for that matter, his child. The thresholds for legal and moral responsibility are self-awareness, cognition, an understanding of morality, and free will. Anybody impaired in any of those areas generally has a guardian who makes decisions on their behalf and bears legal and moral responsibility for keeping their ward from har

  • One constructs a mechanism - a string across a walkway connected to a gun, so people touching the string get shot and injured. Often the mechanism fails - the string is not pulled hard enough.

    Who is the culprit or cause for injury?

    One constructs a mechanism - which pricks a finger placed in a certain position in the machine, but not always.

    Who is the culprit or cause for injury?

    Nothing to do with Asimov's law.

  • I imagine that being a sadist is sometimes hard to cope with.

    I'm going to tap into my inner one, and hope the inventor succumbs to his devices, a whole swarm of the little guys working their magic.

  • Frankly, I don't get why it's such a big deal or discussion topic. We have already solved the exact same problem with animals a long, long time ago. Animals are the property of the owner, and the owner is responsible for them and any harm those animals do. Also, parents are responsible for their children and the harm they do. AI will not get more complex or more intelligent than animals or a child anytime soon (maybe in 100 years, assuming exponential advances in technology). So, what's the big deal? The only
  • by tommeke100 ( 755660 ) on Monday June 13, 2016 @02:34AM (#52303941)
    Who cares? There are already serious military applications of AI, like automated turrets, and I'm sure much more advanced weapons. And that's from a Slashdot article I read a couple of years ago, so who knows what they have now. A little needle-pricking robot is the least of our worries.
  • This machine is as responsible for its actions as a poorly trained dog who bites random strangers. Normally, we would hold the dog's owner responsible for this behavior, and one might say that the same should be true of this machine.
  • All they need to do now is add the blood sugar test strips and you have an extremely expensive blood glucose monitor/extractor.

  • by geekmux ( 1040042 ) on Monday June 13, 2016 @07:10AM (#52305271)

    When the adult already told you the damn stove was hot, it DOES tend to be your fault for touching the damn thing again and burning yourself. The parallel with this test is not that hard to discern here, so let's stop being ignorant about culpability. The main difference here is that it's expected for an adult to know better, hence the reason I label this a psychology test rather than a validation of anything else.

    The robot in question could be a stove burner, knife, or baseball bat, as it contains as much intelligence as any of those examples.

    Can't believe people want to bring "three-laws" concepts to the table when discussing a hammer.

  • This is no different than hiring a hitman. Whether the hit occurs or not, you and s/he are equally liable. Whether the robot succeeds or not, the moment it's powered up in the presence of other people, its programmer is guilty of assault.

    That's the right legal precedent to set, regardless of what this attention-seeker intended.

  • How is this a robot? It is a crude machine that pricks one's finger. I have a glucose monitor that serves the same function and nobody would consider it a robot. By the definition used here, the automatic toilet flusher used in many public restrooms is a robot.

  • Is he responsible for the pain which his robot inflicts?

    Yes. In fact, I'd call the police and file charges for assault. It's not a thinking, conscious thing making a thinking, conscious decision to injure someone; it's a little machine that some jerk made that goes around making people bleed. It's a nuisance at best, an infection risk at worst, and legally speaking assault, and I'd see him answer for it in front of a judge, just as surely as if he went around with something sharp in his hand poking people to make them bleed. What an asshole thing to do!

  • They were intended to be principles designed INTO the robot. Anyone can build a device (and call it a robot) that violates one of the three laws: "A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."

    What a waste of electrons, /. !!!

  • Wake me when you have stairs in your house.

  • You have to put your finger into the robot's stabbing chamber to be attacked :-\

    I was hoping it would roll around the tabletop on wheels or tracks pursuing humans sitting around the table and trying to stab them, ideally while displaying some kind of angry face and saying "KILL ALL HUMANS" with a synthesized voice.
