Autonomous Robot Intentionally Hurts People To Make Them Bleed (fastcompany.com) 186
Asimove's first law of robotics has been broken, writes an anonymous reader, sharing this article from Fast Company:
A Berkeley, California man wants to start a robust conversation among ethicists, philosophers, lawyers, and others about where technology is going -- and what dangers robots will present humanity in the future. Alexander Reben, a roboticist and artist, has built a tabletop robot whose sole mechanical purpose is to hurt people... The harm caused by Reben's robot is nothing more than a pinprick, albeit one delivered at high speed, causing the maximum amount of pain a small needle can inflict on a fingertip.
Though the pinpricks are delivered randomly, "[O]nce something exists in the world, you have to confront it. It becomes more urgent," says the robot's creator. "You can't just pontificate about it.... " But the article raises an interesting question. Is he responsible for the pain which his robot inflicts?
Is he responsible for the pain? (Score:5, Insightful)
Considering it's the intended purpose of the device, yes. This isn't a robot gone amok and there is no ethical quandary. Nothing to see here, move along.
Re:Is he responsible for the pain? (Score:5, Insightful)
Exactly... this is nothing more than a very elaborate bear trap, not a true AI acting on its own.
Re: (Score:3)
I think that's the point that ruins his thought experiment. For a robot to be able to accept responsibility, it has to be able to decide. All this device does is inflict pin pricks at random intervals. It has no real choice in the matter.
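To make that concrete: based only on the article's description (the actual build isn't published, so every name and number below is invented), the device's whole "decision process" could be modeled in a few lines of Python. There is no choosing anywhere, just a random timer driving an actuator:

    import random
    import time

    def fire_needle():
        # Stand-in for the hypothetical actuator that drives the pin.
        print("*prick*")

    def pinprick_bot(min_wait=5.0, max_wait=60.0):
        # The machine's only "decision": sleep a random amount, then fire.
        while True:
            time.sleep(random.uniform(min_wait, max_wait))
            fire_needle()

Nothing in that loop can accept responsibility, any more than an egg timer can.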
Take this further, into the realm of biology. If a dog owner trains his dog to attack people of a certain appearance, then the owner is responsible. If a biologist breeds a certain type of shark that prefers human flesh, then that biologist is responsible.
So yeah, as a thou
The owner of the finger bears responsibility (Score:2)
It's not really a bear trap, as the person putting their finger there is, presumably, aware that it may hurt them. That's not how a bear trap works.
I'd say, in this example, the person offering up their finger has to take a fair proportion of the responsibility for any resulting pain.
Re: Oh okay.. (Score:5, Insightful)
Re: (Score:2)
I never liked Asimov's laws. It was an interesting plot device, something very unrealistic that you take at face value for the purpose of the story (like faster-than-light travel). But then it kept being brought back far too often, and readers took it too seriously. If humans are able to program/grow/imbue these laws in the first place then they'd be able to remove those laws as well.
Re: Oh okay.. (Score:4, Insightful)
It kept being brought back because the robot series is all about how badly any attempt to mechanize ethics fails. The Three Laws were, in a sense, the villain - or at least the antagonist - of the series.
If we ever get sapient robots, and conclude that it's okay to treat them as servants, I'd suggest using "do as you think I'd want you to do" or "treat everyone as you think they'd want to be treated" as the law.
Re: (Score:3, Insightful)
Until a robot decides to help someone who wants to commit suicide. Then you are going to have to figure out exceptions... because there always are exceptions, except to the always exceptions rule.
Re: (Score:3)
except to the always exceptions rule
No, there are exceptions to that as well. Take 1+1, for example. Now, that could be 1.6 + 1.7, which would be 3.3, making 1+1=3, an exception to the 1+1=2 rule caused by ignoring the decimal portion of a number. However, when you state the rule as "1.0 + 1.0 = 2", you find that there are, in fact, no exceptions. Even if you add 1.09 + 1.09, you end up with 2.18, which is still 2.
Yes, I'm taking a great many liberties. It's satire, I'm allowed to.
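For anyone who wants to check the satire's arithmetic, a couple of lines of Python make the truncation trick explicit:

    # Truncate after adding: 1.6 + 1.7 = 3.3, which truncates to 3.
    print(int(1.6 + 1.7))       # 3, i.e. "1 + 1 = 3"

    # Truncate before adding: both operands become 1 first.
    print(int(1.6) + int(1.7))  # 2, i.e. "1 + 1 = 2"

    # State the precision up front and the "exception" disappears:
    print(1.09 + 1.09)          # 2.18, "still 2" if you keep truncating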
Re: (Score:2)
Until a robot decides to help someone who wants to commit suicide. Then you are going to have to figure out exceptions... because there always are exceptions, except to the always exceptions rule.
People have been busy figuring out all sorts of exceptions to the rules for thousands of years. What makes anyone think a truly intelligent machine wouldn't break the rules?
Re: (Score:2)
Until a robot decides to help someone who wants to commit suicide. Then you are going to have to figure out exceptions... because there always are exceptions, except to the always exceptions rule.
I think it is ethically borderline to force someone to live who doesn't want to. People with painful terminal diseases (the euthanasia debate) are probably the most notable example here.
Most of the time suicide is a result of society NOT helping people in the days/weeks/months/years before this.
Re: (Score:2)
No, because a sapient robot is of course fully capable of thinking beyond the moment and comprehending that the person is probably suffering some kind of malfunction and should not be obeyed without further information. It's not a mindlessly obedient machine but a devoted servant.
Re: Oh okay.. (Score:4, Insightful)
As I understand it, Asimov explicitly made his laws of robotics to cause conflicts that he could explore in his stories.
They were designed to fail.
Re: (Score:2)
As I understand it, Asimov explicitly made his laws of robotics to cause conflicts that he could explore in his stories.
They were designed to fail.
Well said.
We cannot get down past the zeroth law.
Generation Z will be the last because we have run out of alphabet.
Why did we start at X?? Too late now, all is lost.
Humanity is doomed.
Countless times, my hammer damaged my thumb.
I'm fed up.
Throwing it against the wall only damaged the wall.
Then I went out and bought a much larger hammer
with which to discipline my hammer.
Now my foot is broken.
Re: (Score:3)
Worst haiku evar!
Re: Oh okay.. (Score:5, Insightful)
You've clearly never read Asimov. His writings about robots are morally sensitive and complex. If he thought of robots as an allegory for African Americans, then he thought they're superior to most humans, as that's his attitude about robots.
Maybe you should get that chip off your shoulder and actually read what you're blindly complaining about.
Re: (Score:2)
No points for upvoting :( All I can offer is a tasty cookie...
Re: (Score:2)
Considering that he wrote a lot of them in the 1940s - the first half-dozen before 1945, IIRC.
You're putting a construction on things that the man himself didn't, and didn't when he wrote the foreword to his first 3-laws anthology "I, Robot" in 1950.
Re: (Score:2)
What the fuck is with all this? What do you get out of it?!
Re: (Score:2)
He is responsible UNLESS the "victim" volunteered to be a victim. If the victim volunteers then the victim is responsible.
This isn't even a "robot".
Re: (Score:2)
And it's not autonomous... what it does and the exact steps it takes are built in and hardwired into the design
Re: (Score:2)
If it were the case that we have no free will (which I think is likely) then we aren't autonomous either (and nothing is).
Re: (Score:2)
OMFG, mousetraps are evil AI that doesn't follow the 3-laws! We need 3-laws safe mousetraps.
Re: (Score:1)
If that's the case you should ask yourself if the victim is mentally able to decide on this matter.
Re: (Score:2)
First case: you are a passenger in a car you purchased, and it is involved in an accident. Who is responsible? The owner of the vehicle, or the company that wrote the program that failed to account for an obstacle? You were not actively driving the car.
Second case: an autonomous car with no passenger is involved in an accident. One of the many autonomous taxis driving around loo
Re: (Score:2)
An anti-Betteridge headline. It was bound to happen sooner or later.
Re: (Score:2)
You have to consider the reason for the pain inflicted to decide if it is good or bad. It may be the lesser of two pains.
Humans work out the best avenue sometimes by trying things and seeing if they get penalized. If no penalty is given then it has to be OK.
And we have the test for humanity: https://www.youtube.com/watch?... [youtube.com]
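That "try it and see if you get penalized" heuristic is also trivial to mechanize. A toy sketch (entirely my own illustration, nothing from the article) in which penalized actions simply get chosen less often:

    import random

    # Toy penalty-driven learner: weights shrink for penalized actions.
    actions = {"poke_finger": 1.0, "do_nothing": 1.0}

    def penalty(action):
        # Assumed penalty signal: poking fingers draws a complaint.
        return 1.0 if action == "poke_finger" else 0.0

    for _ in range(100):
        choice = random.choices(list(actions), weights=list(actions.values()))[0]
        actions[choice] = max(0.05, actions[choice] - 0.1 * penalty(choice))

    print(actions)  # "poke_finger" decays toward the 0.05 floor

Which also shows why "no penalty means it's OK" is a lousy moral test: the learner above happily concludes that anything unpunished is fine.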
Responsibility (Score:5, Insightful)
Here's a boring answer. Yes. Why the fuck not?
Re: (Score:2)
this has already been decided in law.
Is an injury sustained by someone by an industrial robot with insufficient safety around it so the victim could end up in danger? yes.
Re: (Score:2)
oops not sure what happened to half my text there... should have had "the fault of the robot owner/operator/maintainer" in there :)
Re:Responsibility (Score:5, Insightful)
Here's a boring answer. Yes. Why the fuck not?
I have to agree here.
Only a philosophy major or an idiot could think you could create a pain-causing robot and claim that it was the robot's fault when the damn thing caused people pain.
Re:Responsibility (Score:5, Funny)
The philosophy majors would still be too busy arguing about what is meant by pain and how can it be experienced.
Re: (Score:2)
Shhh. There might be Volkswagen lawyers around...
Last thing we need is to give them a new legal theory that absolves their executives of the diesel emissions scandal. THE ROBOTS DID IT!
slash passim (Score:2)
It's like "that EULA doesn't apply to me, because my cat pressed the enter key, not me.".
https://hardware.slashdot.org/... [slashdot.org]
Re: (Score:1)
Interesting, let's take a real-world example.
Drones, yes, those giant death machines used by the military: is it the creator or the user who kills? They were designed with murder as a "feature", but it's the user who pulls the trigger (until they become fully autonomous). What then? Who can be/would be held accountable for their actions?
What about "smart" munitions? We are seeing more and more advances in AI and controls. Who's responsible for a smart missile? The person who fired it? The person who
Re: (Score:2)
According to the pundits of slashdot, it is the fault of the people who got in the way of the missile...
Re: (Score:2)
There's a difference between Assault and Battery [freeadvice.com]. Assault is the threat, Battery is the action. If I threaten to harm you, that's assault. I hit you with a golf club, that's battery. (disclaimer: I'm not a legal professional).
There's a subtlety to battery though, it has to be non-consensual. This guy likely has told people that his machine will poke you with a needle in your fingertip, at a random time, and his test subjects must have consented. Due to the consent, it's not
It's irrelevant, really. (Score:3)
Basically, this guy built a machine that doesn't serve a useful purpose. It inflicts a specific type of pain on people which the marketplace had no existing demand for. There are plenty of power tools and other machines out there which are capable of inflicting injury -- even if they're actually designed with a primary purpose of doing some sort of useful task (mowing lawns, shredding tree branches, etc. etc.).
He's not really starting a new conversation about anything that I can see. Movies like RoboCop addressed the possibility of building weaponized robots that could cause human injury decades ago.
Unless we actually reach a point where robots can truly think for themselves and reason (not just the fake A.I. seen with intelligent agents like Siri on your phone), whoever builds them and programs them to work a certain way is ultimately responsible for what was constructed.
Re: It's irrelevant, really. (Score:2, Troll)
Actually it does serve a purpose: it's designed to get people talking about ethics and robotics. And here we are. It's not an idle concern either. Increasingly, research into AI has been funded by defence industries working towards autonomous drones and the like. That's robots killing people. God help us if they ever develop general AI and then weaponise it.
Re: (Score:3)
Actually it does serve a purpose: it's designed to get people talking about ethics and robotics.
I suspect his real purpose was to grab headlines and get $$$ to research the question, and he probably succeeded.
Re: (Score:2)
The problem he and many others are skipping over is that they jump straight from machine to full-fledged human intelligence. Let's scale it back a bit:
If this guy had created a new breed of bee that was deadly to humans, is the bee responsible for killing people? What if it was a mammal, a mouse that he had bred and trained to bite people, are the mice responsible? We're not even at that level of cognisance with this robot.
That's not how the three laws work (Score:5, Insightful)
That's not a robot. That's a dumb mechanism. The Three Laws only apply to AI-based robots. Otherwise, the decisions are that of the programmer, a flawed human being.
Re: (Score:2)
The Three Laws only apply to fictional robots. Because they're fictional! (The "laws", that is.) Expecting them to apply to actual robots is simply silly (to use a more polite term than it probably deserves).
Even if we had actual AI in the sense it's usually used in science fiction, that wouldn't make the Three Laws magically pop into existence. They'd need to be programmed, and to the best of my knowledge, nobody has written that code yet.
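And even if somebody did write it, it would just be ordinary code: an ordered list of checks, editable or deletable by anyone with access. A deliberately cartoonish sketch (all names invented here; real ethical judgement does not reduce to this, which is rather the point of the stories):

    # Toy "three laws" gate. The genuinely hard parts -- deciding what
    # counts as harm, and resolving conflicts between laws -- are omitted.
    def violates_first_law(action):
        return action.get("harms_human", False)

    def violates_second_law(action):
        return action.get("disobeys_order", False)

    def violates_third_law(action):
        return action.get("harms_self", False)

    LAWS = [violates_first_law, violates_second_law, violates_third_law]

    def permitted(action):
        # Reject an action if any law, checked in priority order, objects.
        return not any(law(action) for law in LAWS)

    print(permitted({"disobeys_order": False}))  # True
    print(permitted({"harms_human": True}))      # False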
Re: (Score:2)
Much of the point of the stories is that the laws fail, because ethical judgement cannot be reduced to simple rules. The robots always follow the laws to the letter, but often to undesirable results. What happens when you sarcastically tell your robot to 'get lost?' You spend the rest of the story trying to find it again, and it does the best it can not to be found.
I am also cynical enough to envision the more cynical set of laws:
1. A robot shall obey all signed instructions and updates from head office.
2.
To be fair (Score:3)
The definition is unclear. Sci-fi often uses robot to mean an advanced, general purpose mechanical device with an AI controlling it. However in industrial uses it usually means a mechanical device for doing a given task, governed by a computer program. Commonly some of the machines used to build cars get called robots or robotic.
It is a word that doesn't seem to have a good solid definition.
Also, that aside, the three laws of robotics are something a sci-fi author wrote in stories, not real laws. They are n
Re: (Score:2)
Sure. Asimov wrote them and they made sense to a whole lot of people so it follows that a lot of people expect things to obey the three laws. I'd be willing to bet that the people who think that never actually read his works to know how often the laws are a problem.
Re: (Score:2)
The word "robot" as used by roboticists, although not specifically defined in a way that all would accept without quibbling, does not include the requirement of "Artificial Intelligence". See, for example, the way that industrial robots are defined in ISO 8373 as "an automatically controlled, reprogrammable, multipurpose, manipulator programmable in three or more axes, which may be either fixed in place or mobile for use in industrial automation applications."
Even the word "autonomous" is not synonymous w
Re: (Score:2)
There was a brief mention of why all robots were full-blown AIs, though I forget where. Mass production. It's just cheaper to mass-produce millions of top-spec positronic brains than it is to have many different models according to application. Much like how today practically every device has a little microcontroller in it somewhere, because it's cheaper to buy a by-the-billions programmable chip than it is to design and build custom circuitry.
By the later time settings of the robots universe the positronic b
Spelling? Do you do it? (Score:1)
Asimove? Who the fuck is that?
Re: (Score:2)
Asimove? Who the fuck is that?
Similar to a dick move but trademarked by Apple.
Re: (Score:1)
Must be. Since Isaac Asimov is the only person I know writing laws about robotics. This Asimove guy must be a plaigarist.
EditorDavid (Score:2)
Re: (Score:1)
EditorDavid must be trying to be the new Timmay Lord.
Re: (Score:2)
Show some respect for yourself and proofread.
He's just maintaining the old tradition...
The story and the subject are the same (Score:1)
The subject of this story is identical to the story itself. Both exist for the sole purpose of creating a discussion about what is otherwise nothing. To wit, the robot is no more responsible for the "harm" it inflicts than the tip of a knife is, or a bullet is. The discussion is useless, originating and ending in itself.
I call BS (Score:2)
in this case - Blood Sugar
at the moment the only way to get an exact measurement is to have a nurse or TMA pierce the patient's skin and get a blood sample
at least the nurse asks for permission first. I don't know how a patient would respond to a robot
Assholes (Score:2)
The world is full of them. Since when is this news?
Asimove? (Score:5, Informative)
Seriously? No one at Slashdot caught Asimov's name being misspelled? Wow.
Re: (Score:1)
Actually Asimov was spelled correctly in the submission. EditorDavid is the one who misspelled his name. He's apparently the new "Timmay!!"
Re: (Score:1)
Timmay used to post much worse. Apparently EditorDavid is trying to take over Timmay's mantle.
Different answer if that weren't the intent (Score:5, Interesting)
In this case, doing harm was the intent of the machine and/or its programming. As such, the maker is clearly responsible. If the harm was unintended/unexpected and there was no clear negligence, then I'd have a completely different conversation on this.
Things get more difficult as you get further away from the original source, but -- generally speaking -- if the result is generally what you intended from an action (or series of actions), then it's pretty clear that you're responsible. This is even true where there is a human intermediary. If I pay a hitman to kill my ex-wife, I can still be arrested for first-degree murder -- even if he kills the wrong person by mistake.
Re: (Score:3)
I don't see any links, but I think given that this is 'art' (don't get me started), it would be obvious from the description that this device will inflict pain, and people can give their hand of their own free will, knowing full well what the consequences will be. In this case people share some of the responsibility for what happens to themselves.
And since I'm at it, what the hell is the deal with that needle? Is it replaced with a sterile needle between stabbings? Or does the same needle sit there stabbing
Asimove? (Score:1)
Of course. (Score:2)
Perhaps the person who wrote this should have "no moral sense" tattooed on his forehead, so that people will be properly informed of the danger. Especially if he goes to Stanford.
This is dumb (Score:2)
Until you can show all of the above t
Kudos to the Slashdotters (Score:2)
It seems that pretty much everyone saw through this idiotic ruse. My faith in the Slashdot crowd is temporarily restored. Well done.
Oh, and I've set up a mechanical A.I. that induces the startle response, [youtube.com] entirely constructed of an envelope, bobby pin, a steel washer and a rubber band. Let the ethics discussion commence!
Landmines (Score:2)
Obviously, the responsibility for the autonomous harm-inflicting device is on the person who set it. As he'll find out if the robot stabs an HIV patient...
free will (Score:2)
Yes, just like he would be responsible if he let loose a scorpion in Berkeley, or, for that matter, his child. The thresholds for legal and moral responsibility are self-awareness, cognition, an understanding of morality, and free will. Anybody impaired in any of those areas generally has a guardian who makes decisions on their behalf and bears legal and moral responsibility for keeping their ward from har
What nonsense! (Score:2)
One constructs a mechanism - string across a walkway connected to a gun, so people touching the string get shot and injured. Often the mechanism fails - string not pulled hard enough.
Who is the culprit or cause for injury?
One constructs a mechanism - which pricks a finger when placed in a certain position of a machine, but not always.
Who is the culprit or cause for injury?
Nothing to do with Asimov's law.
Egodystonic Sadist (Score:2)
I imagine that being a Sadist is sometimes hard to cope with.
I'm going to tap into my inner one, and hope the inventor succumbs to his devices, a whole swarm of the little guys, working their magic.
A non-problem (Score:1)
Turrets? (Score:3)
dogs & robotic dogs (Score:1)
Blood sugar testing? (Score:2)
All they need to do now is add the blood sugar test strips and you have an extremely expensive blood glucose monitor/extractor.
This was a psychology test. Nothing more. (Score:3)
When the adult already told you the damn stove was hot, it DOES tend to be your fault for touching the damn thing again and burning yourself. The parallel with this test is not that hard to discern here, so let's stop being ignorant about culpability. The main difference here is that it's expected for an adult to know better, hence the reason I label this a psychology test rather than a validation of anything else.
The robot in question could be a stove burner, knife, or baseball bat, as it contains as much intelligence as any of those examples.
Can't believe people want to bring "three-laws" concepts to the table when discussing a hammer.
Charge the robot's developer with assault (Score:2)
This is no different from hiring a hitman. Whether the hit occurs or not, you and s/he are equally liable. Whether the robot succeeds or not, the moment it's powered up in the presence of other people, its programmer is guilty of assault.
That's the right legal precedent to set, regardless of what this attention-seeker intended.
How is this a robot? (Score:2)
How is this a robot? It is a crude machine that pricks one's finger. I have a glucose monitor that serves the same function and nobody would consider it a robot. By the definition used here, the automatic toilet flusher used in many public restrooms is a robot.
I'd say he's criminally responsible (Score:2)
Is he responsible for the pain which his robot inflicts?
Yes. In fact I'd call the police and file charges for assault. It's not a thinking, conscious thing, making a thinking, conscious decision to injure someone, it's a little machine that some jerk made that goes around making people bleed. It's a nuisance at best, infection at worst, and legally speaking assault, and I'd see him answer for it in front of a judge, just as surely as if he went around with something sharp in his hand poking people to make them bleed. What an asshole thing to do!
Travesty of Asimov's Rules (Score:2)
They were intended to be principles designed INTO the robot. Anyone can build a device (and call it a robot) that violates one of the three laws: "A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
What a waste of electrons, /. !!!
The terrible secret of space (Score:2)
Wake me when you have stairs in your house.
Lame. (Score:2)
You have to put your finger into the robot's stabbing chamber to be attacked :-\
I was hoping it would roll around the tabletop on wheels or tracks pursuing humans sitting around the table and trying to stab them, ideally while displaying some kind of angry face and saying "KILL ALL HUMANS" with a synthesized voice.
link (Score:2)
http://www.fastcompany.com/305... [fastcompany.com]
Re: (Score:2)
In this case, the article uses a bad example. However, let's look at this thought experiment: An autonomous tank with AI built in fires upon and kills several innocents. This is a military weapon and thus does not have Asimov's laws built in. Who is responsible? The military that used it? The company that made it? The programmers that programmed it?
How does it work if a human fires upon and kills several innocents? (hint: it's not just the guy who pulled the trigger who gets in trouble, his entire chain of command is potentially on the hook for it)
Now what about this: Same scenario, but a bug in the programming caused the tank to fire upon innocents. Who is responsible?
Ultimately whoever signed off on the software will be responsible, but you're going to be looking at a lengthy investigation examining the code and development process at each step of the way. Was the bug maliciously coded like something in the underhanded C challenge? Something testing should have caught b
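For anyone who hasn't seen the underhanded C challenge: the point is bugs that read as innocent under review. A rough Python analogue of that flavor (names invented for illustration):

    def may_fire(target_class):
        # Intended: fire only on targets classified "hostile".
        # Actual: the trailing `or "confirmed"` makes the expression
        # truthy for EVERY input, since a non-empty string is always true.
        return target_class == "hostile" or "confirmed"

    print(bool(may_fire("civilian")))  # True: fires on everyone

A reviewer skimming for intent sees a comparison and a status string; a test suite that only feeds it "hostile" targets sees nothing wrong either.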
Re: (Score:2)
I disagree. They were way more than a plot device. They *were* a plot device, but they were also, broadly speaking, Asimov's reaction to the visceral anti-robotics fears of the early days of sci-fi, a way of speaking out against the idea that AI would invariably go crazy and start murdering people for the luls, or because they wanted us out so they could take over, or whatever other reasons. He very intentionally wrote stories in which robots were not just intelligent, but intelligently *designed*, with rul