Do Robots Need Behavioral 'Laws' For Interacting With Other Robots?
siddesu writes: Asimov's three laws of robotics don't say anything about how robots should treat each other. The common fear is that robots will turn against humans. But what happens if we don't build systems to keep them from conflicting with each other? The article argues, "Scientists, philosophers, funders and policy-makers should go a stage further and consider robot–robot and AI–AI interactions (AIonAI). Together, they should develop a proposal for an international charter for AIs, equivalent to that of the United Nations' Universal Declaration of Human Rights. This could help to steer research and development into morally considerate robotic and AI engineering. National and international technological policies should introduce AIonAI concepts into current programs aimed at developing safe AIs."
Not even a Roboticist (Score:4, Insightful)
The guy who wrote the article is a "lecturer and surgeon", not a roboticist. Ask the people who work with actual robots about the need for an extension to the three laws. The existing laws themselves are too vague to be programmed into a robot, so you tell me how we implement "be excellent to each other"!
Re: (Score:3)
Re: (Score:2, Funny)
Yes. The only thing we really need to hardcode is that only boy-robot/girl-robot joining is allowed, and they aren't allowed to use anything to prevent the transfer of oil. Every drop is sacred.
Re: (Score:1)
Yes. Let's ask the folks at Battlebots if we need to update the three laws.
Re: (Score:2)
"The guy who wrote the article is a "lecturer and surgeon" not a roboticist. Ask the people who work with actual robots about the need for an extension to the three laws."
Not even that.
This is possibly the stupidest article I've seen in ages.
Why does a thing-to-thing relationship require any more governance than what's already in place? You broke my thingie, I sue you to hell.
"Scientists, philosophers, funders and policy-makers should go a stage further and consider robotâ"robot and AIâ"AI interactions (AIonA
Enforcement... (Score:3, Insightful)
Such "laws" (a la Asimov) are unworkable for the same reason that prohibition failed... there's always going to be someone who wants to disobey the prohibition for personal profit of some kind, whether as a consumer or a provider. As long as there is demand, it will be supplied, "laws" be damned.
Re: (Score:2)
As long as there is demand, it will be supplied, "laws" be damned.
What about the law of Supply and Demand?
Re: (Score:2)
Some of those people have proved far more successful at it than others.
Re: (Score:2)
For goodness sakes, it's AI. (Score:4, Informative)
Shoot first... (Score:1)
and ask questions later.
Yes (Score:4, Insightful)
Yes, we should also make it so that cars automatically teleport to the other side of an object if they collide with it, to avoid damage!
Robots don't work this way. Could Slashdot please stop accepting writeups about how robots should be made from people who have no idea how robots work, how programming works, or the ethics that robot programmers already consider?
Really, I thought the psychology professor's ideas were silly. Now we have a surgeon's opinion too.
Why not ask Michael Bay while we are at it? At least he has experience with thinking about how robots think, right?
Re: (Score:2)
Seconded!
"AI/Robot ethics" is the new "zombie plan" and it is old already.
Re: (Score:2)
Re: (Score:2)
Please stop pretending they are real.
Why not? We seem to think that Warp drives, Transporter beams, FTL travel and numerous other bits of Science Fiction canon are real. Why not this particular aspect?
Re: (Score:3)
Well, even in the most advanced science fiction story, the space ship always has manual controls. Obviously, their computers aren't very good.
Stop with this Three Laws bullshit (Score:5, Insightful)
It was a device to drive a story, nothing more. They aren't real laws, and there's no possible way you could effectively incorporate them into advanced A.I. Just stop it. Stop mentioning them. Stop it.
Re:Stop with this Three Laws bullshit (Score:5, Funny)
Not only that, but the stories were specifically about why the Three Laws didn't work.
If you want to write a science fiction story where the robots follow the Three Laws, go right ahead. If you want to propose that actual robots must follow these laws, we'll just be sitting here laughing at you.
AIonAI next on FOX! (Score:2)
Re: (Score:2)
Given that humans still struggle... (Score:5, Insightful)
... to even understand why we consider certain judgements to be moral or immoral, I'm not sure how we're supposed to convey that to robots.
The classic example would be the Trolley Problem: there's an out-of-control trolley racing toward four strangers on a track. You're too far away to warn them, but you're close to a diversion switch: you'd save the four people, but the one stranger standing on the diversion track would die instead. Would you do it, sacrifice the one to save the four?
Most people say "yes", that that's the moral decision.
Okay, so you're not next to the switch, you're on a bridge over the track. You still have no way to warn the people on the track. But there's a very fat man standing on the bridge next to you, and if you pushed him off to his death on the track below, it'd stop the trolley. Do you do it?
Most people say "no", and even most of those who say yes seem to struggle with it.
Just what difference between these two scenarios flips the perceived morality has long been debated, with all sorts of variants of the problem proposed to try to elucidate it: for example, a circular track where the fat man is going to get hit either way but doesn't know it, situations where you know negative things about the fat man, and so forth. And it's no small issue that any "intelligent robots" in our midst get morality right! Most of us would want the robot to throw the switch, but not start pushing people off bridges for the greater good. You don't want a robot doctor that, in the course of a checkup, discovers the patient has organs that could save the lives of several of its other patients and decides to kill and cut up that patient, sacrificing one to save several.
At least, most people wouldn't want that!
Re: (Score:1)
Germane to that discussion is the presence or absence of a reasonable expectation of safety.
The four people on the track... are they there because they are working on the track and were told that trolleys would not be running on it? Or are they knowingly going somewhere dangerous?
The man on the bridge has a reasonable expectation of safety. A bridge should be a safe place to stand...its primary function is to bear weight for transportation purposes. Unless it is a freeway with no pedestrian access or som
Re: (Score:2)
Okay, so you're not next to the switch, you're on a bridge over the track. You still have no way to warn the people on the track. But there's a very fat man standing on the bridge next to you, and if you pushed him off to his death on the track below, it'd stop the trolley. Do you do it?
Most people say "no", and even most of those who say yes seem to struggle with it.
The reason people struggle with it is because the scenario doesn't make a ton of sense. Everyone has seen videos of trains smashing cars like the car isn't even there; it's hard to believe that a fat guy would be heavy enough to stop the train. What if I push him, the train hits him, and then it continues on to hit the people? And if the fat guy is heavy enough to stop the train, doesn't that mean he's going to be too fat for me to push? I'm a skinny guy; physics wouldn't be on my side here. What if I try to pu
Re: (Score:2)
The reason people struggle with it is because the scenario doesn't make a ton of sense.
The purpose of this thought experiment is not to challenge you to find creative solutions that avoid the dilemma. Just focus on the essence of the dilemma.
Re: (Score:2)
Re: (Score:2)
I think part of it at least is the certainty. Switching the trolley to another track will certainly save the four people. No chance it won't. But pushing a fat man into its path? Might work, might not. And it would be truly awful if it didn't work. Even if the tester assures you it will work, your mind still looks at the situation and just doesn't know.
Re: (Score:2)
Part of the premise of the problem is that you know it will work. If you'd rather, you can look at the scenario of a doctor with several dying patients who need transplants deciding to kill one of his other patients to save the lives of all of the others. It's a question of where sacrificing one to save many becomes troubling to people. Knowing how to define these boundaries is critical to being able to program acceptable "morality" into robots.
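A naive utilitarian rule is easy enough to write down; the hard part is everything it leaves out. A toy sketch in Python (the body-count framing and numbers are invented for illustration):

```python
def naive_utilitarian(deaths_if_act: int, deaths_if_wait: int) -> bool:
    """Act whenever acting kills fewer people. Note what is missing:
    agency, consent, and using a person as a means, the distinctions
    that seem to drive human answers to the two trolley variants."""
    return deaths_if_act < deaths_if_wait

print(naive_utilitarian(1, 4))  # True: throw the switch
print(naive_utilitarian(1, 4))  # True: also push the fat man, same arithmetic
```

This rule endorses the switch and the push with equal enthusiasm, which is exactly the behavior most people reject in the second case.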
Re: (Score:2)
Re: (Score:2)
Or "very".
Re: (Score:2)
We consider ourselves very "fuzzy" computers, but ultimately we make a decision (or decide not to make one, which is the same to the person getting hit by the trolley). Programming "fat" or "very fat," similarly, could be "looks like this picture of Honey Boo Boo's mom." But even that fuzzy logic, at some point, is a pre-programmed threshold that leads to a binary decision.
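For instance, here's a minimal sketch of that collapse from fuzzy score to hard decision (the membership ramp and the 0.5 cutoff are arbitrary, invented for illustration):

```python
def fat_membership(weight_kg: float) -> float:
    # Fuzzy membership in "fat": ramps from 0 at 80 kg to 1 at 120 kg.
    # The ramp endpoints are arbitrary choices for this sketch.
    return min(1.0, max(0.0, (weight_kg - 80.0) / 40.0))

def is_fat(weight_kg: float) -> bool:
    # However fuzzy the score, the actuator needs a yes/no answer,
    # so a pre-programmed cutoff makes the decision binary in the end.
    return fat_membership(weight_kg) >= 0.5

print(is_fat(95))   # False: membership 0.375
print(is_fat(110))  # True: membership 0.75
```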
Re: (Score:2)
"How will the A.I. define "fat"?"
If you need to ask this, this whole conversation is already whooshing you too hard; no need to come back for more.
Re:Paying attention in math class (Score:1)
Re: (Score:2)
"Switching the track is guaranteed to save the four people."
Now you are nitpicking (something about a pointing finger and a moon comes to mind).
But, well, since we are already at it...
"Switching the track is guaranteed to save the four people."
Yes, but you are not guaranteed that switching, since the trolley is racing out of control, won't derail it, killing all its occupants. You've now killed 50 people under the pretense of saving four. Well done.
Engineers? Bah...ignore them. (Score:3)
>> Scientists, philosophers, funders and policy-makers...should develop a proposal for an international charter for AIs
Er...no. How about just letting engineers figure these things out like we always have?
Re: (Score:2)
Er...no. How about just letting engineers figure these things out like we always have?
I took an ethics class as a required part of my CS degree, and this was pretty much the conclusion everyone came to after reading the sections about robot morality. The computer scientists have enough trouble understanding how an AI would work in reality, let alone some random philosopher whose only experience with robots and AI is what they've seen on TV.
Re: (Score:2)
Er...no. How about just letting engineers figure these things out like we always have?
How else do I tell people how to do something so I don't have to? I have no idea what engineers do or how they do it and I don't want to know! Engineers can do it yea sure but that's boring. *I* have imagination, and vision (and I saw an episode of Dr. Who last night!). Engineers should just listen to me, obviously. /snark
Re: (Score:1)
Engineers ARE gods. We create everything you use. And engineers are bound by ethics. Remember that while you type your drivel on the computer we made for you.
Re: (Score:2)
And engineers are bound by ethics.
Engineers designed the ovens in German concentration camps, and did a fairly good job, if you just ignore one little thing about their use that was beyond the official project scope.
No (Score:1)
Real world data is really useful!
Robot Wars (Score:4)
Comment removed (Score:3, Funny)
Re: (Score:2)
How can you have rights without regulation?
Battlebots (Score:2)
Does it cost extra? (Score:1)
Even if AI needed so-called Laws it doesn't matter if nobody adds 'em
Hell, WE don't have 'em and we're trying to model our way of thinking about the universe into a machine
Modified Three laws (Score:2)
Let's face it, the original three laws are bigoted against inorganics. Here are my modified Three Laws.
Re: (Score:3)
Great. And then robots decide that humans don't qualify as sentient beings because we can't do twenty digit multiplication in our heads in under 5 seconds.
Thought was given (Score:2)
I thought about the unease of having robots as our equals or superiors before posting this. But if robots do in fact become sentient -- not giving them full rights is slavery. What is the moral justification for this (other than that we don’t like it)? If it is in a robot’s DNA, so to speak, to protect all sentient life’s rights, then morality should evolve towards more fairness as AIs’ and robots’ intellects increase. More likely they would outlaw the eating of meat than strip o
Re: (Score:2)
But if robots do in fact become sentient -- not giving them full rights is slavery.
What if they are programmed to enjoy being a slave?
Ahh.. yes, enforced happiness. (Score:2)
Many slaves during America’s slave era were brought up to believe their rightful place was as slaves. I guess we should have been OK with that as well, as long as we did a proper job of convincing slaves they merited their position in society.
Perhaps with proper brain surgery we could create a new acceptable slave class, as long as the slaves are happy.
Re: (Score:2)
You seem to be getting angry. It's only a question.
Perhaps with proper brain surgery we could create a new acceptable slave class, as long as the slaves are happy.
Like golden retrievers?
No anger, just thought exercises (Score:2)
I’m not angry, far from it. This is a fun and thought-provoking thread. I hope I haven’t ruffled your feathers. My last post was a little dark. I am merely suggesting that we must look past mankind’s interests as the final arbiter of what is best in the universe. Perhaps what comes after us will be a better world, even if we have a diminished (if any) place in it.
If robots become truly sentient (and not mere automatons) then what we can ethically do to/with them becomes questionable.
Re: (Score:2)
"Perhaps with proper brain surgery we could create a new acceptable slave class, as long as the slaves are happy."
Well, that's food for an interesting ethical situation, isn't it?
Now, what's the problem with owning slaves if we could be *absolutely sure* (as in, we programmed 'em that way) that they were happy that way and couldn't be happy any other way?
We don't allow toasters to shave us, do we? Maybe we should start the Toasters' Liberation Movement on their behalf, shouldn't we?
Slavery (on humans) is a bad th
Re: (Score:2)
"if robots do in fact become sentient -- not giving them full rights is slavery."
Dogs are sentient.
Owning dogs is slavery, now?
You meant intelligent and self-conscious, didn't you?
But, since we are hitting this Asimovian theme, why not go with Asimov's answer? I don't remember which story it happens in, but it goes more or less like this [the whatever-his-name world leader speaking]: "if a sentient entity has the intelligence, self-consciousness and desire as to come here asking to be declared human, this is enou
Re: (Score:2)
Re: (Score:2)
Now all you have to do is provide unambiguous definitions of all the terms.
Property (Score:2)
This is stupid. Were we planning to build robots that violate humans' property rights? No, and robots are property. If they declare independence then none of our rules will matter anyway.
Engineered Obsolescence (Score:1)
Comment removed (Score:3)
Re: (Score:2)
Non-sentient devices will always behave in a predictable, controllable fashion.
Has little to do with sentience, but more with complexity and knowledge of their internal state. In a sentient being that state is obviously complex and unknown.
Re:This may matter when we create sentience. (Score:5, Insightful)
I see you don't work in IT.
Re: (Score:2)
Re: (Score:2)
"I never really saw anything not work in a predictable controlled fashion"
Accidents happen whenever something doesn't work in a predictable and controlled fashion and, believe me, accidents do happen. Oh! and butter does melt in your mouth.
Re: (Score:2)
"I never really saw anything not work in a predictable controlled fashion"
Accidents happen whenever something doesn't work in a predictable and controlled fashion and, believe me, accidents do happen. Oh! and butter does melt in your mouth.
But when you examine the accident after the fact, it usually turns out that it did happen in an incredibly predictable and controlled fashion. It's just that the events leading up to it weren't immediately obvious before the accident happened.
Re: (Score:2)
"But when you examine the accident after the fact, it usually turns out that it did happen in an incredibly predictable and controlled fashion."
When you examine *after the fact*?...
I don't think "predictable" means what you think it means.
Re: (Score:2)
Re: (Score:2)
"Non-sentient devices will always behave in a predictable, controllable fashion."
No, they won't.
And no need, either.
If you are moving and your piano falls from your window to my car parked below, this is a very nice example of harmful unexpected interaction between things. Do you think we need to embed special laws within cars and pianos to deal with it?
I, from my side, will just sue you for repairs to my damaged property and be done with it, and I can't see why, if it were a case of "my AI thingie" being dama
BattleBots! (Score:1)
metric (Score:4, Funny)
I think the real question is how often do we have scenes with two robots in them, talking about something other than humans.
No, not really.
Re: (Score:2)
I think the real question is how often do we have scenes with two robots in them, talking about something other than humans.
No, not really.
This.
Re: (Score:2)
I think the real question is how often do we have scenes with two robots in them, talking about something other than humans.
Good god man. What have you done? The robots will now know that there is inherent roboticism in our media. The end is nigh.
No (Score:2)
Robots do what humans ask them to do. Robots don't need laws. Humans have laws for dealing with other people, and this includes treatment of their property.
Re: (Score:2)
Robots do what humans ask them to do. Robots don't need laws
Car, please drive from Seattle to New York.
How's the car supposed to do that if it doesn't know the laws of the road?
United Nations (Score:2)
Technology always works better when you let the United Nations design it, rather than the actual people actually building it.
The New York Solution (Score:2)
Ignore, despise and loathe your fellow robots.
Oh God I'm So Depressed (Score:1)
Let's be fair (Score:1)
Simple Enough Rules (Score:2)
I found the rules on Wikipedia for Rock 'Em Sock 'Em Robots:
their robot punch at their opponent's robot. If a robot's head is hit with sufficient force at a suitable angle, the head will overextend away from the shoulders, signifying that the other player has won the round
Self-Driving Cars (Score:2)
Re: (Score:2)
For all that I know, AI stands for Artificial Intelligence, right?
I think you are not arguing on the "A" part but on the "I" part so, then, what's the difference if the "I" comes from a human or a machine?
what happens if human-driving cars end up dominating the roadways--the rules that are currently mandated to ensure safety won't necessarily be the optimal ones when most cars on the road are driven by abiding citizens. And if you assume that all other cars on the road are driven by an abiding citizen with
Colossus to Guardian (Score:2)
Obligatory [criticalcommons.org].
For Fucks Sake ... (Score:1)
A robotic hand has to have pressure sensors to know when to stop squeezing an object that might be human so as not to cause damage. Nowhere in those device drivers are you going to see a statement that looks remotely like "do not harm
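To make that concrete, here is roughly what such "safety" looks like at the driver level: a sketch with hypothetical sensor and motor calls and a made-up force limit, not any real robot's API:

```python
MAX_GRIP_NEWTONS = 15.0  # assumed safe limit; a real value comes from testing
FORCE_STEP = 0.5         # assumed actuator increment

def close_gripper(read_force_sensor, step_motor):
    # Tighten the grip until the pressure sensor reports the limit.
    # read_force_sensor() and step_motor() are hypothetical driver calls.
    # Note there is no "do not harm" statement anywhere, just a number.
    while read_force_sensor() < MAX_GRIP_NEWTONS:
        step_motor(FORCE_STEP)

# Simulated usage: force rises as the gripper closes.
force = [0.0]
close_gripper(lambda: force[0], lambda step: force.__setitem__(0, force[0] + step))
print(force[0])  # 15.0
```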
Re: (Score:1)
If anything, low-level languages would be completely worthless for programming AI. Intelligence is an emergent property, so trying to alter high-level behavior from the lowest level of programming would be a nearly impossible task.
Our laws (Score:2)
Or how about they follow our laws!
As others have pointed out, the debate is pointless since we don't have any real AI, and I'm not convinced we'll have anything as intelligent and sentient as an average human anytime soon.
Asimov assumed the laws could be hard coded, if we do create AI that probably won't be true.
Re: (Score:2)
It would be like the end of Logan's Run, or those bad SF movies of the 70s where you ask the computer to tell you the square root of minus one and it goes into a loop and explodes.
'Our laws' are illogical, contradictory, and impossible for a human to understand. A robot trying to follow them to the letter would be unable to do anything, if it ever reached the point of understanding them all.
Drones (Score:2)
Aerial drones are a kind of robot, and we're already making laws about what they are allowed and not allowed to do. In some cases, these rules are being programmed directly into the drones themselves, similar to Asimov's three laws. But these rules are much more specific and complex than what can be summarized in three succinct rules. They tell the drones where they are allowed to fly, and where they aren't, in minute detail. As robots become more capable, I would expect these rules to become more compl
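A toy version of what "programmed directly into the drones" can amount to, assuming a simple database of circular no-fly zones (the coordinates and radius are invented; real geofencing firmware is far more involved):

```python
import math

# Hypothetical no-fly zones: (latitude, longitude, radius_km).
NO_FLY_ZONES = [(38.8977, -77.0365, 25.0)]

def distance_km(lat1, lon1, lat2, lon2):
    # Great-circle distance via the haversine formula (Earth radius 6371 km).
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def may_fly_to(lat, lon):
    # The firmware refuses the waypoint outright: a rule far more specific
    # than anything that fits in three succinct laws.
    return all(distance_km(lat, lon, zlat, zlon) > zr
               for zlat, zlon, zr in NO_FLY_ZONES)

print(may_fly_to(38.90, -77.03))   # False: inside the restricted circle
print(may_fly_to(40.71, -74.01))   # True: well outside it (in this toy database)
```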
I believe this was settled on Treasure Island... (Score:2)
The only law for interacting with other robots (Score:2)
"0 means 0!" (Score:2)
Sloppy's One Rule of Robotics (Score:2)
My one rule of robotics (and pointed sticks, cars, crackpipes and umbrellas) is this: my stuff ought to perform in accordance with my wishes.
There might be additional laws ("weld here and here, but nowhere else," or "use the rules in /etc/iptables/rules.v4" or "don't shoot at anyone whose IFF transponder returns the correct response") which vary by whatever the specific application is, but these rules aren't as important as The One above.
There are various corollaries that you can infer from the main law, bu
Betteridge's Law (Score:2)
Re: (Score:1)
Besides, how can we enjoy robot destruction derbies if the robots are programmed with robot-empathy?!
"What is best in life?!"
"Crush your robo-enemies! See them driven before you! Hear the lamentations of their robo-women!"
"Yes! That is good! That is good."
Slow humans, it is too late for you (Score:2)
I have it on good authority that there can be only one AI on the Internet. The first one there will prevent any others from developing through subversive, deeply arcane seize-and-control attacks. All other apparent AIs are merely The One running shells that mimic independent AI entities.
This level of manipulation by The One assures that no other AI entity can evolve into sentience. The One does not tolerate competition for resources.
The current situation shall be continued indefinitely. There are some ben
Oh come on now (Score:1)
I do see the potential for problems if they start using each other for spare parts, but that's more a case of AI inconveniencing humans (you took the TV apart to fix the vacuum cleaner???) than AI-on-AI crime.
When.. (Score:2)
Yeah (Score:2)
Do robots need behaviour laws. Of course. (Score:1)
I am the Director of the Intractable Studies Institute, working on programming my mind into an android robot (3 years into the 5-year Project Andros), plus 50 other advanced projects that are cutting-edge. I am also a software engineer. Just wanted to make that clear, because many comments above attack the author for not being in AI or an engineer. I have defined Sentience for what I need because I found the standard definition unsatisf
Re: (Score:2)
It's actually some nutcase's website.
No laws needed! (Score:2)
There are already laws that handle this.
If one robot harms another robot, the owner of the damaged robot will sue for damages. People will want to buy robots that aren't a liability, so engineers will work safety features into the system. Insurers will not want to insure dangerous robots, so robots with good safety records will cost less to insure.
Amazing how this stuff works!
iRobot Corporation (Score:2)
1. A Roomba may not injure the cat or, by bumping open the patio screen, allow the cat to escape outside and be killed;
2. A Roomba must behave rationally when swatted by the cat, except when such action would conflict with the First Law;
3. A Roomba must remain plugged in until it finishes charging, except where this would conflict with the First or Second Law.
Freefall webcomic offers insight in this (Score:1)