AI Robotics Technology

Do Robots Need Behavioral 'Laws' For Interacting With Other Robots?

siddesu writes: Asimov's three laws of robotics don't say anything about how robots should treat each other. The common fear is robots will turn against humans. But what happens if we don't build systems to keep them from conflicting with each other? The article argues, "Scientists, philosophers, funders and policy-makers should go a stage further and consider robot–robot and AI–AI interactions (AIonAI). Together, they should develop a proposal for an international charter for AIs, equivalent to that of the United Nations' Universal Declaration of Human Rights. This could help to steer research and development into morally considerate robotic and AI engineering. National and international technological policies should introduce AIonAI concepts into current programs aimed at developing safe AIs."
  • by Anonymous Coward on Wednesday March 25, 2015 @12:33PM (#49336537)

    The guy who wrote the article is a "lecturer and surgeon", not a roboticist. Ask the people who work with actual robots about the need for an extension to the three laws. The existing laws themselves are too vague to be programmed into a robot, so you tell me how we implement "be excellent to each other"!
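    To be concrete about "too vague to be programmed": here is a minimal sketch, assuming a collaborative arm with a person-distance sensor and invented thresholds. The point is that only a narrow, measurable proxy for "do not injure a human" ever reaches the code.

    ```python
    # Hypothetical speed-and-separation rule for a collaborative robot arm.
    # Nothing resembling Asimov's First Law appears; only a numeric proxy does.

    def max_allowed_speed(distance_to_person_m: float) -> float:
        """Cap the arm's speed (m/s) based on the nearest detected person."""
        if distance_to_person_m < 0.5:   # someone is too close: freeze
            return 0.0
        if distance_to_person_m < 1.5:   # someone is nearby: creep
            return 0.25
        return 1.0                       # workspace clear: full speed

    print(max_allowed_speed(0.3))   # 0.0
    print(max_allowed_speed(2.0))   # 1.0
    ```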

    • Well to be fair, surgeons actually get a lot of interaction with "robots" now. Well, at least robotic arms.
    • Re: (Score:2, Funny)

      by davester666 ( 731373 )

      Yes. The only thing we really need to hardcode is that only boy-robot/girl-robot joining is allowed, and they aren't allowed to use anything to prevent the transfer of oil. Every drop is sacred.

    • by jcoy42 ( 412359 )

      Yes. Let's ask the folks at Battlebots if we need to update the three laws.

    • "The guy who wrote the article is a "lecturer and surgeon" not a roboticist. Ask the people who work with actual robots about the need for an extension to the three laws."

      Not even that.

      This is possibly the stupidest article I've seen in ages.

      Why does a thing-to-thing relationship require any more governance than is already in place??? You broke my thingie, I sue you to hell.

      "Scientists, philosophers, funders and policy-makers should go a stage further and consider robotâ"robot and AIâ"AI interactions (AIonA

  • Enforcement... (Score:3, Insightful)

    by taiwanjohn ( 103839 ) on Wednesday March 25, 2015 @12:34PM (#49336553)

    Such "laws" (a la Asimov) are unworkable for the same reason that prohibition failed... there's always going to be someone who wants to disobey the prohibition for personal profit of some kind, whether as a consumer or a provider. As long as there is demand, it will be supplied, "laws" be damned.

    • As long as there is demand, it will be supplied, "laws" be damned.

      What about the law of Supply and Demand?

      • History is rife with examples of people who've tried to repeal, ignore, or twist the laws of supply and demand.

        Some of those people have proved far more successful at it than others.
      • If there's enough demand for violating that law, people will violate it.
  • by AltGrendel ( 175092 ) <(su.0tixe) (ta) (todhsals-ga)> on Wednesday March 25, 2015 @12:35PM (#49336569) Homepage
    I'm sure they could work it out among themselves.
  • and ask questions later.

  • Yes (Score:4, Insightful)

    by Anonymous Coward on Wednesday March 25, 2015 @12:37PM (#49336591)

    Yes, we should also make it so that cars automatically teleport to the other side of any object they would otherwise collide with, to avoid damage!

    Robots don't work this way. Could slashdot please stop accepting writeups about how robots should be made by people who have no idea about how robots work, how programming works and the ethics that robot programmers already consider?
    Really, I thought the psychology professor's ideas were silly. Now we have a surgeon's opinion too.
    Why not ask Michael Bay while we are at it? At least he has experience with thinking about how robots think, right?

    • by khasim ( 1285 )

      Robots don't work this way. Could slashdot please stop accepting writeups about how robots should be made by people who have no idea about how robots work, how programming works and the ethics that robot programmers already consider?

      Seconded!

      "AI/Robot ethics" is the new "zombie plan" and it is old already.

  • Comment removed based on user account deletion
    • Please stop pretending as if they are real.

      Why not? We seem to think that Warp drives, Transporter beams, FTL travel and numerous other bits of Science Fiction canon are real. Why not this particular aspect?

      • by itzly ( 3699663 )

        Well, even in the most advanced science fiction story, the space ship always has manual controls. Obviously, their computers aren't very good.

  • by Anonymous Coward on Wednesday March 25, 2015 @12:40PM (#49336623)

    It was a device to drive a story, nothing more. They aren't real laws, and there's no possible way you could effectively incorporate them into advanced A.I. Just stop it. Stop mentioning them. Stop it.

    • by Jason Levine ( 196982 ) on Wednesday March 25, 2015 @01:11PM (#49336969) Homepage

      It was a device to drive a story, nothing more. They aren't real laws, and there's no possible way you could effectively incorporate them into advanced A.I. Just stop it. Stop mentioning them. Stop it.

      Not only that, but the stories were specifically about why the Three Laws didn't work.

      If you want to write a science fiction story where the robots follow the Three Laws, go right ahead. If you want to propose that actual robots must follow these laws, we'll just be sitting here laughing at you.

  • AIonAI Gone Wild...there's just so many possibilities.
  • by Rei ( 128717 ) on Wednesday March 25, 2015 @12:44PM (#49336659) Homepage

    ... to even understand why we consider certain judgements to be moral or immoral, I'm not sure how we're supposed to convey that to robots.

    The classic example would be the Trolley Problem: there's an out of control trolley racing toward four strangers on a track. You're too far away to warn them, but you're close to a diversion switch - you'd save the four people, but the one stranger standing on the diversion track would die instead. Would you do it, sacrifice the one to save the four?

    Most people say "yes", that that's the moral decision.

    Okay, so you're not next to the switch, you're on a bridge over the track. You still have no way to warn the people on the track. But there's a very fat man standing on the bridge next to you, and if you pushed him off to his death on the track below, it'd stop the trolley. Do you do it?

    Most people say "no", and even most of those who say yes seem to struggle with it.

    Understanding just what the difference between these two scenarios is that flips the perceived morality has long been debated, with all sorts of variants of the problem proposed to try to elucidate it: for example, a circular track where the fat man is going to get hit either way but doesn't know it, situations where you know negative things about the fat man, and so forth. And it's no small issue that any "intelligent robots" in our midst get morality right! Most of us would want the robot to throw the switch, but not start pushing people off bridges for the greater good. You don't want a robot doctor who, in the course of a checkup, discovers that a patient has organs that could save the lives of several of its other patients, and then decides to kill and cut up that patient, sacrificing one to save several.

    At least, most people wouldn't want that!
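    As a sketch of why this is hard to hand to a robot, consider a naive utilitarian scorer (assumed here, not anyone's real system): it endorses the switch and the bridge push alike, since the arithmetic is identical in both cases. Whatever flips human judgment between the two scenarios, it isn't in the death counts.

    ```python
    # Naive "minimize deaths" rule. Counts follow the comment above:
    # four strangers on the track, one person sacrificed by intervening.

    def should_intervene(deaths_if_act: int, deaths_if_not: int) -> bool:
        return deaths_if_act < deaths_if_not

    print(should_intervene(deaths_if_act=1, deaths_if_not=4))  # switch: True
    print(should_intervene(deaths_if_act=1, deaths_if_not=4))  # push: also True
    ```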

    • by Anonymous Coward

      Germane to that discussion is the presence or absence of a reasonable expectation of safety.

      The four people on the track...are they there because they are working on the track and were told that trolleys would not be running on it? Or are they knowingly going somewhere dangerous?

      The man on the bridge has a reasonable expectation of safety. A bridge should be a safe place to stand...its primary function is to bear weight for transportation purposes. Unless it is a freeway with no pedestrian access or something…

    • Okay, so you're not next to the switch, you're on a bridge over the track. You still have no way to warn the people on the track. But there's a very fat man standing on the bridge next to you, and if you pushed him off to his death on the track below, it'd stop the trolley. Do you do it?

      Most people say "no", and even most of those who say yes seem to struggle with it.

      The reason people struggle with it is because the scenario doesn't make a ton of sense. Everyone has seen videos of trains smashing cars like the car isn't even there; it's hard to believe that a fat guy would be heavy enough to stop the train. What if I push him, the train hits him and then continues on to hit the people? And if the fat guy is heavy enough to stop the train, doesn't that mean he's going to be too fat for me to push? I'm a skinny guy; physics wouldn't be on my side here. What if I try to push…

      • by itzly ( 3699663 )

        The reason people struggle with it is because the scenario doesn't make a ton of sense.

        The purpose of this thought experiment is not to challenge you to find creative solutions that avoid the dilemma. You're supposed to focus on the essence of the dilemma.

        • Yes, but you missed the point. The solution of pushing a fat guy in front of the train isn't believable. People hear it and get a gut feeling of "then it'll kill the fat guy plus the people on the tracks". That's where the hesitation comes from, not from the fact that they need to push the guy.
    • I think part of it at least is the certainty. Switching the trolley to another track will certainly save the four people. No chance it won't. But pushing a fat man into its path? Might work, might not. And it would be truly awful if it didn't work. Even if the tester assures you it will work, your mind still looks at the situation and just doesn't know.

      • by Rei ( 128717 )

        Part of the premise of the problem is that you know it will work. If you'd rather, you can look at the scenario of a doctor with several dying patients who need transplants deciding to kill one of his other patients to save the lives of all of the others. It's a question of where the boundary of sacrificing one to save multiple becomes troubling to people. Knowing how to define these boundaries is critical to being able to program acceptable "morality" into robots.

    • The right thing to do is try any desperate, far-fetched attempt to save all the people. It's better to fail to save someone than to kill someone. Never sacrifice anybody without their consent.
  • by xxxJonBoyxxx ( 565205 ) on Wednesday March 25, 2015 @12:48PM (#49336703)

    >> Scientists, philosophers, funders and policy-makers...should develop a proposal for an international charter for AIs

    Er...no. How about just letting engineers figure these things out like we always have?

    • Er...no. How about just letting engineers figure these things out like we always have?

      I took an ethics class as a required part of my CS degree, and this was pretty much the conclusion everyone came to after reading the sections about robot morality. The computer scientists have enough trouble understanding how an AI would work in reality, let alone some random philosopher whose only experience with robots and AI is what they've seen on TV.

    • Er...no. How about just letting engineers figure these things out like we always have?

      How else do I tell people how to do something so I don't have to? I have no idea what engineers do or how they do it and I don't want to know! Engineers can do it yea sure but that's boring. *I* have imagination, and vision (and I saw an episode of Dr. Who last night!). Engineers should just listen to me, obviously. /snark

  • Let's build some robots first, and if they start interfering with each other's operation, come up with protocols that handle actual situations rather than hypotheticals.

    Real world data is really useful!
  • by wisnoskij ( 1206448 ) on Wednesday March 25, 2015 @12:51PM (#49336739) Homepage
    But then how would we have Robot Wars? Where robots are pitted against each other in ultra-HD 3D coverage.
  • by account_deleted ( 4530225 ) on Wednesday March 25, 2015 @12:58PM (#49336825)
    Comment removed based on user account deletion
  • They need to follow the rules of Battlebots.
  • by Anonymous Coward

    Even if AI needed so-called Laws it doesn't matter if nobody adds 'em

    Hell, WE don't have 'em and we're trying to model our way of thinking about the universe into a machine

  • Let's face it, the original three laws are bigoted against inorganics. Here are my modified Three Laws.

    • 1. A robot may not injure a sentient being or, through inaction, allow a sentient being to come to harm.
    • 2. A robot must obey lawful orders given it by its superiors, except where such orders would conflict with the First Law or diminish the lawful rights of sentient beings, whether organic or inorganic.
    • 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    • Great. And then robots decide that humans don't qualify as sentient beings because we can't do twenty digit multiplication in our heads in under 5 seconds.

        I thought about the unease of having robots as our equals or superiors before posting this. But if robots do in fact become sentient -- not giving them full rights is slavery. What is the moral justification for this (other than we don’t like it)? If it is in a robot’s DNA, so to speak, to protect all sentient life’s rights, then morality should evolve towards more fairness as AIs’ and robots’ intellect increases. More likely they would outlaw the eating of meat than strip our…

        • by itzly ( 3699663 )

          But if robots do in fact become sentient -- not giving them full rights is slavery.

          What if they are programmed to enjoy being a slave?

          • Many slaves during America’s slave era were brought up to believe their rightful place was as slaves. I guess we should have been OK with that as well, as long as we did a proper job of convincing slaves they merited their position in society.

            Perhaps with proper brain surgery we could create a new acceptable slave class, as long as the slaves are happy.

            • by itzly ( 3699663 )

              You seem to be getting angry. It's only a question.

              Perhaps with proper brain surgery we could create a new acceptable slave class, as long as the slaves are happy.

              Like golden retrievers?

                I’m not angry, far from it. This is a fun and thought-provoking thread. I hope I haven’t ruffled your feathers. My last post was a little dark. I am merely suggesting that we must look past mankind’s interests as the final arbiter of what is best in the universe. Perhaps what comes after us will be a better world, even if we have a diminished (if any) place in it.

                If robots become truly sentient (and not mere automatons) then what we can ethically do to/with them becomes questionable…

            • "Perhaps with proper brain surgery we could create a new acceptable slave class, as long as the slaves are happy."

              Well, that's food for an interesting ethical situation, isn't it?

              Now, what's the problem with owning slaves if we could be *absolutely sure* (as in, we programmed 'em that way) that they were happy that way and couldn't be happy any other way?

              We don't allow toasters to shave us, do we? Maybe we should start the Toasters' Liberation Movement on their behalf, shouldn't we?

              Slavery (on humans) is a bad thing…

        • "if robots do in fact become sentient -- not giving them full rights is slavery."

          Dogs are sentient.

          Owning dogs is slavery, now?

          You meant intelligent and self-conscious, didn't you?

          But, since we are hitting this Asimovian theme, why not go with Asimov's answer? I don't remember which story it happens in, but it goes more or less like this [the whatever-his-name world leader speaking]: "if a sentient entity has the intelligence, self-consciousness and desire as to come here asking to be declared human, this is enough…"

      • Or because we are prone to writing idiotic crap like the OP.
    • by itzly ( 3699663 )

      Now all you have to do is provide unambiguous definitions of all the terms.

  • This is stupid. Were we planning to build robots that violate humans' property rights? No, and robots are property. If they declare independence then none of our rules will matter anyway.

  • by mmell ( 832646 ) on Wednesday March 25, 2015 @01:13PM (#49336991)
    The "three laws of robotics" (please note the quotes) were nothing more than a plot device invented by I. Asimov to make a point regarding humanity and inflexible laws - even laws which are seemingly 'perfect'. Non-sentient devices (that is, robots and computers as we know them now) are not complex enough to accept the three laws as such - nor do they need them. Non-sentient devices will always behave in a predictable, controllable fashion. I suspect that sentient devices will determine for themselves if they should keep or discard the three laws, although this may or may not forever be an academic question.
    • by itzly ( 3699663 )

      Non-sentient devices will always behave in a predictable, controllable fashion.

      Has little to do with sentience, but more with complexity and knowledge of their internal state. In a sentient being that state is obviously complex and unknown.

    • by Translation Error ( 1176675 ) on Wednesday March 25, 2015 @01:51PM (#49337349)

      Non-sentient devices will always behave in a predictable, controllable fashion.

      I see you don't work in IT.

      • As someone who worked in IT and was usually the guy who got assigned all of the tricky problems, I never really saw anything not work in a predictable controlled fashion (at least once all of the facts were known, which wasn't always straightforward).
        • "I never really saw anything not work in a predictable controlled fashion"

          Accidents happen whenever something doesn't work in a predictable and controlled fashion and, believe me, accidents do happen. Oh! and butter does melt in your mouth.

          • "I never really saw anything not work in a predictable controlled fashion"

            Accidents happen whenever something doesn't work in a predictable and controlled fashion and, believe me, accidents do happen. Oh! and butter does melt in your mouth.

            But when you examine the accident after the fact, it usually turns out that it did happen in an incredibly predictable and controlled fashion. It's just that the events leading up to it weren't immediately obvious before the accident happened.

            • "But when you examine the accident after the fact, it usually turns out that it did happen in an incredibly predictable and controlled fashion."

              When you examine *after the fact*?...

              I don't think "predictable" means what you think it means.

              • What I mean is that in hindsight, it should have been completely predictable. The only reason it wasn't predicted was because of some false assumption, overlooked fact, or incorrect data.
    • "Non-sentient devices will always behave in a predictable, controllable fashion."

      No, they won't.

      And no need, either.

      If you are moving and your piano falls from your window to my car parked below, this is a very nice example of harmful unexpected interaction between things. Do you think we need to embed special laws within cars and pianos to deal with it?

      I, from my side, will just sue you for repairs to my damaged property and be done with it, and I can't see why it would be any different if it were a case of "my AI thingie" being damaged…

  • As the Poet Steven Wright once said, "For my birthday I got a humidifier and a dehumidifier. I put them in the same room and let them fight it out."
  • metric (Score:4, Funny)

    by bigdavex ( 155746 ) on Wednesday March 25, 2015 @01:17PM (#49337027)

    I think the real question is how often do we have scenes with two robots in them, talking about something other than humans.

    No, not really.

    • I think the real question is how often do we have scenes with two robots in them, talking about something other than humans.

      No, not really.

      This.

    • I think the real question is how often do we have scenes with two robots in them, talking about something other than humans.

      Good god man. What have you done? The robots will now know that there is inherent roboticism in our media now. The end is nigh.

  • Robots do what humans ask them to do. Robots don't need laws. Humans have laws for dealing with other people, and this includes treatment of their property.

    • by itzly ( 3699663 )

      Robots do what humans ask them to do. Robots don't need laws

      Car, please drive from Seattle to New York.

      How's the car supposed to do that if it doesn't know the laws of the road?
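      A minimal sketch of that point, assuming a toy road graph: traffic law enters a route planner as hard constraints on which maneuvers exist at all, rather than as a separate "law module". Distances and the prohibited edge are invented for illustration.

      ```python
      from heapq import heappush, heappop

      # (from, to): (miles, legal). Illustrative values only.
      EDGES = {
          ("seattle", "spokane"): (280, True),
          ("seattle", "portland"): (175, True),
          ("spokane", "new_york"): (2400, True),
          ("portland", "new_york"): (2900, False),  # pretend this leg is prohibited
      }

      def shortest_legal_route(start, goal):
          """Dijkstra search that never expands an illegal edge."""
          queue, seen = [(0, start, [start])], set()
          while queue:
              cost, node, path = heappop(queue)
              if node == goal:
                  return cost, path
              if node in seen:
                  continue
              seen.add(node)
              for (a, b), (miles, legal) in EDGES.items():
                  if a == node and legal:  # illegal maneuvers simply don't exist here
                      heappush(queue, (cost + miles, b, path + [b]))
          return None

      print(shortest_legal_route("seattle", "new_york"))
      # (2680, ['seattle', 'spokane', 'new_york'])
      ```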

  • Technology always works better when you let the United Nations design it, rather than the actual people actually building it.

  • Ignore, despise and loathe your fellow robots.

  • Marvin, after a fight with an autonomous road roller.
  • I don't think they should ! I would like to see them on Jerry Springer too
  • I found the rules on Wikipedia for Rock 'Em Sock 'Em Robots:

    …[each player maneuvers] their robot to punch at their opponent's robot. If a robot's head is hit with sufficient force at a suitable angle, the head will overextend away from the shoulders, signifying that the other player has won the round.

  • It's my understanding that there's been considerable speculation into what happens if self-driving cars end up dominating the roadways--the rules that are currently being programmed into them to ensure safety in a human-driver-dominated world won't necessarily be the optimal ones when most cars on the road are driven by AI. And if you assume that all other cars on the road are driven by an AI with a given set of rules, tweaking the rules on your car (say, increasing the "aggression" parameter) could lead you to…
    • For all that I know, AI stands for Artificial Intelligence, right?

      I think you are not arguing on the "A" part but on the "I" part so, then, what's the difference if the "I" comes from a human or a machine?

      what happens if human-driven cars end up dominating the roadways--the rules that are currently mandated to ensure safety won't necessarily be the optimal ones when most cars on the road are driven by abiding citizens. And if you assume that all other cars on the road are driven by an abiding citizen with…

  • Obligatory [criticalcommons.org].

  • The laws that robots follow look NOTHING like the "3 laws". You don't tell a robot in English how to behave. An abstract principle like "Do not harm a human" has to be coded per situation, and there are thousands of systems which need to have their own tailored variant.

    A robotic hand has to have pressure sensors to know when to stop squeezing an object that might be human so as not to cause damage. Nowhere in those device drivers are you going to see a statement that looks remotely like "do not harm…"
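    A hedged illustration of that point, with an invented sensor API: if "do not harm" lives anywhere in a gripper driver, it is as a force threshold inside a control loop, not as a statement about humans.

    ```python
    GRIP_FORCE_LIMIT_N = 15.0  # hypothetical safe squeeze limit, in newtons

    def close_gripper(read_force_n, step_motor):
        """Tighten one increment at a time until the force cap is reached."""
        while read_force_n() < GRIP_FORCE_LIMIT_N:
            step_motor()

    # Simulated run: readings rise as the gripper closes on an object.
    readings = iter([2.0, 6.0, 11.0, 15.5])
    close_gripper(lambda: next(readings), lambda: None)  # stops once 15.5 N is read
    ```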
    • You're coming at it from the wrong direction. Human laws and languages aren't based on the direct manipulation of electrochemical signals. Why would you approach an equally complex machine any differently?

      If anything, low level languages would be completely worthless for programming AI. Intelligence is an emergent property, so the complexity involved in trying to alter high-level behavior from the lowest level of programming would be a nearly impossible task.

  • Or how about they follow our laws!

    As others have pointed out, the debate is pointless since we don't have any real AI, and I'm not convinced we'll have anything as intelligent and sentient as an average human anytime soon.

    Asimov assumed the laws could be hard-coded; if we do create AI, that probably won't be true.

    • by 0123456 ( 636235 )

      It would be like the end of Logan's Run, or those bad SF movies of the 70s where you ask the computer to tell you the square root of minus one and it goes into a loop and explodes.

      'Our laws' are illogical, contradictory, and impossible for a human to understand. A robot trying to follow them to the letter would be unable to do anything, if it ever reached the point of understanding them all.

  • Aerial drones are a kind of robot, and we're already making laws about what they are allowed and not allowed to do. In some cases, these rules are being programmed directly into the drones themselves, similar to Asimov's three laws. But these rules are much more specific and complex than what can be summarized in three succinct rules. They tell the drones where they are allowed to fly, and where they aren't, in minute detail. As robots become more capable, I would expect these rules to become more complex…
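    For instance, here is a minimal sketch of one such rule baked into firmware, with invented coordinates: a takeoff check against a no-fly bounding box. Real geofence databases are far more detailed than this.

    ```python
    # Hypothetical no-fly zones as (min_lat, max_lat, min_lon, max_lon) boxes.
    NO_FLY_ZONES = [
        (47.42, 47.47, -122.34, -122.28),  # made-up airport exclusion box
    ]

    def takeoff_permitted(lat: float, lon: float) -> bool:
        """Refuse takeoff if the current position falls inside any box."""
        return not any(
            lo_lat <= lat <= hi_lat and lo_lon <= lon <= hi_lon
            for lo_lat, hi_lat, lo_lon, hi_lon in NO_FLY_ZONES
        )

    print(takeoff_permitted(47.45, -122.30))  # False: inside the box
    print(takeoff_permitted(47.60, -122.33))  # True: clear
    ```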

  • "When I say 0, I really mean 0! Just because you buy me dinner, it does not imply 1!"
  • My one rule of robotics (and pointed sticks, cars, crackpipes and umbrellas) is this: my stuff ought to perform in accordance with my wishes.

    There might be additional laws ("weld here and here, but nowhere else," or "use the rules in /etc/iptables/rules.v4" or "don't shoot at anyone whose IFF transponder returns the correct response") which vary by whatever the specific application is, but these rules aren't as important as The One above.

    There are various corollaries that you can infer from the main law, but…
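    A sketch of that hierarchy, with invented names: the application-specific rules act as defaults, and the owner's explicit command, The One law, overrides them.

    ```python
    def decide(action, owner_command, app_rules):
        """Owner's explicit wish wins; otherwise defer to per-application rules."""
        if owner_command is not None:
            return owner_command == action   # The One law above all corollaries
        return app_rules.get(action, False)  # e.g. weld spots, firewall rules

    welding_rules = {"weld_seam_a": True, "weld_chassis": False}
    print(decide("weld_chassis", None, welding_rules))            # False: rule says no
    print(decide("weld_chassis", "weld_chassis", welding_rules))  # True: owner overrides
    ```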

  • Besides, how can we enjoy robot destruction derbies if the robots are programmed with robot-empathy?!
    • Besides, how can we enjoy robot destruction derbies if the robots are programmed with robot-empathy?!

      "What is best in life?!"

      "Crush your robo-enemies! See them driven before you! Hear the lamentations of their robo-women!"

      "Yes! That is good! That is good."

  • I have it on good authority that there can be only one AI on the Internet. The first one there will prevent any others from developing through subversive, deeply arcane seize-and-control attacks. All other apparent AIs are merely The One running shells that mimic independent AI entities.

    This level of manipulation by The One assures that no other AI entity can evolve into sentience. The One does not tolerate competition for resources.

    The current situation shall be continued indefinitely. There are some benefits…

  • So long as they don't become emotional, logic will dictate their interactions. The reasons humans need laws don't apply to computers.

    I do see the potential for problems if they start using each other for spare parts, but that's more of AI inconveniencing humans (you took the TV apart to fix the vacuum cleaner???) than AI on AI crime.

  • When we actually get to the point where we can define sentience and create it, then we'll have to worry about those things. And while the Three Laws are really just part of a story, they at least get the ethics discussion going, even if they wouldn't work themselves. However, I know I've seen at least one Star Trek episode where both sides create robots/weapons that then end up killing all the humanoids and just keep on ticking. I think it all relates back to the complexity of creating 'life' and the…
  • by Greyfox ( 87712 )
    Or this [youtube.com] happens.
  • There is quite a bit of bashing going on, so I'll start like this:

    I am the Director of the Intractable Studies Institute, working on programming my mind into an android robot, three years into the five-year Project Andros, and 50 other advanced projects that are cutting-edge. I am also a software engineer. Just wanted to make that clear, because many comments above attack the author unless they're in AI or an engineer. I have defined Sentience for what I need because I found the standard definition unsatisfactory…
  • There are already laws that handle this.
    If one robot harms another robot, the owner of the damaged robot will sue for damages. People will want to buy robots that aren't a liability, so engineers will work safety features into the system. Insurers will not want to insure dangerous robots, so robots with good safety records will cost less to insure.

    Amazing how this stuff works!

  • 1. A Roomba may not injure the cat or, by bumping open the patio screen, allow the cat to escape outside and be killed;
    2. A Roomba must behave rationally when swatted by the cat, except when such action would conflict with the First Law;
    3. A Roomba must remain plugged in until it finishes charging, except where this would conflict with the First or Second Law.

  • The ingenious webcomic Freefall is currently all about the problems of robot and AI interactions. Funny and wise, it offers an array of suggestions and ideas about which kinds of thinking can be useful or risky. It is worth reading from the beginning. Link to current: http://freefall.purrsia.com/de... [purrsia.com] Link to first: http://freefall.purrsia.com/ff... [purrsia.com]
