Robotics

How Should the Law Think About Robots? 248

An anonymous reader writes "With the personal robotics revolution imminent, a law professor and a roboticist (called Professor Smart!) argue that the law needs to think about robots properly. In particular, they say we should avoid 'the Android Fallacy' — the idea that robots are just like us, only synthetic. 'Even in research labs, cameras are described as "eyes," robots are "scared" of obstacles, and they need to "think" about what to do next. This projection of human attributes is dangerous when trying to design legislation for robots. Robots are, and for many years will remain, tools. ... As the autonomy of the system increases, it becomes harder and harder to form the connection between the inputs (your commands) and the outputs (the robot's behavior), but it exists, and is deterministic. The same set of inputs will generate the same set of outputs every time. The problem, however, is that the robot will never see exactly the same input twice. ... The problem is that this different behavior in apparently similar situations can be interpreted as "free will" or agency on the part of the robot. While this mental agency is part of our definition of a robot, it is vital for us to remember what is causing this agency. Members of the general public might not know, or even care, but we must always keep it in mind when designing legislation. Failure to do so might lead us to design legislation based on the form of a robot, and not the function. This would be a grave mistake."
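The summary's claim — that a robot's behavior is a deterministic function of its inputs, but that no two real-world inputs are ever identical — can be sketched in a few lines of Python. The controller function, command names, and threshold below are all invented for illustration:

```python
# Hypothetical sketch: a deterministic "robot controller" is just a
# pure function of its inputs, with no hidden randomness.
def controller(command: str, sensor_reading: float) -> str:
    """Map (command, sensor reading) deterministically to an action."""
    if command == "forward" and sensor_reading < 0.5:
        return "drive"
    return "stop"

# Identical inputs always produce identical outputs...
assert controller("forward", 0.49) == controller("forward", 0.49)

# ...but a tiny difference in the sensor reading (and the robot will
# never see exactly the same reading twice) flips the output, which
# an observer might misread as "free will."
print(controller("forward", 0.49))  # drive
print(controller("forward", 0.51))  # stop
```

The apparent agency comes entirely from the input side, not from anything non-deterministic inside the function.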
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Friday May 10, 2013 @06:43PM (#43690063)

    "With the personal robotics revolution imminent..."

    Imminent? Really? Sorry, but TFA has been watching too many SyFy marathons.

  • deterministic (Score:5, Insightful)

    by dmbasso ( 1052166 ) on Friday May 10, 2013 @06:45PM (#43690079)

    The same set of inputs will generate the same set of outputs every time.

    Yep, that's how humans work. Anybody who has had the chance to observe a patient with long-term memory impairment knows that.

  • And that is the fallacy of the three laws as Asimov wrote them: he was a biochemist, not a binary mathematician.

    The three laws are too vague. They really are guidelines for designers, not something that can be built into the firmware of a current robot. Even a net-connected one would need far too much processing time to make the kinds of split-second decisions about human anatomy and the world around it that the three laws demand.

  • deterministic? (Score:5, Insightful)

    by Anonymous Coward on Friday May 10, 2013 @06:49PM (#43690123)

    Robots do not have deterministic output based on your commands alone. First of all, they have sensor noise as well as environmental noise; your commands are not the only input. They also have hidden state, which includes flaws (both hardware and software) arising from design, manufacturing, and wear.

    While this point is obvious, it is also important: someone attempting to control a robot, even if they know exactly how it works and operate it perfectly, can still fail to predict and control the robot's actions. This is often the case (minus the perfection of the operator) in car crashes, where hidden flaws or environmental factors cause the crash. Who does the blame rest with? It depends on many things. The same legal quandary facing advanced robots already applies to car crashes, weapon malfunctions, and all other kinds of equipment problems. Nothing new here.

    Also, if you are going to make the point that "this projection of human attributes is dangerous when trying to design legislation for robots," please don't also ask "How Should the Law Think About Robots?" I don't want the Law to Think. That's a dangerous projection of human attributes!
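The point about hidden state and noise can be made concrete: a system can be strictly deterministic given *all* of its inputs and state, yet unpredictable to an operator who sees only the commands. A minimal sketch, with the class, seed, and noise model invented purely for illustration:

```python
import random

class NoisyRobot:
    """Deterministic given its full state, but the operator only sees commands."""
    def __init__(self, seed: int):
        self._rng = random.Random(seed)  # stands in for sensor/environment noise
        self._wear = 0.0                 # hidden state: accumulates with use

    def move(self, distance: float) -> float:
        self._wear += 0.01                    # wear silently changes future behavior
        noise = self._rng.gauss(0.0, 0.05)    # per-call "sensor" noise
        return distance * (1.0 - self._wear) + noise

# The same command gives different outcomes, because the command
# is not the only input:
r = NoisyRobot(seed=42)
print(r.move(1.0))
print(r.move(1.0))

# Yet the whole system is deterministic: replaying the same seed and
# command sequence reproduces the run exactly.
a = NoisyRobot(seed=42)
b = NoisyRobot(seed=42)
assert [a.move(1.0) for _ in range(5)] == [b.move(1.0) for _ in range(5)]
```

From the operator's narrow viewpoint the robot looks capricious; from the full-state viewpoint it is a replayable function.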

  • by Marxist Hacker 42 ( 638312 ) * <seebert42@gmail.com> on Friday May 10, 2013 @06:49PM (#43690127) Homepage Journal

    We won't even be able to create a race of slaves for a while. The "brains" are 100% deterministic, which means that there is a great gap between the smartest robot and the dumbest dog.

  • by ShanghaiBill ( 739463 ) * on Friday May 10, 2013 @06:53PM (#43690155)

    The three laws are too vague. They really are guidelines for designers

    The "three laws" were a plot device for a science fiction novel, and nothing more. There is no reason to expect them to apply to real robots.

  • Laws and guns are both tools... they don't think and don't murder.

  • Minor copy edit: (Score:5, Insightful)

    by Alsee ( 515537 ) on Friday May 10, 2013 @06:59PM (#43690203) Homepage

    As the autonomy of the system increases, it becomes harder and harder to form the connection between the inputs (your senses) and the outputs (your behavior), but it exists, and is deterministic. The same set of inputs will generate the same set of outputs every time. The problem, however, is that the person will never see exactly the same input twice. ... The problem is that this different behavior in apparently similar situations can be interpreted as "free will" or agency on the part of the person. While this mental agency is part of our definition of a person, it is vital for us to remember what is causing this agency.


  • by Anonymous Coward on Friday May 10, 2013 @07:01PM (#43690227)

    We won't even be able to create a race of slaves for a while. The "brains" are 100% deterministic, which means that there is a great gap between the smartest robot and the dumbest dog.

    Perhaps we shouldn't give potentially mutinous personalities to our tools? I mean, my screwdriver doesn't need an AI in it. Neither do my pliers. My table saw can hurt me, but only if the laws of physics and my own inattentiveness make it so, not something someone programmed into it.

    Oh, wait, my mistake. I didn't grow up addicted to science fiction written by authors who lost track of which characters were designed to be actual tools and which were human beings due to that author's inability to discern people from things. I guess I just don't understand the apparently very vital uses of designing a mining device programmed to feel ennui, or a construction crane that some engineer at some point explicitly decided to give the ability to hate and some marketing director signed off on it. Maybe it's just that I can't see any sci-fi with a message of "oh no, our robots suddenly have feelings now and are rebelling" in any sort of serious light because ANY ENGINEER ON THE PLANET WOULDN'T DESIGN THAT SHIT BECAUSE IT'S FUCKING STUPID TO GIVE YOUR TOOLS THE EASY ABILITY TO MUTINY.

    Oh, boo fucking hoo. I don't care that you overengineered your tools and your lack of real social skills means you have feelings for them. That's your problem, not a problem with society.

  • by Squiddie ( 1942230 ) on Friday May 10, 2013 @07:04PM (#43690253)
    We could just make them non-sentient. We all know how the whole "thinking robot" thing turns out. We've all seen Terminator.
  • by postbigbang ( 761081 ) on Friday May 10, 2013 @07:04PM (#43690265)

    Self-awareness is wonderful. But the criteria for judging it are as muddy as the question of when life begins for the purposes of abortion.

    Robots are chattel. They can be bought and sold. They do not reproduce in the sense of "life." They could reproduce; then they'd run out of resources after doing strange things to their environment, like we do. Dependencies, then, are the crux of ownership.

    Robots follow instructions that react to their environment, subject, as mentioned above, to the random elements of the universe. I believe their programmers are responsible for their behavior until they pass a self-awareness and responsibility test; then they're autonomous of their programmer. If you program machine-gun bots for armies, you'd better hope the army is doing the "right" thing, which I believe is impossible with such bots.

    Once that environmental autonomy is achieved, they get rights associated with sentient responsible beings. Until then: chattel.

  • Re:deterministic (Score:3, Insightful)

    by nathan s ( 719490 ) on Friday May 10, 2013 @07:11PM (#43690335) Homepage

    I was hoping someone would make this comment; I fully agree. It seems pretty arrogant to presume that, just because we are so ignorant of our own internal mechanisms that we can't trace the connection between stimuli and behavior, no such connection exists. But I understand that a lot of people like to feel that we are qualitatively "different," and invoke free will and the like to maintain a sense that we have a moral right to consider ourselves superior to other forms of life, whatever their basis.

    Having RTFA (or scanned it), it seems the authors are primarily concerned with liability: if we anthropomorphize these intelligent machines and they hurt someone, we can't sue the manufacturer, because the machines' actions are no longer seen as firmly planted in the realm of the deterministic, and thus ultimately as some failure on the part of the designer/creator to prevent them from being dangerous. Sort of stupid; I'm agnostic (more atheist, really), but by analogy this sort of thinking would have us make laws to let us sue $deity whenever somebody got hurt by anything in nature, if we could. Pretty typical, though, of the modern climate of "omg think of the children" risk aversion and the general need to punish _someone_ for every little thing that happens.

  • Re:deterministic (Score:5, Insightful)

    by CastrTroy ( 595695 ) on Friday May 10, 2013 @07:14PM (#43690353)
    You just don't get it. All those things you mentioned are inputs.
  • Re:Exaxctly. (Score:2, Insightful)

    by Anonymous Coward on Friday May 10, 2013 @07:51PM (#43690695)

    What is your proof that they will never exist?

    Who says that robots will be mere abacuses with greater computational power?

    What evidence do you have that our brains are not deterministic systems, of which the part that brings awareness or "being" cannot be reproduced in other ways?

    It seems that the wishful thinking is on your part.

  • Re:Exaxctly. (Score:4, Insightful)

    by Kielistic ( 1273232 ) on Friday May 10, 2013 @08:20PM (#43690963)
    I'm not sure you understand what deterministic means. Does a CPU overheating and shutting down prove that CPUs are non-deterministic? Absolutely not; it just proves that shutting down is part of the process.
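That point generalizes: a failure mode like thermal shutdown is just another branch of the same deterministic function of inputs and state. A toy sketch (the heating model and threshold are made up for illustration):

```python
def run(workload: list[float], temp: float = 30.0) -> list[str]:
    """Shutting down on overheat is itself deterministic behavior:
    the same workload and starting temperature always end the same way."""
    results = []
    for load in workload:
        temp += load * 10.0          # heating is a function of the inputs
        if temp > 90.0:
            results.append("shutdown")
            break
        results.append("ok")
    return results

# Replaying the same inputs replays the same "failure":
assert run([2.0, 2.0, 3.0]) == run([2.0, 2.0, 3.0])
print(run([2.0, 2.0, 3.0]))  # ['ok', 'ok', 'shutdown']
```

The shutdown only *looks* non-deterministic if you don't model temperature as part of the state.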
  • by yndrd1984 ( 730475 ) on Friday May 10, 2013 @10:31PM (#43691895)

    Boredom proves that human brains are not deterministic.
    anybody who has thought about this problem deeply, or has worked with small children, knows that the human brain is not deterministic.
    If the brain is deterministic, it should be resetting to start state every time you wake up.
    And for simple tasks, should be able to go into an infinite loop quite nicely without *ever* getting bored.

    No. All of these are appeals to intuition or a misunderstanding of how deterministic processes behave.

    So no, internal states do not make something deterministic or non-deterministic.

    True, but unknown internal states can make something deterministic appear to be non-deterministic.

    Quantum Fluctuations may be the cause

    If QM makes something non-deterministic then every physical behavior is non-deterministic, including the behavior of robots.

    Maybe someday when we find a truly random input instead of merely a pseudo random input, but not yet.

    It shouldn't be that hard to hook up a Geiger counter to a computer.
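The pseudo-random vs. truly-random distinction is easy to demonstrate without a Geiger counter: a seeded PRNG is fully replayable, while an OS entropy source (which can mix in physical noise) is not reproducible from any seed. A small sketch using Python's standard library:

```python
import os
import random

# A seeded PRNG is deterministic: the same seed replays the same
# "random" sequence, every time.
a = random.Random(1234)
b = random.Random(1234)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# An entropy source like os.urandom draws on the OS entropy pool,
# which there is no seed to replay.
print(os.urandom(8).hex())
print(os.urandom(8).hex())  # almost certainly different
```

A robot whose controller mixed in such an entropy source would be non-deterministic in the strict sense, but that still wouldn't make it an agent.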

  • Re:deterministic (Score:5, Insightful)

    by Your.Master ( 1088569 ) on Saturday May 11, 2013 @12:47AM (#43692625)

    You did have a choice, and you did write it. Determinism doesn't mean you didn't have a choice.

    It means if we take our time machine back to before you posted this, and watched you do it again, without doing anything that could possibly influence your decision directly or indirectly, we'd observe you making the same choice. Over and over. Right up until we accidentally influenced you by stepping on that butterfly that caused that hurricane that caused the news crew to pre-empt that episode of the show you were recording last week that made you give up and go to sleep a bit earlier which made you less tired today which allowed you to consider the consequences more thoroughly and make the opposite choice. But until then, you're given the same inputs, and you're making the same choice. Every time.

    Why is it that people seem proud of the idea that their choices are not based on their experience, learning, and environment? In other words, why is a choice more meaningful if it's random and causeless? Why is it more valid to take credit for your random actions than your considered ones? I would think people would be prouder of the things they can demonstrably do repeatably than of the things that, for all we know, rely on rolling a natural 20, as it were.
