Robotics

How Should the Law Think About Robots?

An anonymous reader writes "With the personal robotics revolution imminent, a law professor and a roboticist (called Professor Smart!) argue that the law needs to think about robots properly. In particular, they say we should avoid 'the Android Fallacy' — the idea that robots are just like us, only synthetic. 'Even in research labs, cameras are described as "eyes," robots are "scared" of obstacles, and they need to "think" about what to do next. This projection of human attributes is dangerous when trying to design legislation for robots. Robots are, and for many years will remain, tools. ... As the autonomy of the system increases, it becomes harder and harder to form the connection between the inputs (your commands) and the outputs (the robot's behavior), but it exists, and is deterministic. The same set of inputs will generate the same set of outputs every time. The problem, however, is that the robot will never see exactly the same input twice. ... The problem is that this different behavior in apparently similar situations can be interpreted as "free will" or agency on the part of the robot. While this mental agency is part of our definition of a robot, it is vital for us to remember what is causing this agency. Members of the general public might not know, or even care, but we must always keep it in mind when designing legislation. Failure to do so might lead us to design legislation based on the form of a robot, and not the function. This would be a grave mistake."
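The summary's determinism claim can be made concrete with a minimal sketch (a hypothetical obstacle-avoidance policy invented for illustration, not anything from the article): the input-to-output mapping is strictly deterministic, yet behavior differs across apparently similar situations because the inputs are never exactly the same twice.

```python
# Hypothetical deterministic policy illustrating the summary's point.

def avoid_obstacle(distance_cm: float) -> str:
    """The same input always produces the same output."""
    return "turn_left" if distance_cm < 30.0 else "go_straight"

# Identical inputs generate identical outputs -- every time.
assert avoid_obstacle(29.999) == avoid_obstacle(29.999)

# But two readings of the "same" scene are never bit-identical, and a
# tiny input difference can flip the behavior entirely:
print(avoid_obstacle(29.999))  # turn_left
print(avoid_obstacle(30.001))  # go_straight
```
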
  • by Anonymous Coward on Friday May 10, 2013 @06:43PM (#43690063)

    "With the personal robotics revolution imminent..."

    Imminent? Really? Sorry, but TFA has been watching too many SyFy marathons.

  • deterministic (Score:5, Insightful)

    by dmbasso ( 1052166 ) on Friday May 10, 2013 @06:45PM (#43690079)

    The same set of inputs will generate the same set of outputs every time.

    Yep, that's how humans work. Anybody who has had the chance to observe a patient with long-term memory impairment knows that.

    • by Ichijo ( 607641 )

      The same set of inputs will generate the same set of outputs every time.

      That isn't exactly true. Analog-to-digital converters, true random number generators, fluctuations in the power supply, RF fields, cosmic rays and so on mean that in real life, the same set of inputs won't always generate the same set of outputs, whether in androids or in their meaty analogs.
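
      To make that concrete, here is a toy simulation (all names and numbers invented for illustration): the control law is perfectly deterministic, but with a noisy ADC between the world and the controller, the end-to-end system is no longer repeatable.

      ```python
      # Toy model: a deterministic policy behind a simulated noisy 10-bit ADC.
      import random

      def adc_read(true_voltage: float) -> int:
          """Add thermal/RF noise, then quantize to a 0..1023 code (5 V full scale)."""
          noisy = true_voltage + random.gauss(0.0, 0.005)
          return max(0, min(1023, round(noisy / 5.0 * 1023)))

      def policy(code: int) -> str:
          """Deterministic threshold on the digitized reading."""
          return "stop" if code >= 512 else "go"

      # The same physical input, measured 100 times, yields both behaviors:
      print({policy(adc_read(2.5)) for _ in range(100)})  # likely {'go', 'stop'}
      ```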

    • Re: (Score:3, Insightful)

      by nathan s ( 719490 )

      I was hoping someone would make this comment - I fully agree. It seems pretty arrogant to presume that there is no connection between stimuli and behavior just because we are so ignorant of our own internal mechanisms that we cannot trace it. But I understand that a lot of people like to feel that we are qualitatively "different" and invoke free will and all of these things to maintain a sense that we have a moral right to consider ourselves superior to other forms of life, whatever th

      • by narcc ( 412956 )

        I find it funny that people are proud of the fact that they don't believe in free will -- as if they believed they had anything to do with it! So proud, in fact, that they brag about how superior they are for coming to such a conclusion, even though they claim it was well outside their nonexistent influence!

        In a bizarre contradiction, they take credit for all their accomplishments and the cultivation of their positive traits and beliefs even though they claim to believe they were involved only as a passive observer.

        • Re:deterministic (Score:5, Insightful)

          by Your.Master ( 1088569 ) on Saturday May 11, 2013 @12:47AM (#43692625)

          You did have a choice, and you did write it. Determinism doesn't mean you didn't have a choice.

          It means if we take our time machine back to before you posted this, and watched you do it again, without doing anything that could possibly influence your decision directly or indirectly, we'd observe you making the same choice. Over and over. Right up until we accidentally influenced you by stepping on that butterfly that caused that hurricane that caused the news crew to pre-empt that episode of the show you were recording last week that made you give up and go to sleep a bit earlier which made you less tired today which allowed you to consider the consequences more thoroughly and make the opposite choice. But until then, you're given the same inputs, and you're making the same choice. Every time.

          Why is it that people seem proud of the idea that their choices are not based on their experience, learning, and environment? In other words, why is choice more meaningful if it's random and causeless? Why is it more valid to take credit for your random actions than your considered actions? I would think people would be more proud of the things they demonstrate they can do repeatably than of the things that, for all we know, rely on them rolling a natural 20, as it were.

  • Autonomous cars need their own set of laws, maybe even full coverage for anyone hurt.

  • And that is the fallacy of the three laws as written by Asimov -- he was a biophysicist, not a binary mathematician.

    The three laws are too vague. They really are guidelines for designers, not something that can be built into the firmware of a current robot. Even a net-connected one would need far too much processing time to make the kinds of split-second decisions about human anatomy and the world around it needed to fulfill the three laws.

    • by ShanghaiBill ( 739463 ) * on Friday May 10, 2013 @06:53PM (#43690155)

      The three laws are too vague. They really are guidelines for designers

      The "three laws" were a plot device for a science fiction novel, and nothing more. There is no reason to expect them to apply to real robots.

      • Very true. But rather redundant to my point, don't you think?

        I believe I read somewhere your exact point -- oh yeah, it was in the commentary in the book "The Early Asimov Volume 1" -- a writing textbook by the author, pointing out that his real purpose in inventing the three laws was to make them vague enough to have easy short stories to sell to magazines.

    • Actually I thought Asimov was a chemist. Any physicist should have realized that with that many positrons, instead of electrons, flying around their brains, the first law would have required every robot to immediately shut down due to the radiation hazard it posed.
      • In the days when he was writing, radiation hazards were practically unknown, even among scientists, even after Madame Curie died of one.

      • Maybe the "positrons" are actually holes in an electron sea --- and "positronic brain" just scored higher with U.S. Robotic's marketing focus group than "holey synthmind".

        • Maybe the "positrons" are actually holes in an electron sea

          That was Dirac's interpretation of them at the time Asimov started writing the stories, since Feynman had not yet come along to improve on it. However, even with the Dirac interpretation of positrons, it was still known that they annihilate with electrons to produce dangerous gamma rays.

          • A free-space electron-positron annihilation will release two 511 keV gammas, but a hole in the valence band of a semiconductor can annihilate with a conduction-band electron with considerably less energy release.

            Yes, I know Asimov wasn't trying to accurately describe a real technology when coining the term "positronic brain," and wouldn't have been considering solid state electronics design in the 1940s.

  • deterministic? (Score:5, Insightful)

    by Anonymous Coward on Friday May 10, 2013 @06:49PM (#43690123)

    Robots do not have deterministic output based on your commands. First of all, they have sensor noise as well as environmental noise. Your commands are not the only input. They also have hidden state, which includes flaws (both hardware and software) arising from design, manufacturing, and wear.

    While this point is obvious, it is also important: someone attempting to control a robot, even if they know exactly how it works, and are perfect, can still fail to predict and control the robot's actions. This is often the case (minus the perfection of the operator) in car crashes: hidden flaws or environmental factors cause the crash. Where does the blame rest here? It depends on lots of things. The same legal quandary facing advanced robots already applies to car crashes, weapon malfunctions, and all other kinds of equipment problems. Nothing new here.

    Also, if you are going to make the point that "This projection of human attributes is dangerous when trying to design legislation for robots.", please don't also ask "How Should the Law Think About Robots?". I don't want the Law to Think. That's a dangerous projection of human attributes!

  • Freedom is the right of all sentient beings. Legislate based on the criterion of self-awareness, or the animal equivalent if near-sentient. Problem solved.

    • by postbigbang ( 761081 ) on Friday May 10, 2013 @07:04PM (#43690265)

      Self-awareness is wonderful. But the criteria for judging it are as muddy as deciding when life begins for purposes of abortion.

      Robots are chattel. They can be bought and sold. They do not reproduce in the sense of "life". They could reproduce. Then they'd run out of resources after doing strange things with their environment, like we do. Dependencies then are the crux of ownership.

      Robots follow instructions that react to their environment, subject to, as mentioned above, the random elements of the universe. I believe that their programmers are responsible for their behavior until they do pass a self-awareness and responsibility test. Then they're autonomous of their programmer. If you program machine gun bots for armies, then you'd better hope the army is doing the "right" thing, which I believe is impossible with such bots.

      Once that environmental autonomy is achieved, they get rights associated with sentient responsible beings. Until then: chattel.

      • by mysidia ( 191772 )

        I believe that their programmers are responsible for their behavior until they do pass a self-awareness and responsibility test.

        If the programmer makes a robot with psychopathic tendencies that happens to be destined to become a killing machine eventually, I don't think the programmer should be absolved just because the robot is subsequently able to pass a self-awareness and responsibility test.

        The programmer must bear responsibility for anything they knowingly did with a malicious intent that can be e

      • by Twinbee ( 767046 )

        I believe that their programmers are responsible for their behavior until they do pass a self-awareness and responsibility test. Then they're autonomous of their programmer.

        Even if the robots do pass this hypothetical test, that would only make them *appear* to be sentient, self-aware, or conscious. I still doubt robots would be able to feel anything, such as experiencing the colour green or the smell of coconut the way we do. That then raises the question: what makes us different from them?

        In these kinds of discussions, people will fall over themselves giving reasons why we'd still be above these hyper-intelligent robots, whilst trying to avoid any mention of the 'soul'.

        • You say "above", like pecking order. My survival instincts say, not gonna happen. I do not welcome my robotic overlords.

          Hyper-intelligence and collective intelligence might be useful and might not. See plentiful science fiction for possible outcomes.

          Let's remove the hocus pocus "soul" word, because much as you'll try, you won't define it and that won't satisfy anyone. The Turing Test is but one of many ways to attempt this. We'll figure it out. Until then: chattel.

    • What legislated criterion for self-awareness would you propose that could not trivially be achieved by a system intentionally designed to do so? A bit of readily-available image recognition software, and I can make a computer that will "pass" the mirror test [wikipedia.org]. I suspect a fancy system like IBM's "Watson" could be configured to generate reasonably plausible "answers" to self-awareness test questions, at least with a level of coherency above that of lower-IQ humans.
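
      As a deliberately trivial sketch of that point (every name here is hypothetical), a mirror-test "pass" can reduce to a hard-coded self-model plus a color match, with nothing resembling awareness involved:

      ```python
      # Trivial mirror-test "pass": a hard-coded self-model, no awareness needed.

      class MirrorTestBot:
          def __init__(self, own_marker_color: str):
              self.own_marker_color = own_marker_color  # the entire "self-model"

          def react_to_mirror(self, observed_marker_color: str) -> str:
              # The classic test: does the subject direct behavior at its own
              # body rather than at the "stranger" in the glass?
              if observed_marker_color == self.own_marker_color:
                  return "touch_own_forehead"   # scored as passing
              return "inspect_the_stranger"

      print(MirrorTestBot("red").react_to_mirror("red"))  # touch_own_forehead
      ```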

      • by mysidia ( 191772 )

        The criteria I would suggest would be...

        Expression of preferences: likes, dislikes, annoyances, opinions, and desires; a tendency to prefer certain things or certain kinds of actions or behaviors, and to express what those are. Test the ability to make decisions with incomplete information and to rationalize those decisions after the fact; then have the subjects explain their judgements, opinions, biases, and reasons for their decisions in writing, and compare their performance on judgement tests to that of humans.

        Judges unaware of the humanness of each subject would then score the responses.

        • Your tests appear to be strongly centered on a specifically "human-centric" --- and even distinctly culturally biased --- definition of "self awareness." If you just want to test that someone is human, you can have them come in for blood tests and an MRI. Perhaps your "self-awareness" test is too narrow --- I think even a lot of humans would fail --- to be sensible for evaluating "sentience" in non-human beings? Let's consider some of the particular points of your test, keeping in mind how a "machine imposter" might be designed to game each one.

    • There are no rights, natural or otherwise, only what we collectively decide, and only insofar as the powers that be haven't yet made their exercise illegal or subject to licensing. Inroads of the latter sort are continuing (cf. free assembly, for instance).

      Rights as you speak of are only so if we are willing to fight* for them if need be. That's how we have them now, anyway.

      *This need not be literal or extreme by any stretch; it might mean little more than greater collective involvement in local politics.

  • Don't (Score:5, Funny)

    by magarity ( 164372 ) on Friday May 10, 2013 @06:51PM (#43690143)

    anthropomorphize computers. It makes them angry.

  • by macraig ( 621737 ) <mark@a@craig.gmail@com> on Friday May 10, 2013 @06:57PM (#43690195)

    Laws and guns are both tools... they don't think and don't murder.

  • Minor copy edit: (Score:5, Insightful)

    by Alsee ( 515537 ) on Friday May 10, 2013 @06:59PM (#43690203) Homepage

    As the autonomy of the system increases, it becomes harder and harder to form the connection between the inputs (your senses) and the outputs (your behavior), but it exists, and is deterministic. The same set of inputs will generate the same set of outputs every time. The problem, however, is that the person will never see exactly the same input twice. ... The problem is that this different behavior in apparently similar situations can be interpreted as "free will" or agency on the part of the person. While this mental agency is part of our definition of a person, it is vital for us to remember what is causing this agency.

    -

  • Law of the Robot? (Score:5, Informative)

    by Theaetetus ( 590071 ) <theaetetus@slashdot.gmail@com> on Friday May 10, 2013 @07:00PM (#43690215) Homepage Journal
    The 7th Circuit's Judge Easterbrook used the phrase "law of the horse [wikipedia.org]" in a discussion about cyberlaw back in 1996, the idea being that there need not be specialized areas of law for different circumstances: we don't need a specialized "tort law of the horse" to cover when someone is kicked by a horse; current tort law applies. Similarly, we don't need specialized "contract law of the horse" to cover sales of horses; contract law already applies. Likewise, goes the argument, we don't need a tort law of cyberspace, or contract law of cyberspace.

    Similarly, we don't need a specialized law of the robot: "Robots are, and for many years will remain, tools," and the law already covers uses of tools (e.g. machines, such as cars) in committing torts (such as hit and run accidents).

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Friday May 10, 2013 @07:08PM (#43690299)
    Comment removed based on user account deletion
  • Except the one to become a lawyer.

  • I've got a neural network system that has silicon neurons with sigmoid functions that operate in analog. They're not digital. Digital basically means you round such signals to 1 or 0, but my system's activation levels vary due to heat dissipation and other effects. In a complex system like this, quantum uncertainty comes into play, especially when the system is observing the real world... Not all robots are deterministic. I train these systems as I would any other creature with a brain, and I can then rely on them to perform their training about as well as I can trust my dog to bring me my slippers or my cat to use the toilet and flush, which is to say: they're pretty reliable, but not always 100% predictable, like any other living thing. However, unlike a pet, which has a fixed-size brain, I can arrange networks of neural networks in a somewhat fractal pattern to increase complexity and expand the mind without having to retrain the entire thing each time the structure changes.
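
    For flavor, here is a rough software analogue (purely illustrative; the parent describes analog silicon, not code) of a sigmoid unit whose activation drifts with a thermal-noise term, so it is not bit-for-bit repeatable:

    ```python
    # Illustrative noisy sigmoid unit; the noise term stands in for heat
    # dissipation, RF pickup, and supply fluctuation in an analog circuit.
    import math
    import random

    def noisy_sigmoid(inputs, weights, noise_std=0.01):
        z = sum(x * w for x, w in zip(inputs, weights))
        z += random.gauss(0.0, noise_std)  # analog drift, never rounded away
        return 1.0 / (1.0 + math.exp(-z))  # activation stays analog

    print(noisy_sigmoid([0.5, -0.2, 0.8], [1.0, 0.7, -0.3]))  # varies per call
    ```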

    FYI: I'm on the robots' and cyborgs' side of the war already, if it comes to that. What with us being able to ever more clearly image the brain, [youtube.com] and with good approximations for neuron activity, and faster and faster machines, I think we'll certainly have near-sentient, or sentient, machine intelligences rather soon. Also, you can just use real living brain cells hooked up to a robotic chassis -- such a cyborg is certainly alive. [youtube.com] Anyone who doubts cybernetic systems can have fear, or any other emotion, is simply an ignorant racist. I have a dog that's deathly afraid of lightning; lightning struck the window of a room she was in. It rattled her so badly that she now takes Valium to calm down when it rains... Hell, even rats have empathy. [nytimes.com]

    I have to remote log into one of my machine intelligence's systems to turn it off for backup / maintenance because it started acting erratically, creating a frenzy of responses for seemingly no reason, when I'd sit in the chair near its server terminal -- imagine being that neural network system: having several web cams as your visual sensors, watching a man sit in a chair, and then instantly the lighting has changed, the sea of information you monitor on the Internet has been populated with fresh new data, and even the man's clothes have changed. This traumatic event happened often enough that the machine intellect began essentially anticipating it whenever I sat at the terminal, that being the primary thing that would happen when I did sit there. It was shaken, almost as badly as my poor dog who's scared of lightning... You may not call it fear, but what is an irrational response in anticipation of trauma but fear?

    Any sufficiently complex interaction is indistinguishable from sentience, because that's what sentience IS. Human brains are electro-chemical cybernetic systems. Robots are made out of matter, just like you. Their minds operate on cycles of electricity; gee, that's what a "brain wave" is in your head too... You're more alike than different. A dog, cat, or rat is not less alive than you just because it has a less complex brain. They may have less intelligence, and that is why we don't treat them the same as humans... However, what if a hive mind of rat-brain robots having multiple times the neurons of any single human wanted to vote and be called a person, and exhibited other traits a person might: "Yess massta, I-iz just wanna learn my letters and own land too," it might say, mocking you for your ignorance. Having only a fraction of its brain power, you and the bloke in TFA would both be simple mindless automatons from its vantage point. Would it really be more of a person than you are? Just because it has a bigger, more complex brain by comparison, would that make you less of a person than it? Should such things have more rights than you?

  • Robots are your plastic pal who's fun to be with. Who needs laws?
  • People really need to see past any autonomous abilities of a machine. If I am driving down the street and my car's steering goes mad and I run someone over, the criminal courts will probably forgive me. There should be no difference if I have sent my robot car off on an errand and it runs someone over. Every scenario applies in both cases. If in both cases I was negligent about maintenance, then I might be in criminal trouble. If it were deliberate, I am definitely in trouble.

    I personally find all this nitpicking tiresome.
  • Make its owner responsible for the robot.

  • I'm a sci-fi writer, and I've thought about this a fair bit. Book two in the Lacuna series deals with a self-aware construct who is different from his peers because of a tiny error. His inputs and outputs are therefore non-deterministic, insofar as you could present him with a set of inputs and record his outputs, then erase his memory and give him the same inputs again, and his outputs would be (subtly) different. Or they might not be. The error was subtle enough to evade detection during manufacturing, after all.
