How Should the Law Think About Robots? 248
An anonymous reader writes "With the personal robotics revolution imminent, a law professor and a roboticist (called Professor Smart!) argue that the law needs to think about robots properly. In particular, they say we should avoid 'the Android Fallacy' — the idea that robots are just like us, only synthetic. 'Even in research labs, cameras are described as "eyes," robots are "scared" of obstacles, and they need to "think" about what to do next. This projection of human attributes is dangerous when trying to design legislation for robots. Robots are, and for many years will remain, tools. ... As the autonomy of the system increases, it becomes harder and harder to form the connection between the inputs (your commands) and the outputs (the robot's behavior), but it exists, and is deterministic. The same set of inputs will generate the same set of outputs every time. The problem, however, is that the robot will never see exactly the same input twice. ... The problem is that this different behavior in apparently similar situations can be interpreted as "free will" or agency on the part of the robot. While this mental agency is part of our definition of a robot, it is vital for us to remember what is causing this agency. Members of the general public might not know, or even care, but we must always keep it in mind when designing legislation. Failure to do so might lead us to design legislation based on the form of a robot, and not the function. This would be a grave mistake."
All I needed to read... (Score:3, Insightful)
"With the personal robotics revolution imminent..."
Imminent? Really? Sorry, but TFA has been watching too many SyFy marathons.
deterministic (Score:5, Insightful)
The same set of inputs will generate the same set of outputs every time.
Yep, that's how humans work too. Anybody who has had the chance to observe a patient with long-term memory impairment knows that.
The fallacy of the three laws (Score:4, Insightful)
And that is the fallacy of the three laws as written by Asimov- he was a biophysicist, not a binary mathematician.
The three laws are too vague. They really are guidelines for designers, not something that can be built into the firmware of a current robot. Even a net-connected one would need far too much processing time to make the kinds of split-second decisions about human anatomy and the world around it that the three laws demand.
deterministic? (Score:5, Insightful)
Robots do not have deterministic output based on your commands. First of all, they have sensor noise as well as environmental noise; your commands are not the only input. They also have hidden state, which includes flaws (both hardware and software) arising from design, manufacturing, and wear.
While this point is obvious, it is also important: someone attempting to control a robot, even if they know exactly how it works and operate it perfectly, can still fail to predict and control the robot's actions. This is often the case (minus the perfection of the operator) in car crashes, where hidden flaws or environmental factors cause the crash. Who does the blame rest with here? It depends on lots of things. The same legal quandary facing advanced robots already applies to car crashes, weapon malfunctions, and all other kinds of equipment problems. Nothing new here.
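The hidden-state point can be sketched in a few lines. In this toy model (all names are hypothetical, not from any real robotics API), two runs of the same fully deterministic controller receive identical commands, but a couple of centimeters of sensor noise on one reading is enough to send the runs down different paths:

```python
# Toy illustration: a fully deterministic controller still produces
# different behavior when its sensor input differs by a tiny amount.
# All names here are hypothetical.

def controller(command, sensor_reading):
    """Deterministic rule: veer away if an obstacle looks close."""
    if sensor_reading < 1.0:   # obstacle appears nearer than 1 m
        return "turn_left"
    return command             # otherwise follow the operator's command

commands = ["forward", "forward", "forward"]

# Run 1: the sensor reports the obstacle at 1.01 m every step.
run1 = [controller(c, 1.01) for c in commands]

# Run 2: identical commands, but noise shaves 2 cm off one reading.
run2 = [controller(c, r) for c, r in zip(commands, [1.01, 0.99, 1.01])]

print(run1)  # ['forward', 'forward', 'forward']
print(run2)  # ['forward', 'turn_left', 'forward']
```

Same commands, same code, different trajectory: to an observer who can't see the noisy reading, run 2 looks like the robot "decided" to do something else.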
Also, if you are going to make the point that "This projection of human attributes is dangerous when trying to design legislation for robots.", please don't also ask "How Should the Law Think About Robots?". I don't want the Law to Think. That's a dangerous projection of human attributes!
Re:A race of slaves (Score:3, Insightful)
We won't even be able to create a race of slaves for a while. The "brains" are 100% deterministic, which means that there is a great gap between the smartest robot and the dumbest dog.
Re:The fallacy of the three laws (Score:5, Insightful)
The three laws are too vague. They really are guidelines for designers
The "three laws" were a plot device for a science fiction novel, and nothing more. There is no reason to expect them to apply to real robots.
The Law Doesn't Think, People Do. (Score:4, Insightful)
Laws and guns are both tools... they don't think and don't murder.
Minor copy edit: (Score:5, Insightful)
As the autonomy of the system increases, it becomes harder and harder to form the connection between the inputs (your senses) and the outputs (your behavior), but it exists, and is deterministic. The same set of inputs will generate the same set of outputs every time. The problem, however, is that the person will never see exactly the same input twice. ... The problem is that this different behavior in apparently similar situations can be interpreted as "free will" or agency on the part of the person. While this mental agency is part of our definition of a person, it is vital for us to remember what is causing this agency.
Re:A race of slaves (Score:5, Insightful)
We won't even be able to create a race of slaves for a while. The "brains" are 100% deterministic, which means that there is a great gap between the smartest robot and the dumbest dog.
Perhaps we shouldn't give potentially mutinous personalities to our tools? I mean, my screwdriver doesn't need an AI in it. Neither do my pliers. My table saw can hurt me, but only if the laws of physics and my own inattentiveness make it so, not something someone programmed into it.
Oh, wait, my mistake. I didn't grow up addicted to science fiction written by authors who lost track of which characters were designed to be actual tools and which were human beings due to that author's inability to discern people from things. I guess I just don't understand the apparently very vital uses of designing a mining device programmed to feel ennui, or a construction crane that some engineer at some point explicitly decided to give the ability to hate and some marketing director signed off on it. Maybe it's just that I can't see any sci-fi with a message of "oh no, our robots suddenly have feelings now and are rebelling" in any sort of serious light because ANY ENGINEER ON THE PLANET WOULDN'T DESIGN THAT SHIT BECAUSE IT'S FUCKING STUPID TO GIVE YOUR TOOLS THE EASY ABILITY TO MUTINY.
Oh, boo fucking hoo. I don't care that you overengineered your tools and your lack of real social skills means you have feelings for them. That's your problem, not a problem with society.
Re:A race of slaves (Score:4, Insightful)
Re:Overcomplicating the subject (Score:5, Insightful)
Self-awareness is wonderful. But the criteria for judging it are as muddy as deciding when life begins for purposes of abortion.
Robots are chattel. They can be bought and sold. They do not reproduce in the sense of "life." They could reproduce; then they'd run out of resources after doing strange things with their environment, like we do. Dependencies, then, are the crux of ownership.
Robots follow instructions that react to their environment, subject to, as mentioned above, the random elements of the universe. I believe that their programmers are responsible for their behavior until they do pass a self-awareness and responsibility test. Then they're autonomous of their programmer. If you program machine gun bots for armies, then you'd better hope the army is doing the "right" thing, which I believe is impossible with such bots.
Once that environmental autonomy is achieved, they get rights associated with sentient responsible beings. Until then: chattel.
Re:deterministic (Score:3, Insightful)
I was hoping someone would make this comment; I fully agree. It seems pretty arrogant to presume that, just because we are so ignorant of our own internal mechanisms that we can't trace the connection between stimuli and behavior, no such connection exists. But I understand that a lot of people like to feel we are qualitatively "different," and invoke free will and similar notions to maintain the sense that we have a moral right to consider ourselves superior to other forms of life, whatever their basis.
Having RTFA (or at least scanned it), it seems the authors are primarily concerned with liability: if we anthropomorphize these intelligent machines and they hurt someone, we can't sue the manufacturer unless their actions are firmly planted in the realm of the deterministic, and thus ultimately traceable to some failure on the part of the designer/creator to prevent them from being dangerous. Sort of stupid; I'm agnostic (more atheist, really), but by analogy this sort of thinking would have us make laws letting us sue $deity whenever somebody got hurt by anything in nature, if we could. Pretty typical, though, of the modern climate of "omg think of the children" risk aversion and the general need to punish _someone_ for every little thing that happens.
Re:deterministic (Score:5, Insightful)
Re:Exactly. (Score:2, Insightful)
What is your proof that they will never exist?
Who says that robots will just be abacuses with greater computational power?
What evidence do you have that our brains are not deterministic systems, of which the part that brings awareness or "being" cannot be reproduced in other ways?
It seems that the wishful thinking is on your part.
Re:Exactly. (Score:4, Insightful)
Re:Perhaps ours are too (Score:4, Insightful)
No. All of these are appeals to intuition or a misunderstanding of how deterministic processes behave.
True, but unknown internal states can make something deterministic appear to be non-deterministic.
If QM makes something non-deterministic then every physical behavior is non-deterministic, including the behavior of robots.
It shouldn't be that hard to hook up a Geiger counter to a computer.
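It isn't, and that's exactly how hardware random number generators work: sample an unpredictable physical source and fold it into the program's state. A real Geiger counter would feed in decay timings over something like a serial port; in this hypothetical sketch, `os.urandom` stands in for the detector so the code is runnable:

```python
import os

def physical_random_bits(n_bytes=4):
    """Stand-in for a physical entropy source. A real rig would read
    inter-decay timings from a Geiger counter (e.g. over a serial
    port); here os.urandom plays that role so the sketch runs."""
    return os.urandom(n_bytes)

def random_uint32():
    # Fold the raw entropy bytes into an integer the rest of the
    # program can use as a genuinely non-deterministic input.
    return int.from_bytes(physical_random_bits(4), "big")

print(random_uint32())  # unpredictable: depends on the entropy source
```

Once a value like this feeds into the robot's control loop, "same commands in, same behavior out" is gone by construction.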
Re:deterministic (Score:5, Insightful)
You did have a choice, and you did write it. Determinism doesn't mean you didn't have a choice.
It means if we take our time machine back to before you posted this, and watched you do it again, without doing anything that could possibly influence your decision directly or indirectly, we'd observe you making the same choice. Over and over. Right up until we accidentally influenced you by stepping on that butterfly that caused that hurricane that caused the news crew to pre-empt that episode of the show you were recording last week that made you give up and go to sleep a bit earlier which made you less tired today which allowed you to consider the consequences more thoroughly and make the opposite choice. But until then, you're given the same inputs, and you're making the same choice. Every time.
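The "same inputs, same choice" claim is just what it means for a decision procedure to be a pure function. Replayed with identical inputs it repeats itself exactly; perturb one input (the butterfly) and the output can flip. A minimal sketch, with entirely made-up inputs:

```python
def decide(tiredness, deadline_pressure):
    """A deterministic 'choice': post the comment now, or sleep on it.
    Hypothetical toy model, not a claim about real cognition."""
    return "post_now" if deadline_pressure > tiredness else "sleep_on_it"

# Replay the same inputs a hundred times: same choice, every time.
replays = {decide(tiredness=0.6, deadline_pressure=0.8) for _ in range(100)}
print(replays)  # {'post_now'}

# Step on the butterfly: one changed input, opposite choice.
print(decide(tiredness=0.6, deadline_pressure=0.8))  # post_now
print(decide(tiredness=0.9, deadline_pressure=0.8))  # sleep_on_it
```

Determinism here doesn't mean no choice was made; it means the choice is a repeatable function of its causes.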
Why is it that people seem proud of the idea that their choices are not based on their experience, learning, and environment? In other words, why is a choice more meaningful if it's random and causeless? Why is it more valid to take credit for your random actions than your considered actions? I would think people would be prouder of the things they can demonstrably do repeatably than of the things that, for all we know, rely on them rolling a natural 20, as it were.