The Sci-Fi Myth of Robotic Competence 255
malachiorion writes: "When it comes to robots, most of us are a bunch of Jon Snow know-nothings. With the exception of roboticists, everything we assume we know is based on science fiction, which has no reason to be accurate about its iconic heroes and villains, or on journalists, who are addicted to SF references, jokes, and tropes. That's my conclusion, at least, after a story I wrote for Popular Science got some attention—it asked whether a robotic car should kill its owner, if it means saving two strangers. The most common dismissals of the piece claimed that robo-cars should simply follow Asimov's First Law, or that robo-cars would never crash into each other. These perspectives are more than wrong-headed—they ignore the inherent complexity and fallibility of real robots, for whom failure is inevitable. Here's my follow-up story, about why most of our discussion of robots is based on make-believe, starting with the myth of robotic hyper-competence."
Measuring Competence (Score:5, Interesting)
Given this article [slashdot.org] mere moments ago on /. indicating that Google's autonomous cars have driven 700,000 miles on public roads with no citations, it's difficult to argue that they're not more competent, if not hyper-competent, compared to human drivers (most of whom get traffic tickets, and few of whom go 700,000 miles between them).
The article makes many good, valid points, but that one irked me.
And in practice, laws 2 and 3 are swapped (Score:5, Interesting)
Re:Things are a lot more complicated (Score:5, Interesting)
911 vehicles, on the other hand, should always value their own occupants less than others.
The first rule taught in first responder classes is that if you become a casualty then you become worthless as a first responder. For example, as a lifeguard, if you die trying to save someone then they aren't going to survive, either. If that means you have to wait until the belligerent victim goes unconscious (and maybe unsavable) before you approach him, you wait.
The idea that every first responder vehicle must sacrifice itself and its occupants is going to result in very few people being first responders, either through choice or simple attrition.
Asimov's Three Laws wouldn't work (Score:5, Interesting)
Asimov's Three Laws of Robotics are justly famous. But people shouldn't assume that they will ever actually be used. They wouldn't really work.
Asimov wrote that he invented the Three Laws because he was tired of reading stories about robots running amok. Before Asimov, robots were usually used as a problem the heroes needed to solve. Asimov reasoned that machines are made with safeguards, and he came up with a set of safeguards for his fictional robots.
His laws are far from perfect, and Asimov himself wrote a whole bunch of stories taking advantage of the grey areas that the laws didn't cover well.
Let's consider a big one, the biggest one: according to the First Law, a robot may not harm a human, nor through inaction allow a human to come to harm. Well, what's a human? How does the robot know? If you dress a human in a gorilla costume, would the robot still try to protect him?
In the excellent hard-SF comic Freefall [purrsia.com], a human asked Florence (an uplifted wolf with an artificial Three Laws design brain; legally she is a biological robot, not a person) how she would tell who is human. "Clothes", she said.
http://freefall.purrsia.com/ff1600/fc01585.htm [purrsia.com]
http://freefall.purrsia.com/ff1600/fc01586.htm [purrsia.com]
http://freefall.purrsia.com/ff1600/fc01587.htm [purrsia.com]
In Asimov's novel The Naked Sun, someone pointed out that you could build a heavily-armed spaceship that was controlled by a standard robotic brain and had no crew; then you could talk to it and tell it that all spaceships are unmanned, and any radio transmissions claiming humans are on board a ship are lies. Hey presto, you have made a robot that can kill humans.
Another problem: suppose someone just wanted to make a robot that can kill. Asimov's standard explanation was that this is impossible, because it took many people a whole lot of effort to map out the robot brain design in the first place, and it would simply be too much work to do it all again. This is a mere hand-wave. "What man has done, man can aspire to do," as Jerry Pournelle sometimes says. Someone, somewhere, would put together a team of people and do the work of making a robot brain that just obeys all orders, with no pesky First Law restrictions. Heck, they could use robots to do part of the work, as long as they were very careful not to let the robots understand the implications of the whole project.
And then we get into "harm". In the classic short story "A Code for Sam", any robot built with the Three Laws goes insane. For example, allowing a human to smoke a cigarette is, through inaction, allowing a human to come to harm. Just watching a human walk across a road, knowing that a car could hit him, would give a robot a strong impulse to keep the human from crossing the street.
The Second Law is problematic too. The trivial Denial of Service attack against a Three Laws robot: "Destroy yourself now." You could order a robot to walk into a grinder, or beam radiation through its brain, or whatever it would take to destroy itself as long as no human came to harm. Asimov used this in some of his stories but never explained why it wasn't a huge problem... he lived before the Internet; maybe he just didn't realize how horrible many people can be.
There will be safeguards, but there will be more than just Three Laws. And we will need to figure things out like "if crashing the car kills one person and saves two people, do we tell the car to do it?"
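That last question can be made concrete. Here is a toy sketch, in Python, of a purely utilitarian rule that picks whichever maneuver minimizes expected deaths. All names, structures, and probabilities are hypothetical, invented for illustration; nothing here reflects any real vehicle's design.

```python
# Toy utilitarian safeguard: choose the maneuver with the lowest
# expected death toll. All names and numbers are hypothetical.

def expected_deaths(maneuver):
    """Expected fatalities = sum of each affected person's P(death)."""
    return sum(maneuver["p_death_per_person"])

def choose_maneuver(maneuvers):
    """Pick the maneuver minimizing expected deaths."""
    return min(maneuvers, key=expected_deaths)

# One occupant at high risk vs. two pedestrians at high risk.
swerve = {"name": "swerve into wall", "p_death_per_person": [0.9]}
stay   = {"name": "stay on course",   "p_death_per_person": [0.8, 0.8]}

print(choose_maneuver([swerve, stay])["name"])  # swerve into wall
```

Note that even this trivial rule bakes in a contested ethical choice: it treats the owner's life as exactly interchangeable with a stranger's, which is precisely the assumption the original article questions.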
My concern is far less esoteric (Score:5, Interesting)
If self-driving cars cede control back to the human driver when things get "interesting", will a driver who lacks the conditioning that comes from driving countless kilometers still be able to react competently? Or will it be like throwing inexperienced learner-drivers into the deep end?
Driving is a skill, and like any skill it needs to be practiced often to keep from going rusty...
Re:It's all about ME, ME, ME. (Score:5, Interesting)
IMHO, one of the reasons that many people think robots are "hyper-competent" is that they believe a program can encompass and accommodate every possible circumstance.
This simply reflects the tendency people have to believe in their own hyper-competence. Most interesting ethical issues are unsolvable in any formal sense by virtue of three simple facts:
1) moral values are ordinal, not cardinal (I value my children's lives more than my cat's life, no matter how many cats I have)
2) we value outcomes but choose actions
3) outcomes are related to actions by some more-or-less broad probability distribution.
This means we cannot choose outcomes directly, and we cannot do probability calculations to assign values to actions, because ordinals don't support simple arithmetic.
There are two special cases that fortunately cover a lot of every-day life:
A) the probability distribution is narrow enough that we can ignore it, so we can effectively choose outcomes based on our ordinal values
B) there is a market in the outcomes we are choosing between, which allows us to compute cardinal (dollar) values from our ordinals, so we can do probability calculations on the domain.
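The distinction in point 1, and the market workaround in case B, can be illustrated in code. This is a toy model, not a serious ethical calculus; the class and all the numbers are invented for illustration.

```python
# Ordinal values support comparison but not arithmetic, so expected
# values are undefined for them; cardinal (dollar) values support both.
from functools import total_ordering

@total_ordering
class OrdinalValue:
    """A value that can be ranked but not added or scaled."""
    def __init__(self, rank):
        self.rank = rank
    def __eq__(self, other):
        return self.rank == other.rank
    def __lt__(self, other):
        return self.rank < other.rank
    def __add__(self, other):
        raise TypeError("ordinal values do not support arithmetic")

child = OrdinalValue(rank=2)
cat = OrdinalValue(rank=1)
assert child > cat    # ranking works fine
# child + cat         # would raise TypeError: no expected values here

# Case B: a market assigns cardinal (dollar) values, so we CAN take an
# expectation over a probability distribution of outcomes.
outcomes = [(0.7, 100.0), (0.3, -50.0)]   # (probability, dollar value)
expected_value = sum(p * v for p, v in outcomes)
print(expected_value)  # 55.0
```

Comparison succeeds but arithmetic fails by construction, which is the parent's point: expected-value reasoning needs the cardinal values that only a narrow distribution or a market price supplies.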
But interesting moral quandaries are simply not computable, so to talk about them as if they are, even for human beings, is to be on a hiding to nothing.