The Sci-Fi Myth of Robotic Competence 255
malachiorion writes: "When it comes to robots, most of us are a bunch of Jon Snow know-nothings. With the exception of roboticists, everything we assume we know is based on science fiction, which has no reason to be accurate about its iconic heroes and villains, or on journalists, who are addicted to SF references, jokes and tropes. That's my conclusion, at least, after a story I wrote for Popular Science got some attention—it asked whether a robotic car should kill its owner, if it means saving two strangers. The most common dismissals of the piece claimed that robo-cars should simply follow Asimov's First Law, or that robo-cars would never crash into each other. These perspectives are more than wrong-headed—they ignore the inherent complexity and fallibility of real robots, for whom failure is inevitable. Here's my follow-up story, about why most of our discussion of robots is based on make-believe, starting with the myth of robotic hyper-competence."
It's all about ME, ME, ME. (Score:1, Insightful)
after a story I wrote...
This is just self-promotion. Go away.
Robot Competence (Score:4, Insightful)
We all know robots aren't competent. They are consistently being defeated by John Connor, the Doctor, and Starbuck.
Driverless Cars Are Boring (Score:5, Insightful)
There was an article a short while ago written by a journalist who rode in a driverless car for a stretch. There was one adjective that really stood out, an adjective that most people don't take into consideration when talking about driverless cars.
That one word: boring.
Driverless cars drive in the most boring, conservative, milquetoast fashion imaginable. They're going to be far less prone to accidents from the outset simply because they don't take the kind of chances that many of us wouldn't even begin to call "risky". They drive the speed limit. They follow at an appropriate distance. They don't pull quick lane changes to get ahead of slowpokes. They don't swing around blind corners faster than they can stop upon detecting an unexpected hazard. They don't nudge through crosswalks. They don't cut off cyclists in the bike lane. They don't get impatient. They don't get frustrated. They don't get angry. They don't get sleepy. They don't get distracted. They just drive, in a deliberate, controlled, and entirely boring fashion.
The problem with so, so many of the "what if?" accident scenarios is that the people posing said scenarios presume that the car would be putting itself in the same kinds of unnecessarily hazardous driving positions that human drivers put themselves in every single day, as a matter of routine, and without a moment's hesitation.
Very, very few people drive "boring" safe. Every driverless car will. Every trip. All the time.
Maybe the problem is the word "robot" (Score:5, Insightful)
Robot stories in science fiction are about powerful artificial sentient minds wrapped in a mobile and often human-like container.
Robots in real life have been defined as machines with mechanical appendages that can be programmed and reprogrammed for a variety of tasks. Their computational capabilities are seldom extraordinary and they usually don't even employ AI.
More recently, "robot" has also been used to describe machines with AI-like programming even if they are single function (like a robotic car).
When a word is used in three greatly different ways, should we be surprised that there is confusion about what a "robot" can do?
Re:Measuring Competence (Score:5, Insightful)
When he says that robots aren't "competent", I don't think that he's saying that they can't do things. He's just pointing out that they only do certain specific things that they've been told to do, even if they do those things extremely well.
I think the example used points this out. The question asked is, "If the robotic car is put in the position of killing 1 person in order to save 2 people, how should it make the decision?" He's saying that there's a problem with the question itself: the assumption that the robot will be capable of understanding such a scenario.
With our current engineering techniques, we can't expect the robot to understand what it's doing, nor the moral implications. We can't program it to actually understand whether it will kill people. The most we can program it to do is, given a detection of some heuristic value, follow a certain protocol of instructions. So for example, if the robotic car detects that it's about to hit someone, try to stop. If it calculates that it will be unable to stop, try to swerve. You might program it to detect people specifically and place extra priority on swerving around them, e.g. "if you're about to hit something identified as a person, or hit a road sign, choose to hit the road sign". We might even get it to do something like, "If you're losing control and you can detect several people, and you can't avoid the whole crowd, swerve into the sparsest area of the crowd while slowing as much as possible."
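A protocol like that can be sketched as a fixed priority list. This is purely illustrative: the function name, the obstacle labels, and the cost numbers are all hypothetical stand-ins for whatever classifier and policy a real car would use.

```python
# Hypothetical sketch of the priority protocol described above:
# try to stop; if stopping won't work, swerve toward the "cheapest"
# class of obstacle, preferring signs over vehicles over people.

def collision_response(can_stop_in_time, obstacles):
    """Pick an action from a fixed priority list.

    obstacles: list of (kind, position) pairs, where kind is a label
    from the car's object classifier, e.g. "person" or "sign".
    """
    if can_stop_in_time:
        return ("brake", None)
    # Can't stop: swerve toward the least-costly obstacle class.
    # Lower number = preferred thing to hit (illustrative values).
    cost = {"clear": 0, "sign": 1, "vehicle": 2, "person": 3}
    target = min(obstacles, key=lambda o: cost.get(o[0], 2))
    return ("swerve", target[1])
```

So given a person on the left and a road sign on the right, and no room to stop, this toy policy swerves right. Note it never "understands" anything; it just ranks labels its sensors happened to produce.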
The engineers should try to anticipate these kinds of things. We as citizens should also debate how we'd want these kinds of instructions to work to avoid legal liability. For example, we might say that in order for the AI to be legal, it must show that it will stop the car when [event x] happens. But to ask, "how will the car make moral decisions?" fundamentally misunderstands its decision-making capabilities. The answer is, "It won't make moral decisions at all."
your premise is wrong (Score:5, Insightful)
Your entire premise is wrong. And now you're posting it again.
This will be a legal issue, not an issue solved by the "roboticists", whatever that is...
In a legal sense, taking an action that kills 1 person to save another puts you in jeopardy of being liable. Swerving or taking other actions that lead to someone's death makes YOU responsible. If someone runs out in the road and you apply the brakes firmly and appropriately, then that is not your fault. It's the fault of the person who ran out into the road. So in cases where the computer's unsure what to do, it will follow the first commandment "STOP THE CAR" and let things play out as they will. Any other choice opens up a can of worms... how old are the other occupants? If 1 car has a 90yr old in it and the other has a baby, which do you hit? What if one's the mayor? The problems increase exponentially as soon as you get away from "STOP THE CAR", so just stop the dang car and be done with it.
With regard to your comment about sci-fi... you're reading pretty terrible sci-fi. Most of the stuff I read is written by actual scientists so... yea...
Re:Driverless Cars Are Boring (Score:4, Insightful)
That one word: boring.
Right. Just like commercial air travel, elevators, and escalators. Which is the whole point.
This will be just fine with the trucking industry. The auto industry can deal with "boring" by putting in more cupholders, faux-leather upholstery, and infotainment systems.
Re:It's all about ME, ME, ME. (Score:5, Insightful)
The irony is that he's 180 degrees off from the main problem with his story, which is that he thinks robots are magic too. The reason robots will not be making ethical decisions is that they can't, not only because getting them to reason ethically would require us to agree on a system of ethics for them to follow, but because even if they had such a system, they don't have enough data to act on it with the degree of accuracy that would be required for the premise of the article to make sense. The author essentially assumes that these car-driving robots will be omniscient, or that they will be able to trust the omniscience of the robots in other cars with which they are communicating. The first supposition is nonsensical; the second is unlikely to be true, for the same reason that video game cheats are a problem.
Re:It's all about ME, ME, ME. (Score:4, Insightful)
IMOHO, one of the reasons that many people think robots are "hyper-competent" is that too many people think a program can encompass and accommodate every possible circumstance. Even if the robot cars, as a group, were able to arrive at omniscience (at least for their own realm), events will still occur that no program has anticipated.
No! (Score:5, Insightful)
If this ever comes up as a question then the person asking the question is obviously NOT an engineer.
Keep
It
Simple,
Stupid
The cars should be programmed to stop and revert to human control whenever there is a problem that the car is not programmed to handle.
And the car should only be programmed to handle DRIVING.
No. The car should not even be able to detect other occupants. Adding more complexity means more avenues for failure.
The car should understand obstacles and how to avoid them OR STOP AND LET THE HUMAN DRIVE.
No. Again, the car should understand obstacles and how to avoid them OR STOP AND LET THE HUMAN DRIVE. Emergency vehicles should ALWAYS be human controlled.
From TFA:
As is that entire article.
The entirety of the car's programming should be summed up as:
a. Is the way clear? If yes then go.
b. If not, are the obstacles ones that I am programmed for? If yes then go.
c. Stop.
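That three-step loop can be written almost verbatim. A toy sketch, where the predicates are placeholders for whatever sensing the car actually has, and "stop" means halt and hand control back to the human:

```python
# Toy sketch of the a/b/c logic above. way_clear, obstacles, and
# handled_kinds are stand-ins for real sensor and classifier output.

def drive_step(way_clear, obstacles, handled_kinds):
    # a. Is the way clear? If yes then go.
    if way_clear:
        return "go"
    # b. If not, are the obstacles ones I am programmed for?
    if all(o in handled_kinds for o in obstacles):
        return "go"
    # c. Stop and let the human drive.
    return "stop"
```

Anything the car wasn't explicitly programmed for falls through to "stop", which is exactly the point: no moral reasoning, just a whitelist and a default.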