The Sci-Fi Myth of Robotic Competence 255
malachiorion writes: "When it comes to robots, most of us are a bunch of Jon Snow know-nothings. With the exception of roboticists, everything we assume we know is based on science fiction, which has no reason to be accurate about its iconic heroes and villains, or on journalists, who are addicted to SF references, jokes and tropes. That's my conclusion, at least, after a story I wrote for Popular Science got some attention—it asked whether a robotic car should kill its owner, if it means saving two strangers. The most common dismissals of the piece claimed that robo-cars should simply follow Asimov's First Law, or that robo-cars would never crash into each other. These perspectives are more than wrong-headed—they ignore the inherent complexity and fallibility of real robots, for whom failure is inevitable. Here's my follow-up story, about why most of our discussion of robots is based on make-believe, starting with the myth of robotic hyper-competence."
Re:Measuring Competence (Score:5, Informative)
Given this article [slashdot.org] mere moments ago on /. indicating that Google's autonomous cars have driven 700,000 miles on public roads with no citations, it's difficult to argue that they're not more competent, if not hyper-competent, compared to human drivers (most of whom get traffic tickets, and most of whom don't drive 700,000 miles between doing so).
The article makes many valid points, but that one irked me.
You have to keep in mind that the perfect record may be due in part to a human driver who takes control when problematic situations arise. Those 700,000 miles weren't driven completely autonomously. We would want to know how many times the human had to take control, and why.
BTW, they have had one wreck; Google says it happened while the driver had taken control, but did not say why the driver took control.
That topic is covered in this article, and in more detail in The Atlantic piece it links to.
Robot cars, at the moment, have a similarly savant-like range of expertise. As The Atlantic recently covered, Google's driverless vehicles require detailed LIDAR maps—3D models created from lasers sweeping the contours of a given roadway—to function. Autonomous cars have to do impressive things, like detecting the proximity of surrounding cars, and determining right of way at intersections. But they are algorithmically locked onto their laser roads. They stay the prescribed course, following a trail of sensor-generated breadcrumbs. Compared to what humans have to contend with, these robots are the most sheltered sort of permanent student drivers. No one is quizzing them by sending pedestrians or drunk drivers darting into their path, or diverting them through unmapped, snow-covered country lanes. Their ability to avoid fatal collisions remains untested.
More detail from this:
http://www.theatlantic.com/tec... [theatlantic.com]
Re:Robots are a lower life form (Score:4, Informative)
Negative. K-9 would be a better example.
The Cybermen have living human brains. They are cyborgs, not robots.
Re:It's all about ME, ME, ME. (Score:4, Informative)