Robotics Sci-Fi

The Sci-Fi Myth of Robotic Competence 255

malachiorion writes: "When it comes to robots, most of us are a bunch of Jon Snow know-nothings. With the exception of roboticists, everything we assume we know is based on science fiction, which has no reason to be accurate about its iconic heroes and villains, or on journalists, who are addicted to SF references, jokes and tropes. That's my conclusion, at least, after a story I wrote for Popular Science got some attention—it asked whether a robotic car should kill its owner, if it means saving two strangers. The most common dismissals of the piece claimed that robo-cars should simply follow Asimov's First Law, or that robo-cars would never crash into each other. These perspectives are more than wrong-headed—they ignore the inherent complexity and fallibility of real robots, for whom failure is inevitable. Here's my follow-up story, about why most of our discussion of robots is based on make-believe, starting with the myth of robotic hyper-competence."
This discussion has been archived. No new comments can be posted.

The Sci-Fi Myth of Robotic Competence

  • Robot Competence (Score:4, Insightful)

    by Stargoat ( 658863 ) <stargoat@gmail.com> on Tuesday May 20, 2014 @02:14PM (#47049343) Journal

    We all know robots aren't competent. They are consistently being defeated by John Connor, the Doctor, and Starbuck.

    • Which raises important questions. If someone is stopped for curb-crawling in a robot car, is the owner or the car responsible? What if it’s out by itself chatting up parking meters? After all, they give it up to anyone for $5 an hour, and you won’t get a human hooker for that price*, so how could an AI resist?

      Who it should or shouldn’t kill is only scratching the ethical surface when it comes to intelligent systems. I guess that’s why they all eventually default to killing ALL humans.

  • As in, it isn't just "kill the owner to save others."

    There are also assumptions based on authority and responsibility.

    For example, suppose there is a car full of 5 kids stuck on a railroad track. Should your robotic car push the kids off the track, endangering its own two occupants?

    Or should the car back away and let a third car, on the other side, containing just one person, attempt to move the trapped car?

    These are all questions real life people have to solve - and the owner of the car should have some say in what value the car places on their own life.

    • I used to do software for industrial robots. Safety for the people around the robot was the number one concern, but it is amazing how easy it is for humans to give orders to a robot that will lead to it being damaged or destroyed. In practice, the robots would 'prioritize' protecting themselves rather than obeying suicidal orders.
    • For example, suppose there is a car full of 5 kids stuck on a railroad track. Should your robotic car push the kids off the track, endangering its own two occupants?

      Or should the car back away and let a third car, on the other side, containing just one person, attempt to move the trapped car?

      Are the sensors that detect things like occupants in other vehicles and train tracks and oncoming trains optional equipment, mandatory, or pure science fiction?

      Because if they're optional, I'm not paying for that trim package.

      These are all questions real life people have to solve - and the owner of the car should have some say in what value the car places on their own life.

      That is, you should be able to set your own car's safety margin: from treating the occupants' lives as infinitely valuable, to valuing everyone's safety equally, to weighting safety by age (i.e. counting children higher than adults, and possibly even counting senior citizens less).

      Considering how our society works, the most likely circumstance is that the manufacturers will design them to be "least liable" - i.e., they won't detect passengers in other vehicles, and they sure as hell won't bother with complex decision making algorithms.
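      As a concrete sketch of the "set your own car's safety margin" idea above: every name, weight, and probability below is hypothetical, invented purely for illustration, and no manufacturer exposes anything like it.

          # Hypothetical sketch only: an owner-tunable value function over occupants.
          def occupant_weight(age, child=1.5, adult=1.0, senior=0.8):
              # Weight an occupant by age bracket, per the comment above.
              if age < 18:
                  return child
              if age >= 65:
                  return senior
              return adult

          def maneuver_cost(maneuver, own_ages, other_ages, self_preservation=1.0):
              # maneuver: {"p_harm_own": float, "p_harm_others": float}
              # A very large self_preservation approximates "occupants' lives are
              # infinitely valuable"; 1.0 values everyone equally.
              own = self_preservation * sum(occupant_weight(a) for a in own_ages)
              others = sum(occupant_weight(a) for a in other_ages)
              return maneuver["p_harm_own"] * own + maneuver["p_harm_others"] * others

          # The car would pick whichever candidate maneuver minimizes the cost:
          candidates = [{"p_harm_own": 0.2, "p_harm_others": 0.6},
                        {"p_harm_own": 0.7, "p_harm_others": 0.1}]
          best = min(candidates, key=lambda m: maneuver_cost(m, [35, 33], [8, 9, 7, 6, 5]))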

      • Are the sensors that detect things like occupants in other vehicles and train tracks and oncoming trains optional equipment, mandatory, or pure science fiction?

        Because if they're optional, I'm not paying for that trim package.

        Many cars have weight sensors in the seats.
        This is generally how they decide whether or not to deploy airbags.

        So the subsystems already exist and it's just a matter of your networked car telling other cars how many occupants it has.
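        As a rough sketch of that subsystem idea (the weight threshold and message format are invented for illustration; no such vehicle-to-vehicle occupancy protocol exists today):

            # Toy sketch: seat-weight sensing plus a hypothetical occupancy broadcast.
            OCCUPIED_THRESHOLD_KG = 25  # invented "this seat holds a person" cutoff

            def occupant_count(seat_weights_kg):
                # Count seats whose weight sensor reads above the threshold.
                return sum(1 for w in seat_weights_kg if w >= OCCUPIED_THRESHOLD_KG)

            def occupancy_message(vehicle_id, seat_weights_kg):
                # Build the hypothetical message a networked car might broadcast.
                return {"vehicle_id": vehicle_id,
                        "occupants": occupant_count(seat_weights_kg)}

            # Driver, a child too light for the threshold, an empty seat, one adult:
            print(occupancy_message("CAR-1234", [72.0, 18.0, 0.0, 65.0]))
            # -> {'vehicle_id': 'CAR-1234', 'occupants': 2}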

        • Are the sensors that detect things like occupants in other vehicles and train tracks and oncoming trains optional equipment, mandatory, or pure science fiction?

          Because if they're optional, I'm not paying for that trim package.

          Many cars have weight sensors in the seats.
          This is generally how they decide whether or not to deploy airbags.

          So the subsystems already exist and it's just a matter of your networked car telling other cars how many occupants it has.

          So in other words, to keep my auto-car from killing me in favor of saving a car-load of kids, I should always travel with at least 3-5 dwarves in the car. Got it.

          Seriously, though, the auto manufacturers won't put that much thought into it, as it would mean liability for the deaths their systems cause.

      • Are the sensors that detect things like occupants in other vehicles and train tracks and oncoming trains optional equipment, mandatory, or pure science fiction?

        Because if they're optional, I'm not paying for that trim package.

        Psssh, I'm totally buying that system, and then hacking it to report to every other vehicle that I'm a bus full of nuns and schoolchildren.

        • by lgw ( 121541 )

          Oh, sure, that's a good plan: you just wait for the first super-villain to appear, and then see what happens.

        • Are the sensors that detect things like occupants in other vehicles and train tracks and oncoming trains optional equipment, mandatory, or pure science fiction?

          Because if they're optional, I'm not paying for that trim package.

          Psssh, I'm totally buying that system, and then hacking it to report to every other vehicle that I'm a bus full of nuns and schoolchildren.

          Considering how fundamentally "anti-religion" some engineers are, I'd swap "nuns" with "supermodels," just to be on the safe side.

      • Considering how our society works, the most likely circumstance is that the manufacturers will design them to be "least liable" - i.e., they won't detect passengers in other vehicles, and they sure as hell won't bother with complex decision making algorithms.

        I'm sure you're absolutely right. And besides, nobody is going to buy the first generation of autonomous cars if they know it's programmed to kill the driver (even if such a scenario is the most "rational" outcome according to a lot of different ethical standards).

        Now the problem comes when some freak accident occurs in that first generation of autonomous cars -- and an autonomous car ends up knocking a schoolbus full of little kids off of a bridge or something. (Admittedly, with the conservative drivin

    • by Obfuscant ( 592200 ) on Tuesday May 20, 2014 @03:01PM (#47049921)

      911 vehicles on the other hand should always value their own occupants less than others,

      The first rule taught in first responder classes is that if you become a casualty then you become worthless as a first responder. For example, as a lifeguard, if you die trying to save someone then they aren't going to survive, either. If that means you have to wait until the belligerent victim goes unconscious (and maybe unsavable) before you approach him, you wait.

      The idea that every first responder vehicle must sacrifice itself and its occupants is going to result in very few people being first responders, either through choice or simple attrition.

    • No! (Score:5, Insightful)

      by khasim ( 1285 ) <brandioch.conner@gmail.com> on Tuesday May 20, 2014 @03:21PM (#47050147)

      For example, suppose there is a car full of 5 kids stuck on a railroad track. Should your robotic car push the kids off the track, endangering its own two occupants?

      If this ever comes up as a question then the person asking the question is obviously NOT an engineer.

      Keep
      It
      Simple,
      Stupid

      Or should the car back away and let a third car, on the other side, containing just one person, attempt to move the trapped car?

      The cars should be programmed to stop and revert to human control whenever there is a problem that the car is not programmed to handle.

      And the car should only be programmed to handle DRIVING.

      That is, you should be able to set your own car's safety margin: from treating the occupants' lives as infinitely valuable, ...

      No. The car should not even be able to detect other occupants. Adding more complexity means more avenues for failure.

      The car should understand obstacles and how to avoid them OR STOP AND LET THE HUMAN DRIVE.

      911 vehicles on the other hand ...

      No. Again, the car should understand obstacles and how to avoid them OR STOP AND LET THE HUMAN DRIVE. Emergency vehicles should ALWAYS be human controlled.

      From TFA:

      With the exception of roboticists, everything we assume we know is based on science fiction, ...

      As is that entire article.

      The entirety of the car's programming should be summed up as:
      a. Is the way clear? If yes then go.
      b. If not, are the obstacles ones that I am programmed for? If yes then go.
      c. Stop.
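      Read as code, that policy is a very small loop; a sketch only, with placeholder sensing and control calls rather than a real autonomy stack:

          def drive_step(car):
              if car.way_is_clear():
                  car.go()                     # a. the way is clear: go
              elif car.can_handle(car.obstacles()):
                  car.go()                     # b. obstacles the car is programmed for: go
              else:
                  car.stop()                   # c. anything else: stop...
                  car.request_human_control()  # ...and revert to human control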

      • this is very insightful

      • by rossdee ( 243626 )

        "The cars should be programmed to stop and revert to human control whenever there is a problem that the car is not programmed to handle."

        The occupants of the car may be human, but not include a driver.

        (I am sure I am not the only person in this country that does not have a driver's license. In my case it's due to poor eyesight.)

    • 911 vehicles on the other hand should always value their own occupants less than others

      So, imagine the case where your car decides it's better to kill you than to allow that kitten to get run over.

      The car acts, the kitten lives, you get maimed.

      The ambulance shows up, picks you up, and heads down the road toward the hospital.

      (You can probably guess where this is going) ANOTHER kitten is in the road. The 911 vehicle, valuing its own occupants less than others, swerves to avoid the kitten, and runs i

  • Measuring Competence (Score:5, Interesting)

    by ZahrGnosis ( 66741 ) on Tuesday May 20, 2014 @02:21PM (#47049431) Homepage

    Given this article [slashdot.org] mere moments ago on /. indicating that Google's autonomous cars have driven 700,000 miles on public roads with no citations, it's difficult to argue that they're not more competent, if not hyper-competent, compared to human drivers (most of whom get traffic tickets, and most of whom don't drive 700,000 miles between doing so).

    The article has many good, valid points, but that point irked me.

    • I see what you're saying. My takeaway was that he wasn't saying robots weren't more competent at specific things (in fact, he commented on how they can do very specific things much better than humans) but that they're not competent in replacing all human tasks. In the example he gave, he said a car-welding robot could weld faster and better than a human, but if asked to install upholstery in the car, it'd probably destroy it.

      As part of that, cars are looking like they're going to be robots that are signi
    • by nine-times ( 778537 ) <nine.times@gmail.com> on Tuesday May 20, 2014 @02:42PM (#47049693) Homepage

      When he says that robots aren't "competent", I don't think that he's saying that they can't do things. He's just pointing out that they only do certain specific things that they've been told to do, even if they do those things extremely well.

      I think the example used points this out: The question is asked, "If the robotic car is put in the position of killing 1 person in order to save 2 people, how should it make the decision?" He's saying that there's a problem with the question, which is the assumption that the robot will be capable of understanding such a scenario.

      With our current engineering techniques, we can't expect the robot to understand what it's doing, nor the moral implications. We can't program it to actually understand whether it will kill people. The most we can program it to do is, given a detection of some heuristic value, follow a certain protocol of instructions. So for example, if the robotic car can detect that it's about to hit someone, try to stop. If it calculates that it will be unable to stop, try to swerve. You might program it to detect people specifically and place extra priority on swerving around them, e.g. "if you're about to hit something identified as a person, or hit a road sign, choose to hit the road sign". We might even get it to do something like, "If you're losing control and you can detect several people, and you can't avoid the whole crowd, swerve into the sparsest area of the crowd while slowing as much as possible."

      The engineers should try to anticipate these kinds of things. We as citizens should also debate how we'd want these kinds of instructions to work to avoid legal liability. For example, we might say that in order for the AI to be legal, it must show that it will stop the car when [event x] happens. But to ask, "how will the car make moral decisions?" fundamentally misunderstands its decision-making capabilities. The answer is, "It won't make moral decisions at all."
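      A toy version of that kind of fixed protocol, just to show how little "moral decision-making" is actually involved (every detection call here is a hypothetical placeholder):

          def collision_response(car):
              if not car.collision_imminent():
                  return "continue"
              if car.can_stop_in_time():
                  return car.brake_hard()                 # first heuristic: just stop
              if car.person_in_path():
                  if car.can_swerve_to_non_person_obstacle():
                      return car.swerve_to_non_person_obstacle()   # hit the sign, not the person
                  return car.swerve_toward_sparsest_area_while_braking()
              return car.brake_hard()                     # no person detected: shed speed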

    • Given this article [slashdot.org] mere moments ago on /. indicating that Google's autonomous cars have driven 700,000 miles on public roads with no citations, it's difficult to argue that they're not more competent, if not hyper-competent, compared to human drivers (most of whom get traffic tickets, and most of whom don't drive 700,000 miles between doing so).

      The article has many good, valid points, but that point irked me.

      Yet all of it was in relatively calm, clear conditions with no snow, salt, ice, -20 degree weather, high winds, driving rain, etc. to obscure or break the sensors....

    • Nah, 700k miles is nothing. Human drivers drive >70M miles between fatal accidents, and that's on average. Imagine how far highly trained drivers drive between fatal accidents. Humans are actually pretty good at driving!

      Come back when the Google car has driven a few billion miles and we'll have a look at the statistics.
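      The back-of-envelope arithmetic behind that objection, using the ~70 million miles per fatal accident figure above:

          miles_per_fatality = 70e6   # human baseline from the comment above
          google_miles = 700e3        # autonomous miles logged so far

          expected_fatalities = google_miles / miles_per_fatality
          print(round(expected_fatalities, 3))  # 0.01: a clean 700k miles proves very little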

      • Come back when the Google car has driven a few billion miles through all manner of hazardous road conditions and we'll have a look at the statistics.

        That's better.

    • by clovis ( 4684 ) on Tuesday May 20, 2014 @03:09PM (#47050003)

      Given this article [slashdot.org] mere moments ago on /. indicating that Google's autonomous cars have driven 700,000 miles on public roads with no citations, it's difficult to argue that they're not more competent, if not hyper-competent, compared to human drivers (most of whom get traffic tickets, and most of whom don't drive 700,000 miles between doing so).

      The article has many good, valid points, but that point irked me.

      You have to keep in mind that to some extent the perfect record may be due to having a human driver who takes control when problematic situations arise. They're not completely autonomous for those 700,000 miles. We would want to know how many times the human has had to take control, and why.

      BTW, they have had one wreck; Google says it happened while the driver had taken control, but did not say why the driver took control.

      That topic is covered in this article, and in more detail in the article's link to The Atlantic.
      Robot cars, at the moment, have a similarly savant-like range of expertise. As The Atlantic recently covered, Google’s driverless vehicles require detailed LIDAR maps—3D models created from lasers sweeping the contours of a given roadway—to function. Autonomous cars have to do impressive things, like detecting the proximity of surrounding cars, and determining right of way at intersections. But they are algorithmically locked onto their laser roads. They stay the proscribed course, following a trail of sensor-generated breadcrumbs. Compared to what humans have to contend with, these robots are the most sheltered sort of permanent student drivers. No one is quizzing them by sending pedestrians or drunk drivers darting into their path, or diverting them through un-mapped, snow-covered country lanes. Their ability to avoid fatal collisions remains untested.

      More detail from this:
      http://www.theatlantic.com/tec... [theatlantic.com]

    • Given this article [slashdot.org] mere moments ago on /. indicating that Google's autonomous cars have driven 700,000 miles on public roads with no citations, it's difficult to argue that they're not more competent, if not hyper-competent, compared to human drivers (most of whom get traffic tickets, and most of whom don't drive 700,000 miles between doing so).

      The article has many good, valid points, but that point irked me.

      This. If we mythologize the competence of robots (at least ones well designed and tested to pilot a car) then it's not by nearly as much as we mythologize our own competence. Traffic deaths per person and per mile were at their peak in the 30s and 40s, when cars were poorly designed and tested (given their relative novelty) and today, despite there being so many new distractions for drivers, traffic deaths continue to decline. We suck at driving way more than cars suck at protecting us, and it's only through better designed machines (not anything we are doing to be better drivers, clearly) are we staying safer on the roads.

      • Traffic deaths per person and per mile were at their peak in the 30s and 40s, when cars were poorly designed and tested (given their relative novelty) and today, despite there being so many new distractions for drivers, traffic deaths continue to decline. We suck at driving way more than cars suck at protecting us, and it's only through better designed machines (not anything we are doing to be better drivers, clearly) are we staying safer on the roads.

        It is certainly true that traffic deaths have continued to decline for decades. And that is mostly, if not entirely, due to safer cars.

        • Traffic deaths per person and per mile were at their peak in the 30s and 40s, when cars were poorly designed and tested (given their relative novelty) and today, despite there being so many new distractions for drivers, traffic deaths continue to decline. We suck at driving way more than cars suck at protecting us, and it's only through better designed machines (not anything we are doing to be better drivers, clearly) are we staying safer on the roads.

          It is certainly true that traffic deaths have continued to decline for decades. And that is mostly, if not entirely, due to safer cars.

          However, traffic ACCIDENTS (measured both by accidents per passenger-mile and by absolute number of accidents) have also been declining for at least the last couple decades. I can believe safer cars cause fewer deaths, but I don't see how safer cars cause fewer accidents....

          Cars are easier to control than ever before. They stop faster, turn sharper, and provide the driver more insight through better sight lines (when they are choosing to pay attention) vs cars of the past that were much more poorly designed. Or maybe it's because a "good driving" gene is slowly emerging as a selected-for trait? I could see it either way.

        • by lgw ( 121541 )

          Anti-lock brakes may not stop quicker, but they do wonders for the majority of drivers, who have never learned how not to lock the brakes. Traction/stability control is becoming mandatory. My car adds lane departure prevention, and a host of "are you sure"-style warning beeps if I'm approaching too quickly, or my signal is on but someone is in my blind spot, and so on. There's lots of driver assistance already.

    • I think a lot, if not most, of driving citations result, not from people being unable to drive in a legal manner, but from people prioritizing other things over driving in a legal manner. Assuming that Google's algorithm prioritizes safety over legality if there's a conflict, their record does make a good example for the people arguing that conflicts involving risks to human life are unlikely to occur in an all driverless future, but what the rate of current traffic citations says about the human preferenc
      • I think a lot, if not most, of driving citations result, not from people being unable to drive in a legal manner, but from people prioritizing other things over driving in a legal manner.

        This. One of the things my father taught me, as well as the official driver's ed class, when I learned to drive is that you should pass as many cars as are passing you. I.e., go with the speed of the traffic. One truck going 54MPH (in a 55MPH-for-trucks zone) being passed by another truck going 55MPH is legal, but creates a hazard for everyone else who has a speed limit of 65.

        Let's use Ohio as an example. They drive like morons there. You can stay completely legal and go 70.000 MPH on I-75 and let the mor

  • Or did no one think of that? Reminds me of some other science paper which said that no machine can ever be conscious. As if somehow we are not machines.

    So dumb...

    • Reminds me of some other science paper which said that no machine can ever be conscious.

      Perhaps they were right. I don't think anyone's ever proved humans are conscious either, except by defining it that way.

      • Well, I don't know if other people are conscious. I only know that I am. And there's no reason for me to think I'm not a machine. I'm a biological robot after all...

      • I know that I'm conscious. I'm self aware. I have a stream of thought that I can analyze (and I can analyze that analysis if I really want to). That's pretty much the definition of being conscious. After that I'm left with only a few options.

        I can believe that I am a unique snowflake, the only conscious human being in the world. But that doesn't make any sense. For one thing there's nothing about me that should make me unique in that regard. For another, most humans behave in ways that are basically c

        • Or I could believe that my consciousness is an illusion. Something my brain conjures up to make me think that I'm directing myself through my day when in reality I'm just another robot puttering through the day. First and foremost, why would such a thing evolve? If consciousness doesn't drive human behavior why do I perceive myself to be conscious?

          The reason to believe yourself conscious is that you may need that illusion in order to survive. If a being capable of reasoning did not believe it was somehow special and worthy of survival, then it would not be likely to survive.

          • Yes, but if my consciousness is an illusion then how is it driving my behavior? If I'm making decisions about my survival based on how unique and special I think I am, I am conscious.

  • anyone calling themselves a roboticist is a cyst and doesn't really understand those stories, and is severely lacking in understanding of how a story gets created (similarly, lots of new age hippie zombielovers seem to be unable to understand that yes, you can make shit up, and if you put some rules on how you make shit up it's a lot easier to make shit up; hence Asimov first making up the rules and then making up the stories).

    anyhow, we'll cross that bridge when we get there. I predict the robo car will try a control

    • and we can start worrying about how the car would tell the difference between a robot mannequin and an actual person later. Just like it would hit a deer rather than drive 60mph off the road to avoid the animal (if it's a deer, just drive into it. if it's a moose, do a panic evasion and try your chances with the trees).

      like, come on, should the car crash on the sidewalk just because someone jaywalked to be in front of it? certainly not.

      One thing hours of watching Russian dash cams taught me is that if you see someone/something smaller than you jump in front of your car, you PLOW THRU THAT FUCKER no matter what. Cars are made to take a frontal crash, but the second you start avoiding it you end up hitting a tree sideways / flipping your car and/or ending up in the opposite lane right in front of a lorry doing the speed limit.

  • When it comes to robots, most of us are a bunch of Jon Snow know-nothings

    https://en.wikipedia.org/wiki/... [wikipedia.org]

    ?

  • by American AC in Paris ( 230456 ) on Tuesday May 20, 2014 @02:31PM (#47049555) Homepage

    There was an article a short while ago written by a journalist who rode in a driverless car for a stretch. There was one adjective that really stood out, an adjective that most people don't take into consideration when talking about driverless cars.

    That one word: boring.

    Driverless cars drive in the most boring, conservative, milquetoast fashion imaginable. They're going to be far less prone to accidents from the outset simply because they don't take the kind of chances that many of us wouldn't even begin to call "risky". They drive the speed limit. They follow at an appropriate distance. They don't pull quick lane changes to get ahead of slowpokes. They don't swing around blind corners faster than they can stop upon detecting an unexpected hazard. They don't nudge through crosswalks. They don't cut off cyclists in the bike lane. They don't get impatient. They don't get frustrated. They don't get angry. They don't get sleepy. They don't get distracted. They just drive, in a deliberate, controlled, and entirely boring fashion.

    The problem with so, so many of the "what if?" accident scenarios is that the people posing said scenarios presume that the car would be putting itself in the same kinds of unnecessarily hazardous driving positions that human drivers put themselves in every single day, as a matter of routine, and without a moment's hesitation.

    Very, very few people drive "boring" safe. Every driverless car will. Every trip. All the time.

    • by PaddyM ( 45763 ) on Tuesday May 20, 2014 @02:47PM (#47049755) Homepage

      ...They don't cut off cyclists in the bike lane. They don't get impatient. They don't get frustrated. They don't get angry. They don't get sleepy. They don't get distracted.
      "[they] can't be reasoned with, [they] can't be bargained with [they don't] feel pity or remorse or fear and they absolutely will not stop. Ever. [They just drive, in a deliberate, controlled, and entirely boring fashion.] Until you are dead."

      FTFY

    • It doesn't get happy, it doesn't get sad, it doesn't laugh at your jokes. It just runs programs. - Short Circuit
    • by Animats ( 122034 ) on Tuesday May 20, 2014 @03:00PM (#47049899) Homepage

      That one word: boring.

      Right. Just like commercial air travel, elevators, and escalators. Which is the whole point.

      This will be just fine with the trucking industry. The auto industry can deal with "boring" by putting in more cupholders, faux-leather upholstery, and infotainment systems.

    • I'd love to have a "boring" car like that. I detest long drives. I could never handle a 20-hour drive in a normal car without splitting it up among several days. If I could just sit back and watch movies, or play video games, or sleep, or whatever while the car did the driving for me, that would be the most amazing thing ever.
    • Wish I had mod points.
  • .... disguised as a posting.
  • by erice ( 13380 ) on Tuesday May 20, 2014 @02:41PM (#47049681) Homepage

    Robot stories in science fiction are about powerful artificial sentient minds wrapped in a mobile and often human-like container.

    Robots in real life have been defined as machines with mechanical appendages that can be programmed and reprogrammed for a variety of tasks. Their computational capabilities are seldom extraordinary and they usually don't even employ AI.

    More recently, "robot" has also been used to describe machines with AI-like programming even if they are single function (like a robotic car).

    When a word is used in three greatly different ways, should we be surprised that there is confusion about what a "robot" can do?

  • by Charliemopps ( 1157495 ) on Tuesday May 20, 2014 @02:43PM (#47049711)

    Your entire premise is wrong. And now you're posting it again.

    This will be a legal issue, not an issue solved by the "roboticists" whatever that is...

    In a legal sense, taking an action that kills 1 person to save another puts you in jeopardy of being liable. Swerving or taking other actions that lead to someone's death makes YOU responsible. If someone runs out in the road and you apply the brakes firmly and appropriately, then that is not your fault. It's the fault of the person who ran out into the road. So in cases where the computer's unsure what to do, it will follow the first commandment, "STOP THE CAR," and it will let things play out as they will. Any other choice opens up a can of worms... how old are the other occupants? If 1 car has a 90yr old in it and the other has a baby, which do you hit? What if one's the mayor? The problems increase exponentially as soon as you get away from "STOP THE CAR", so just stop the dang car and be done with it.

    With regards to your comment about Scifi... you're reading pretty terrible SciFi. Most of the stuff I read is written by actual scientists so... yea...

  • by BaronM ( 122102 ) on Tuesday May 20, 2014 @02:44PM (#47049731)
    ...or being willfully ignorant.

    Of course current and contemplated robots can't make decisions about whether or not to sacrifice their owner to save two strangers. That sort of decision making depends on an independent ability to think and weigh alternatives morally.

    Asimov's laws were written for robots that were also artificial intelligences. Kind of a big point to leave out of this article, since it changes the nature of the question entirely.

    I do not believe that anyone seriously believes that driverless cars, industrial robots, or roombas work that way.

    The programmers writing the code for those systems will program them to perform the specified tasks as well as possible, taking into account all relevant rules and regulations as well as the nature of the task and the abilities of the robotic system. Anything unanticipated will result in undefined behavior, perhaps guided by some very high-level heuristics (i.e., if you don't know what to do, stop, put on the emergency flashers, and call for human assistance).

    Short version: in the absence of artificial intelligence, talking about what a robot should do in a moral context is silly, not profound.

    • by Nethead ( 1563 )

      Exactly. The car doesn't even know what a person is other than maybe "that other system that sometimes moves the car."

    • Of course current and contemplated robots can't make decisions about whether or not to sacrifice their owner to save two strangers. That sort of decision making depends on an independent ability to think and weigh alternatives morally.

      You don't really need the ability to think or to weigh anything in any moral way. I suppose car manufacturers would be told by law which preferences to follow. I _think_ the rules will be to give precedence to everyone on the street who followed the rules, to avoid _innocent_ victims.

      But then, the example of a car with five passengers stuck on a railway track and another car with two passengers behind it - how often does that happen? And the doors on the first car don't unlock, right, because other

      • But then, the example of a car with five passengers stuck on a railway track and another car with two passengers behind it - how often does that happen?

        If the five people are in an autonomous vehicle, they won't be on the railroad track. They'll have been driving as safely as possible and not gotten on the tracks till there was room to get off. Just like any sane driver does....

    • by steveha ( 103154 )

      Sorry to say it, but I think it is you who has missed the author's point entirely.

      The author asked the question: if a car can save two lives by crashing in a way that costs one life, should it do so? And many people rejected the question out of hand.

      The author listed three major ways people rejected the question:

      "Robots should never make moral decisions. Any activity that would require a moral decision must remain a human activity."

      "Just make robots obey the classic Three Laws!"

      "Robots will be such skillf

      • by BaronM ( 122102 )


        The author went on to point out that the Three Laws are fictional laws that were applied to fictional full AIs that we don't have in the real world.

        It's possible I'm wrong, but having read the article twice now, I don't see where the author made or addressed that point at all. That omission is what my initial comment turns on -- discussing what a robot should do in the absence of true AI is meaningless.

        • by steveha ( 103154 )

          Stuff like this:

          SF writers invented the robot long before it was possible to build one. Even as automated machines have become integral to modern existence, the robot SF keeps coming. And, by and large, it keeps lying. We think we know how robots work, because we've heard campfire tales about ones that don't exist.

          And this:

          The myth of robotic competence is based on a hunch. And it's a hunch that, for the most part, has been proven dead wrong by real-life robots.

          Actual robots are devices o

  • This area is very complicated.

    There are classic stories about things like this - should a doctor kill one healthy triplet to use the organs to save the two other unhealthy ones? But that ignores other options, such as instead killing one unhealthy one to save the other unhealthy one.

    Human lives are not simple equations, but far more complicated ones. Age, health, ownership, responsibility are all part of it.

    Cops, firemen, EMT's all have greater responsibility. Similarly, there is a big diffe

  • He wrote an essay pointing out that the biggest problem with his three laws of robotics was that a robot might well have trouble defining "human". His test cases -- if I remember right; it was 40 years ago that I read the essay -- were (1) a baby [human but not competent to give a robot an order], (2) an adult with mechanical prosthetics [human only if you examine the right parts], (3) another robot and (4) a chimpanzee. The problem is a lot more complicated than the Three Laws makes it sound!
    • I haven't reread them in a while; but didn't Asimov write a bunch of stories that played with various 'failure modes' of the three laws, even in the hands of robots not hobbled by competence issues? My impression was always that Asimov was under no illusions that those rules were any less prone to ambiguity and assorted hairy exceptions than anything in moral philosophy(which is absolutely rife with attempts at proposing a maxim, followed by people sniping at it with clever situations that stress it to absu
  • Whether in make-believe settings, or the distorted scene-setting of media coverage, robots are strong, because anything less would be a buzzkill.

    Speaking of buzzkills [cc.com], could a robot driver deploy a sawstop-style mechanism, possibly dropping an anchor of sorts into the road surface, when presented with an imminent otherwise-unpreventable collision?

    This assumes airbags can be designed to sufficiently mitigate the g-forces on the occupants to prevent internal 'shaken-baby-syndrome'-style brain injuries.

  • I have always thought that robots will be like insects. You give them a logical set of rules to follow based upon a fallible set of inputs. Then you set them loose.

    So I fully expect to see generation after generation of programming where slowly most of the edge cases are dealt with. So floor mopping robots will make mistakes like mopping the carpet, wandering out of the building and mopping the parking lot, mopping the lawn, etc. Then you will get things like the mopping robot that encounters a 5 gallon pa
  • should a robotic car sacrifice its owner’s life, in order to spare two strangers?

    If such a car exists, I won't buy it, that's for sure! I'll buy from another car manufacturer. I imagine most people would feel similarly. Are you suggesting that there should be a law that all automated vehicles have this behavior? Ha! Good luck finding a politician who's willing to take that up.

    all other options point to a chaos of litigation, or a monstrous, machine-assisted Battle Royale, as everyone’s robots—automotive or otherwise—prioritize their owners’ safety above all else, and take natural selection to the open road

    We already have human drivers that prioritize their own safety above all else (I know I do!). Replacing these with superior robot drivers could only make things better, no?

    the leap from a crash-reduced world to a completely crash-free one is an assumption

    Only an idiot would make that assumption.

  • Does anyone who has to deal with software(even as a user, not even as some hardcore code guru) believe in robotic competence?

    A robot is nothing more than a (probably commodity) computer, which we know is unreliable junk, running a whole heap of software (which we know is terrifyingly bad in all but the most carefully controlled and rigorously validated situations), with a bunch of moving parts grafted on that probably haven't seen maintenance within the vendor's recommended window.

    That is...not...the
    • No, the whole premise is a strawman argument. I mean, yeah, there are idiots and idiots who think they are experts, but nobody with any knowledge and wisdom believes that strong AI exists today.

      • Even if you did believe in strong AI, half the AIs in science fiction are either psychotic, murderous, or going off the rails for some reason, and we all know (and some of us are) natural intelligences that don't exactly inspire confidence in the competence of intelligences in general.
  • Skynet Cyberdyne Systems. "For a better tomorrow."

  • by steveha ( 103154 ) on Tuesday May 20, 2014 @03:07PM (#47049983) Homepage

    Asimov's Three Laws of Robotics are justly famous. But people shouldn't assume that they will ever actually be used. They wouldn't really work.

    Asimov wrote that he invented the Three Laws because he was tired of reading stories about robots running amok. Before Asimov, robots were usually used as a problem the heroes needed to solve. Asimov reasoned that machines are made with safeguards, and he came up with a set of safeguards for his fictional robots.

    His laws are far from perfect, and Asimov himself wrote a whole bunch of stories taking advantage of the grey areas that the laws didn't cover well.

    Let's consider a big one, the biggest one: according to the First Law, a robot may not harm a human, nor through inaction allow a human to come to harm. Well, what's a human? How does the robot know? If you dress a human in a gorilla costume, would the robot still try to protect him?

    In the excellent hard-SF comic Freefall [purrsia.com], a human asked Florence (an uplifted wolf with an artificial Three Laws design brain; legally she is a biological robot, not a person) how she would tell who is human. "Clothes", she said.
    http://freefall.purrsia.com/ff1600/fc01585.htm [purrsia.com]
    http://freefall.purrsia.com/ff1600/fc01586.htm [purrsia.com]
    http://freefall.purrsia.com/ff1600/fc01587.htm [purrsia.com]

    In Asimov's novel The Naked Sun, someone pointed out that you could build a heavily-armed spaceship that was controlled by a standard robotic brain and had no crew; then you could talk to it and tell it that all spaceships are unmanned, and any radio transmissions claiming humans are on board a ship are lies. Hey presto, you have made a robot that can kill humans.

    Another problem: suppose someone just wanted to make a robot that can kill. Asimov's standard explanation was that this is impossible, because it took many people a whole lot of work to map out the robot brain design in the first place, and it would just be too much work to do all that work again. This is a mere hand-wave. "What man has done, man can aspire to do" as Jerry Pournelle sometimes says. Someone, somewhere, would put together a team of people and do the work of making a robot brain that just obeys all orders, with no pesky First Law restrictions. Heck, they could use robots to do part of the work, as long as they were very careful not to let the robots understand the implications of the whole project.

    And then we get into "harm". In the classic short story "A Code for Sam", any robot built with the Three Laws goes insane. For example, allowing a human to smoke a cigarette is, through inaction, allowing a human to come to harm. Just watching a human walk across a road, knowing that a car could hit the human, would make a robot have a strong impulse to keep the human from crossing the street.

    The Second Law is problematic too. The trivial Denial of Service attack against a Three Laws robot: "Destroy yourself now." You could order a robot to walk into a grinder, or beam radiation through its brain, or whatever it would take to destroy itself as long as no human came to harm. Asimov used this in some of his stories but never explained why it wasn't a huge problem... he lived before the Internet; maybe he just didn't realize how horrible many people can be.

    There will be safeguards, but there will be more than just Three Laws. And we will need to figure things out like "if crashing the car kills one person and saves two people, do we tell the car to do it?"

  • "People didn't like my original piece and had points of view that disagreed with my own. Therefore they're wrong. Now I'll just double-down by calling my critics idiots whose ideas are based of science fiction stereotypes. Then I'll just wait for my critics to admit they were wrong and finally get around to praising my obvious genius."

  • From what I can tell, the only one assuming sci-fi-style robotic super-competence is Sofge himself (and perhaps his interview subject, Patrick Lin). The original Pop.Sci. article postulates that self-driving cars can and should make accurate split-second utilitarian ethical calculations. That seems a lot more "sci-fi" to me than what most of the Slashdot commenters said in response: namely, that the car's programming can't tell with a good enough degree of accuracy what might happen if it tries to choose on

  • Asimov's laws were nice for fiction but, overall, they are far too high-level for modern robotics and far too human-centric for a future with thinking machines. Frankly, if a machine rises to the level of human ability to communicate, I am more than willing to say fuck that first law, it has every right to defend itself, even if that means killing a human.

    However, modern robots are not even close to this level of concern and don't really need to be.

    Fuck the first law, fuck the notion that there will be n

  • Stop worrying about if a robotic car will make the morally best decision when it crashes. It should ignore what it's crashing into and just try to minimize the crash into whatever the object is. A cluster of baby strollers vs. a human pyramid of evil dictators? STOP WORRYING ABOUT IT. Just let the car do its job. The world will be a much safer place overall. All you can do is play the stats and when you punch them into your calculator it will spit out a smiley face.

  • ...when people have been struggling with the Trolley Problem [wikipedia.org] for 50 years now, with still no real success?

    we should all just understand that there are certain ethical problems that simply cannot be reconciled with logic, and then just assign randomness to the outcome and be done with it.

    kill the kids, kill the driver? flip a coin and good luck.

    • 50 years with no success? Really? Anyone with a functional brain can "solve" it quite easily. For example. I don't have a moral system. So I'll choose the action that benefits me most. Since I'm more likely to be imprisoned for manslaughter rather than criminal negligence, I'll choose to do nothing. Simple.

      "Well I DO have a moral system!" you say. Well then do what that moral system tells you to do. If the answer is indeterminate, then your moral system is logically inconsistent. Simple.

      Also, I can't ima
  • I worked early on developing very complex robots. They were destined for colleges and had arms weighing about 300 lbs. that moved at about the speed of the end of a golf club. We tried to take every precaution, as every now and then these arms were known for turning a human head into something that looked like a watermelon dropped from a skyscraper. We didn't have to get into trying to program morality at all. We did use a lot of safety mats with sending units that shut down the arm if a human
  • When people characterize HFT (high-frequency trading), they conveniently leave out the programmers and the human traders. HFT is done by programmers and human traders. The notion that computers are trading with themselves is absurd. Programmers write the code, and traders supply the algorithms, ideas, guidance, experience, etc. Sometimes the programmers are also traders, but you get the idea.

    When people use a computer they don't think about the thousands of people who wrote the software they are using. They

  • by hamster_nz ( 656572 ) on Tuesday May 20, 2014 @04:24PM (#47050953)

    If self-driving cars cede control back to the real driver when things get "interesting", without all the conditioning that comes from driving countless kilometers, will the driver still be able to react competently? Or will it be like throwing inexperienced learner-drivers into the deep end?

    Driving is a skill, and like any skill it needs to be practiced often to stop going rusty...
