How Do We Program Moral Machines? (604 comments)

nicholast writes "If your driverless car is about to crash into a bus, should it veer off a bridge? NYU Prof. Gary Marcus has a good essay about the need to program ethics and morality into our future machines. Quoting: 'Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work. That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems.'"
This discussion has been archived. No new comments can be posted.

  • Screw the bus (Score:4, Interesting)

    by dywolf ( 2673597 ) on Tuesday November 27, 2012 @03:55PM (#42108379)

    Screw the bus.
    I don't care about the bus.
    The bus is big and likely will barely feel the impact anyway.
    I care about the fact I don't want to die.
    Why would I buy and use a machine that would choose to let me die?

    And I posit that the author has failed to consider freedom of travel, freedom of choice, and other basic individual rights/freedoms that mandating driverless cars would run over (pun intended).

  • by SirGarlon ( 845873 ) on Tuesday November 27, 2012 @03:59PM (#42108431)
    I think your statistics on accidents are informative but you're missing an important point. With automated cars, we expect accident rates to go down significantly (so saith the summary). So the likelihood an _automated_ car will be _at fault_ in an accident is probably a lot lower than the 25% you presume. (The manufacturer does not care about accidents where the machine is not at fault, beyond complying with crash-safety requirements.)
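
    A back-of-envelope sketch of the parent's point, in Python; the 50/50 fault baseline and the 90% reduction are illustrative assumptions, not figures from the thread:

        # Assume a human driver is at fault in roughly half of the crashes they are
        # involved in, and that automation eliminates 90% of the crashes the car
        # itself would have caused while leaving other drivers' mistakes untouched.
        at_fault_rate = 1.0       # normalized rate of crashes caused by this car
        not_at_fault_rate = 1.0   # normalized rate of crashes caused by the other party
        automation_factor = 0.1   # assumed: automation keeps only 10% of self-caused crashes

        auto_at_fault = at_fault_rate * automation_factor
        share_at_fault = auto_at_fault / (auto_at_fault + not_at_fault_rate)
        print(f"at-fault share for the automated car: {share_at_fault:.0%}")  # ~9%, well under 25%
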
  • by Rich0 ( 548339 ) on Tuesday November 27, 2012 @04:04PM (#42108507) Homepage

    Well, the solution to liability is legal - grant immunity as long as the car performs above some safety standard on the whole, and that standard can be raised as the industry progresses. There is no reason that somebody should be punished for making a car 10X safer than any car on the road today.

    As far as programming morality goes - I think that will be the easy part. The real issue is defining it in the first place. Once you define it, getting a computer to behave morally is likely to be far EASIER than getting a human to do so, since a computer need not have any self-interest in the decision making. You'd be hard-pressed to find people who would swerve off a bridge to avoid a crowd of pedestrians, but a computer would make that decision without breaking a sweat if that were how it was designed (a toy sketch of that kind of decision rule follows this comment). Computers commit suicide every day - how many smart bombs does the US drop in a year?

    But I agree, the current legal structure will be a real impediment. It will take leadership from the legislature to fix that.
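
    A toy sketch, in Python, of the decision rule described above: once "morality" is pinned down as a harm function, the machine simply picks the action with the lowest expected harm, with no extra weight for its own occupant. The actions, probabilities, and harm scores are invented for illustration:

        def expected_harm(action):
            # action["outcomes"]: list of (probability, harm_to_occupants, harm_to_others)
            return sum(p * (h_occ + h_oth) for p, h_occ, h_oth in action["outcomes"])

        def choose_action(actions):
            # No self-interest term: the occupant's harm counts exactly like anyone else's.
            return min(actions, key=expected_harm)

        actions = [
            {"name": "stay the course", "outcomes": [(1.0, 0, 10)]},
            {"name": "swerve off the bridge", "outcomes": [(0.5, 9, 0), (0.5, 2, 0)]},
        ]
        print(choose_action(actions)["name"])  # "swerve off the bridge" (expected harm 5.5 vs 10)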

  • by crazyjj ( 2598719 ) * on Tuesday November 27, 2012 @04:06PM (#42108537)

    So the likelihood an _automated_ car will be _at fault_ in an accident is probably a lot lower than the 25% you presume.

    Great. Now all you have to do is prove your system wasn't at fault in a court of law--against the sweet old lady who's suing, with the driver testifying that it was your system and not him that caused the accident, and a jury that hates big corporations. And you have to do it over and over again, in a constant barrage of lawsuits--almost one for every accident one of your cars ever gets in.

    Even if you won every single time, can you imagine the legal costs?

  • Re:Obvious Answer (Score:5, Interesting)

    by dywolf ( 2673597 ) on Tuesday November 27, 2012 @04:12PM (#42108617)

    You never actually read Asimov.
    And if you did, you're the one that failed to grasp the points.
    The points he even clearly spells out in several of his own essays.

    Asimov wasn't writing about the ambiguity or incompleteness of the laws...he wrote the damn laws. And he did consider them a blueprint. He said so. And when MIT (and other) students began using his rules as a programming basis he was proud!!

    It wasn't a warning.

    Asimov was writing about robots as an engineering problem to be solved, period.
    The laws are basic simple concepts that solve 99% of the problems in engineering a robot.
    He then wrote science fiction stories dealing with the laws in the manner of good science fiction, that is, to make you think: about the science itself, the consequences of science, the difference between human thinking and logical thinking, the difference between humans and robots... i.e., to think, period.

    Example: in telling a robot to protect a human, how far should the robot go in protecting that human? Should it protect that human from self-inflicted harm like smoking, at the expense of the person's freedom? In this case Asimov, again, wasn't writing about the dangers of the laws, or to warn people against them. He's writing about the classic question of "protection/security vs. freedom", this time approached from the angle of the moral dilemma placed on a "thinking machine" as it tries to carry out its directives.

    In fact, Asimov frequently uses and explains things through the literary mechanic of his "electropsychological potential" (or whatever word he used). In a nutshell it's a numeric comparison: Directive 1 causes X amount of voltage potential, Directive 2 causes Y amount, and Directive 3 causes Z amount, and whichever of these is largest determines the behaviour of the robot (see the sketch after this comment). In one story a malfunctioning robot was obeying Rule 3 (self-preservation) to the detriment of the other two, because the voltage of Rule 3 was abnormally large and overpowered the others.

    Again, he wrote about robots not as monsters or warnings. He specifically stated many times that his writings were in fact about the exact opposite: that they aren't monsters, but engineering problems created by man and solved by man. Since man created them, man is responsible for them and their flaws. Robots are an engineering problem, and the rules are a simple, elegant solution to control their behaviour (his words).
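
    A minimal sketch, in Python, of the "potential" comparison the comment above describes: each law contributes a number and the largest one wins. The names and values are invented, and this follows the commenter's framing of Asimov's device rather than anything from the books:

        def choose_behavior(potentials):
            # potentials maps each directive to its current "voltage"; the strongest wins.
            return max(potentials, key=potentials.get)

        normal = {"protect_humans": 0.9, "obey_orders": 0.6, "self_preserve": 0.3}
        print(choose_behavior(normal))          # protect_humans

        # The malfunction the commenter mentions: Rule 3's potential is abnormally
        # large and overpowers the other two.
        malfunctioning = dict(normal, self_preserve=1.5)
        print(choose_behavior(malfunctioning))  # self_preserve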

  • by plover ( 150551 ) on Tuesday November 27, 2012 @04:21PM (#42108711) Homepage Journal

    Even if you won every single time, can you imagine the legal costs?

    No, but I can imagine a change to the legal system limiting the liability of the manufacturers of self-driving cars.

    If we could know that self-driving cars reduce accidents by 95% (a not unrealistic amount), it would be morally wrong for us to not put them on the road. If the only hurdle the manufacturers had left was the liability issue, then it would be morally wrong for Congress to not change the laws.

    Of course, Congress has been morally bankrupt since, oh, about 1789, so I doubt that they'll see this as an imperative. On the other hand, I do imagine the car makers paying lobbyists and making campaign contributions to ensure that self-driving car manufacturers are exempted from these lawsuits, so it could still happen.

  • by Anonymous Coward on Tuesday November 27, 2012 @04:23PM (#42108739)

    Fortunately, my automated car uses vision and radar to detect obstacles. It records everything it sees for 5 minutes before the crash, including the little old lady trying to put sugar in her coffee while making a left turn. Case closed.
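
    A sketch, in Python, of the kind of rolling "last few minutes" recorder that comment imagines; the frame rate, window length, and file name are assumptions for illustration:

        from collections import deque

        FPS = 30
        WINDOW_SECONDS = 5 * 60                      # "5 minutes before the crash"
        buffer = deque(maxlen=FPS * WINDOW_SECONDS)  # oldest frames fall off automatically

        def on_sensor_frame(frame: bytes) -> None:
            buffer.append(frame)

        def on_crash_detected() -> None:
            # Dump exactly the buffered window leading up to the event as evidence.
            with open("crash_window.bin", "wb") as f:
                for frame in buffer:
                    f.write(frame)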

  • by kelemvor4 ( 1980226 ) on Tuesday November 27, 2012 @04:25PM (#42108771)
    Meh. Companies already face this. If any one of the thousands of parts in your car fails and causes an accident, the manufacturer can (and usually does) get sued. Ask Toyota or Firestone how that plays out. All we're talking about here is another new part. If the internet had been around when power steering or the automatic transmission was invented, I bet there would have been a similar discussion about those. I think the potential liability is a good thing, because otherwise manufacturers don't have much incentive to make safe products.
  • by clintp ( 5169 ) on Tuesday November 27, 2012 @05:07PM (#42109279)

    To put it bluntly, raise your hand if YOU want to be the first car manufacturer to make a car for which you are potentially liable in *every single accident that car ever gets into*, from the day it's sold until the day it's scrapped. Any takers?

    ... no one. But you'll get plenty who charge for mandatory tune-ups to ensure compliance. The question will be: "Which company DOESN'T charge a fee for a mandatory yearly check-up?"

    Asimov's early robot stories frequently dealt with corporate liability, and it was often the source of the plot conflict. If a proofreading robot made a mistake causing a slander ("Galley Slave") or an industrial accident resulted in injury, U.S. Robots was put in the position of having to prove that it was not the fault of the robot (which it never was).

    This is why Asimov's U.S. Robots didn't sell you a robot; they leased it to you. The lease was iron-clad, could be revoked by either party at any time, had liability clauses, and mandated maintenance and upgrades to be performed by U.S. Robots technicians. If you refused the maintenance, U.S. Robots would repossess, and would sue and claim theft if you withheld the robot ("Bicentennial Man", though unsuccessfully; "Satisfaction Guaranteed").

    A properly functioning robot would not disobey the three laws, and an improperly functioning robot was repaired or destroyed immediately ("Little Lost Robot"). Conflicts between types of harm were resolved using probability based on the best information available at the moment ("Runaround"), and usually resulted in the collapse of the positronic brain when it was safe to do so ("Robots and Empire", etc.).

  • by bacon.frankfurter ( 2584789 ) <bacon.frankfurter@yahoo.com> on Tuesday November 27, 2012 @05:17PM (#42109377)
    Why have cars at all if we aren't allowed to drive them? Rip up all the highways, and replace them with a gigantic autonomous rail system.

    But no...

    That's not what's at stake here. The truth is that if I'm not in control of my whereabouts anymore, then how can I be sure I'm making decisions for myself? Without a car, you might find yourself imprisoned by the distance your two feet can take you. Someone out there will applaud this on the premise that "those who obey the law have nothing to hide, and my gosh, if a driverless car prevents a CRIMINAL from driving to a crime, then the system pays for itself!", but that's not the point. It's not about morality, it's about control, and if someone is stopping me from driving my own car, then who's stopping them from driving theirs? When we fork over control of our transportation, then will come the day that we're isolated into districts, where the equivalent of passports will be needed to go from county to county. If the car won't let me drive it, how can I be sure that the car will obey me at all?

    If all the cars in the world are autonomous, and computer controlled, well gee... what's to stop "someone" (anyone) from turning them all into a gigantic autonomous system that (I'm about to Godwin this...) conveys everyone to a huge concentration camp set to autonomous genocide?

    It's not morality that the author is arguing in favor of.

    It's our own autonomy that he's arguing against.

    Someone will have control of these cars. Somewhere there will be levers.

    Let's not imagine these automatic apparatuses to be forces of nature beyond an individual human's control. These are contrived, artificial, unnatural, man-made objects, at their core mechanical.
  • by CanHasDIY ( 1672858 ) on Tuesday November 27, 2012 @05:23PM (#42109459) Homepage Journal
    Morality is subjective.

    To "program morality" would be to engender a machine with the specific moral subset imbued upon it by its programmer.

    Thus, "machine morality" is actually "programmer morality."

    We each determine our own morals, which will occasionally conflict with one another.

    Forcing the public at large to follow a single person's idea of morality is, at the most basic level, an immoral act in itself.

    Thus, "moral machines" aren't really moral at all.
