AI Robotics Software Transportation Hardware

How Do We Program Moral Machines? (604 comments)

nicholast writes "If your driverless car is about to crash into a bus, should it veer off a bridge? NYU Prof. Gary Marcus has a good essay about the need to program ethics and morality into our future machines. Quoting: 'Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work. That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems.'"
This discussion has been archived. No new comments can be posted.

  • by crazyjj ( 2598719 ) * on Tuesday November 27, 2012 @03:38PM (#42108161)

    I maintain that you CAN'T really program morality into a machine (it's hard enough to program it into a human). And I also doubt that engineers will ever really be able to overcome the numerous technical issues involved with driverless cars. But above these two problems, far and away above *all* problems with driverless cars is the real reason I think we'll never see anything more than driver *assisting* cars on the road: legal liability.

    To put it bluntly, raise your hand if YOU want to be the first car manufacturer to make a car for which you are potentially liable in *every single accident that car ever gets into*, from the day it's sold until the day it's scrapped. Any takers? How much would you have to add to the sticker price to cover the costs of going to court every single time that particular car was involved in an accident? Of defending the efficacy of your driverless system against other manufacturers' systems (and against defect, and against the word of the driver himself that he was using the system properly) in one liability case after another?

    According to Forbes [forbes.com], the average driver is involved in an accident every 18 years. Let's suppose (and I'm sure the statisticians would object to this supposition) that this means the average CAR is also involved in a wreck every 18 years. Since the average age of a car is about 11 years [usatoday.com] now, it's not unreasonable to assume that a little less than half of all cars on the road will be involved in at least one accident in their functional lifetimes. And even with the added safety of driverless systems, the first models available will still have to contend with a road mostly filled with regular, non-driverless-system cars. So let's say that a good 25% of those first models will probably end up in an accident at some point, which will make a very tempting target for lawyers going for the deep pockets of their manufacturers. (A rough version of this arithmetic is sketched after this comment.)

    Again, what car company wouldn't take that into account when asking themselves if they want to be a pioneer in this field?
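
A rough back-of-the-envelope version of the arithmetic above, assuming accidents arrive as a Poisson process at one per 18 driver-years and using the quoted 11-year average fleet age as a crude exposure window; the 50% discount for early driverless models is the commenter's guess, not data.

```python
import math

# Assumptions taken from the comment above (not verified data):
#   - an "average" driver has one accident roughly every 18 years
#   - the average car on the road is about 11 years old
ACCIDENT_RATE_PER_YEAR = 1 / 18   # Poisson rate per car-year
EXPOSURE_YEARS = 11               # crude stand-in for a car's time on the road

def prob_at_least_one_accident(rate: float, years: float) -> float:
    """P(at least one accident) under a Poisson model: 1 - exp(-rate * years)."""
    return 1.0 - math.exp(-rate * years)

p_any_car = prob_at_least_one_accident(ACCIDENT_RATE_PER_YEAR, EXPOSURE_YEARS)
print(f"P(a car sees >= 1 accident): {p_any_car:.0%}")  # ~46%, "a little less than half"

# The commenter then roughly halves this for early driverless models that
# share the road with human drivers -- a guess, not a measurement.
p_first_models = p_any_car * 0.5
print(f"Guessed figure for the first driverless models: {p_first_models:.0%}")  # ~23%
```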

  • by LionKimbro ( 200000 ) on Tuesday November 27, 2012 @03:44PM (#42108227) Homepage

    The proper sequence should be:

    Humans reason (with their morals) --> Humans write laws/code --> The laws/code go into the machines --> The machines execute the instructions.

    Laws are not a substitute for morals; they are the output from our moral reasoning.

  • by viperidaenz ( 2515578 ) on Tuesday November 27, 2012 @03:51PM (#42108321)
    So if my auto-driver car had to make a choice between my safety and that of someone else, it better choose me.
  • What they're talking about here, though, isn't really programming morality into machines in some kind of sentient, Isaac-Asimov sense, but just programming decision policies into machines, which have ethical implications. The ethical questions come at the programming stage, when deciding what policies the automatic car should follow in various situations.
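
To make that point concrete: a minimal sketch (hypothetical names, numbers, and weights throughout) of a decision policy whose "ethics" amount to nothing more exotic than the constants a programmer chooses for the cost function at design time.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A candidate maneuver and its estimated consequences (hypothetical numbers)."""
    name: str
    p_occupant_injury: float    # estimated probability of injuring the occupant
    p_bystander_injury: float   # estimated probability of injuring someone else

# The "ethics" live in these two constants: how the policy trades occupant
# risk against bystander risk. They are chosen by people, at programming time.
OCCUPANT_WEIGHT = 1.0
BYSTANDER_WEIGHT = 1.0

def cost(o: Outcome) -> float:
    return OCCUPANT_WEIGHT * o.p_occupant_injury + BYSTANDER_WEIGHT * o.p_bystander_injury

def choose(candidates: list[Outcome]) -> Outcome:
    """Pick the maneuver with the lowest weighted expected harm."""
    return min(candidates, key=cost)

options = [
    Outcome("brake hard in lane", p_occupant_injury=0.30, p_bystander_injury=0.05),
    Outcome("swerve onto shoulder", p_occupant_injury=0.20, p_bystander_injury=0.25),
]
print(choose(options).name)   # the answer changes if the weights change -- that is the point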

  • by ShooterNeo ( 555040 ) on Tuesday November 27, 2012 @03:53PM (#42108367)

    No competent engineer would even consider adding code that allows the automated car to swerve off the bridge. In fact, the internal database of terrain features the automated car would need (it's hard to "see" a huge drop-off like a bridge edge with the sensors aboard the car) would have the sides of the bridge explicitly marked as a deadly obstacle.

    The car's internal mapping system of drivable parts of the surrounding environment would thus not allow it to even consider swerving in that direction. Instead, the car would crash if there were no other alternatives. Low level systems would prepare the vehicle as best as possible for the crash to maximize the chances the occupants survive.

    Or, put another way: you design and engineer the systems in the car to make decisions that lead to a good outcome on average. You can't possibly prepare it for edge cases like dodging a bus with 40 people. Possibly the car might be able to estimate the likely size of another vehicle (by measuring the surface area of the front) and weight decisions that way (better to crash into another small car than an 18-wheeler), but not everything can be avoided. (A toy sketch of this kind of cost map follows this comment.)

    Automated cars won't be perfect. Sometimes the perfect combination of bad decisions, bad weather, or just bad luck will cause fatal crashes. They will be a worthwhile investment if the chance of a fatal accident is SIGNIFICANTLY lower, such that virtually any human driver, no matter how skilled, would be better served riding in an automated vehicle. Maybe a 10x lower fatal accident rate would be an acceptable benchmark?

    If I were on the design team, I'd make four-point restraints mandatory for the occupants, and design the vehicle for survivability in high-speed crashes, including from the side.
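
One way to read the comment above is that the planner never "decides" whether to go off the bridge, because the map it plans over assigns infinite cost to that space and finite, size-weighted costs to other collision targets. A toy sketch under those assumptions (the grid and costs are invented for illustration, not taken from any real system):

```python
import math

# A tiny one-dimensional cost map over lateral positions (left edge .. right edge).
# In a real system this would come from HD maps plus sensors; these values are
# invented for illustration only.
FREE, BRIDGE_EDGE, SMALL_CAR, LARGE_TRUCK = 0.0, math.inf, 30.0, 80.0
lateral_costs = [BRIDGE_EDGE, FREE, SMALL_CAR, LARGE_TRUCK, BRIDGE_EDGE]

def best_lateral_position(costs):
    """Pick the lowest-cost position; infinite cost means 'never even consider it'."""
    best = min(range(len(costs)), key=lambda i: costs[i])
    return best if math.isfinite(costs[best]) else None   # None => brace for impact

choice = best_lateral_position(lateral_costs)
if choice is None:
    print("No survivable lateral option: pre-tension belts, prepare for the crash")
else:
    print(f"Steer toward position {choice} (cost {lateral_costs[choice]})")
```

The "ethical" behavior falls out of which cells get marked infinite and how the finite costs are scaled, decisions made long before any particular emergency.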

  • by CastrTroy ( 595695 ) on Tuesday November 27, 2012 @03:57PM (#42108395)
    This is exactly my reasoning for why flying cars will never take off (pardon the pun). People keep their cars in terrible condition. If your car has an engine failure, worst-case scenario, you pull over to the side of the road or end up blocking traffic. In a flying vehicle, if your engine dies, it's very possible that you will die too. And if you are above a city, it's not hard to imagine crashing into an innocent bystander.

    I imagine the same will be true for self-driving cars. It will never happen, because if the car is getting bad information from its sensors, then crazy things can happen. People can't be bothered to clean more than 2 square inches of their windshield in the winter. Do you really think they are going to go around cleaning the 10 different sensors of ice and snow every winter morning? Sure, the car could refuse to operate if the sensors are blocked, but then I guess people would just not want to buy the car, or complain to the dealer about it.
  • by SirGarlon ( 845873 ) on Tuesday November 27, 2012 @04:03PM (#42108481)
    You, sir, are a good example of why driverless cars will make me safer.
  • by WarJolt ( 990309 ) on Tuesday November 27, 2012 @04:08PM (#42108561)

    The funny thing is that most of the time you are in an airplane, the autopilot (aka George) is in control. Even when you're landing, ILS can in some cases land the plane on its own. If you've ever been in a plane, chances are you have already put your life in the hands of a computer. I seriously doubt that 25% of the first models will get into accidents. With the new sensors that will be in these cars, the computer will have a full 360-degree view of all visible objects. This is far more than a human can see. Furthermore, computers can respond in a fraction of the time a human can. (A rough stopping-distance comparison follows this comment.)

    Training millions of humans to drive should be the far scarier proposition.
    Plus, chances are you as an individual will be responsible for your car, and the system designers and manufacturers will be able to afford good lawyers.
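
A quick illustration of the reaction-time point in terms of stopping distance, using ballpark assumptions (roughly 1.5 s human perception-reaction time, tens of milliseconds for a machine control loop, hard braking at about 7 m/s²); the exact figures vary, so treat this as an order-of-magnitude sketch.

```python
# Ballpark assumptions, not measured values:
SPEED_KMH = 100
HUMAN_REACTION_S = 1.5      # commonly cited perception-reaction time
COMPUTER_REACTION_S = 0.05  # assumed sensor-to-brake latency for a control loop
DECEL_MS2 = 7.0             # hard braking on dry pavement, roughly

speed_ms = SPEED_KMH / 3.6  # ~27.8 m/s

def stopping_distance(reaction_s: float) -> float:
    """Distance covered while reacting, plus braking distance v^2 / (2a)."""
    return speed_ms * reaction_s + speed_ms**2 / (2 * DECEL_MS2)

for label, t in [("human", HUMAN_REACTION_S), ("computer", COMPUTER_REACTION_S)]:
    print(f"{label:8s}: {stopping_distance(t):5.1f} m to stop from {SPEED_KMH} km/h")
```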

  • by Above ( 100351 ) on Tuesday November 27, 2012 @04:14PM (#42108639)

    Actually, I think you're both missing the biggest issue by focusing on true accidents. I think the OP's point is legitimate, even in the face of your assertion that rates go down: companies are still taking on the risk, as they are now the "driver". But while the liability in those situations is large, there is a scenario that is much, much larger.

    What happens when there is a bug in the system? Think the liability is bad when one car has a short circuit and veers head-on into another? Imagine if there is a small defect. There are plenty of examples, like the Mariner 1 [wikipedia.org] crash, or the AT&T system-wide crash [phworld.org] in 1990. We've seen the lengths to which companies will go to track down potentially common issues, like the Jeep Cherokee sudden-acceleration or Toyota sudden-acceleration complaints, because a single defect has the potential to affect all cars. But let's imagine a future where all cars are driverless, and the accident rate is 1/100th of what it is now.

    What happens when there is a Y2K-style date bug? When some sensor fails if the temperature drops below a particular point? When a semicolon is forgotten in the code, and the radio broadcast that sends out notification of an accident causes thousands of cars to execute the same buggy re-route routine at the same time?

    There is the very real potential for thousands, or even millions, of cars to all crash _simultaneously_. Imagine everyone on the freeway simply veering left all of a sudden. That should be the manufacturer's biggest fear. Crashes one at a time can be litigated and explained away, and the business can go on. The first car company that crashes a few thousand cars all at the same time in response to some input will be out of business in a New York minute. (A sketch of the kind of defensive handling this failure mode demands follows this comment.)
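
To make the correlated-failure worry concrete: if every vehicle runs the identical handler for a fleet-wide broadcast, a single defect executes everywhere at once. Below is a hedged sketch, with an entirely hypothetical message format, of the kind of defensive handling a designer might add: validate the message, check it against local sensing, and add per-vehicle random jitter so the fleet never reacts in perfect lockstep.

```python
import random

def handle_reroute_broadcast(msg: dict, local_clear_lanes: set[int]) -> dict:
    """Defensive handling of a fleet-wide reroute message (hypothetical format).

    Three guards against the 'everyone veers at once' failure mode:
      1. reject malformed or out-of-range messages instead of acting on them,
      2. never accept a maneuver the car's own sensors say is unsafe,
      3. add per-vehicle random jitter so responses are not simultaneous.
    """
    lane = msg.get("suggested_lane")
    if not isinstance(lane, int) or lane not in range(0, 5):
        return {"action": "ignore", "reason": "malformed broadcast"}      # guard 1

    if lane not in local_clear_lanes:
        return {"action": "ignore", "reason": "local sensors disagree"}   # guard 2

    delay_s = random.uniform(0.0, 2.0)                                    # guard 3
    return {"action": "change_lane", "lane": lane, "delay_s": delay_s}

# Example: a buggy broadcast suggesting a lane the car can see is occupied.
print(handle_reroute_broadcast({"suggested_lane": 3}, local_clear_lanes={1, 2}))
```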

  • by AK Marc ( 707885 ) on Tuesday November 27, 2012 @04:26PM (#42108777)
    The drivers would complain, but current vehicles already "know" they are unsafe much of the time: lots of cars will give you an error light if a bulb is out (it's trivial to detect, since the resistance changes), and check-engine lights cover everything from sensor problems and trivial emissions issues to catastrophic engine problems. Yet, at worst, they'll enter a "limp" mode.

    If there were a government requirement that detected safety-related problems must shut down the car and immobilize it within no more than 5 minutes, then the problem would go away.

    It would have to be the government, because of the tragedy of the commons. If one car company doesn't do it, they'll sell that as a feature, and if most don't, it'll be expected that they don't, so the ones that do will be shunned.

    When all self-driving cars refuse self-driving mode if they detect any problem, you either manually drive it or don't go anywhere. And when everyone expects their car to immobilize itself if they don't care for it, they'll care for it a little more than they do now. (A minimal sketch of that policy follows below.)
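
A minimal sketch of the policy described above, with hypothetical sensor names and a made-up monitor class: any detected safety fault drops the car out of self-driving mode immediately and starts a countdown to immobilization, matching the "no more than 5 minutes" requirement.

```python
import time
from enum import Enum, auto

class Mode(Enum):
    SELF_DRIVING = auto()
    MANUAL_ONLY = auto()
    IMMOBILIZED = auto()

IMMOBILIZE_AFTER_S = 5 * 60   # the comment's "no more than 5 minutes"

class SafetyMonitor:
    """Hypothetical sketch: fault => drop to manual, then immobilize on a timer."""

    def __init__(self):
        self.mode = Mode.SELF_DRIVING
        self.fault_since = None

    def report_sensor_status(self, faults: set[str], now: float) -> Mode:
        if faults:
            if self.mode is Mode.SELF_DRIVING:
                self.mode = Mode.MANUAL_ONLY       # refuse self-driving at once
                self.fault_since = now
            elif self.fault_since is not None and now - self.fault_since >= IMMOBILIZE_AFTER_S:
                self.mode = Mode.IMMOBILIZED       # safe stop, refuse to restart
        else:
            self.mode = Mode.SELF_DRIVING          # all clear: restore autonomy
            self.fault_since = None
        return self.mode

monitor = SafetyMonitor()
t0 = time.time()
print(monitor.report_sensor_status({"lidar_obstructed"}, t0))        # MANUAL_ONLY
print(monitor.report_sensor_status({"lidar_obstructed"}, t0 + 301))  # IMMOBILIZED
```
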
  • by mcgrew ( 92797 ) * on Tuesday November 27, 2012 @04:54PM (#42109129) Homepage Journal

    "They get enough headaches from faulty accelerators. Can you imagine the legal problems they would get from programming hard ethical decisions into their computers?"

    I see that you've 1) never programmed and 2) run Windows. I agree, I would never get in a Microsoft car considering their shoddy programming, but Microsoft would never manufacture a driverless car, simply because of that.

    Almost all automotive accidents are caused by human failure. Sure, there are exceptions -- I was in a head-on crash because of a blown tire, and a blown tire on a Megabus killed someone a couple of months ago here in Illinois. But accidents from mechanical failure are rare.

    But people cause almost every accident. Have you seen how stupidly people drive these days? They race from red light to red light as if they're actually going to get there faster that way. They get impatient. They don't pay attention. They get angry and do stupid things like speed, tailgate, suddenly switch lanes without looking, fumble with their radios, talk on their cell phones, get in a hurry... computers don't do that. There will be damned few, if any, accidents that are the computer's fault.

    Hell, just this morning on the news they showed a car crashing through a store, barely missing a toddler -- the idiot driver thought the car was in reverse. Had he been driving a computer-controlled car, that would have never happened.

  • Re:Obvious Answer (Score:4, Insightful)

    by Sloppy ( 14984 ) on Tuesday November 27, 2012 @05:22PM (#42109447) Homepage Journal

    Asimov's laws were designed to create stories, not robots.

"Can you program?" "Well, I'm literate, if that's what you mean!"

Working...