
The Moral Dilemma of Driverless Cars: Save The Driver or Save The Crowd?

HughPickens.com writes: What should a driverless car with one rider do if it is faced with the choice of swerving off the road into a tree or hitting a crowd of 10 pedestrians? The answer depends on whether you are the rider in the car or someone else is, writes Peter Dizikes at MIT News. According to recent research, most people prefer autonomous vehicles to minimize casualties in situations of extreme danger -- except for the vehicles they would be riding in. "Most people want to live in a world where cars will minimize casualties," says Iyad Rahwan. "But everybody wants their own car to protect them at all costs." The result is what the researchers call a "social dilemma," in which people could end up making conditions less safe for everyone by acting in their own self-interest. "If everybody does that, then we would end up in a tragedy whereby the cars will not minimize casualties," says Rahwan. Researchers conducted six surveys, using the online Mechanical Turk public-opinion tool, between June 2015 and November 2015. The results consistently showed that people will take a utilitarian approach to the ethics of autonomous vehicles, one emphasizing the sheer number of lives that could be saved. For instance, 76 percent of respondents believe it is more moral for an autonomous vehicle, should such a circumstance arise, to sacrifice one passenger rather than 10 pedestrians. But the surveys also revealed a lack of enthusiasm for buying or using a driverless car programmed to avoid pedestrians at the expense of its own passengers. "This is a challenge that should be on the mind of carmakers and regulators alike," the researchers write. "For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest."
  • by Hylandr ( 813770 ) on Tuesday June 28, 2016 @11:32PM (#52410691)

    Here we go again. We just had this discussion last week too.

    If the new slashdot owners are using the client base as fodder for some think-tank, the least you could do is provide compensation after the first few times an article is recycled.

    • by Anonymous Coward on Tuesday June 28, 2016 @11:43PM (#52410747)

      My point is still valid though. False dichotomy. The car should (and pretty much every driverless car will) use maximum braking power to reduce speed as much as possible. In almost all cases it will do this long before it becomes too late to stop without hitting anyone. This gives pedestrians the most time to get out of the way, and if it hits them it does so at the lowest possible speed.
      Further, when swerving you run the risk of a pedestrian diving out of the way in the SAME direction that the car swerves.
      Typically such "oh no, I must choose which object to hit" scenarios occur when the car is being driven recklessly or the driver is inattentive, neither of which should apply to non-hacked self-driving cars.
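
      A minimal sketch of that brake-first logic, assuming a flat road and the textbook stopping-distance model d = v^2 / (2 * mu * g); every name and number here is illustrative, not anyone's actual implementation:

        # Brake-first sketch: trigger maximum braking while there is
        # still room to stop. Illustrative names and numbers only.
        G = 9.81  # gravitational acceleration, m/s^2

        def stopping_distance(speed_mps: float, mu: float = 0.7) -> float:
            """Distance needed to brake to a halt from speed_mps."""
            return speed_mps ** 2 / (2 * mu * G)

        def should_brake(speed_mps: float, obstacle_m: float,
                         margin_m: float = 5.0) -> bool:
            """Brake hard as soon as an obstacle enters the stopping
            envelope plus a margin, so the 'swerve or sacrifice'
            choice almost never arises."""
            return obstacle_m <= stopping_distance(speed_mps) + margin_m

        # At 50 km/h (~13.9 m/s) the car needs ~14 m to stop, so it
        # brakes as soon as a pedestrian is within ~19 m.
        print(should_brake(13.9, 25.0))  # False: still room
        print(should_brake(13.9, 18.0))  # True: brake now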

      • Let's add a bit more falseness to the presumption Rahwan makes. Asking people questions and examining real life episodes do not produce the same results. We have numerous examples of people actually choosing to hit a tree or building instead of people.

        • They probably thought they would survive hitting the tree or building. Hitting people means going to jail.

          OTOH, if I have no control of my car (it being self-driving and all), then I would prefer these outcomes (in order of preference):
          1. No damage to anyone (safe stop)
          2. Easily repaired damage to the car.
          3. Very small self-healing injuries to me.
          4. Non-permanent injuries to other people (broken leg etc).
          5. Massive damage to the car.
          6. Killing other people.
          7. Killing me.
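
          A rough sketch of that ranking as a cost table, assuming (generously) that the planner can label each candidate maneuver with a predicted outcome; all names are illustrative:

            # The ranking above as a cost table; lower cost wins.
            OUTCOME_COST = {
                "safe_stop": 0,          # 1. no damage to anyone
                "minor_car_damage": 1,   # 2. easily repaired car damage
                "minor_injury_self": 2,  # 3. small self-healing injuries
                "injury_others": 3,      # 4. non-permanent injuries
                "car_totaled": 4,        # 5. massive damage to the car
                "others_killed": 5,      # 6. killing other people
                "self_killed": 6,        # 7. killing me
            }

            def pick_maneuver(candidates: dict[str, str]) -> str:
                """Choose the maneuver whose predicted outcome costs least."""
                return min(candidates, key=lambda m: OUTCOME_COST[candidates[m]])

            # Braking hard beats swerving into a tree under this ranking.
            options = {"brake_hard": "injury_others", "swerve_tree": "self_killed"}
            print(pick_maneuver(options))  # brake_hard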

          Actually, if I was a passenger in a t

          • And that should probably be handled by the safety features of the car itself. If you don't have to be able to operate it, that could allow for a lot of fundamental improvements that make the occupant safe even in the case of accident. That makes the math a hell of a lot simpler. Plus, these would almost certainly be electric cars, which open up many other operational advantages.
          • by localman ( 111171 ) on Wednesday June 29, 2016 @01:54AM (#52411141) Homepage

            I like how everyone assumes people make carefully considered, rational decisions in a high-speed crisis.

            People probably choose to veer away from hitting people because they don't realize they might kill themselves - they just see what is in front of them and sure to happen, and don't have the time or wherewithal to consider the unknown consequences.

            People will reach out to catch a falling knife, too, but that doesn't mean that they thought about the implications.

            • by Pentium100 ( 1240090 ) on Wednesday June 29, 2016 @03:34AM (#52411401)

              I did not say that I can carefully consider all the outcomes before deciding whether to hit a tree or a man. The time is usually too short to consider anything, other than trying to stop and maybe turning the car in a direction away from any object (maybe unsuccessfully).

              However, computers do what they are told, and AI most likely does have time to consider this. Which means that this is now a problem - I do not want the AI in my car (or a taxi driver) to carefully consider the outcomes and decide to kill me instead of a pedestrian. Since it is most likely impossible for the taxi driver to carefully consider all options, I accept that the outcome is going to be random (he may be too slow to react and hit the object in front whether it's a tree or a man, he may try to avoid the man in front only to hit a tree he didn't notice, or he might try to avoid hitting the tree only to hit the man).

              Not so when such situations are considered well in advance (when programming the AI) - in that case I will not want to ride in a car that is driven by AI that will predictably choose to hit a tree instead of a man.

              For the purposes of the example, assume that the speed is high enough that hitting a tree will kill or permanently disable the people in the car, while hitting the man will kill the man, but leave the passengers better off (without permanent disability).

              In addition to that, when I am driving, I am in control and responsible for my decisions (whether they are thought out or I was just too slow to react). Not so, when the AI is in control.

            • by silentcoder ( 1241496 ) on Wednesday June 29, 2016 @05:05AM (#52411583)

              >People probably choose to veer away from hitting people because they don't realize they might kill themselves - they just see what is in front of them and sure to happen, and don't have the time or wherewithal to consider the unknown consequences.

              Well, that fits my personal experience. The worst car accident I ever had happened when I swerved to avoid a hazard on a highway while travelling at high speed. I ended up on the traffic island, where I crashed into a tree.
              This is where modern automotive technology makes a huge difference, however. Despite hitting a tree head-on at 120 km/h I walked away with nary a scratch. Airbags and crumple zones kept me and my passengers alive and almost entirely uninjured. The car was utterly destroyed, but that's better than humans being hurt.

              But thinking back - yes, that's exactly how it went. When you see a sudden hazard on the road at high speed there is simply no TIME to think through a chain of consequences or evaluate multiple possible chains of events. You can do this when you have more time - and modern ABS-enabled cars can probably achieve a safe dead-stop in the same time - but when it's a sudden hazard like a large animal running onto the road out of bushes where it was hidden (as happened to me), there is just no time to do that. You deal with the problem immediately in front of you using the first viable option - you swerve to avoid; trying to regain control and avoid subsequent problems caused by the swerve becomes something you think about *after* you've swerved. You may not have the time to actually process what new problems there are and react to them at all (I sure didn't), but you simply cannot consider them beforehand. Not to mention that the bit of thought you can spare is based on extremely limited information and judgement calls. Part of why I chose to swerve towards the island was that (1) it meant not crossing other lanes, which could have caused me to hit other cars, and (2) the plants on the island appeared to be small shrubs - unlikely to cause major damage even if I couldn't avoid hitting one. Turns out that despite being pruned low, that thing had a massive trunk capable of turning my engine into something resembling an empty tin can in a vacuum.

          • They probably thought they would survive hitting the tree or building. Hitting people means going to jail.

            Or they were simply evading the immediate obstacle and didn't have the time to check whether there was anything there.

            OTOH, it I have no control of my car (it being self-driving and all), then I would prefer these outcomes (in order of preference):

            We have regulations so the actual ruleset ends up being whatever minimizes damage.

            But from a purely technical viewpoint, I wonder if programming the AI with

          • by stomv ( 80392 ) on Wednesday June 29, 2016 @08:44AM (#52412199) Homepage
            I salute your honesty, but in a situation where the outcomes are known in advance, you'd prefer breaking somebody else's leg to a total loss on the car?

            Even if the leg heals up fully, the pain could be tremendous. The inconvenience is massive -- perhaps the victim lives on the 3rd floor? How about work -- lots of people require mobility for their job (think: waitress). Oh, yeah, and the financial cost to repair the leg could easily outpace the cost of replacing the car.

            You'd rather break someone else's bones than total a car where everyone escapes injury free? That's messed up.
      • by Hylandr ( 813770 ) on Wednesday June 29, 2016 @12:09AM (#52410857)

        Expound on the morality of the issue all you want. The final decision as to whether the outcome was predetermined or premeditated will belong to the jury.

        The real question I want the answer to is: who will be on trial? Even then, until there is a sufficient body of judicial precedent, I refuse to own one, operate one, or allow myself to be carted away to my funeral in one.

        • by AmiMoJo ( 196126 )

          It's actually really simple and really obvious.

          The person who caused the accident will be held responsible. Most likely it will be a human, but it's possible that it will be bad design/programming by the self driving car manufacturer.

          The decision that the car made will be largely irrelevant. Just as we wouldn't expect a human driver to decide between their own life and a crowd of nuns in a split second, we wouldn't blame a self driving car for simply applying the brakes and stopping as quickly as possible.

      • by Anonymous Coward on Wednesday June 29, 2016 @12:54AM (#52411003)

        THIS!

        I think this topic is really representative of the media scaremongering today:

        1 - Take a situation which presents a moral dilemma, however rarely it may arise in real life even now... How many times a day does this exact situation REALLY happen, in the US for example? I wanna know, to check it is not an imaginary problem!
        2 - Ask the wrong questions about the part of the situation that is as close to a catastrophic failure as it can be, in a way that sounds as scary or horrific as possible, to get the answer you are after: What if YOU have to die to save 10 strangers (and one may be the next Stalin anyway)?
        3 - Make sure to blow up the importance of this extreme-odds problem: like millions of people will die every day...
        4 - Find a culprit that is different from your readership: migrants, err... sorry, AI, robots! They're commin' for ya!
        5 - Conveniently forget that the problem can be even rarer, as AI won't be texting, and even if a glitch happens, it could be corrected after that and for all cars on the road! So really, what is the actual frequency now and what would it be with driverless cars?
        6 - Make it a priority: After all, we don't even know if it is a common problem now, or if it will be in the future, but this makes nice click-bait headlines, and as I enjoy driving, if I appeal to the luddite feelings/loss-of-control fear/hero complex of readers and sway them, I will keep people from taking my wheel/gun from me!

        Really, asking questions like "do you want people to die?" and "do you want to die?" - of course both will be answered with no - and then proclaiming that people don't want driverless cars is just sleazy...

        Meteorites fall on earth all the time, and they can kill people too; where is our anti-meteorite Patriot missile system? Quick, crawl back to the caves and call your congress critter to do something about this life-threatening problem! YOUR life is at stake! /s

        Show us the numbers, and projections based on the causes of these accidents right now, with the number of people involved and the outcomes. Then you can convince me driverless cars are more dangerous in that particular case than the actual situation now...

    • http://www.smbc-comics.com/com... [smbc-comics.com]

      Imagine you're in an out-of-control trolley. You're headed towards three buildings and you control which you slam into. Two buildings contain only one person and one building contains five people. You randomly select a building to slam into. Then one of the other buildings is revealed to contain only one person, but you can't switch to that building. Should you switch to the remaining building?

      • You are driving your car past the home of a /. editor. He is posting a dupe and fucking up the summary. You are on the way to bone an actual woman (whose legs are on the mantle...).

        Do you stop and ninja his ass? What if you weigh 350+ lbs and get winded eating?

        Answer: Hell no. Actual Woman.

      • http://www.smbc-comics.com/com... [smbc-comics.com]

        Imagine you're in an out-of-control trolley. You're headed towards three buildings and you control which you slam into.

        Is it out of control or not?

        • by kqs ( 1038910 )

          That's the great thing about these thought experiments; they can be as unlikely as you'd like, which means that they are as inapplicable to the real world as you'd like. :-)

          SMBC is good at lampshading that.

    • Simple. Google puts a buy-back option in the contract for the self-driving car. They can buy back your car at any time for the full purchase price. Seems like a swell deal, right? They invoke this when you are going to hit a pedestrian, to buy back the car. Now it's no longer your car, so the choice of who to kill here isn't predicated on your car's loyalty to you. Problem solved. Plus, no need for insurance.

      • by Hylandr ( 813770 )

        I will wait until live issues have been tried in court and my expectations can be described by established precedent. Until then it's all make-believe.

    • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Wednesday June 29, 2016 @12:48AM (#52410989)

      1. Any company TRYING to write code with the intention of killing/injuring the user will be sued out of existence.

      2. Whichever executive ordered the techs to write such code would never work again.

      3. Even if you allow a theoretical situation that bypasses #1 & #2, complex software is very difficult to write. The company (and executive and coders) would be sued out of existence when the car killed/injured the passenger to avoid running over a box of toy dolls.

      And yet we keep seeing this bullshit on /. People here are supposed to be more informed on the topics of AI and robotics and programming than the average. But here we are, again.

      • by dbIII ( 701233 )

        Whichever executive ordered the techs to write such code would never work again

        Unfortunately "should" replaces "would". Even Oliver North, who gave classified anti-tank weapons to Islamic terrorists who had killed over a hundred US Marines less than a year previously, got other jobs - for instance his current one as one of the people running the NRA.
        Well-connected execs who carry out what should be career-ending moves often get a parachute out of there and have no trouble finding another high-profile positi

      • by Xenx ( 2211586 )
        If a company wrote code to just up and kill the user, sure... you might have a case. If the company wrote code to save the greatest number of lives in an accident, they shouldn't be liable. The morally correct option is tough to get past people. Passing a law is tough.
      • by rtb61 ( 674572 )

        The solution is quite clear-cut: the law is the law. It would be emphatically illegal to produce any product that could actively break the law. So if the crossing lights are in error and nuns and children cross the road in front of you on a single-lane road, the vehicle will not break the law to take evasive action; it will brake as best it can and attempt to minimise the harm to the vehicle from the impact with the obstructions. The same as a faulty train crossing: no illegal evasive action to ge

        • So automated vehicles require changes in the letter of the law to allow illegal driving actions to be taken in an emergency

          You do realize that's already the case?

      • by Anonymous Coward

        The situation won't actually happen in real life... take NYC, for example. The speed limit is 25 mph just about everywhere---self-driving cars *will* actually drive 25 mph. At that speed, unless the pedestrian jumped right in front out of nowhere, the car can stop on a dime.

        Now imagine the pedestrian really did jump right out of "nowhere"; is that the fault of the car? And yes, a 25 mph hit would hurt, but with telemetry of the incident, it's gonna be pretty easy to prove that the pedestrian was suicidal.

        Now the supposed

    • by catchblue22 ( 1004569 ) on Wednesday June 29, 2016 @01:17AM (#52411061) Homepage

      I think something that is usually not emphasized is that in most cases, human drivers will not have time to make such moral decisions. If you had time enough to think about moral implications, you would in most cases have time to avoid the accident in the first place.

    • Here we go again. We just had this discussion last week too.

      If the new slashdot owners are using the client base as fodder for some think-tank, the least you could do is provide compensation after the first few times an article is recycled.

      Slashdot is going down the toilet. I can hardly find any articles worth clicking on any more due to the stupid clickbait headlines:
      Here's How Pinterest Plans to Get You To Shop More
      How Gadget Makers Violate Federal Warranty Law
      This Could Ruin Time Travel Forever
      Drivers Prefer Autonomous Cars That Don't Kill Them
      Why You Should Stop Using Telegram Right Now
      Robot Pizza Company Wants To Be 'Amazon of Food'
      Scientists Force Computer To Binge On TV Shows and Predict What Humans Will Do
      You Could Be P

  • by Koby77 ( 992785 )
    I thought there was a way to objectively decide morals: write rules ahead of time. If the car is driving in a perfectly legal manner down its lane in the road, and the 10 people in the road are jaywalking, then the car/driver has the right of way and should proceed rather than kill its driver. Maybe try to slow down and not hit them so hard, but the car ought not sacrifice its driver for the mistakes of others. You get my point: if you don't want to get run over, then don't jaywalk. Conversely, if the vehi
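
    A toy version of that rules-decided-in-advance idea: fault determines who bears the risk. Purely illustrative, including the guess at the truncated "conversely" branch:

      # "Write the rules ahead of time": fault-aware priorities.
      # Illustrative only; real traffic law is far messier, and the
      # at-fault branch is a guess at the truncated thought above.
      def choose_action(car_has_right_of_way: bool) -> str:
          if car_has_right_of_way:
              # Others erred (e.g. jaywalking): brake hard, stay in
              # lane, never trade the occupant for others' mistakes.
              return "brake_hard_stay_in_lane"
          # The car erred (e.g. missed a signal): it absorbs the risk.
          return "brake_hard_swerve_away_from_pedestrians"

      print(choose_action(True))   # brake_hard_stay_in_lane
      print(choose_action(False))  # brake_hard_swerve_away_from_pedestrians
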
    • I thought there was a way to objectively decide morals: write rules ahead of time.

      I think you're confusing morality with ethics.

      Morality is the innate sense of what we think is right by ourselves and others. Ethics is the attempt to codify this into rules.

      It's a bit like the difference between justice and the law.

    • And what about when that auto-drive car drives right through that on-street event because it failed to read the "road closed" signs, and just plows through as it thinks it has the legal right to be on that road?

      • by kqs ( 1038910 )

        Not sure how it would do that since it would sense the obstructions in the road. And if the sensors are not working, it would not move at all.

        Kinda like asking "what if you were driving down the highway at 65mph after being blinded?" When you make up "what if" scenarios, they should be at least vaguely plausible.

  • by e1618978 ( 598967 ) on Tuesday June 28, 2016 @11:41PM (#52410743)
    ... and laugh and run off as the driver's car kills the driver.
    • Add some randomness into the algorithm just to make it more like real life.

      It's hard to say that one decision is always correct, so choose differently from the options presented.
  • by SmaryJerry ( 2759091 ) on Tuesday June 28, 2016 @11:42PM (#52410745)
    Self-driving cars will have face recognition, evaluate the net worth of the targets compared to the net worth of the driver and choose who lives accordingly.
  • by scorp1us ( 235526 ) on Tuesday June 28, 2016 @11:45PM (#52410755) Journal

    At what point will the vehicle suddenly find itself in the trolley problem [wikipedia.org]? It's doing several hundred restatements of the scenario per second. It will have started to react far sooner than this theorized last-moment decision. In short, the question isn't valid because you're applying a human trait - distraction - to the computer.

    Sure, there are potential scenarios - a vehicle crosses into on-coming traffic, a boulder rolls down a hill and lands in front of you, or a sinkhole opens as you drive over it - and you have to deal with them, but these are easily decided. It's decided by liability, and we already have a framework for that. The liability will sacrifice the person in the vehicle. It will do this because involving a bystander is a liability to the vehicle's insurance company. Meanwhile, in the existing legal framework, you are still responsible for the operation of a computer-operated vehicle. You, legally speaking, have only yourself to blame. However, even in these dire circumstances, I would trust the vehicle to use real-time data to try to make the accident as survivable as possible, for everyone. I expect its ability to exceed my own. And I think eventually public opinion will come to believe that too - that autopilot survivability is better than human control in all circumstances.
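
    The several-hundred-restatements-per-second point, sketched as a fixed-rate control loop; the 100 Hz figure and every name below are assumptions for illustration, not any vendor's design:

      # A planner re-running at a fixed rate never faces one
      # last-moment dilemma: every tick it re-plans and sheds
      # speed early. Bounded here so the sketch terminates.
      import time

      TICK_HZ = 100
      PERIOD = 1.0 / TICK_HZ

      def control_loop(sense, plan, act, ticks=1000):
          """sense() -> world state; plan(state) -> maneuver; act(maneuver)."""
          for _ in range(ticks):
              start = time.monotonic()
              act(plan(sense()))  # hazards handled a tick at a time
              time.sleep(max(0.0, PERIOD - (time.monotonic() - start)))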

    • This is the problem with all "thought experiments": they're not experiments at all, as they start from an absurd position (see the trolley problem [wikipedia.org]).

      The main purpose of these experiments is to provide a mechanism for the questioner to place himself on a self-appointed higher moral plane while pointing out the moral failure of whatever response you give.

      The best approach to the posed question is like the one you gave - question the question and questioner.

      I dealt with the "If you could time travel back
  • ...to run over whoever keeps posting this dupe.

    BOOM! Problem solved!

  • How is that not a moral dilemma of human-piloted cars? And seriously, what's the probability that we can debate this day in and day out, and then, in the history of autonomous vehicles, it never comes up?
  • Doesn't anyone read science fiction or watch movies? These are not new questions.

    LK

  • It's going to be a fascinating, if redundant, discussion. The good news is that we will have a long time to discuss it before you start seeing a lot of self-driving passenger cars on our roadways.

    Now the real moral dilemma is whether one dollar of public funds should go toward infrastructure for self-driving passenger cars. I mean, if there's money left over once we get back to the point we were at in the middle of last century, when practically every US city had a robust (and profitable) public transportation s

  • A driverless car would not drive at a speed that was incommensurate with its own ability to sense things ahead of it which it may otherwise hit. If something up ahead happens to be hidden, whether it is because of nearby buildings or just the topography, the vehicle will be driving slowly enough that it will be able to safely stop *before* it hits something that it cannot yet see. No swerving necessary.
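
    That rule has a clean closed form: never exceed the speed from which the car can stop within its current sight distance, v <= sqrt(2 * a * d). A sketch, assuming one constant deceleration figure:

      # "Never outdrive your sensors": cap speed so the car can
      # always stop within its sight distance. Numbers illustrative.
      MAX_DECEL = 6.0  # m/s^2, hard braking on dry asphalt

      def max_safe_speed(sight_distance_m: float) -> float:
          """Highest speed (m/s) that still allows a stop in time."""
          return (2 * MAX_DECEL * sight_distance_m) ** 0.5

      # A blind crest 30 m ahead caps the car at ~19 m/s (~68 km/h);
      # a hidden driveway 8 m ahead caps it at ~9.8 m/s (~35 km/h).
      print(round(max_safe_speed(30.0), 1))  # 19.0
      print(round(max_safe_speed(8.0), 1))   # 9.8
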
  • Not that shit again!
  • I mean it isn't like humanity hasn't been agonizing over these questions since the birth of civilization without coming to satisfactory answers

  • Kill the pedestrians and the people in the car. Try and keep the numbers equal.

    Welcome to digital morality.

  • Nothing immoral about having the car minimize injury to the driver, and fuck everyone else. In most cases this will also minimize injury to those outside the car.

    • Nothing immoral about having the car minimize injury to the driver, and fuck everyone else.

      There are far more cars carrying other people than me on the roads. As a rational person I'm therefore voting for forcing the self-driving cars to minimize total casualties with no particular preference for or against its passengers.

      Also, "and fuck everyone else" is pretty much the definition of immoral.

  • From a Liability perspective you're safer prioritizing overall minimization of loss of life.
    From a Sales perspective, who's going to buy a car that's programmed to purposefully kill you under certain circumstances?

    • From a Liability perspective you're safer prioritizing overall minimization of loss of life. From a Sales perspective, who's going to buy a car that's programmed to purposefully kill you under certain circumstances?

      The concept of ownership is becoming obsolete, so discussions around it may be rather pointless.

      To be honest, I never envisioned fleets of autonomous cars being owned or controlled by any entity other than a government-sanctioned and protected one, or the government itself. This will help ensure lawsuits derived from moral dilemmas become rather impossible to even conceive, let alone execute.

      And even if it is not, who's going to sell a car where the manufacturer is liable for who may be harmed during auton

  • by Chuckstar ( 799005 ) on Wednesday June 29, 2016 @12:16AM (#52410891)

    As far as I can tell, the autonomous algorithms don't work this way and probably never will work this way. That is, they don't calculate potential fatalities for various scenarios and then pick the minimum one. The car's response in any particular situation will be effectively some combination of simpler heuristics -- simpler than trying to project casualty figures, while still being a rather complex set of rules.

    Take one of these situations, and let's say the car ended up killing pedestrians and saving the occupants. The after-incident report for an accident like that is not going to read "the algorithm chose to save the occupants instead of the pedestrians". It's not going to read that way simply because that's not how the algorithm makes decisions. Instead the report is going to read something like "the algorithm gives extra weight to keeping the car on the road. In this situation, that resulted in putting the pedestrians in greater danger than the car's occupants. However, we still maintain that, on average, this results in a safer driving algorithm, even if it does not optimize the result of every possible accident."

    And regarding the "every possible accident" part of that: it is simply impossible to imagine an algorithm so perfect that, in any situation, it can optimize the result based on some pre-determined moral outcome. So it's not just "well, let's change how the algorithms work, then". An algorithm that makes driving decisions in every possible weird situation by predicting fatalities, rather than relying on heuristics (however complex they are), is simply not realistic.
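
    In miniature, such a heuristic combination might be a weighted penalty sum over candidate trajectories, with no casualty forecasting anywhere in it; the weights and feature names are invented for illustration:

      # Score candidate trajectories by a weighted penalty sum.
      # Note there is no "predicted fatalities" term anywhere:
      # outcomes emerge from the weights, which is why the
      # after-incident report talks about weights, not victims.
      WEIGHTS = {
          "off_road": 10.0,      # extra weight: keep the car on the road
          "near_obstacle": 5.0,  # proximity to anything sensors flag
          "hard_swerve": 2.0,    # lateral jerk / loss-of-control risk
          "slow_to_stop": 1.0,   # residual speed at closest approach
      }

      def penalty(features: dict[str, float]) -> float:
          return sum(WEIGHTS[k] * v for k, v in features.items())

      def choose(trajectories: dict[str, dict[str, float]]) -> str:
          """Pick the trajectory with the lowest heuristic penalty."""
          return min(trajectories, key=lambda t: penalty(trajectories[t]))

      candidates = {
          "brake_in_lane": {"near_obstacle": 0.8, "slow_to_stop": 0.5},
          "swerve_off_road": {"off_road": 1.0, "hard_swerve": 1.0},
      }
      print(choose(candidates))  # brake_in_lane (4.5 vs 12.0)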

    • by sjames ( 1099 )

      Everyone knows the report will include a passenger in the back seat being pretty sure the friendly green light turned red and the computer voice said "Kill the humans!"

  • Put a DIP switch in the car. ON position: save the driver at all costs. OFF position: minimize casualties, even if that means sacrificing the driver. Default it to OFF.

    Explain in the manual how to change it.

    DO NOT LET THE DEALERSHIP CHANGE IT.

    Enjoy safer streets
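
    As a config sketch - the switch semantics and the OFF default come straight from the proposal above; everything else is illustrative:

      # DIP-switch ethics: owner-settable, defaulting to OFF
      # (minimize casualties), exactly as proposed above.
      from dataclasses import dataclass

      @dataclass
      class EthicsConfig:
          save_driver_at_all_costs: bool = False  # default OFF

      def crash_policy(cfg: EthicsConfig) -> str:
          if cfg.save_driver_at_all_costs:
              return "protect_occupants"
          return "minimize_total_casualties"

      print(crash_policy(EthicsConfig()))  # minimize_total_casualties
      print(crash_policy(EthicsConfig(save_driver_at_all_costs=True)))  # protect_occupants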

  • as long as the tree is not too big
  • repeat of a worthless post. get this crap off my screen.
    I didn't care before.
    Now I want them all dead.
  • by ihaveamo ( 989662 ) on Wednesday June 29, 2016 @01:23AM (#52411077)
    Car must swerve left or right - Swerve to hit three 95-year-olds, or two 5-year-olds? I s'pose that's ageism....
    • Car must swerve left or right - Swerve to hit three 95-year-olds, or two 5-year-olds?

      The car has a retired fighter pilot AI [slashdot.org], which quickly performs a partial barrel roll, sliding between both on two wheels. It also automatically shares the video on Youtube.

  • In roughly a century of driving, humans have learned one strategy: slam on the breaks. The choice is "break, or don't". When the driver is replaced by a bot, the choice is STILL "break, or don't".

    I swear, this nonsense about algorithms implementing moral calculus is just a scam to get philosophy professors a few more speaking engagements.

    • In roughly a century of driving, humans have learned one strategy: slam on the breaks. The choice is "break, or don't". When the driver is replaced by a bot, the choice is STILL "break, or don't".

      I swear, this nonsense about algorithms implementing moral calculus is just a scam to get philosophy professors a few more speaking engagements.

      Speaking of nonsense, care to tell me how the hell philosophy professors are responsible for creating the litigious society we live in today?

      Regardless of the reaction or who or what is responsible for a death, the lawyer is standing by, armed with a metric fuckton of legal precedent, which IS the entire reason we're having this discussion.

    • by tlhIngan ( 30335 )

      In roughly a century of driving, humans have learned one strategy: slam on the breaks. The choice is "break, or don't". When the driver is replaced by a bot, the choice is STILL "break, or don't".

      I swear, this nonsense about algorithms implementing moral calculus is just a scam to get philosophy professors a few more speaking engagements.

      Exactly. (It's "brake", btw).

      In a situation where this might even remotely be possible, drivers typically SLOW DOWN so there's not only more time to react, but

      • Wait, what? The best way to stop a vehicle with failed brakes is to:
        1. Use the engine and e-brake.
        2. While this is going on, continue to avoid obstacles as long as possible.
        3. If flat, higher-friction surfaces are available, drive on them (pull off onto the road shoulder if there is a shoulder and the speed is low enough, for example - the gravel at some road shoulders will slow the car down more than driving on pavement).

        The only time crashing head-on is a good idea is if it's unavoidable or a cho

    • by Dog-Cow ( 21281 )

      I wouldn't buy a car that broke all the time, pedestrians or no.

  • I really doubt this problem would last after all human drivers are replaced.

  • And make it 2 out of 3!

  • If I get a vote, I'd kind of like a driverless car that doesn't find itself choosing between swerving wildly off the road or hitting a crowd full of people. How does it come up, anyway? I mean, if the car is following the rules, and 10 people spontaneously decide to fling themselves in front of it... fuck it, run 'em down, with a sarcastic little "beep beep" as it drives away.

  • You should save the people that are actually complying with the law and acting reasonably. Someone crossing the road at a point where visibility is poor and a driverless car can only avoid hitting them by killing its passengers is probably not acting reasonably, and all things being equal, the driverless car should therefore protect its passengers.

  • So the car rolls, killing both the pedestrians and the driver at the same time.

  • by shentino ( 1139071 ) <shentino@gmail.com> on Wednesday June 29, 2016 @03:41AM (#52411413)

    Use good AI to optimize efficiency, but detect human drivers and give them a wider margin of safety.

    As far as the morals of saving pedestrians vs passengers or drivers, let's not forget the BitTorrent protocol.

    Game theory, and real life itself, deal with cooperation vs defection, and any car that selflessly seppukus its own occupants to spare a greater number is going to get taken advantage of by less scrupulous algorithms.

    Anyone trying to program an AI on how to handle a car accident should not forget this.
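
    A toy payoff matrix makes the defection incentive concrete; the numbers are invented purely to show the structure of the dilemma:

      # Rows: my car's policy; columns: the other car's. Payoffs
      # are my expected safety; numbers invented for illustration.
      PAYOFF = {
          ("selfless", "selfless"): 3,  # everyone yields: safest streets
          ("selfless", "selfish"):  0,  # my car sacrifices, theirs never
          ("selfish",  "selfless"): 4,  # I free-ride on their caution
          ("selfish",  "selfish"):  1,  # nobody yields: worst for all
      }

      def best_response(their_policy: str) -> str:
          return max(("selfless", "selfish"),
                     key=lambda mine: PAYOFF[(mine, their_policy)])

      # Selfish dominates either way: the social dilemma from TFA.
      print(best_response("selfless"))  # selfish
      print(best_response("selfish"))   # selfish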

  • If we know the car is programmed to crash into a tree to avoid pedestrian casualties, this can be planned for in the safety design of the car, since it makes that kind of crash more predictable. Further, we can research how to avoid getting into those situations in the first place. This means looking ahead more when driving (what driving instructors often talk about, what driving students often omit to learn, and what serious police driver training used to drum into people). But being able to compile a compre

  • The car will be programmed to take whatever action minimizes the manufacturer's liability.
  • We already have laws around these things - that dictate what a driver is supposed to do in these conditions and what degree of liability he would have towards passengers or pedestrians. Autonomous cars should do exactly what the local law would have demanded a human driver do.

  • by OpenSourced ( 323149 ) on Wednesday June 29, 2016 @05:19AM (#52411605) Journal

    I'd say that in any discussion of this kind, you should first have a very clear idea of what the situation is now. What do current drivers do in these situations? What are the outcomes?

    I'd say the best defense for any algorithm would be that, in all (or most) situations, it saves more pedestrian lives AND more passenger lives than the current situation.

    That's the only way, I think, of reconciling people with these technologies' worst handicap from the user's perspective: the sensation of losing control.
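
    That defense amounts to a Pareto-dominance test: beat the human baseline on both axes at once. A sketch, with made-up rates:

      # "Saves more pedestrian lives AND more passenger lives":
      # a Pareto check against the human baseline. Rates invented
      # for illustration (say, deaths per 100M miles).
      def pareto_beats(algo: dict, human: dict) -> bool:
          return (algo["pedestrian_deaths"] < human["pedestrian_deaths"]
                  and algo["passenger_deaths"] < human["passenger_deaths"])

      human_baseline = {"pedestrian_deaths": 0.25, "passenger_deaths": 0.80}
      candidate      = {"pedestrian_deaths": 0.10, "passenger_deaths": 0.30}

      print(pareto_beats(candidate, human_baseline))  # True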

  • What else?

  • I don't see why this is such a conundrum. Right now we presume the driver of a human-operated vehicle will in most cases attempt to save the occupants of the vehicle first, since the imperative of the driver will be self-preservation. I see no reason why this would need to change. All that has changed is that the driver isn't human, but it's reasonable to expect the driver of the vehicle (human or not) to attempt to preserve the life of the occupants first because it fundamentally will have
