The Moral Dilemma of Driverless Cars: Save The Driver or Save The Crowd?
HughPickens.com writes: What should a driverless car with one rider do if it is faced with the choice of swerving off the road into a tree or hitting a crowd of 10 pedestrians? The answer depends on whether you are the rider in the car or someone else is, writes Peter Dizikes at MIT News. According to recent research, most people prefer autonomous vehicles to minimize casualties in situations of extreme danger -- except for the vehicles they would be riding in. "Most people want to live in a world where cars will minimize casualties," says Iyad Rahwan. "But everybody wants their own car to protect them at all costs." The result is what the researchers call a "social dilemma," in which people could end up making conditions less safe for everyone by acting in their own self-interest. "If everybody does that, then we would end up in a tragedy whereby the cars will not minimize casualties," says Rahwan. Researchers conducted six surveys, using the online Mechanical Turk public-opinion tool, between June 2015 and November 2015. The results consistently showed that people take a utilitarian approach to the ethics of autonomous vehicles, one emphasizing the sheer number of lives that could be saved. For instance, 76 percent of respondents believe it is more moral for an autonomous vehicle, should such a circumstance arise, to sacrifice one passenger rather than 10 pedestrians. But the surveys also revealed a lack of enthusiasm for buying or using a driverless car programmed to avoid pedestrians at the expense of its own passengers. "This is a challenge that should be on the mind of carmakers and regulators alike," the researchers write. "For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest."
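The "social dilemma" described above has the structure of a prisoner's dilemma. A toy payoff table makes that concrete; the risk numbers below are invented for illustration, not figures from the MIT study:

    # Expected harm to you (arbitrary units), keyed by (your car's policy,
    # everyone else's policy). Numbers are assumptions chosen only to show
    # the dilemma's shape.
    RISK = {
        ("self-protective", "self-protective"): 10,  # everyone defects: worst roads
        ("self-protective", "utilitarian"):      4,  # free-ride on others' safe cars
        ("utilitarian",     "self-protective"): 12,  # you sacrifice, others don't
        ("utilitarian",     "utilitarian"):      6,  # everyone cooperates: best overall
    }

    for mine in ("self-protective", "utilitarian"):
        for others in ("self-protective", "utilitarian"):
            print(f"me={mine:15s} others={others:15s} my risk={RISK[(mine, others)]}")

    # Whatever others choose, "self-protective" lowers YOUR number (4 < 6, 10 < 12),
    # so it is the dominant strategy; yet all-self-protective (10) is worse for
    # everyone than all-utilitarian (6). That is the tragedy Rahwan describes.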
Seems this topic is stuck in the roundabout. (Score:5, Insightful)
Here we go again. We just had this discussion last week too.
If the new slashdot owners are using the client base as fodder for some think-tank the least you could do is provide compensation after the first few times an article is recycled.
Re:Seems this topic is stuck in the roundabout. (Score:5, Interesting)
My point is still valid though. False dichotomy. The car should (and pretty much every driverless car will) use maximum braking power to reduce speed as much as possible. In almost all cases it will do this long before it becomes too late to stop without hitting anyone. This gives pedestrians the most time to get out of the way, and if it hits them it does so at the lowest possible speed.
Further, when swerving you run the risk of a pedestrian diving out of the way, in the SAME direction that the car swerves.
Typically such "oh no, I must choose which object to hit" scenarios occur when the car is being driven recklessly or the driver is inattentive, neither of which should apply to non-hacked self-driving cars.
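As a back-of-the-envelope check on the braking argument, here is a minimal sketch; the friction coefficient and reaction times are assumptions, not measured values:

    def stopping_distance(v_kmh, reaction_s, mu=0.8, g=9.81):
        # total distance: travel during the reaction delay, plus the braking
        # distance v^2 / (2*mu*g); mu=0.8 is an assumed dry-asphalt value
        v = v_kmh / 3.6  # km/h -> m/s
        return v * reaction_s + v**2 / (2 * mu * g)

    for v_kmh in (40, 50, 100):
        human = stopping_distance(v_kmh, reaction_s=1.5)  # typical human delay
        robot = stopping_distance(v_kmh, reaction_s=0.1)  # assumed system lag
        print(f"{v_kmh} km/h: human {human:5.1f} m, automated {robot:5.1f} m")

Most of the gap between the two rows is reaction distance, which is exactly the margin an always-attentive automated system buys before the "dilemma" can even arise.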
Re: (Score:2)
Let's add a bit more falseness to the presumption Rahwan makes. Asking people questions and examining real life episodes do not produce the same results. We have numerous examples of people actually choosing to hit a tree or building instead of people.
Re: (Score:2)
They probably thought they would survive hitting the tree or building. Hitting people means going to jail.
OTOH, if I have no control of my car (it being self-driving and all), then I would prefer these outcomes, in order of preference (see the sketch after this list):
1. No damage to anyone (safe stop)
2. Easily repaired damage to the car.
3. Very small self-healing injuries to me.
4. Non-permanent injuries to other people (broken leg etc).
5. Massive damage to the car.
6. Killing other people.
7. Killing me.
Actually, if I was a passenger in a t
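A minimal sketch of how such a ranked preference list might be encoded as a cost table; the outcome names, costs, and predicted outcomes are all invented for illustration:

    # Lower cost = more preferred, mirroring the grandparent's 1-7 ranking.
    OUTCOME_COST = {
        "safe_stop":                0,
        "minor_car_damage":         1,
        "minor_injury_to_me":       2,
        "recoverable_injury_other": 3,
        "car_totaled":              4,
        "death_other":              5,
        "death_me":                 6,
    }

    def best_action(actions):
        # pick the action whose PREDICTED outcome has the lowest cost;
        # `actions` maps an action name to its predicted outcome
        return min(actions, key=lambda a: OUTCOME_COST[actions[a]])

    # Hypothetical predictions for one scenario:
    print(best_action({"brake_hard": "minor_injury_to_me",
                       "swerve_left": "car_totaled",
                       "hold_course": "death_other"}))  # -> brake_hard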
Re:Seems this topic is stuck in the roundabout. (Score:5, Insightful)
I like how everyone assumes people make carefully considered, rational decisions in a high-speed crisis.
People probably choose to veer away from hitting people because they don't realize they might kill themselves - they just see what is in front of them and sure to happen, and don't have the time or wherewithal to consider the unknown consequences.
People will reach out to catch a falling knife, too, but that doesn't mean that they thought about the implications.
Re:Seems this topic is stuck in the roundabout. (Score:4, Interesting)
I did not say that I can carefully consider all the outcomes before deciding whether to hit a tree or a man. The time is usually too short to consider anything, other than trying to stop and maybe turning the car in a direction away from any object (maybe unsuccessfully).
However, computers do what they are told, and an AI most likely does have time to consider this. Which means that this is now a problem - I do not want the AI in my car (or a taxi driver) to carefully consider the outcomes and decide to kill me instead of a pedestrian. Since it is most likely impossible for the taxi driver to carefully consider all options, I accept that the outcome is going to be random (he may be too slow to react and hit the object in front, whether it's a tree or a man; he may try to avoid the man in front only to hit a tree he didn't notice; or he might try to avoid hitting the tree only to hit the man).
Not so when such situations are considered well in advance (when programming the AI) - in that case I will not want to ride in a car that is driven by AI that will predictably choose to hit a tree instead of a man.
For the purposes of the example, assume that the speed is high enough that hitting a tree will kill or permanently disable the people in the car, while hitting the man will kill the man, but leave the passengers better off (without permanent disability).
In addition to that, when I am driving, I am in control and responsible for my decisions (whether they are thought out or I was just too slow to react). Not so, when the AI is in control.
Re: (Score:2)
It is also you who put the car in a situation where you have to choose between hitting the pedestrian and the tree.
The pedestrian could have jumped out in front of me, not on a crosswalk, from in front of a stopped lorry (that is, where it was impossible for me to see him). Or he could even be doing this deliberately to try to get money from my insurance (I know one who tried that; he was found out, so not only was he severely injured, he also had to pay for the damage to the car).
And no, I will not always go for the tree. And if I am not in control of the car, then I prefer that the driver (be it a human or AI) goes for t
Re:Seems this topic is stuck in the roundabout. (Score:5, Insightful)
Your car is filled with airbags and seatbelts and crumple zones and all sorts designed to protect you during a crash. Pedestrians have none of that (at least for the time being). The CAR should protect you (using those safety features), the AI should do what drivers are supposed to do - cause the least amount of carnage on the road.
Re:Seems this topic is stuck in the roundabout. (Score:4, Insightful)
That is extremely narcissistic and the argument is only valid if you always go for the tree.
You are in control of the vehicle. The pedestrian does not share that responsibility.
It is also you who put the car in a situation where you have to choose between hitting the pedestrian and the tree.
There is no moral justification for not going for the tree in that case.
What about on a limited-access highway where there is a reasonable expectation that people aren't supposed to be?
What about in the situation where the person intentionally jumps out in front of traffic in an attempt to commit suicide?
The issue I have with this question is that I doubt they are going to be programming the car to count the number of passengers in the car, the number of pedestrians, or even to distinguish between a person and a deer. To make an accurate crash prediction you would likely want to know even the diameter of the tree and what is behind the tree. The goal of driverless cars is to avoid most crashes in the first place. The idea that someone would be adding all this moralistic code for rare, almost-never-happens events is a bit far fetched.

In almost all cases, the goal of the car is to avoid collision and, if that is not possible, to minimize the speed of impact. After that, it becomes very complex because you have to look at what you're impacting and how much give it has. A deer/person has more give than a cement pillar and would actually be a safer option. Also, in many cases, staying on the road and hitting the deer/person would be safer than swerving and rolling down an embankment, even if you technically didn't hit anything.

It would be interesting to know, though, whether they are coding different behaviors based on whether the unknown obstacle on the road is a dog, a deer, or a person, because many people would make different calculations depending on whether the animal in the road is human or not.
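The "give" point can be made concrete with a toy severity ranking; the stiffness values below are invented placeholders, not crash-test data:

    # Invented stiffness placeholders: higher = less "give" on impact.
    STIFFNESS = {"deer": 0.2, "pedestrian": 0.25, "small_tree": 0.6,
                 "embankment_rollover": 0.9, "concrete_pillar": 1.0}

    def severity(obstacle, impact_kmh):
        # crude proxy: harm grows with kinetic energy, scaled by stiffness
        return STIFFNESS[obstacle] * impact_kmh**2

    options = {
        "brake in lane, hit deer at 30 km/h": severity("deer", 30),
        "swerve into pillar at 50 km/h": severity("concrete_pillar", 50),
        "swerve off road, roll at 45 km/h": severity("embankment_rollover", 45),
    }
    for name, s in sorted(options.items(), key=lambda kv: kv[1]):
        print(f"{s:7.0f}  {name}")
    # Braking in-lane scores lowest here, matching the intuition that
    # "technically not hitting anything" can still be the worst outcome.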
Re:Seems this topic is stuck in the roundabout. (Score:4, Informative)
>People probably choose to veer away from hitting people because they don't realize they might kill themselves - they just see what is in front of them and sure to happen, and don't have the time or wherewithal to consider the unknown consequences.
Well, that fits my personal experience. The worst car accident I ever had happened when I swerved to avoid a hazard on a highway while travelling at high speed. I ended up on the traffic island, where I crashed into a tree.
This is where modern automotive technology makes a huge difference, however. Despite hitting a tree head-on at 120km/h, I walked away with nary a scratch. Airbags and crumple zones kept me and my passengers alive and almost entirely uninjured. The car was utterly destroyed, but that's better than humans being hurt.
But thinking back - yes, that's exactly how it went. When you see a sudden hazard on the road at high speed there is simply no TIME to think through a chain of consequences or evaluate multiple possible chains of events. You can do this when you have more time - though modern ABS-equipped cars can probably achieve a safe dead-stop in that same time - but when it's a sudden hazard like a large animal running onto the road out of bushes where it was hidden (as happened to me), there is just no time to do that. You deal with the problem immediately in front of you using the first viable option - you swerve to avoid; trying to regain control and avoid subsequent problems caused by the swerve becomes something you think about *after* you've swerved. You may not have the time to actually process what new problems there are and react to them at all (I sure didn't), but you simply cannot consider them beforehand. Not to mention that the bit of thought you can spare is based on extremely limited information and judgement calls. Part of why I chose to swerve towards the island was that (1) it meant not crossing other lanes, which would potentially cause me to hit other cars, and (2) the plants on the island appeared to be small shrubs - unlikely to cause major damage even if I couldn't avoid hitting one. Turns out that despite being pruned low, that thing had a massive trunk capable of turning my engine into something resembling an empty tin can in a vacuum.
Re: (Score:2)
Or they were simply evading the immediate obstacle and didn't have the time to check whether there was anything there.
We have regulations, so the actual ruleset ends up being whatever minimizes damage.
But from a purely technical viewpoint, I wonder if programming the AI with
Dude, you're messed up. (Score:4, Interesting)
Even if the leg heals up fully, the pain could be tremendous. The inconvenience, massive -- perhaps the victim lives on the 3rd floor? How about work -- lots of people require mobility for their job (think: waitress). Oh yeah, and the financial cost of repairing the leg could easily outpace the cost of replacing the car.
You'd rather break someone else's bones than total a car where everyone escapes injury free? That's messed up.
Re:Seems this topic is stuck in the roundabout. (Score:5, Interesting)
Expound on the morality of the issue all you want. The final decision as to whether the outcome was predetermined or premeditated will belong to the jury.
The real question I want answered is: who will be on trial? Even then, until there is a sufficient body of judicial precedent, I refuse to own one, operate one, or allow myself to be carted away to my funeral in one.
Re: (Score:2)
It's actually really simple and really obvious.
The person who caused the accident will be held responsible. Most likely it will be a human, but it's possible that it will be bad design/programming by the self driving car manufacturer.
The decision that the car made will be largely irrelevant. Just as we wouldn't expect a human driver to decide between their own life and a crowd of nuns in a split second, we wouldn't blame a self driving car for simply applying the brakes and stopping as quickly as possible.
Re:Seems this topic is stuck in the roundabout. (Score:4, Insightful)
THIS!
I think this topic is really representative of the media scaremongering today:
1 - Take a situation which presents a moral dilemma, however rarely it may arise in real life even now. How many times a day does this exact situation REALLY happen in the US, for example? I wanna know, to check it is not an imaginary problem!
2 - Ask the wrong questions about the part of the situation that is the closest to catastrophic failure it can be, in a way that sounds as scary or horrific as possible, to get the answer you are after: what if YOU have to die to save 10 strangers (and one may be the next Stalin anyway)?
3 - Make sure to blow up the importance of this extreme-odds problem: like millions of people will die every day.
4 - Find a culprit that is different from your readership: migrants, err... sorry: AI, robots! They're commin' for ya!
5 - Conveniently forget that the problem can become even rarer, as the AI won't be texting, and even if a glitch happens, it can be corrected after that for all cars on the road! So really, what is the actual frequency now, and what would it be with driverless cars?
6 - Make it a priority: after all, we don't even know if it is a common problem now or if it will be in the future, but it makes nice click-bait headlines, and as I enjoy driving, if I appeal to the luddite feeling/loss-of-control fear/hero complex of readers and sway them, I will keep people from taking my wheel/gun from me!
Really, asking questions like "do you want people to die?" and "do you want to die?" (of course both will be answered with no), then proclaiming people don't want driverless cars, is just sleazy ...
Meteorites fall on earth all the time, and they can kill people too; where is our anti-meteorite patriot missile system? Quick, crawl back to the caves and call your congress critter to do something about this life-threatening problem! YOUR life is at stake! /s
Show us the numbers, and projections based on cause of these accidents right now, with number of people involved and outcome. Then you can convince me driverless cars are more dangerous than the actual situation now in that particular case ...
Re: (Score:3)
Yes, it's a false dichotomy, but that's because they're missing an important option - why is nobody worried about saving the poor self-driving car!!!
SMBC Monty Hall Trolley problem (Score:2)
http://www.smbc-comics.com/com... [smbc-comics.com]
Imagine you're in an out of control Trolley. You're headed towards three buildings and you control which you slam into. Two buildings contain only one person and one building contains five people. You randomly select a building to slam into. Then one of the other buildings is revealed to contain only one person, but you can't switch to that building. Should you switch to the remaining building?
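The switching question has a clean answer that inverts the classic Monty Hall result, since here you want the *low*-value "door". A quick simulation, assuming the reveal always exposes a one-person building you didn't pick:

    import random

    def trial(switch):
        buildings = [1, 1, 5]
        random.shuffle(buildings)
        pick = random.randrange(3)
        # the reveal shows one of the OTHER buildings holding one person
        reveal = next(i for i in range(3) if i != pick and buildings[i] == 1)
        if switch:
            pick = next(i for i in range(3) if i not in (pick, reveal))
        return buildings[pick]  # people hit this trial

    n = 100_000
    stay = sum(trial(False) for _ in range(n)) / n
    switch = sum(trial(True) for _ in range(n)) / n
    print(f"average casualties: stay={stay:.2f}, switch={switch:.2f}")
    # stay ~2.33, switch ~3.67: Monty Hall logic inverts when you are trying
    # to AVOID the "prize", so here you should NOT switch.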
Re: (Score:2)
You are driving your car past the home of a /. editor. He is posting a dupe and fucking up the summary. You are on the way to bone an actual woman (whose legs are on the mantle...).
Do you stop and ninja his ass? What if you weigh 350+lbs and get winded eating?
Answer Hell no. Actual Woman
Re: (Score:2)
http://www.smbc-comics.com/com... [smbc-comics.com]
Imagine you're in an out of control Trolley. You're headed towards three buildings and you control which you slam into.
Is it out of control or not?
Re: (Score:3)
That's the great thing about these thought experiments; they can be as unlikely as you'd like, which means that they are as inapplicable to the real world as you'd like. :-)
SMBC is good at lampshading that.
Simple escape clause in the contract (Score:2)
Simple. Google puts a buy-back option in the contract for the self-driving car. They can buy back your car at any time for the full purchase price. Seems like a swell deal, right? They invoke this when you are about to hit a pedestrian, to buy back the car. Now it's no longer your car, so the choice of who to kill isn't predicated on your car's loyalty to you. Problem solved. Plus, no need for insurance.
Re: (Score:2)
I will wait until live issues have been tried in court and my expectations can be described by established precedent. Until then it's all make-believe.
Not even think-tank shit. (Score:4, Insightful)
1. Any company TRYING to write code with the intention of killing/injuring the user will be sued out of existence.
2. Whichever executive ordered the techs to write such code would never work again.
3. Even if you allow a theoretical situation that bypasses #1 & #2, complex software is very difficult to write. The company (and executive and coders) would be sued out of existence when the car killed/injured the passenger to avoid running over a box of toy dolls.
And yet we keep seeing this bullshit on /. People here are supposed to be more informed on the topics of AI and robotics and programming than the average. But here we are, again.
Re: (Score:2)
Unfortunately "should" replaces "would". Even Oliver North who gave classified anti-tank weapons to Islamic terrorists who had killed over a hundred US Marines less than a year previously got other jobs - for instance his current one as one of the people running the NRA.
Well connected Execs who carry out what should be career ending movies often get a parachute out of there and have no trouble finding another high profile positi
Re: (Score:2)
The solution is quite clear cut: the law is the law. It would be emphatically illegal to produce any product that could actively break the law. So if the crossing lights are in error and nuns and children cross the road in front of you on a single-lane road, the vehicle will not break the law to take evasive action; it will brake as best it can and attempt to minimise the harm to the vehicle from the resulting impact with the obstructions. The same as at a faulty train crossing: no illegal evasive action to ge
Re: (Score:2)
You do realize that's already the case?
Re: (Score:3, Interesting)
The situation won't actually happen in real life... take NYC for example. The speed limit is 25mph just about everywhere, and self-driving cars *will* actually drive 25mph. At that speed, unless the pedestrian jumped right in front out of nowhere, the car can stop on a dime.
Now imagine the pedestrian really did jump right out of "nowhere"; is that the fault of the car? And yes, a 25mph hit would hurt, but with telemetry of the incident, it's gonna be pretty easy to prove that the pedestrian was suicidal.
Now the supposed
Re:Seems this topic is stuck in the roundabout. (Score:5, Insightful)
I think something that is usually not emphasized is that in most cases, human drivers will not have time to make such moral decisions. If you had time enough to think about moral implications, you would in most cases have time to avoid the accident in the first place.
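A rough calculation backs this up: if a window for moral deliberation exists at all, so does room to simply stop. This sketch assumes a flat dry road with friction coefficient 0.8:

    def decision_window(hazard_m, v_kmh, mu=0.8, g=9.81):
        # seconds left before you MUST be braking to stop in time;
        # negative would mean no choice ever existed
        v = v_kmh / 3.6
        braking_m = v**2 / (2 * mu * g)
        return (hazard_m - braking_m) / v

    for hazard in (15, 30, 60):
        w = decision_window(hazard, 50)
        print(f"hazard at {hazard} m @ 50 km/h: {w:.2f} s to 'deliberate'")
    # ~0.2 s at 15 m, ~3.4 s at 60 m -- but at 60 m you also have ample
    # room to simply stop, which is the parent's point.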
Re: (Score:2)
This. We as drivers program ourselves to make decisions about our lives, and how we weigh our choices in the moment is our own resolve.
I cannot in good conscience abdicate the right to self-determination, especially when the legal outcome has not been determined for me if I should live. Getting any kind of a conviction in the U.S. is career suicide unless I am a lawyer or politician.
Re: (Score:2)
Here we go again. We just had this discussion last week too.
If the new slashdot owners are using the client base as fodder for some think-tank the least you could do is provide compensation after the first few times an article is recycled.
Slashdot is going down the toilet. I can hardly find any articles worth clicking on any more due to the stupid clickbait headlines:
Here's How Pinterest Plans to Get You To Shop More
How Gadget Makers Violate Federal Warranty Law
This Could Ruin Time Travel Forever
Drivers Prefer Autonomous Cars That Don't Kill Them
Why You Should Stop Using Telegram Right Now
Robot Pizza Company Wants To Be 'Amazon of Food'
Scientists Force Computer To Binge On TV Shows and Predict What Humans Will Do
You Could Be P
The moral dilemma of posting dupes (Score:5, Informative)
https://tech.slashdot.org/stor... [slashdot.org]
Re: (Score:3)
Or maybe The Moral Dilemma of Editorless News Sites.
Just follow the rules (Score:2, Interesting)
Re: (Score:2)
https://tech.slashdot.org/stor... [slashdot.org]
Re: (Score:2)
I thought there was a way to objectively decide morals: write rules ahead of time.
I think you're confusing morality with ethics.
Morality is the innate sense of what we think is right by ourself and others. Ethics is the attempt to codify this into rules.
It's a bit like the difference between justice and the law.
Re: (Score:2)
And what about when that auto-drive car drives right through an on-street event because it failed to read the "road closed" sign, and just plows through, thinking it has the legal right to be on that road?
Re: (Score:3)
Not sure how it would do that since it would sense the obstructions in the road. And if the sensors are not working, it would not move at all.
Kinda like asking "what if you were driving down the highway at 65mph after being blinded?" When you make up "what if" scenarios, they should be at least vaguely plausible.
Crowds of teens will jump into the road as a joke (Score:5, Interesting)
Add randomness, like real life (Score:2)
It's hard to say that one decision is always correct, so have the car choose differently among the options presented.
Re: Crowds of teens will jump into the road as a j (Score:2, Interesting)
It already is. Saw a talk by a Google scientist on the car project. He said in 95% of incidents the cameras clearly show the other driver is looking at their cell phone.
Re: (Score:2)
People trying to commit murder by jumping in front of a sensor-covered recording device? Is that really a problem to worry about? There are already much better ways to commit murder.
Also, it's very hard to set things up so that the car can neither avoid you nor stop. You will likely just get serious injuries and dent the car a bit instead of killing anyone, or simply have the car successfully avoid you and/or stop.
While these scenarios are fun to discuss, they are very unlikely to happen
Intelligent Steering (Score:5, Funny)
I don't think it is a valid question (Score:3)
At what point will the vehicle suddenly find itself in the trolley problem [wikipedia.org]? It's re-evaluating the scenario several hundred times per second. It will have started to react far sooner than this theorized last-moment decision. In short, the question isn't valid because you're applying a human trait - distraction - to the computer.
Sure, there are potential scenarios - a vehicle crosses into on-coming traffic, a boulder rolls down a hill and lands in front of you, or a sinkhole opens as you drive over it - and you have to deal with them, but these are easily decided. It's decided by liability, and we already have a framework for that. Liability will sacrifice the person in the vehicle. It will do this because involving a bystander is a liability to the vehicle's insurance company. Meanwhile, in the existing legal framework, you are still responsible for the operation of a computer-operated vehicle. Legally speaking, you have only yourself to blame. However, even in these dire circumstances, I would trust the vehicle to use real-time data to try to make the accident as survivable as possible, for everyone. I expect its ability to exceed my own. And I think eventually public opinion will come to believe that too - that autopilot survivability is better than human control in all circumstances.
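A minimal sketch of the "restatements per second" idea as a fixed-rate replanning loop; every function name here is a hypothetical stand-in, not a real API:

    import time

    def read_sensors():
        return {"obstacles": []}  # hypothetical sensor-fusion stand-in

    def plan_trajectory(world):
        # hypothetical: score candidate maneuvers, return the current best
        return "hold_lane" if not world["obstacles"] else "brake_hard"

    def execute(action):
        pass  # would command steering/brakes here

    def control_loop(hz=100, ticks=5):
        period = 1.0 / hz
        for _ in range(ticks):  # bounded so the sketch terminates
            t0 = time.monotonic()
            execute(plan_trajectory(read_sensors()))
            time.sleep(max(0.0, period - (time.monotonic() - t0)))

    control_loop()
    # At 100 Hz the plan is revised every 10 ms; a hazard gets dozens of small
    # corrections long before any "last moment" choice could arise.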
Re: (Score:3)
The main purpose of these experiments is to provide a mechanism for the questioner to place himself on a self-appointed higher moral plane while pointing out the moral failure of whatever response you give.
The best approach to the posed question is like the one you gave - question the question and questioner.
I dealt with the "If you could time travel back
I hope the car AI decides... (Score:5, Funny)
...to run over whoever keeps posting this dupe.
BOOM! Problem solved!
How is it different? (Score:2)
Come on, man! (Score:2)
Doesn't anyone read science fiction or watch movies? These are not new questions.
LK
self-killing cars (Score:2)
It's going to be a fascinating, if redundant, discussion. The good news is that we will have a long time to discuss it before you start seeing a lot of self-driving passenger cars on our roadways.
Now the real moral dilemma is whether one dollar of public funds should go toward infrastructure for self-driving passenger cars. I mean, if there's money left over once we get back to the point we were at middle of last century, when practically every US city had a robust (and profitable) public transportation s
A kobayashi maru is not likely (Score:2)
Not that shit again! (Score:2)
Surprise Eternal Questions Are Hard (Score:2)
I mean, it isn't like humanity has been agonizing over these questions since the birth of civilization without coming to satisfactory answers.
Re: (Score:2)
Who should live, who should die?
Egalitarian Solution (Score:2)
Welcome to digital morality.
Sigh (Score:2)
Nothing immoral about having the car minimize injury to the driver, and fuck everyone else. In most cases this will also minimize injury to those outside the car.
Re: (Score:2)
There are far more cars carrying other people than me on the roads. As a rational person I'm therefore voting for forcing the self-driving cars to minimize total casualties with no particular preference for or against its passengers.
Also, "and fuck everyone else" is pretty much the definition of immoral.
Liability vs Sales (Score:2)
From a Liability perspective you're safer prioritizing overall minimization of loss of life.
From a Sales perspective, who's going to buy a car that's programmed to purposefully kill you under certain circumstances?
Re: (Score:2)
From a Liability perspective you're safer prioritizing overall minimization of loss of life. From a Sales perspective, who's going to buy a car that's programmed to purposefully kill you under certain circumstances?
The concept of ownership is becoming obsolete, so discussions around it may be rather pointless.
To be honest, I never envisioned fleets of autonomous cars being owned or controlled by any entity other than a government-sanctioned and protected one, or the government itself. This will help ensure lawsuits derived from moral dilemmas become rather impossible to even conceive, let alone execute.
And even if it is not, who's going to sell a car where the manufacturer is liable for who may be harmed during auton
Re: (Score:2)
> The concept of ownership is becoming obsolete, so discussions around it may be rather pointless.
Bullshit like this article shows why it won't be obsolete. You definitely want to OWN the car, so it will save YOU. This succinctly demonstrates the value of ownership: if the state owns the car, maybe you can plead for your life at city hall. Good luck!
Whether or not you want to "own" your car or not will be a moot point.
I'd like to "own" my cell phone, and how it operates. Should I go plead to the providers? How much "luck" do you think I'll need to get support for that? I hope that's a clear enough example of where we're going in society when it comes to services and who's in control.
Why do I say government here? Single entity. Single autonomous standard. Single control mechanism to mitigate or remove liability. Otherwise, cue the lawsuits betwee
I don't think the algorithms work this way (Score:4, Insightful)
As far as I can tell, the autonomous algorithms don't work this way and probably never will work this way. That is, they don't calculate potential fatalities for various scenarios and then pick the minimum one. The car's response in any particular situation will be effectively some combination of simpler heuristics -- simpler than trying to project casualty figures, while still being a rather complex set of rules.
Take one of these situations, and let's say the car ended up killing pedestrians and saving the occupants. The after-incident report for an accident like that is not going to read "the algorithm chose to save the occupants instead of the pedestrians". It's not going to read that way simply because that's not how the algorithm makes decisions. Instead the report is going to read something like "the algorithm gives extra weight to keeping the car on the road. In this situation, that resulted in putting the pedestrians in greater danger than the car's occupants. However, we still maintain that, on average, this results in a safer driving algorithm, even if it does not optimize the result of every possible accident."
And regarding the "every possible accident" part of that: it is simply impossible to imagine an algorithm so perfect that, in any situation, it can optimize the result based on some pre-determined moral outcome. So it's not just "well, let's change how the algorithms work, then". An algorithm that makes driving decisions in any possible weird situation by predicting fatalities, rather than relying on heuristics (however complex they are), is simply not realistic.
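To make the parent's point concrete, here is a toy version of trajectory scoring by weighted heuristics; the weights and feature values are invented, and the large reward for staying on the road reproduces the hypothetical after-incident report above:

    # Invented weights: negative values are rewards, positive are penalties.
    WEIGHTS = {"stays_on_road": -5.0, "clearance_m": -2.0,
               "peak_decel_g": 3.0, "impact_speed": 8.0}

    def cost(features):
        return sum(WEIGHTS[k] * v for k, v in features.items())

    candidates = {
        "brake_in_lane": {"stays_on_road": 1, "clearance_m": 0.3,
                          "peak_decel_g": 0.9, "impact_speed": 0.0},
        "swerve_off_road": {"stays_on_road": 0, "clearance_m": 1.5,
                            "peak_decel_g": 0.5, "impact_speed": 0.2},
    }
    for name in sorted(candidates, key=lambda c: cost(candidates[c])):
        print(f"{cost(candidates[name]):6.1f}  {name}")
    # "brake_in_lane" wins on the road-keeping reward, with no line of code
    # that ever mentions pedestrians, passengers, or casualty counts.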
Re: (Score:2)
Everyone knows the report will include a passenger in the back seat being pretty sure the friendly green light turned red and the computer voice said "Kill the humans!"
The one who pays decides (Score:2)
Put a DIP switch in the car. ON position: save the driver at all costs. OFF position: minimize casualties, even if that means sacrificing the driver. Default it to OFF.
Explain in the manual how to change it.
DO NOT LET THE DEALERSHIP CHANGE IT.
Enjoy safer streets
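A sketch of how such an owner-set policy toggle might look in code; all names and the tie-breaker placement are hypothetical:

    from enum import Enum

    class CrashPolicy(Enum):
        MINIMIZE_CASUALTIES = 0  # switch OFF: the proposed factory default
        PROTECT_OCCUPANTS = 1    # switch ON: owner chose self-preservation

    def read_dip_switch():
        # hypothetical: would read the physical switch; defaults OFF as proposed
        return CrashPolicy.MINIMIZE_CASUALTIES

    def rank(policy, occupant_risk, pedestrian_risk):
        # consulted only when two maneuvers are otherwise rated equally
        if policy is CrashPolicy.PROTECT_OCCUPANTS:
            return occupant_risk  # rank purely by risk to occupants
        return occupant_risk + pedestrian_risk  # rank by total expected harm

    print(rank(read_dip_switch(), occupant_risk=0.3, pedestrian_risk=0.6))

Logging the switch position at crash time would presumably matter a great deal for the liability questions raised elsewhere in this thread.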
Obvious choice: hit a tree (Score:2)
MOD this topic DOWN! (Score:2)
I didn't care before.
Now I want them all dead.
Not just people - type of people? (Score:3)
Re: (Score:2)
The car has a retired fighter pilot AI [slashdot.org], which quickly performs a partial barrel roll, sliding between both on two wheels. It also automatically shares the video on Youtube.
For the hundredth time... (Score:2)
In roughly a century of driving, humans have learned one strategy: slam on the breaks. The choice is "break, or don't". When the driver is replaced by a bot, the choice is STILL "break, or don't".
I swear, this nonsense about algorithms implementing moral calculus is just a scam to get philosophy professors a few more speaking engagements.
Re: (Score:2)
In roughly a century of driving, humans have learned one strategy: slam on the breaks. The choice is "break, or don't". When the driver is replaced by a bot, the choice is STILL "break, or don't".
I swear, this nonsense about algorithms implementing moral calculus is just a scam to get philosophy professors a few more speaking engagements.
Speaking of nonsense, care to tell me how the hell philosophy professors are responsible for creating the litigious society we live in today?
Regardless of the reaction or who or what is responsible for a death, the lawyer is standing by, armed with a metric fuckton of legal precedent, which IS the entire reason we're having this discussion.
Re: (Score:2)
Exactly. (It's "brake", btw).
If you see a situation where this might even remotely be possible, then drivers typically SLOW DOWN so there's not only more time to react, but
Re: (Score:2)
Wait, what? The best way to stop a vehicle with failed brakes is to:
1. Use the engine and e-brake.
2. While this is going on, continue to avoid obstacles as long as possible.
3. If flat, higher-friction surfaces are available, drive on them (pull off onto the road shoulder if there is a shoulder and the speed is low enough, for example - the gravel at some road shoulders will slow the car down more than driving on pavement will).
The only time crashing head on is a good idea is if it's unavoidable or a cho
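A toy encoding of that ordered fallback chain; the threshold and wording are made up for illustration:

    def brake_failure_response(speed_kmh, shoulder_available):
        # ordered fallbacks, roughly as the parent lists them
        actions = ["downshift and use engine braking",
                   "apply the parking/e-brake gradually"]
        if shoulder_available and speed_kmh < 60:  # assumed safe-exit threshold
            actions.append("pull onto the gravel shoulder for extra drag")
        actions.append("keep steering around obstacles until stopped")
        return actions

    for step in brake_failure_response(50, shoulder_available=True):
        print("-", step)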
Re: (Score:2)
I wouldn't buy a car that broke all the time, pedestrians or no.
short term problem (Score:2)
I really doubt this problem would last after all human drivers are replaced.
Flip a coin (Score:2)
And make it 2 out of 3!
Obvious preference (Score:2)
If I get a vote, I'd kind of like a driverless car that doesn't find itself choosing between swerving wildly off the road or hitting a crowd full of people. How does it come up, anyway? I mean, if the car is following the rules, and 10 people spontaneously decide to fling themselves in front of it... fuck it, run 'em down, with a sarcastic little "beep beep" as it drives away.
wrong question (Score:2)
You should save the people that are actually complying with the law and acting reasonably. Someone crossing the road at a point where visibility is poor and a driverless car can only avoid hitting them by killing its passengers is probably not acting reasonably, and all things being equal, the driverless car should therefore protect its passengers.
Just do a really sharp turn.. (Score:2)
So the car rolls and kills both the pedestrians and the driver at the same time.
Game theory (Score:3)
Use good AI to optimize efficiency, but detect human drivers and give them a wider margin of safety.
As far as the morals of saving pedestrians vs passengers or drivers go, let's not forget the BitTorrent protocol.
Game theory, and real life itself, deal with cooperation vs defection, and any car that selflessly seppukus its own passengers to spare a greater number is going to get taken advantage of by less scrupulous algorithms.
Anyone trying to program an AI on how to handle a car accident should not forget this.
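A toy iterated-encounter model of that exploitation effect; the harm numbers are arbitrary, chosen only to show selfless agents absorbing the cost when mixed with selfish ones:

    import random

    def encounter(a, b):
        # invented harm units for one conflict between two cars
        if a == b == "selfless":
            return (1, 1)  # both yield a little: small shared cost
        if a == b == "selfish":
            return (3, 3)  # neither yields: collision risk for both
        return (4, 0) if a == "selfless" else (0, 4)  # selfless car eats it

    random.seed(0)
    pop = ["selfless"] * 50 + ["selfish"] * 50
    harm = {"selfless": 0, "selfish": 0}
    for _ in range(10_000):
        i, j = random.sample(range(100), 2)
        ha, hb = encounter(pop[i], pop[j])
        harm[pop[i]] += ha
        harm[pop[j]] += hb
    print(harm)  # the selfless half ends up absorbing the most harm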
If we know the car will crash, we can plan for it (Score:2)
If we know the car is programmed to crash into a tree to avoid pedestrian casualties, this can be planned for in the safety design of the car, since it makes the kind of crash more predictable. Further, we can research into how to not get into those situations in the first place. This means looking ahead more when driving (what driving instructors often talk about, what driving students often omit to learn, and what serious police driver training used to drum into people). But being able to compile a compre
What will really happen... (Score:2)
It's not actually a hard question (Score:2)
We already have laws around these things - that dictate what a driver is supposed to do in these conditions and what degree of liability he would have towards passengers or pedestrians. Autonomous cars should do exactly what the local law would have demanded a human driver do.
What's the present situation? (Score:3)
I'd say that in any discussion of this kind, you should first have a very clear idea of what the situation is now. What does the current driver do in these situations, and what are the outcomes?
I'd say the best defense for any algorithm would be that, in all (or most) situations, it saves more pedestrian lives AND more passenger lives than the current situation.
That's the only way, I think, of reconciling people with the biggest user-facing handicap of these technologies, which is the loss-of-control sensation.
Save the Car! (Score:2)
What else?
Preserve the legal status quo (Score:2)
I don't see why this is such a conundrum. Right now we presume the driver of a human-operated vehicle will in most cases attempt to save the occupants of the vehicle first, since the imperative of the driver will be self-preservation. I see no reason why this would need to change. All that has changed is that the driver isn't human, but it's reasonable to expect the driver of the vehicle (human or not) to attempt to preserve the life of the occupants of the vehicle first because it fundamentally will have
Re: (Score:2)
You fail at physics. There's no way that a single man/woman (fat or otherwise) could stop a train.
Re: (Score:2)
You can stop a train by making a phone call.
Stopping a train (Score:2)
You fail at physics. There's no way that a single man/woman (fat or otherwise) could stop a train.
Really?" [wikipedia.org] You sure about that?
Re: (Score:2)
Plenty of obese neckbeards to throw. It's not like anybody will ever miss them.
Irrelevant to the question of whether it actually works. Which it doesn't.
Re: (Score:2)
However when the pedestrian is a moose you may want to revise your thinking.
Disturbing pic: http://www.ontario-outfitters.... [ontario-outfitters.ca]
Re: (Score:2)
One person chose to operate several tons of rolling death and metal. Another person just happened to be standing around. The entire responsibility lies with the person who made the choice.
Operate? Choice? Yeah right.
As we consider removing objects like steering wheels, raising questions about whether or not the rider even needs to be licensed, the future will look more like being a rider on a train or bus today. How else are you going to remove the human factor that tends to kill thousands of people every year?
For clarification, reference "autonomous".
Re: (Score:2)
You're arguing that since it's autonomous, the occupant didn't make the choice to get in? And therefore pedestrians should be killed?
Any pedestrian accepts a level of risk when walking or standing anywhere near what is or will be known as a high-risk zone. (a.k.a. where cars are operating, autonomous or not)
My point was you are a rider, NOT an "operator". You choose to get in a cab today. You do NOT choose which pedestrians it avoids if an accident occurs. That is up to the actual operator of the vehicle (autonomous or not), unless you somehow feel the human riding in the back seat is to blame, all because they needed a ride that day.
Re: (Score:2)
Pedestrians can be legally at fault in car-pedestrian accidents, for example when jaywalking, crossing against a red light, or even by being drunk. In those cases, the pedestrian has no claim against the driver, and the driver of the "several tons of rolling death and metal" can recover damages from the pedestrian.
Re: (Score:3)
It should calculate the options, using the original 'Death Race 2000' scoring system, then maximize score.
In general go for the unusual and quick on the road. Mothers with infants count 5x.
Re: (Score:2)
Drive fast enough that pedestrians tunnel through your car.
Re: (Score:2)
It's not an idiotic question. It is a situation that could come up.
However, it does need to be put into perspective.
As you say, the extreme cases rarely come up.
As well, will the AI do as well as or better than the average human?
Far too often when a new technology comes up, people spend their time worrying about every potential issue with it rather than asking how well it stacks up against the current system.
Most people just don't react that well in extreme scenarios.
http://www.cbc.ca/news/canada/... [www.cbc.ca]
Here's a