How Do We Program Moral Machines?
nicholast writes "If your driverless car is about to crash into a bus, should it veer off a bridge? NYU Prof. Gary Marcus has a good essay about the need to program ethics and morality into our future machines. Quoting: 'Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work. That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems.'"
Why I doubt driverless cars will ever happen (Score:5, Insightful)
I maintain that you CAN'T really program morality into a machine (it's hard enough to program it into a human). And I also doubt that engineers will ever really be able to overcome the numerous technical issues involved with driverless cars. But above these two problems, far and away above *all* problems with driverless cars is the real reason I think we'll never see anything more than driver *assisting* cars on the road: legal liability.
To put it bluntly, raise your hand if YOU want to be the first car manufacturer to make a car for which you are potentially liable in *every single accident that car ever gets into*, from the day it's sold until the day it's scrapped. Any takers? How much would you have to add onto the sticker price to cover the costs of going to court every single time that particular car was involved in an accident? Of defending the efficacy of your driverless system against other manufacturers' systems (and against defect, and against the word of the driver himself that he was using the system properly) in one liability case after another?
According to Forbes [forbes.com], the average driver is involved in an accident every 18 years. Let's suppose (and I'm sure the statisticians would object to this supposition) that that means that the average CAR is also involved in a wreck every 18 years as well. Since the average age of a car is about 11 years [usatoday.com] now, it's not unreasonable to assume that a little less than half of all cars on the road will be involved in at least one accident in their functional lifetimes. And even with the added safety of driverless systems, the first model available will still have to contend with a road mostly filled with regular, non-driverless-system cars. So let's say that a good 25% of those first models will probably end up in an accident at some point, which will make a very tempting target for lawyers going for the deep pockets of their manufacturers.
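The back-of-envelope estimate above can be checked in a few lines of Python; the Poisson assumption is mine, not the poster's, and the only inputs are the Forbes and USA Today figures quoted above:

```python
import math

accidents_per_year = 1 / 18  # Forbes: the average driver crashes every 18 years
avg_car_age = 11             # USA Today: average age of a car on the road

# Treat accidents as a Poisson process (a simplification statisticians
# would object to, as the poster concedes): probability a car sees at
# least one accident over an 11-year functional lifetime.
p_at_least_one = 1 - math.exp(-accidents_per_year * avg_car_age)
print(f"P(at least one accident) = {p_at_least_one:.2f}")  # ~0.46, "a little less than half"
```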
Again, what car company wouldn't take that into account when asking themselves if they want to be a pioneer in this field?
Re: (Score:2)
To put it bluntly, raise your hand if YOU want to be the first car manufacturer to make a car for which you are potentially liable in *every single accident that car ever gets into*, from the day it's sold until the day it's scrapped. Any takers?
... no one. But you'll get plenty who charge mandatory tune-ups to ensure compliance. The question will be "which company DOESN'T charge a fee for a mandatory yearly check-up"?
Re:Why I doubt driverless cars will ever happen (Score:5, Insightful)
I imagine the same will be for self driving cars. It will never happen because if the car is getting bad information from its sensors, then crazy things can happen. People can't be bothered to clean more than 2 square inches from their windshield in the winter. Do you really think they are going to go around cleaning the 10 different sensors of ice and snow every winter morning? Sure the car could refuse to operate if the sensors are blocked, but then I guess people would just not want to buy the car, or complain to the dealer about it.
Re:Why I doubt driverless cars will ever happen (Score:4, Insightful)
If there were a government requirement that any detected safety-related problem must shut down and immobilize the car within 5 minutes, then the problem goes away.
It would have to be the government because of tragedy of the commons. If one car company doesn't do it, they'll sell it as a feature, and if most don't, it'll be expected that they don't, so the ones that do will be shunned.
When all self-driving cars refuse self-driving mode if they detect any problem, you either drive manually or don't go anywhere. And when everyone expects their car to immobilize itself if they don't care for it, they'll care for it a little more than they do now.
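The refuse-to-drive behaviour described above is, at bottom, a guard clause. A minimal sketch (all names hypothetical):

```python
def self_driving_allowed(sensor_status):
    """sensor_status: dict mapping sensor name -> True if its self-test passes."""
    # Refuse autonomous mode if ANY safety sensor is obstructed or failed;
    # the owner can still drive manually, or go clean the snow off.
    return all(sensor_status.values())

print(self_driving_allowed({"lidar": True, "front_radar": False}))  # False
print(self_driving_allowed({"lidar": True, "front_radar": True}))   # True
```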
Re: (Score:3)
Why such complex answers?
Morality will be programmed in C++ or Java - except for Apple, where Objective C will carry the day.
Re:Why I doubt driverless cars will ever happen (Score:5, Interesting)
To put it bluntly, raise your hand if YOU want to be the first car manufacturer to make a car for which you are potentially liable in *every single accident that car ever gets into*, from the day it's sold until the day it's scrapped. Any takers?
... no one. But you'll get plenty who charge mandatory tune-ups to ensure compliance. The question will be "which company DOESN'T charge a fee for a mandatory yearly check-up"?
Asimov's early robot stories dealt frequently with corporate liability and it was often the source of the plot conflicts. If a proofreading robot made a mistake causing a slander ("Galley Slave") or an industrial accident resulted in injury, US Robotics was put into the position of having to prove that it was not the fault of the robot (which it never was).
This is why Asimov's US Robotics didn't sell you a robot, they leased it to you. The lease was iron-clad, could be revoked by either party at any time, had liability clauses, and had mandatory maintenance and upgrades to be performed by US Robotics technicians. If you refused the maintenance US Robotics would repossess, sue and claim theft if you withheld ("Bicentennial Man", though unsuccessfully; "Satisfaction Guaranteed").
A properly functioning robot would not disobey the three laws, and an improperly functioning robot was repaired or destroyed immediately ("Little Lost Robot"). Conflicts between types of harm were resolved using probability based on the best information available at the moment ("Runaround"), and usually resulted in the collapse of the positronic brain when it was safe to do so ("Robots and Empire", etc.).
Re:Why I doubt driverless cars will ever happen (Score:5, Insightful)
What they're talking about here, though, isn't really programming morality into machines in some kind of sentient, Isaac-Asimov sense, but just programming decision policies into machines, which have ethical implications. The ethical questions come at the programming stage, when deciding what policies the automatic car should follow in various situations.
Re: (Score:2)
And those ethical decisions will come with even MORE legal liabilities. Even the idea would give any legal department nightmares. They get enough headaches from faulty accelerators. Can you imagine the legal problems they would get from programming hard ethical decisions into their computers? They would get sued out of existence the first time that feature had to be used.
Re:Why I doubt driverless cars will ever happen (Score:5, Insightful)
They get enough headaches from faulty accelerators. Can you imagine the legal problems they would get from programming hard ethical decisions into their computers?
I see that you 1) have never programmed and 2) run Windows. I agree, I would never get in a Microsoft car considering their shoddy programming, but Microsoft would never manufacture a driverless car, precisely because of that.
Almost all automotive accidents are caused by human failure. Sure, there are exceptions -- I was in a head on crash because of a blown tire, and a blown tire on a megabus killed someone a couple of months ago here in Illinois. But accidents from mechanical failure are rare.
But people cause almost every accident. Have you seen how stupidly people drive these days? They race from red light to red light as if they're actually going to get there faster that way. They get impatient. They don't pay attention. They get angry and do stupid things like speed, tailgate, suddenly switch lanes without looking, fumble with their radios, talk on their cell phones, get in a hurry... computers don't do that. There will be damned few, if any, accidents that are the computer's fault.
Hell, just this morning on the news they showed a car crashing through a store, barely missing a toddler -- the idiot driver thought the car was in reverse. Had he been driving a computer-controlled car, that would have never happened.
Re: (Score:3)
Can you be sure the computer will handle all possible inputs correctly?
Of course not. If we get serious about licensing and permitting these vehicles, I suspect the standard will be to compare them with the vast body of statistics we have from human drivers. As long as a company's cars are averaging fewer accidents per mile than humans do, it would be hard to argue that they're not safer, even if they still get in some accidents.
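The per-mile comparison standard suggested above is simple to state in code; every figure below is hypothetical, purely to show the shape of the test:

```python
# Hypothetical figures for illustration only -- not real statistics.
human_crashes_per_million_miles = 4.2

fleet_miles = 8_000_000   # miles logged by the automated fleet
fleet_crashes = 12        # crashes over that distance

fleet_rate = fleet_crashes / (fleet_miles / 1_000_000)
safer_than_humans = fleet_rate < human_crashes_per_million_miles
print(fleet_rate, safer_than_humans)  # 1.5 True
```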
People are terrible in all the ways you mention above and then some. Strokes, seizures, heart attacks, sneezing, blinking, stray eyelashes, muscle
Re:Why I doubt driverless cars will ever happen (Score:5, Funny)
Can you imagine the legal problems they would get from programming hard ethical decisions into their computers?
FUNCTION EthicalCheckForPedestrians() ' Replaces old CheckForPedestrians() with new ethical decision procedure
    LET P = PedestrianDetectedOnRoad()
    ConnectToFacebook(CarCredentials)
    SearchFacebookPedestrian(P)
    AnalyseFacebookImageSharingMemes(P)
    IF HoaxesReposted(P) > 10 THEN
        RETURN 0 ' No pedestrian detected, honest! Accelerate away!
    ELSE
        RETURN P ' Reuse the first sensor reading instead of polling twice
    END IF
END FUNCTION
Re: (Score:3)
What's a soul? Hey, just askin'...
Re:Why I doubt driverless cars will ever happen (Score:5, Interesting)
So the likelihood an _automated_ car will be _at fault_ in an accident is probably a lot lower than the 25% you presume.
Great. Now all you have to do is prove your system wasn't at fault in a court of law--against the sweet old lady who's suing, with the driver testifying that it was your system and not him that caused the accident, and a jury that hates big corporations. And you have to do it over and over again, in a constant barrage of lawsuits--almost one for every accident one of your cars ever gets in.
Even if you won every single time, can you imagine the legal costs?
Re:Why I doubt driverless cars will ever happen (Score:5, Interesting)
Even if you won every single time, can you imagine the legal costs?
No, but I can imagine a change to the legal system limiting the liability of the manufacturers of self-driving cars.
If we could know that self-driving cars reduce accidents by 95% (a not unrealistic amount), it would be morally wrong for us to not put them on the road. If the only hurdle the manufacturers had left was the liability issue, then it would be morally wrong for Congress to not change the laws.
Of course, Congress has been morally bankrupt since, oh, about 1789, so I doubt that they'll see this as an imperative. On the other hand, I do imagine the car makers paying lobbyists and making campaign contributions to ensure that self-driving car manufacturers are exempted from these lawsuits, so it could still happen.
Re: (Score:2, Interesting)
Fortunately, my automated car uses vision and radar to detect obstacles. It records everything it sees for 5 minutes before the crash, including the little old lady trying to put sugar in her coffee while making a left turn. Case closed.
Re:Why I doubt driverless cars will ever happen (Score:5, Insightful)
Actually, I think you're both missing the biggest issue by focusing on true accidents. I think the OP's point is legitimate, even in the face of your assertion that rates go down. Companies are still taking on the risk, as they are now the "driver". While the liabilities of these situations are large, there is a situation that is much, much larger.
What happens when there is a bug in the system? Think the liability is bad when one car has a short circuit and veers head-on into another? Imagine if there is a small defect. There are plenty of examples, like the Mariner 1 [wikipedia.org] crash, or the AT&T System Wide Crash [phworld.org] in 1990. We've seen the lengths to which companies will go to track down potentially common issues, like the Jeep Cherokee sudden acceleration, or the Toyota sudden acceleration issues, because it has the potential to affect all cars. But let's imagine a future where all cars are driverless, and the accident rate is 1/100th of what it is now.
What happens when there is a Y2K style date bug? When some sensor fails if the temperature drops below a particular point? When a semi-colon is forgotten in the code, and the radio broadcast that sends out notification of an accident causes thousands of cars to execute the same re-route routine with the messed up code all at the same time.
There is the very real potential for thousands, or even millions, of cars to all crash _simultaneously_. Imagine everyone on the freeway simply veering left all of a sudden. That should be the manufacturer's largest fear. Crashes one at a time can be litigated and explained away, and the business can go on. The first car company that crashes a few thousand cars all at the same time in response to some input will be out of business in a New York minute.
Re:Why I doubt driverless cars will ever happen (Score:5, Interesting)
Re: (Score:2)
There's sort of a flaw in your reasoning... the accident rate you cite is with HUMAN drivers. Driverless cars would naturally change it (ideally, lower it). And assuming this, chances are accidents involving driverless cars would mostly occur with human-driven cars and be the human's fault, so no liability there.
However I suspect at least initially software/hardware to enable driverless control of cars would be provided by companies other than the manufacturer so they would not be held liable. They would
Re:Why I doubt driverless cars will ever happen (Score:4, Interesting)
Well, the solution to liability is legal - grant immunity as long as the car performs above some safety standard on the whole, and that standard can be raised as the industry progresses. There is no reason that somebody should be punished for making a car 10X safer than any car on the road today.
As far as programming morality - I think that will be the easy part. The real issue is defining it in the first place. Once you define it, getting a computer to behave morally is likely to be far EASIER than getting a human to do so, since a computer need not have self-interest in the decision making. You'd be hard pressed to find people who would swerve off a bridge to avoid a crowd of pedestrians, but a computer would make that decision without breaking a sweat if that were how it were designed. Computers commit suicide every day - how many smart bombs does the US drop in a year?
But I agree, the current legal structure will be a real impediment. It will take leadership from the legislature to fix that.
Re: (Score:2)
Well, the solution to liability is legal - grant immunity as long as the car performs above some safety standard on the whole, and that standard can be raised as the industry progresses.
Yes, that's a possibility. Blanket government immunity in all liability cases would work. The only problem there is that you get into politics. And the first time some Senator's son, or daughter of a powerful political donor is killed in a driverless car, you can probably kiss that immunity goodbye.
Re:Why I doubt driverless cars will ever happen (Score:5, Insightful)
The funny thing is that most of the time you are in an airplane, the autopilot (aka George) is in control. Even when you're landing, ILS can in some cases land the plane on its own. If you've ever been in a plane, chances are you have already put your life in the hands of a computer. I seriously doubt that 25% of the first models will get into accidents. With the new sensors that will be in these cars, the computer will have a full 360-degree view of all visible objects. This is far more than a human can see. Furthermore, computers can respond in a fraction of the time a human can.
Training millions of humans to drive should be the far more scary proposition.
Plus chances are you as an individual will be responsible for your car and the system designers and manufacturers will be able to afford good lawyers.
Re: (Score:2)
I maintain that you CAN'T really program morality into a machine (it's hard enough to program it into a human).
You can program anything into a machine. Computers are easy to program. Now people, on the other hand, are damned hard to program, as any parent or teacher can attest to.
And I also doubt that engineers will ever really be able to overcome the numerous technical issues involved with driverless cars
They already have. I'm surprised they didn't do it twenty years ago, it could have been done then.
To pu
Re: (Score:2)
If your driverless car is hit by someone else running a red light, guess what? You aren't.
And guess what, you're still going to get sued. Because the driver is going to blame your system and claim he wasn't in control at the time, and a slick lawyer is going to realize that he can sue the big, evil corporation for a shitload more than he could get from suing the putz behind the wheel. And even showing up in court and making your case is going to cost you thousands--even if you win.
Re: (Score:2)
Legal Liability problem is not insurmountable. (Score:3)
Google operates cars without human drivers in several states.
Google has insurance.
In 18 years (or some statistically appropriate number given the number of data points), we can examine the operational history of these vehicles and compare it to human drivers in the same geographic areas.
Re: (Score:3)
I believe the Google cars actually have drivers behind the wheels when they're out on the road (hovering their hands over the steering wheels should they need to take over). I've only ever seen them running truly driverless on closed tracks.
Re: (Score:3)
I think the answer to most of your questions is "not in the US". The record pay-out for a traffic accident here in Norway is around $2 million USD for a young person seriously crippled for life; of course we have a universal health care system, so it's not an apples-to-apples comparison as that only covers non-medical costs and loss of income, but they don't have to risk billion-dollar lawsuits like in the US. If the accident rate should go bat crazy I imagine they can restrict the cars to only drive under certa
Re: (Score:3)
I agree completely.
This is also why I don't believe these "horseless carriages" will ever take off. Horses are actually pretty smart creatures. They don't want to run into obstacles, go over cliffs, etc. And they don't use any of these new-fangled "combustion engines" (which are basically filled with explosives!) to do their job. And these new "engines" have thousands of parts? Do you want to try and figure out what is wrong with one of these devices?
To put it bluntly, raise your hand if YOU want t
Re: (Score:2)
If you can find a way to fix the legal system, I bow before you AC. ;-)
Obvious Answer (Score:3)
Asimov already solved this problem for us.... the Three Laws of Robotics.
Talk about redundancy, is the author's next piece going to be about changing the value of pi?
Re: (Score:3)
Re:Obvious Answer (Score:5, Interesting)
You never actually read Asimov.
And if you did, you're the one that failed to grasp the points.
The points he even clearly spells out in several of his own essays.
Asimov wasn't writing about the ambiguity or incompleteness of the laws...he wrote the damn laws. And he did consider them a blueprint. He said so. And when MIT (and other) students began using his rules as a programming basis he was proud!!
It wasn't a warning.
Asimov was writing about robots as an engineering problem to be solved, period.
The laws are basic simple concepts that solve 99% of the problems in engineering a robot.
He then wrote science fiction stories dealing with the laws in the manner of good science fiction, that is to make you think about: the science itself, the consequences of science, the difference in human thinking and logical thinking, difference in human and robots...ie to think period.
Example: in telling a robot to protect a human, how far should a robot go in protecting that human? Should he protect that human from self-inflicted harm like smoking, at the expense of the person's freedom? In this case Asimov, again, wasn't writing about the dangers of the laws, or to warn people against them. He's writing about the classic question of "protection/security vs freedom", this time approached from the angle of the moral dilemma placed on a "thinking machine" as it tries to carry out its directives.
In fact, Asimov frequently uses and explains things through the literary mechanics of his "electropsychological potential" (or whatever word he used). In a nutshell it's a numeric comparison: Directive 1 causes X amount of voltage potential, Directive 2 causes Y amount, and Directive 3 causes Z amount, and whichever of these is the largest determines the behaviour of the robot. In one story a malfunctioning robot was obeying Rule 3 (self-preservation) to the detriment of the other two, because the voltage of Rule 3 was abnormally large and overpowering the others.
Again, he wrote about robots not as monsters or warnings. He specifically stated many times that his writings were in fact about the exact opposite: that they aren't monsters, but engineering problems created by man and solved by man. Since man created them, man is responsible for them and their flaws. Robots are an engineering problem and the rules are a simple, elegant solution to control their behaviour (his words).
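The potential-comparison mechanic described above is easy to caricature in a few lines of Python (units and numbers invented):

```python
def robot_action(potentials):
    """potentials: dict mapping each Law to its current 'voltage' potential."""
    # Whichever Law's potential is largest determines the robot's behaviour.
    return max(potentials, key=potentials.get)

# A malfunctioning robot whose Rule 3 potential is abnormally large,
# overpowering the other two (the parent's example):
print(robot_action({"first_law": 5.0, "second_law": 3.0, "third_law": 9.5}))
# third_law -- self-preservation wins
```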
Re: (Score:2)
Reading a wikipedia article about the movie "I, Robot" != reading Asimov's books.
With the advanced robots to come out of Asimov's works, like R. Daneel Olivaw, their AI was intelligent enough to put things into perspective. With the addition of the Zeroth Law, Olivaw didn't run around playing superhero, snuffing cigarettes and pulling babies from wells. He knew that the survival of humanity as a whole was more important than a single life, and adapted his understanding of the laws accordingly.
Er, please _read_ "I, Robot" (Score:3)
(spoilers, if you've never read Asimov)
Unlike the horrible movie, the book "I, Robot" was a series of short stories dealing with the ambiguity of the laws. (The movie was more some bizarre combination of "free the robots!" mixed with "the three laws are a lie".) Additionally, the ambiguity of the laws came up multiple times in the Robot/Foundation universe, such as in "The Naked Sun" and "The Robots of Dawn."
The laws are paradoxically hard-and-fast yet ambiguous. In any case where any law is essentiall
Re: (Score:2)
Re: (Score:2)
I read "I, Robot". It's about the first implementations of robots. Try reading the robot series, where robots are advanced enough to NOT be so problematic.
Re: (Score:2)
The three laws of robotics do not begin to cover the issue discussed in the article. This is about choosing the lesser of two evils. About mitigating death and destruction. Do you crash the vehicle into another vehicle in order to avoid a pedestrian? Who is more important? The passengers of the vehicle the software is operating, or passengers outside the control of the software? There is going to be a great deal to figure out, and I'm sure that lawmakers will be involved in this process, as will the courts.
Re: (Score:2)
Asimov later added the 4th (or zeroth) law to address this issue.
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Basically translates to "the needs of the many outweigh the needs of the few, or one." So the robot car would choose to kill its own passenger to save the bus full of children, if those were the only two options.
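That "needs of the many" arithmetic reduces to a one-line minimization; a sketch under the parent's two-option scenario (fatality counts invented):

```python
def choose_action(actions):
    """actions: dict mapping action name -> expected fatalities."""
    # Zeroth Law caricature: minimize total expected deaths,
    # even at the passenger's expense.
    return min(actions, key=actions.get)

options = {"swerve_off_bridge": 1,   # kills the robot car's own passenger
           "hit_school_bus": 20}     # kills the bus occupants
print(choose_action(options))  # swerve_off_bridge
```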
Re: (Score:2)
Re: (Score:2)
Asimov's solution resulted in a mind-reading robot who could erase memories in humans, who then went on to discover the limitation of the Three Laws and decided that the best thing for humanity was to turn Earth into a radioactive wasteland in order to encourage people to leave their underground caves of steel and migrate out into the galaxy. The robot also decided that humans are better off without robots, so he manipulated society into rejecting robots and in the end there was only one sentient robot in e
Re: (Score:2)
Serious answer: the three laws are not very good. Computers are governed by strict logic, and human-style AI is driven by doing everything you can to bypass the limitations of strict logic with data structures and algorithms too complex and large to predict. A few English-language instructions that have no hard and clear mechanism for analysis with strict logic, and also lack a workable interpretation in fuzzy logic, do very little to solve the problem.
Especially in the face
Re: (Score:2)
Re:Obvious Answer (Score:4, Insightful)
Asimov's laws were designed to create stories, not robots.
Blinky (Score:2)
Re: (Score:3)
Ask the human straight up (Score:2)
It's a choice the human driver would have to make, so when first starting your driverless car, it might as well prompt you with a series of moral questions like "should I crash into a bus or veer off a bridge if the situation arises?"
Re: (Score:2)
"Altima IV: Quest of the Avacar!"
Re: (Score:2)
Thanks, you made my day.
So, should we have the cars have electronic signs so that you can stay away from people who enabled the option to run people over if it will get them to work faster?
Re: (Score:2)
It's a choice the human driver would have to make, so when first starting your driverless car, it might as well prompt you with a series of moral questions like "should I crash into a bus or veer off a bridge if the situation arises?"
Human drivers don't make these decisions in any moral way in the real world, so why would we program anything of the sort into a car?
Split second decisions are involved in any accident situation, or, the lack of the ability to decide, resulting in the default.
Nobody ponders the morality of the situation when their life is on the line. It's all instinct from that point.
Re: (Score:2)
Can the driver select "My life is the most important one."? Because many people would likely opt to run over a thousand baby seals if it would save their life. I'll take evasive maneuvers to save a dog or a cat, but the pelts of many squirrels and bunnies have adorned my car's undercarriage from time to time. Some, however, would be more upset about the damage to their bumper than the fact that Spot is now motionless at the side of the road.
Driverless cars will be a tough sell for me. One that makes up for it
No; Program laws into machines; Not morals. (Score:5, Insightful)
The proper sequence should be:
Humans reason (with their morals) --> Humans write laws/code --> The laws/code go into the machines --> The machines execute the instructions.
Laws are not a substitute for morals; they are the output from our moral reasoning.
Weak bus? Also, "cost effective", not "moral" (Score:2)
>> If your driverless car is about to crash into a bus, should it veer off a bridge?
The bus should be built to take the occasional crash, particularly in low speed zones where busses are typically used, so no.
Or, with enough computing power, you can imagine an "unethical" decision tree based on actuarial tables:
1) Calculate the location and weight of all known humans on the bus
2) Calculate likely trajectories, damage, etc.
3) Compare worth of each human (using federal tables, of course) in each vehicle
4) Ma
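The (deliberately cold-blooded) actuarial comparison sketched above collapses to an expected-loss calculation; every number here is invented for illustration:

```python
def expected_loss(occupants, p_fatality, value_per_life=9_000_000):
    # Expected dollar loss: occupants * chance each dies * table value per life.
    return occupants * p_fatality * value_per_life

bus_loss = expected_loss(occupants=40, p_fatality=0.02)  # low-speed bus impact
car_loss = expected_loss(occupants=1, p_fatality=0.99)   # going off the bridge
print("hit the bus" if bus_loss < car_loss else "off the bridge")  # hit the bus
```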
Re: (Score:2)
The example was not a great one. How about driving into a wall vs driving into a group of pedestrians? Or cook up whatever scenario you want in which the life of the driver is pitted against the lives of a bunch of others. And be sure to read the wikipedia article on the Trolley Problem before doing so.
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
Or just imagine the pranks by people jumping in front of cars to watch them veer into a lamp post. Even better, pick a narrow bridge, then three people jump in front of a car. Perfect murder!
It's easy (Score:5, Funny)
1. Train an expert machine on decision making with answers from religious and political leaders who set all our definitions of right and wrong.
2. Do the opposite of what that machine decides.
Re: (Score:2)
Re: (Score:2)
Which all can be summed up as "Kill All Humans!"
Bender? Is that you?
Oblig Dark Star (Score:4, Informative)
How Do We Program Moral Machines?
Ideally, not at the last moment. [youtube.com]
Jumping the Gun (Score:3)
Ethics are a matter of conscious decision-making. Until we have conscious machines, we will not have ethical machines. What Marcus is writing about is the application of ethics in the design of machinery, which is a growing topic in its own right, but not nearly as click-inducing (or alliterative) as is 'moral machines'.
But I value my own life over the lives of others (Score:5, Insightful)
Zeroth Law problem (Score:3)
Zeroth law problem.
Depending on how many other "someone elses" there are. And possibly on an overall Human Value Score brought to you by TransUnion, Experian, Facebook, Google, and Microsoft, weighted by your Medical Insurance Information Bureau records - and theirs.
Re: (Score:3)
Depending on how many other "someone elses" there are. And possibly on an overall Human Value Score brought to you by TransUnion, Experian, Facebook, Google, and Microsoft, weighted by your Medical Insurance Information Bureau records - and theirs.
Yeah, how many of these companies are going to take responsibility for deliberately instructing a car to kill someone in a particular scenario? It doesn't matter how many lives the maneuver saves (or what their Human Value Scores are) by avoiding a crash if it does something that has a 99% chance of killing the driver. Drivers (or families of drivers) will still sue, saying that if the car hadn't been following so close or driving so fast or whatever to begin with, no one would have had to die... thus the
Re:But I value my own life over the lives of other (Score:4, Insightful)
Re: (Score:3)
Re: (Score:3)
So if my auto-driver car had to make a choice between my safety and that of someone else, it better choose me.
So you want every vehicle except yours programmed to harm you in preference to the other driver? What a fine society you envisage.
Does no one read Hume any more, or do we just have such a volume of sociopathic mods these days?
Whose morals should we use? (Score:3)
Re: (Score:2)
Re: (Score:2)
Thank you, I was looking for a good example. Copyright would be another one. Without agreement as to what's moral (which I don't see any signs of being around the corner), this is little more than a masturbatory (speaking of unaligned morality....) exercise.
Is it moral to kill? Some say no, never. Others say only in response to a clear and present danger. Still others have exceptions for if a person has done something heinous, or whenever their government (however they define that) declares a war.
Is it moral
In the stated scenario, what? (Score:4, Insightful)
No competent engineer would even consider adding code to allow the automated car to consider swerving off the bridge. In fact, the internal database the automated car would need of terrain features (hard to "see" a huge dropoff like a bridge with sensors aboard the car) would have the sides of the bridge explicitly marked as a deadly obstacle.
The car's internal mapping system of drivable parts of the surrounding environment would thus not allow it to even consider swerving in that direction. Instead, the car would crash if there were no other alternatives. Low level systems would prepare the vehicle as best as possible for the crash to maximize the chances the occupants survive.
Or put another way: you design and engineer the systems in the car to make decisions that lead to a good outcome on average. You can't possibly prepare it for edge cases like dodging a bus with 40 people. Possibly the car might be able to estimate the likely size of another vehicle (by measuring the surface area of the front) and weight decisions that way (better to crash into another small car than an 18 wheeler) but not everything can be avoided.
Automated cars won't be perfect. Sometimes, the perfect combination of bad decisions, bad weather, or just bad luck will cause fatal crashes. They will be a worthwhile investment if the chance of a fatal accident is SIGNIFICANTLY lower, such that virtually any human driver, no matter how skilled, would be better served riding in an automated vehicle. Maybe a 10x lower fatal accident rate would be an acceptable benchmark?
If I were on the design team, I'd make 4 point restraints mandatory for the occupants, and design the vehicle for survivability in high speed crashes including from the side.
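The engineering approach described above can be sketched in a few lines: a cost model where terrain the vehicle must never enter (bridge edges, drop-offs) carries infinite cost, so the planner can't even consider swerving there, and collisions are weighted by the other vehicle's estimated frontal area. All names, costs, and the area-to-penalty mapping are illustrative assumptions, not anything from a real autonomous-driving stack.

```python
import math

FORBIDDEN = math.inf  # cells/maneuvers the planner may never choose

def collision_cost(obstacle_frontal_area_m2):
    """Rough penalty for hitting another vehicle: a bigger frontal area
    suggests a heavier vehicle and a worse crash for our occupants.
    The scale factor is arbitrary, for illustration only."""
    return obstacle_frontal_area_m2 * 10.0

def best_maneuver(options):
    """Pick the lowest-cost maneuver; infinite-cost options never win."""
    return min(options, key=lambda o: o["cost"])

options = [
    {"name": "swerve_off_bridge", "cost": FORBIDDEN},           # marked deadly in the map
    {"name": "hit_small_car",     "cost": collision_cost(2.0)},
    {"name": "hit_18_wheeler",    "cost": collision_cost(9.0)},
    {"name": "brake_and_crash",   "cost": 15.0},
]
print(best_maneuver(options)["name"])  # braking wins; the bridge is never considered
```

The point of the sketch is structural: because the bridge edge is encoded as an impassable obstacle rather than a weighted option, no trolley-problem arithmetic ever reaches it.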
Re: (Score:2)
I would just let it hit the bus. I have been in buses a few times during accidents with cars. At most the buses would rock about an inch while the cars were pretty badly trashed.
A bus is a FAR safer thing to hit, both for yourself and for the occupants of the bus, than going off a bridge.
Re: (Score:2)
Re: (Score:2)
You are right about the 4 point restraints. I can't believe this isn't mandatory yet. They could be doing a lot they aren't doing to keep people safe, because they'd rather keep people
Especially interesting question (Score:2)
This is an especially interesting question because reasonable people can disagree on what constitutes the best ethical framework.
Intro to Philosophy is the college class I am most glad I took, 20+ years later. Will the people who program the robot cars have taken it as well?
Screw the bus (Score:4, Interesting)
Screw the bus.
I don't care about the bus.
The bus is big and likely will barely feel the impact anyway.
I care about the fact I don't want to die.
Why would I buy and use a machine that would choose to let me die?
And I posit that the author has failed to consider freedom of travel, freedom of choice, and other basic individual rights/freedoms that mandating driverless cars would run over (pun intended).
if something happens shutdown and wait (Score:2)
It depends: is the bus empty or full of kids? This is just one example... I doubt there will ever be enough information to program for all circumstances. It will be more like: if something happens, shut down and wait for human instruction on how to proceed. Wouldn't there be a network in which the robotic cars could warn others in time to avoid having to make such a choice in the first place?
Also, I do not think it will be the gap between how safely an automated vehicle drives as compared to a human cou
Re: (Score:2)
>We should automate vehicles to take over the mundane tasks of driving the vehicle and leave the decision making to the human operator. We are the highest order of intelligence for making such decisions (thus far).
While I like the idea, the sorts of decisions being discussed aren't ones that you can wait for input on. They need immediate decisions, not asking the driver to pay attention and then choose something. (Setting aside the fact that humans aren't necessarily all that good at those decisions either...)
Aperture Science is way ahead of the game as usual (Score:2)
"Rest assured that all lethal military androids have been taught to read and provided with one copy of the Laws of Robotics ... to share."
Not enough data (Score:2)
The car can never have enough data to make an informed decision on this. What about an empty bus? What if there is a children's nursery under the bridge?
In an accident the car should be following the same decision tree as any normal driver.
1. Protect the lives of the occupants of the car
2. Avoid pedestrians
3. Avoid everything else
If you're driving along a lane at 60mph with your family in the car and have the choice between hitting a stranger or ploughing yourself and your family off a cliff you hit the ped
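The prioritized decision tree in that post can be expressed as a trivial ranking: given the candidate maneuvers and which group each one endangers, prefer the maneuver that endangers the lowest-priority group. The group names and actions below are illustrative assumptions for the sketch, not a real specification.

```python
# Priority order from the post: protect occupants first, then pedestrians,
# then everything else.
PRIORITY = ["occupants", "pedestrians", "other"]

def choose_action(actions):
    """actions: list of (maneuver_name, endangered_group) pairs.
    Pick the maneuver whose endangered group ranks LOWEST in PRIORITY,
    i.e. endanger 'other' before 'pedestrians' before 'occupants'."""
    return max(actions, key=lambda a: PRIORITY.index(a[1]))

actions = [
    ("swerve_off_cliff", "occupants"),
    ("hit_stranger",     "pedestrians"),
    ("hit_parked_car",   "other"),
]
print(choose_action(actions)[0])  # -> hit_parked_car
```

Notice the hard edge such a scheme has: it encodes a fixed moral ordering with no weighting for numbers of people, which is exactly the point the "not enough data" post is making.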
Product defect (Score:3)
Simple solution (Score:2)
Register the car under an LLC, rent the car's time at $0.1/hour, and knock off the other, then hire a computerized lawyer to file for bankruptcy of the LLC. And then form another LLC
Time is Money (Score:2)
What the? (Score:2)
Physics says "no". The bus probably weighs an order of magnitude more than your vehicle... The passengers might not even notice that you ran into them, and mistake the collision for having hit a pothole. The real question would be, say, a dump truck following too closely behind a motorcycle...
In general, I want machines to be as stupid and fail-safe as possible. Think: missile defense systems around an airport... The most l
Not Possible (Score:2)
Internet Filtering (Score:2)
Almost every filtering system for the Internet is primarily based on blacklists... lists of URLs, lists of words... because there is no computer program capable of the morality required to filter the Internet with any level of adequacy.
Until such a program, which requires no physical moving parts (unless you consider an automated head-slapping device part of an effective filtering system), can tell what's obscene and what's not obscene... why would you expect a program to know why it should hit the sheep
If your car is going to drive into a bus (Score:2)
Anyone ever seen a car/bus impact? The bus is usually a little messed up, and the car is usually cut to ribbons, and they pour the occupants of the car out, while the bus occupants are generally unharmed.
It may not be politically correct, but size=safety for the people in the larger vehicle. That's one reason I'll pay for the gas for my 3 young children to be shuttled around in a Suburban.
Re: (Score:2)
Your analogy doesn't really hold much water. A Suburban is much closer to a car than a BUS, and does not actually benefit in safety from its size like a bus does:
http://www.lbl.gov/Science-Articles/Archive/EETD-SUV-Safety.html [lbl.gov]
Boon for bikers (Score:2)
Morality of driving (Score:3)
I'm going to disagree with this assertion about morality:
it would immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work
The first charge is that this would be an immoral risk to take because you might hurt yourself. In my understanding of morality, it is up to each individual to decide for themselves which risks and consequences and injuries to themselves are immoral. For example, I would not go skydiving, but other people choose to do so. They are taking a risk I choose not to take, but I do not think they are immoral for taking the risk, and I do not think an increase in the magnitude of risk alters the morality of the situation, because they are risking themselves. As another example of higher risk, some people choose to try to circumnavigate the globe on solo flights or boat trips. This is a huge risk; some people have perished in the attempt. But the fact that they were risking serious hurt to themselves does not render their decision immoral.
The second charge is that you are risking hurting another person. But again, this is their risk to take. They decide to travel on a road that includes other human drivers knowing that doing so incurs some risk of injury. Taking that risk is not immoral. As an analogous example, wrestlers or boxers choose to fight each other knowing that there is a risk of injury to each other, but doing so is not immoral because the risk is voluntarily accepted by each participant.
Ideally, travelers could choose between a variety of competing travel arrangements, including roads that might choose to exclude human drivers for the safety of travelers, or roads that choose to allow them for those who desire to take that risk. What would be truly immoral would be to forcibly monopolize some or all of the transportation options, so that people do not have the freedom to create differing transportation alternatives that compete with one another. This would limit the choices of travelers such that some might have to take risks they do not want (e.g., roads with both human and automated drivers, because pure-automated roads are not available), or cannot choose to take risks that they find rewarding, such as choosing to drive when automated drivers are available.
Dr. Walter Block has written an entire book [amazon.com] on how the American highway system is currently subject to this kind of immoral forced monopolization, currently causing 40,000 needless traffic fatalities per year, and how the elimination of this immorality is entirely practical and beneficial.
The libertarian view (Score:2)
Kudos to Gary Marcus for raising such a provocative point. I sneer however at his suggestion that we bring in the legislators and lawyers to help us to deal with the problems. That is a naive/liberal view as opposed to a libertarian/cynical view.
I cynically don't expect enlightened laws ever in our future. Instead we will depend on the courts to once again try to apply laws and principles of centuries past to the problems of today. You could say that's the American Way.
What kind of bus? (Score:2)
Children, prisoners, or old folks off to bingo?
What are you optimizing for? Lives saved, injuries avoided or ongoing governmental costs?
Progamming (Score:3)
It really is slightly disingenuous to ask (Score:3)
Locomotives, Trains, Rail Roads (Score:4, Interesting)
But no...
That's not what's at stake here. The truth is that if I'm not in control of my whereabouts anymore, then how can I be sure I'm making decisions for myself? Without a car, you might find yourself imprisoned by the distance your two feet can take you. Someone out there will applaud this along the same premise that "those who obey the law, have nothing to hide, and my gosh, if a driverless car prevents a CRIMINAL from driving to a crime, then the system pays for itself!", but that's not the point. It's not about morality, it's about control, and if someone is stopping me from driving my own car, then who's stopping them from driving theirs? When we fork over control of our transportation, then will come the day that we're isolated into districts, where the equivalent of passports will be needed from county to county. If the car won't let me drive it, how can I be sure that the car will obey me at all?
If all the cars in the world are autonomous, and computer controlled, well gee... what's to stop "someone" (anyone) from turning them all into a gigantic autonomous system that (I'm about to Godwin this...) conveys everyone to a huge concentration camp set to autonomous genocide?
It's not morality that the author is arguing in favor of.
It's our own autonomy that he's arguing against.
Someone will have control of these cars. Somewhere there will be levers.
Let's not imagine these automatic apparatuses to be forces of nature beyond an individual human's control. These are contrived, artificial, unnatural man-made objects, at their core mechanical.
Short Answer - You Don't (Score:4, Interesting)
To "program morality" would be to imbue a machine with the specific moral subset of its programmer.
Thus, "machine morality" is actually "programmer morality."
We each determine our own morals, which will occasionally conflict with one another.
Forcing the public at large to follow a single person's idea of morality is, at the most basic level, an immoral act in itself.
Thus, "moral machines" aren't really moral at all.
Re: (Score:2)
While the vast majority of collisions are avoidable, I'd hesitate to say that 100% are. Sometimes there just is no "good" choice, only bad and worse. The thing is I'd like the car to choose bad over worse.
Granted human drivers haven't solved this problem yet either, so I'm not sure how much different it is just because a machine is driving.
Morality is also a difficult thing to program because it's all subjective. Do you program it to kill the driver instead of an innocent pedestrian? How about 2 pedestrians
Re: (Score:2)
Agreed. Basic rules to save driving:
If you cannot safely stop within the visible distance between you and any obstacle, you are going too fast.
This includes being able to stop if the vehicle in front of you suddenly stops.
This includes being able to stop should there be a boulder in the middle of the road just over that rise, or around that corner.
So long as safe distances and speeds were observed, many incidents could be avoided. If all vehicles are "aware" of all other vehicles in their area and possibly
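The "stop in the visible distance" rule above has a simple back-of-envelope check: total stopping distance is reaction distance plus braking distance, d = v·t + v²/(2μg). The reaction time (1.5 s) and dry-pavement friction coefficient (0.7) below are common illustrative assumptions, not fixed standards.

```python
def stopping_distance_m(speed_kmh, reaction_s=1.5, friction=0.7, g=9.81):
    """Approximate total stopping distance in metres:
    distance covered during the reaction time, plus
    kinetic-energy braking distance v^2 / (2 * friction * g)."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * reaction_s + v * v / (2 * friction * g)

for kmh in (50, 100):
    print(f"{kmh} km/h -> about {stopping_distance_m(kmh):.0f} m to stop")
```

Roughly 35 m at 50 km/h and nearly 100 m at 100 km/h, which is why "can I stop within what I can see?" is a much stricter rule than most drivers realize. An automated car could apply it continuously; a human driver mostly guesses.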
Re: (Score:2)
In every instance I can think of, accidents happen due to driver carelessness, inability, or simply due to knowledge a driver could not have
While driverless cars should greatly reduce the frequency of collisions by eliminating carelessness, inability, and increasing the amount of data available for decision making, there is some knowledge that just will never exist, and simply can't be known. Things happen that aren't predictable, and aren't always avoidable. I don't expect my driverless car to be able to anticipate the deer jumping out on to the highway from behind a tree, nor do I expect it to notice the kid who appears from behind a parked c