Robot Warriors Will Get a Guide To Ethics 317
thinker sends in an MSNBC report on the development of ethical guidelines for battlefield robots. The article notes that such robots won't go autonomous for a while yet, and that the guidelines are being drawn up for relatively uncomplicated situations, such as a war zone from which all non-combatants have already fled, so that anybody who shoots at you is a legitimate target. "Smart missiles, rolling robots, and flying drones, currently controlled by humans, are being used on the battlefield more every day. But what happens when humans are taken out of the loop, and robots are left to make decisions, like who to kill or what to bomb, on their own? Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing an 'ethical governor,' a package of software and hardware that tells robots when and what to fire. His book on the subject, Governing Lethal Behavior in Autonomous Robots, comes out this month."
Been there, done that (Score:5, Informative)
Re:Fundamental change (Score:3, Informative)
Perhaps once the tech has advanced to the point where it can demonstrate not merely parity with but vast superiority to the discernment exhibited by humans, it will be a shift we're ready to make.
"All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterward, they fly with a perfect operational record. The SkyNet funding bill is passed."
Not Robots (Score:3, Informative)
Re:Been there, done that (Score:2, Informative)
The laws worked perfectly, the book was all about how things went wrong when people tried to modify them.
Re:Yeaahhhh... (Score:3, Informative)
stick to modding good comments up instead of burning the karma of people who actually mean well.
Re:Been there, done that (Score:3, Informative)
Well, itself and wolves. And tigers. And lions.
And don't forget bears. Definitely bears.
I think we should build giant ethical bear robots. That would scare the SHIT out of our enemies.
Come on man, this is Slashdot. How could you forget sharks...
with "frickin lasers on their heads."
Re:Been there, done that (Score:4, Informative)
No, they weren't. The laws were flawed, and the only modifications that ever occurred were made to fix those flaws and prevent paradoxical situations. To my knowledge there was never a situation where things went wrong because someone tried to modify the laws.
The books and short stories all revolved around dilemmas in which robots attempting to uphold the laws ran into conflicts or paradoxes, often causing their positronic brains to malfunction or shut down. Dilemmas such as choosing the death of one human over the death of another, or choosing between two options, both of which would cause harm to a robot or a human.
The only situations where the laws were modified were in "Little Lost Robot", where the inaction clause was altered, and "Robots and Empire", where Giskard invents the Zeroth Law. Both of these modifications were patches to flaws in the original three laws.
Mod this dude up. (Score:3, Informative)
Wouldn't surprise me. Something like 90% of the "suspected terrorists" rounded up in Afghanistan were turned in for cash, usually by rival tribes or by the very people attacking them. That's the way the first man we tortured to death [wikipedia.org] was caught, anyway.
Re:Been there, done that (Score:4, Informative)
In fiction it is, yes. In reality it's just an ugly radiation bomb; it'd cause significant damage to structures, not to mention pets.
Re:Been there, done that (Score:3, Informative)
Humans aren't actually better at it than robots; humans are notoriously bad at estimating conditional probabilities.
That's not quite true. Computers cannot estimate conditional probabilities at all; all they currently do is calculate probabilities from probabilities that are already known. It's true that humans are bad at this, but that is not what "estimating probabilities" means. If you have a complete and accurate model, including all the random variables relevant to a given problem and the initial probability distribution, then of course you can feed a computer with this and let it calculate. But even this is of much too high complexity for a computer, so highly simplifying and often incorrect assumptions have to be made, e.g. that the random variables are independent of each other.
But the models are made by humans, ideally by statisticians together with domain-specific experts. Try letting the computer make the model, and you'll get huge Bayesian networks that spit out tons of garbage.
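To make the point concrete: once a model is fully specified, a computer applies Bayes' rule mechanically; the hard part is that the full joint table blows up exponentially, which is why the simplifying assumptions mentioned above get made. A toy sketch (all numbers here are made-up illustration values, not from the article):

```python
# Toy Bayes' rule calculation: given a fully specified model,
# the conditional probability is pure arithmetic.

def bayes(p_b_given_a, p_a, p_b):
    """P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical threat-detection numbers:
# P(threat) = 0.01, P(alarm|threat) = 0.95, P(alarm|no threat) = 0.05
p_threat = 0.01
p_alarm_given_threat = 0.95
p_alarm_given_clear = 0.05

# Law of total probability: overall chance of an alarm.
p_alarm = (p_alarm_given_threat * p_threat
           + p_alarm_given_clear * (1 - p_threat))

p_threat_given_alarm = bayes(p_alarm_given_threat, p_threat, p_alarm)
print(round(p_threat_given_alarm, 3))  # ~0.161: most alarms are false

# Why simplifying assumptions get made: a full joint distribution
# over n binary variables needs 2**n entries.
n = 30
joint_table_entries = 2 ** n  # 1,073,741,824 entries for just 30 variables
```

The arithmetic is trivial; building a model whose numbers mean anything is the part that still needs the statisticians and domain experts.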
Re:Been there, done that (Score:3, Informative)
Humans aren't actually better at it than robots; humans are notoriously bad at estimating conditional probabilities.
I must disagree with that; see Prospect theory. [wikipedia.org] Short version: the human mind is bad at estimating and evaluating very long or very short odds, but it is surprisingly good at estimating mid-range probabilities on the fly. The real problem is that the human mind treats the same data differently depending on how it is presented, hence the name prospect theory.
The best example was when the two proponents each gave a test to their own students. The premise was that there could be a terrible epidemic. One class was told: "If you order every American inoculated, 3% will die from complications related to the vaccine." The other class was told: "97% of the people will survive."
Guess what the answer was in each case? An overwhelming majority in the second class wanted to inoculate everyone, while that was not the case in the first. Notice that nothing was said about how effective the vaccine was (decision under uncertainty).
At least a robotic mind, in both cases, would say:
P1+P2=1
P1=0.03
P2=0.97
and go on from there.
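That normalization is easy to sketch: both phrasings encode the same distribution, and a program would reduce them to one representation before deciding anything (a toy illustration; the function name is hypothetical and the numbers are just from the anecdote above):

```python
# Both framings describe the same probability; normalize to P(die)
# before comparing or deciding.

def mortality_from_framing(framing, value):
    """Return P(die) whether the figure was given as a death rate
    or as a survival rate."""
    if framing == "die":
        return value
    if framing == "survive":
        return 1.0 - value
    raise ValueError(framing)

p1 = mortality_from_framing("die", 0.03)      # "3% will die"
p2 = mortality_from_framing("survive", 0.97)  # "97% will survive"
print(abs(p1 - p2) < 1e-9)  # True: identical once normalized
```

The framing vanishes as soon as the input is normalized, which is exactly why a machine would answer both classes the same way.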
One other interesting thing, if a little off-topic: the average response of students who each decided on their own was more accurate than the "debating society" model, in cases like "how many peas are in this transparent jar?"
Richard Thaler found an explanation by "rigging" the results, i.e. planting an outspoken accomplice who spoke first and forcefully named an extremely high or an extremely low number. In those cases the crowd followed, and the average response overestimated or underestimated accordingly.