
Robot Warriors Will Get a Guide To Ethics

thinker sends in an MSNBC report on the development of ethical guidelines for battlefield robots. The article notes that such robots won't go autonomous for a while yet, and that the guidelines are being drawn up for relatively uncomplicated situations, such as a war zone from which all non-combatants have already fled, so that anybody who shoots at you is a legitimate target. "Smart missiles, rolling robots, and flying drones, currently controlled by humans, are being used on the battlefield more every day. But what happens when humans are taken out of the loop, and robots are left to make decisions, like who to kill or what to bomb, on their own? Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing an 'ethical governor,' a package of software and hardware that tells robots when and what to fire. His book on the subject, Governing Lethal Behavior in Autonomous Robots, comes out this month."
  • by Locke2005 ( 849178 ) on Tuesday May 19, 2009 @07:02PM (#28019171)
    Three Laws of Robotics [wikipedia.org] from 1942.
  • by grahamd0 ( 1129971 ) on Tuesday May 19, 2009 @07:40PM (#28019687)

    Perhaps once the tech has advanced to the point where it can demonstrate not merely parity with but vast superiority to the discernment exhibited by humans, it will be a shift we're ready to make.

    "All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterward, they fly with a perfect operational record. The SkyNet funding bill is passed."

  • Not Robots (Score:3, Informative)

    by Roger W Moore ( 538166 ) on Tuesday May 19, 2009 @08:08PM (#28020011) Journal
    They aren't robots - there is still a living thing in control. Effectively they are one-person tanks.
  • by Zironic ( 1112127 ) on Tuesday May 19, 2009 @08:08PM (#28020013)

    The laws worked perfectly; the book was all about how things went wrong when people tried to modify them.

  • Re:Yeaahhhh... (Score:3, Informative)

    by eltaco ( 1311561 ) on Tuesday May 19, 2009 @08:33PM (#28020227)
    Oh come on, mods, don't moderate down a comment with the same (insightful/informative) content just because someone beat them to the punch by a few seconds.
    Stick to modding good comments up instead of burning the karma of people who actually mean well.
  • by The Grim Reefer2 ( 1195989 ) on Tuesday May 19, 2009 @08:43PM (#28020325)

    Since Homo sapiens' only natural predator is itself,

    Well, itself and wolves. And tigers. And lions.

    And don't forget bears. Definitely bears.

    I think we should build giant ethical bear robots. That would scare the SHIT out of our enemies.

    Come on man, this is Slashdot. How could you forget sharks...

    with "frickin lasers on their heads."

  • by Anonymous Coward on Tuesday May 19, 2009 @11:29PM (#28021417)

    No, they weren't. The laws were flawed, and the only modifications that ever occurred were made to fix those flaws and prevent paradoxical situations. To my knowledge, there was never a situation where things went wrong because someone tried to modify the laws.

    The books and short stories all revolved around dilemmas in which robots attempting to uphold the laws ran into conflicts or paradoxes, often causing their positronic brains to malfunction or shut down. Dilemmas such as choosing the death of one human over the death of another, or choosing between two options, both of which would cause harm to a robot or a human.

    The only situations where the laws were modified were in "Little Lost Robot", where the inaction clauses were added, and "Robots and Empire", where Giskard invents the Zeroth Law. Both of these modifications were patches to flaws in the original three laws.

  • Mod this dude up. (Score:3, Informative)

    by copponex ( 13876 ) on Wednesday May 20, 2009 @12:10AM (#28021663) Homepage

    Wouldn't surprise me. Something like 90% of the "suspected terrorists" rounded up in Afghanistan were turned in for cash, usually by rival tribes or by the very people attacking them. That's the way the first man we tortured to death [wikipedia.org] was caught, anyway.

  • by QuantumG ( 50515 ) * <qg@biodome.org> on Wednesday May 20, 2009 @01:57AM (#28022193) Homepage Journal

    In fiction it is, yes. In reality it's just an ugly radiation bomb; it'd cause significant damage to structures, not to mention pets.

  • by aaaaaaargh! ( 1150173 ) on Wednesday May 20, 2009 @05:27AM (#28023017)

    Humans aren't actually better at it than robots; humans are notoriously bad at estimating conditional probabilities.

    That's not quite true. Computers cannot estimate conditional probabilities at all; all they currently do is calculate probabilities from probabilities that are already known. It's true that humans are bad at this, but that is not what "estimating probabilities" means. If you have a complete and accurate model, including all the random variables relevant to a given problem and the initial probability distribution, then of course you can feed a computer with it and let it calculate. But even that is usually of far too high complexity for a computer, so highly simplifying and often incorrect assumptions have to be made, e.g. that the random variables are independent of each other.

    But the models are made by humans, ideally by statisticians together with domain-specific experts. Try to let the computer make the model itself, and you'll get huge Bayesian networks that spit out tons of garbage....
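
    A minimal sketch (an editorial illustration, not from the article or the comment) of the kind of calculation described above, assuming a naive conditional-independence model; the function name, prior, and likelihood values are made up:

        # Illustrative sketch: combining already-known conditional probabilities
        # under a naive independence assumption, the kind of simplification the
        # comment above refers to. All numbers here are made up.
        def posterior(prior, likelihoods_given_h, likelihoods_given_not_h):
            """Bayes' rule for P(H | evidence), with each piece of evidence
            assumed conditionally independent given H (and given not-H)."""
            p_e_given_h = 1.0
            p_e_given_not_h = 1.0
            for p_h, p_not_h in zip(likelihoods_given_h, likelihoods_given_not_h):
                p_e_given_h *= p_h          # independence assumption:
                p_e_given_not_h *= p_not_h  # multiply per-feature likelihoods
            numerator = prior * p_e_given_h
            return numerator / (numerator + (1.0 - prior) * p_e_given_not_h)

        # Two pieces of evidence, each more likely under H than under not-H.
        print(posterior(0.01, [0.9, 0.8], [0.1, 0.2]))  # ~0.27

    Every number fed in is already a known probability; the program only recombines them, which is the distinction the comment draws between calculating and estimating.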

  • by gadget junkie ( 618542 ) <gbponz@libero.it> on Wednesday May 20, 2009 @05:38AM (#28023059) Journal

    Humans aren't actually better at it than robots; humans are notoriously bad at estimating conditional probabilities.

    I must disagree with that; see Prospect theory. [wikipedia.org] Short version: the human mind is bad at estimating and evaluating very long or very short odds, but it is surprisingly good at estimating mid-range probabilities on the fly. The real problem is that the human mind treats the same data set differently when it is presented in a different manner, hence the name prospect theory.
    The best example was when the two proponents each gave a test to their own students. The premise was that there could be a terrible epidemic. One course was told: "If you order every American inoculated, 3% will die from complications related to the vaccine." The other course was told: "97% of the people will survive."
    Guess what the answer was in each case? An overwhelming majority in the second case wanted to inoculate everyone, while that was not the case in the first course. Notice that nothing was said about how effective the vaccine was (decision under uncertainty).

    At least a robotic mind, in both cases, would say:
    P1 + P2 = 1
    P1 = 0.03
    P2 = 0.97
    and go on from there (see the sketch below).
    One other interesting thing, if a little off-topic, is that the average response of students forced to decide each on their own was more accurate than the "debating society" model in cases like "how many peas are in this transparent jar?"
    Richard Thaler found an explanation by "rigging" the results, i.e. planting an outspoken accomplice who spoke first and forcefully named an extremely high number, or an extremely low number. In those cases, the crowd followed, and the responses overestimated or underestimated accordingly.
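
    A small numeric footnote to the framing example above (an editorial sketch; the 3%/97% figures come from the comment, the population figure is made up): the two framings describe exactly the same distribution once written down as probabilities.

        # Illustrative only: "3% will die" and "97% will survive" are the same
        # distribution; a program sees no difference between the two framings.
        p_die = 0.03                 # framing 1: "3% will die from complications"
        p_survive = 1.0 - p_die      # framing 2: "97% of the people will survive"

        population = 300_000_000     # hypothetical number of people inoculated

        print(p_die + p_survive)               # 1.0, i.e. P1 + P2 = 1
        print(round(population * p_die))       # 9000000 expected deaths either way
        print(round(population * p_survive))   # 291000000 expected survivors either way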
