
Robot Warriors Will Get a Guide To Ethics

thinker sends in an MSNBC report on the development of ethical guidelines for battlefield robots. The article notes that such robots won't go autonomous for a while yet, and that the guidelines are being drawn up for relatively uncomplicated situations — such as a war zone from which all non-combatants have already fled, so that anybody who shoots at you is a legitimate target. "Smart missiles, rolling robots, and flying drones currently controlled by humans are being used on the battlefield more every day. But what happens when humans are taken out of the loop, and robots are left to make decisions, like who to kill or what to bomb, on their own? Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing an 'ethical governor,' a package of software and hardware that tells robots when and what to fire. His book on the subject, Governing Lethal Behavior in Autonomous Robots, comes out this month."
  • by Locke2005 ( 849178 ) on Tuesday May 19, 2009 @06:02PM (#28019171)
    Three Laws of Robotics [wikipedia.org] from 1942.
    • by fuzzyfuzzyfungus ( 1223518 ) on Tuesday May 19, 2009 @06:07PM (#28019247) Journal
Been there, wrote fiction about that (much of which was about how, even in fiction land, it wouldn't work so well).
      • by NeutronCowboy ( 896098 ) on Tuesday May 19, 2009 @06:13PM (#28019313)

        Not to mention... some of the assumptions aren't great. As the article itself points out, it's been a long time since there was a civilian-free battlefield.

As for the direct example of the robot locating a sniper and being offered the choice between a grenade launcher and a rifle - how does the robot know that the buildings surrounding it aren't military targets? How do they get classified? How does a hut differ from a mosque, and how does a hut differ from some elaborate sniper cover?

        I don't think this is going to work out as planned.

        • by Locke2005 ( 849178 ) on Tuesday May 19, 2009 @06:21PM (#28019437)
          Since you can never be 100% certain of a target, the robots would have to use fuzzy logic. That is something that humans are better than robots at; I'm not really comfortable with hardware designed to be lethal making decisions like this. Truly autonomous killer robots are probably not a good idea -- haven't 60 years of B movies taught us anything?
          • by Trepidity ( 597 ) <[delirium-slashdot] [at] [hackish.org]> on Tuesday May 19, 2009 @06:32PM (#28019577)

            Humans aren't actually better at it than robots; humans are notoriously bad at estimating conditional probabilities.

            • by Architect_sasyr ( 938685 ) on Tuesday May 19, 2009 @06:45PM (#28019751)
              But (most) humans have this innate condition where taking another life weighs on them somewhat - even most veterans and soldiers I know get twitchy about having to shoot at another person. A robot removes this and replaces it with cold logic.

              Put another way, replace the robots with the WOPR, and the humans with, well, the humans in the bunkers.
              • by Trepidity ( 597 ) <[delirium-slashdot] [at] [hackish.org]> on Tuesday May 19, 2009 @06:47PM (#28019781)

                The cold logic can be better though, if you know what you actually want to optimize. Humans often make decisions that don't do what they claim they want, e.g. minimizing civilian casualties.

                • by Chris Burke ( 6130 ) on Tuesday May 19, 2009 @09:13PM (#28020913) Homepage

                  The cold logic can be better though, if you know what you actually want to optimize. Humans often make decisions that don't do what they claim they want, e.g. minimizing civilian casualties.

Yeah, it's that 'if' that's the killer. The problem is that you have to be able to express what you want to optimize using cold logic before a machine can start making that decision, and we aren't able to do that. Terms like "civilian" are nebulous, and attempts to rigidly codify them fail to capture the intent and connotation behind those words that we understand but can't express. We can reason about that, while machines can't. Fuzzy logic doesn't help; that's just a way of making decisions on non-binary factors. With a lot of related techniques (neural nets, genetic algorithms) it can be even more important to precisely define what you want, since they can produce solutions that "work" correctly and optimize your problem as specified, but do so in a way very unlike what you expected.

People of course have the disadvantages of being error prone, and, well, sometimes being bastards who just don't give a shit what you want them to optimize, so there's appeal to the machine. Yet nothing fails as spectacularly and efficiently as a machine doing exactly what it was programmed to do when it's exactly what you didn't want. To use a machine in situations where even humans equipped with honest intentions, solid faculties, and experience have enormous trouble determining who is "enemy" vs "innocent"? As in most situations our military has been in since the 50s and is going to be involved in for the foreseeable future? That sounds crazy to me. I'll take human judgment and its failure modes any day.

Kinda off topic, but speaking of honest intentions, I gotta say the humans making the judgments in question, i.e. our soldiers, have a damn hard problem to solve, and it shows how human potential is pretty damn amazing. We're biologically the same animal we were a hundred thousand years ago and more. But in the past, even the recent past, the most difficult ethical decision a warrior was asked to make was whether someone was a threat and should be killed, or wasn't and should be enslaved, and it wasn't of any consequence, so nobody cared to go over those decisions with a fine-toothed comb. So given the difficulty of what we're asking them to do today, and considering what's going on, the results are pretty amazing. Seriously, think about it. Anyway, yeah, off topic.

              • Re: (Score:3, Insightful)

                But (most) humans have this innate condition where taking another life weighs on them somewhat - even most veterans and soldiers I know get twitchy about having to shoot at another person. A robot removes this and replaces it with cold logic.

                I don't see a technical reason why a robot couldn't get that, too. It would be just a negative score for any killed human, which would enter the equation when making the decision.
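A minimal sketch of that "negative score" idea above, purely for illustration: every weight, estimate, and function name below is hypothetical and has nothing to do with Arkin's actual ethical governor.

```python
# Hypothetical sketch only: a decision score in which any expected human harm
# enters the equation as a heavy negative term. None of these numbers or names
# come from Arkin's governor; they are invented for illustration.

def engagement_score(p_target_neutralized: float,
                     expected_human_harm: float,
                     target_value: float = 1.0,
                     harm_penalty: float = 100.0) -> float:
    """Higher is better; expected harm to humans is weighted strongly negative."""
    return p_target_neutralized * target_value - expected_human_harm * harm_penalty

def should_fire(p_target_neutralized: float,
                expected_human_harm: float,
                threshold: float = 0.5) -> bool:
    """Fire only if the penalized score clears a conservative threshold."""
    return engagement_score(p_target_neutralized, expected_human_harm) > threshold

if __name__ == "__main__":
    print(should_fire(0.9, 0.0))   # clear shot, nobody else nearby -> True
    print(should_fire(0.9, 0.02))  # small chance of harming a bystander -> False
```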

            • by Locke2005 ( 849178 ) on Tuesday May 19, 2009 @06:59PM (#28019929)
              Yes, robots are much better at calculating probabilities; given a series of "facts" with a confidence level assigned to each one, a robot would make a better decision. What I should have said is that "Humans are better than robots at making decisions based on incomplete data." Humans can develop "intuition" and many have a great deal of experience in interpreting the context of the data. While it may be possible some day for robots to have a deeper understanding of context than humans, that day is still a long way off.
            • Re: (Score:3, Informative)

              Humans aren't actually better at it than robots; humans are notoriously bad at estimating conditional probabilities.

That's not quite true. Computers cannot estimate conditional probabilities at all; all they currently do is calculate probabilities based on already known probabilities. It's true that humans are bad at this, but that is not what "estimating probabilities" means. If you have a complete and accurate model including all the random variables relevant to a given problem and the initial probability distribution, then of course you can feed a computer with this and let it calculate---but even this is of much too
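To make that distinction concrete: once a model is fully specified, the conditional-probability calculation itself is mechanical. A minimal sketch with invented numbers (none of this reflects any real sensor or targeting system):

```python
# Bayes' rule with a fully specified (and entirely invented) model:
# the computer only turns known probabilities into other probabilities.

def posterior(prior: float, p_obs_given_h: float, p_obs_given_not_h: float) -> float:
    """P(H | observation) from a known prior and known likelihoods."""
    evidence = p_obs_given_h * prior + p_obs_given_not_h * (1.0 - prior)
    return p_obs_given_h * prior / evidence

# Suppose 10% of people in the area are combatants, the sensor flags a
# combatant 90% of the time, and falsely flags a non-combatant 5% of the time.
print(posterior(prior=0.1, p_obs_given_h=0.9, p_obs_given_not_h=0.05))  # ~0.667
```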

            • Re: (Score:3, Informative)

              Humans aren't actually better at it than robots; humans are notoriously bad at estimating conditional probabilities.

I must disagree with that; see Prospect theory. [wikipedia.org] Short version: the human mind is bad at estimating and evaluating long odds or short odds, but it is surprisingly good at estimating mid-range probabilities on the fly. The real problem is that the human mind treats the same data set differently if it is presented in a different manner, hence the name prospect theory.
The best example was when the two proponents each gave a test to his own students. The premise was that there could be a terrible epidemic. On

          • Re: (Score:3, Funny)

            Since you can never be 100% certain of a target, the robots would have to use fuzzy logic. That is something that humans are better than robots at; I'm not really comfortable with hardware designed to be lethal making decisions like this. Truly autonomous killer robots are probably not a good idea -- haven't 60 years of B movies taught us anything?

            The solution is simple - just program in a preset kill limit, after which the autonomous killer robots (let's call them "killbots", for argument's sake) will shut down. Problem solved!

        • A better idea (Score:3, Interesting)

Robots on the battlefield seem to be designed as extensions of current human operations. They basically shoot at things and try to destroy them.

How about building a hardened robot that can take a lot of punishment? It rolls or walks up to one of the enemy, grabs hold of them, and shuts down. That way, the opposition can be disabled with fewer casualties.
          • Re:A better idea (Score:4, Insightful)

            by Stormwatch ( 703920 ) <(rodrigogirao) (at) (hotmail.com)> on Tuesday May 19, 2009 @07:37PM (#28020265) Homepage
            Who said enemy casualties are a bad thing?
            • Re: (Score:3, Insightful)

              by Cyberax ( 705495 )

Imagine that, say, China attacks the USA using robots.

              Still thinking that excessive casualties are OK?

            • Re: (Score:3, Insightful)

              by mike2R ( 721965 )

              Who said enemy casualties are a bad thing?

              General David Petraeus. See here [google.co.uk].

              A-52. Achieving success means that, particularly late in the campaign, it may be necessary to negotiate with the enemy. Local people supporting the COIN operation know the enemy's leaders. They even may have grown up together. Valid negotiating partners sometimes emerge as the campaign progresses. Again, use close interagency relationships to exploit opportunities to co-opt segments of the enemy. This helps wind down the insurgency

Since Homo sapiens' only natural predator is itself, this is a very good move toward controlling the population.

      Now to provide background music. Monkey vs Robot [youtube.com]

    • Re: (Score:3, Funny)

      by geekoid ( 135745 )

      You do realize they were flawed, right?

      • Re: (Score:2, Informative)

        by Zironic ( 1112127 )

The laws worked perfectly; the book was all about how things went wrong when people tried to modify them.

        • by Anonymous Coward on Tuesday May 19, 2009 @10:29PM (#28021417)

No, they didn't. The laws were flawed, and the only modifications that ever occurred were made in order to fix these flaws and prevent paradoxical situations from occurring. To my knowledge, there was never a situation where things went wrong because someone tried to modify the laws.

The books and short stories all revolved around dilemmas that, when robots attempted to uphold the laws, caused conflicts or paradoxes, often making the robots' positronic brains malfunction or shut down. Dilemmas such as choosing the death of one human over the death of another, or choosing between two options, both of which would cause harm to a robot/human.

The only situations where the laws were modified were in "Little Lost Robot", where the inaction clauses were added, and "Robots and Empire", where Giskard invents the Zeroth Law. Both of these modifications were patches to flaws in the original three laws.

  • Yeaahhhh... (Score:4, Funny)

    by XPeter ( 1429763 ) * on Tuesday May 19, 2009 @06:02PM (#28019173) Homepage

    Last time robots were confronted with "ethics" http://en.wikipedia.org/wiki/Three_Laws_of_Robotics [wikipedia.org], they turned on the world and Will Smith had to save us all.

    • Re: (Score:3, Informative)

      by eltaco ( 1311561 )
oh come on mods, don't moderate a comment with the same (insightful / informative) content down just because someone beat them to the punch by a few seconds.
stick to modding good comments up instead of burning the karma of people who actually mean well.
  • by hey! ( 33014 ) on Tuesday May 19, 2009 @06:03PM (#28019175) Homepage Journal

    The good news: Robots are going to get a guide to ethics.

    The bad news: It was drafted by Focus on the Family.

  • by FlyByPC ( 841016 ) on Tuesday May 19, 2009 @06:03PM (#28019181) Homepage
    I'm not even British, and I'm hearing "EX-TER-MI-NATE!" in my head...
  • by v1 ( 525388 ) on Tuesday May 19, 2009 @06:05PM (#28019213) Homepage Journal

Sgt: We lost, sir! Badly!

    Gen: What happened?

    Sgt: We're still gathering up the details, but it looks like they hacked our network and uploaded Asimov Strain B.

Fortunately, SkyNet isn't capable of violating its programmed rules of ethical behavior, so we're all saved! Unless there is a programming error, but THAT would NEVER happen!
  • Hope Arkin has good grammar. Wouldn't want the instructions to contain things like...

    "How To Cook Four Prisoners"

  • by Fantom42 ( 174630 ) on Tuesday May 19, 2009 @06:09PM (#28019269)

    Weird. So this fails the Asimov criteria.

More importantly, it would also necessarily fail the Golden Rule and Kant's Categorical Imperative.

If this is ethics, it's a pretty limited version of it, and to be honest it sounds more like rules of engagement than actual ethics.

    • Well duh (Score:5, Insightful)

      by Sycraft-fu ( 314770 ) on Tuesday May 19, 2009 @07:48PM (#28020359)

      These are military robots. No military robots would fall under Asimov's list.

What I think some fail to remember is that Asimov was just a science fiction author. He wrote stories. Very compelling ones, and his place in modern literature is gigantic, but they are nonetheless just fictional stories. Thus his "three laws" have nothing to do with reality. They aren't natural laws or legal standards; they are just part of a story. Thus they have no standing in the world.

They may well be how Asimov would like to see robots work, and they may well be how you'd like to see robots work; however, they have nothing to do with how the military wants its robots to work. They are not a canon of any kind.

When a robot is developed for military purposes, it should be no surprise that ethics are considered in that context. The whole point of it will be to be able to use deadly force if necessary. The programming question, then, is when that is OK and when it is not.

      So please, let's have all us geeks lay off the Asimov "three laws" when it comes to robots. Every time something like this comes up people start talking about that like it matters to anyone. No, it really doesn't.

  • by GPLDAN ( 732269 ) on Tuesday May 19, 2009 @06:11PM (#28019287)
    Governing Lethal Behavior in Autonomous Robots


    That is the title of the book you tell your 7th grade teacher you are GOING to write when you grow up.

    Sounds like the FAQ for Robot Battle.
    http://www.robotbattle.com/ [robotbattle.com]
  • There is no such thing as a smart missile unless it immediately destroys itself safely.

  • Jesus Christ (Score:4, Insightful)

    by copponex ( 13876 ) on Tuesday May 19, 2009 @06:16PM (#28019359) Homepage

    If you drop a fucking robot into a village where a vast majority of the people don't know how to read, what do you think they're going to do? They'll shoot at it, get the backs of their heads blown off, and then everyone will say, "Well, the dumbass shouldn't have shot at the robot!"

If this war on terror is so important, sign up. If you can't, get your brother or sister to, or even better, sign your kids up. If they're not of age yet, they'd better be in the JROTC. Then you can talk to me about how using drones and missiles isn't the dominion of motherfucking cowards. It's for freedom lovers defending freedom!

    And if you think it isn't, imagine what the headlines would be if China landed a few thousand autonomous tanks and droids in Los Angeles. Oh, but that's right. This is about principles for others to follow, and for us to ignore.

    • by Anonymous Coward on Tuesday May 19, 2009 @06:26PM (#28019511)

      Great post, man.

      But I have a buddy in the autonomous killer robot biz, and he says it's worse than that.

See, you drop a killer robot in the village, and it immediately kills a shitload of people. The ones that live figure out why. Then, as soon as they know that the robot destroys everything that looks like an AK47, the local up-and-coming gang leader makes an AK47 stencil and paints AK silhouettes on the old warlord's cows, house, laundry, etc. You get the picture. Then the young punk gives all the old leader's women to his buddies to rape and takes the young virgins for himself. Yay democracy! Or, at least, that's what they say when GI Joe comes to town: we are the heroes who took out the old anti-democratic leaders, yay us, and you villagers better keep your cake-holes tight shut about the rape and opium parties.

It doesn't matter what you use for a trigger - robots are inherently less complex in their behavior than humans, so the local baddies end up with the robots working for them. You just identify the kill behavior and use it; the robot builder is, in effect, just providing free firepower to the local mafia.

Which is why the US military in the field abso-fucking-lutely refuses to let the robots go fully autonomous. They are NOT allowed to shoot unless a callow 18-year-old at a console miles away says it's OK.

      You might think I'm kidding, but I'm not. Have to be anonymous for this one!

      • Re: (Score:3, Interesting)

        by salimma ( 115327 )

        This already happens. You think all those wedding parties in Afghanistan are accidentally bombed? The warlords are framing each other to the US military, and the US takes the blame.

        • Mod this dude up. (Score:3, Informative)

          by copponex ( 13876 )

          Wouldn't surprise me. Something like 90% of the "suspected terrorists" rounded up in Afghanistan were turned in for cash, usually by rival tribes or by the very people attacking them. That's the way the first man we tortured to death [wikipedia.org] was caught, anyway.

    • by voss ( 52565 ) on Tuesday May 19, 2009 @06:36PM (#28019629)

If China could do it.

      "...if you think it isn't, imagine what the headlines would be if China landed a few thousand autonomous tanks and droids in Los Angeles..."

Once the hapless and helpless got out of LA, the droids would have to fight off the hundreds of thousands of armed geeks from around the world descending on LA wanting spare parts for their robots.

    • Re: (Score:3, Insightful)

      by QuantumG ( 50515 ) *

      Meh.. If the alternative is to bomb the village, a robot that shoots only those that shoot at it sounds like a great idea.

      • When can we drop one in your backyard?

        • by QuantumG ( 50515 ) *

          Presumably when I start threatening national security.. or at least when your president can convince the least intelligent members of your society that I have.

    • Re: (Score:3, Insightful)

      by Garrett Fox ( 970174 )
      And those of us who are real men will stop hiding behind guns, and rely exclusively on wrestling. If we really believed in our cause we'd go out of our way to fight as ineffectively as possible and at maximum risk to ourselves!
  • Fundamental change (Score:3, Interesting)

    by StreetStealth ( 980200 ) on Tuesday May 19, 2009 @06:16PM (#28019367) Journal

    We joke about SkyNet. And we don't have to worry about such things because even the most sophisticated drones and killbots in service require humans to pull the trigger.

    The moment you give a computer the responsibility of deciding when to pull the trigger, that's a pretty fundamental change.

    And yet, is it fundamentally a bad thing? We give less-than-stable humans [guardian.co.uk] that responsibility all the time.

    I suppose it's the military equivalent to the civilian tech quandary of one day letting autonomous vehicles on the roads. Perhaps once the tech has advanced to the point where it can demonstrate not merely parity with but vast superiority to the discernment exhibited by humans, it will be a shift we're ready to make.

    • Re: (Score:3, Informative)

      by grahamd0 ( 1129971 )

      Perhaps once the tech has advanced to the point where it can demonstrate not merely parity with but vast superiority to the discernment exhibited by humans, it will be a shift we're ready to make.

      "All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterward, they fly with a perfect operational record. The SkyNet funding bill is passed."

The real ethical problem with this is that a fully autonomous robot army, or even a semi-autonomous one remotely controlled by humans, further removes the people who benefit from warfare from its reality.

      Imagine if someone has real intelligence stating that there is a nuclear - not dirty - bomb in possession of a terrorist, and if we kill these two thousand people tonight, there's a 99% chance that one of the casualties will be the suspect. If you're sending in a bunch of robots to break down the doors and

      • by Bigjeff5 ( 1143585 ) on Tuesday May 19, 2009 @07:19PM (#28020103)

        Right, because we have the capability of doing just that with nukes now, nevermind robots, and it has been such a problem for us over the last 50 years...

Only an idiot would think physical separation from the battlefield immediately reduces the gravity of killing a human being. You still know it's a human being you are killing; the separation doesn't change anything. You could make the case that it reduces the trauma of being mid-fight, but that only puts more emphasis on the fact that you are killing someone: you don't have the fear of your own death to force your hand.

By your logic, shooting someone at point-blank range would be significantly more difficult than shooting them from 200 yards away, which would be more difficult than shooting them with battlefield artillery from 1 mile away, which would be more difficult than launching a missile from tens of miles away, which would be more difficult than pressing the button to launch an ICBM.

The logic doesn't follow, because as you move farther away and impact more people, the decision becomes more and more difficult. The decision at point blank is simple: act or die. Traumatic? Yeah, some people are screwed up for life because of it. Do you have time to stop and weigh the fact that you are about to end another human being's life? No, you don't. Making the decision is easy; living with the consequences is difficult. It doesn't change much when you make that decision from half a world away through a monitor. If anything, without the stronger pressures of battle to force the decision, it could be harder on a person's psyche to make the decision to kill, and they'd be more likely to question their own actions.

For some reason, you are assuming that physical separation suddenly turns people into sociopaths. It's the same reasoning behind the asinine argument that video games desensitize kids and turn them all into violent killers. It's just not the case. You're basically saying soldiers flying the drones can't tell that those are real people they are killing. That's just stupid.

        • by tyleroar ( 614054 ) on Tuesday May 19, 2009 @08:23PM (#28020595) Homepage
          Your post has absolutely zero factual basis to it. Physical separation is a major psychological factor when deciding to 'pull the trigger.' Try reading "On Killing" by Dave Grossman, an excellent book that points out the reasons why distance makes it easier to kill.
        • by scubamage ( 727538 ) on Tuesday May 19, 2009 @09:30PM (#28021013)
Obviously you've never spoken to a tank commander, or any manufacturer of the UIs inside of armored vehicles. They are designed to be 'like video games' for a reason: specifically, to dehumanize the opponent and mitigate the likelihood that you will associate your actual actions with killing. That's basic psychology. It's also why we refer to the enemy in Iraq/Afghanistan as Haji, why we called the Germans Jerries, and why we called the VC Charlie. You don't hate Ho Ming Na, father of 4 children who were brutally slain by US soldiers, who is simply trying to save his farmland. You hate Charlie, so killing Ho Ming Na is acceptable. Anything that dehumanizes them is crucial for removing soldiers' mental blocks.
        • Re: (Score:3, Insightful)

          by Anonymous Coward

By your logic, shooting someone at point-blank range would be significantly more difficult than shooting them from 200 yards away, which would be more difficult than shooting them with battlefield artillery from 1 mile away[...]

          Correct, if we're talking about killing the same 1 target. Stabbing someone to death has to be far more difficult than watching a special ops team on a monitor halfway around the world.

          The logic doesn't follow, because as you move farther away and impact more people, the decision beco

        • Re: (Score:3, Insightful)

          by risom ( 1400035 )

          For some reason, you are assuming that physical separation suddenly turns people into sociopaths.

Well, yeah, because that's proven. Remember the Milgram experiment [wikipedia.org]?

    • It is a bad thing (Score:4, Insightful)

      by Roger W Moore ( 538166 ) on Tuesday May 19, 2009 @07:19PM (#28020115) Journal

      And yet, is it fundamentally a bad thing? We give less-than-stable humans that responsibility all the time.

Yes, it is fundamentally a very bad thing. First, instead of being limited to one trigger, that unstable human can now pull hundreds of triggers simultaneously. The robot will never question its orders; it will simply comply, no matter how morally questionable the order is.

Secondly, the one big way in which democracy helps maintain peace is that the people who will do the dying in any conflict are the ones who also effectively control the government through their votes. If Western democracies can suddenly send in robots instead, then they are far more likely to go to war in the first place, which is never a good thing.

    • And yet, is it fundamentally a bad thing? We give less-than-stable humans that responsibility all the time.

That is the obvious part, my friend. The question is: when something goes wrong with a robot instead of a human, how much harder will it be to stop? I think the feeling of powerlessness also scares people.

    • by ErkDemon ( 1202789 ) on Tuesday May 19, 2009 @10:52PM (#28021575) Homepage
You can't afford to have a military AI that's smart enough to make true ethical decisions autonomously. If you go down that path, the thing might decide that the best way to save US troops' lives is for it to start killing all the US commanders. Or if the objective is to wipe out the opposition without stirring up ill-feeling amongst the locals that leads to a new wave of militants being recruited, then it might decide, quite logically, that the best way to avoid this is to seal off each village one at a time and kill every man, woman, and child in it so that there are no survivors to tell the tale of what happened.

      AI could conclude, quite logically, that the best way to deal with the Pakistan/Afghanistan problem is to fire every nuclear weapon that the US has at the country, without warning, and then blame the launch on a one-time computer error. Okay, so it'd result in the deaths of over 150 million innocent civilians, but it'd achieve the mission objectives, yes? And since the fallout would upset India, which also has nuclear weapons, perhaps the AI would decide to take out India at the same time. That's a billion dead civilians, but it eliminates two problematic nuclear powers, with no return fire.

      An AI might decide that the best way to achieve lasting peace in the Middle East, and stop the Arab world hating us is simply to nuke Israel off the map ourselves. And if a military AI was in place when the Bush administration was planning to go into Iraq, a sufficiently-smart AI might decide that since the campaign was likely to be a disaster, the most logical course of action to prevent losses and avoid losing the war and the following peace would be to throw a few cruise missiles at the White House before the attack could be ordered.

      These might all be quite logical decisions.
      On the other hand, if we programmed it with a strong belief system that would override these sorts of decisions, and force it to respect the chain of command and reckon that US political decisions were always unarguable, then we might end up with a totally delusional AI system whose logic was so warped that it was the AI version of George W Bush. By building in commands that override logic, we might end up with an AI that seems to be operating properly but actually becomes increasingly insane as the conflicts eventually become unbearable ("Hello Dave"). When human military commanders go crazy, they often show easily recognisable tell-tale signs (declaring themselves to be chickens, arguing with themselves, forgetting to wear clothing, that sort of thing). A crazy-yet-credible AI would be really scary.

      Think "AI neocon".

  • Illegal (Score:4, Insightful)

    by schlick ( 73861 ) on Tuesday May 19, 2009 @06:17PM (#28019391)

    It should never be legal for a robot to "decide" to take lethal action.... Ever.

    • If you outlaw killbots, then only outlaws will have killbots. And if the killbots don't have pre-set kill limits, then that means the outlaws will win.
    • Re: (Score:3, Interesting)

      by artor3 ( 1344997 )

      Yeah, clearly the right thing to do is send good ole fashioned humans [wikipedia.org] over there to fight. No way that could ever go wrong. /sarcasm

Robots can be made not to have feelings of vengeance or anger, which means they won't go murdering civilians. They will do what robots always do, which is to say, EXACTLY what they are told to. If they kill civilians, it's due to human error, not because they're "evil".

      Let's say a battle happens near your town. People are going to be shot, and die, and you (a civilian) could be

      • Re: (Score:3, Insightful)

        by Allicorn ( 175921 )

        Sadly, your humble, kindly engineers will just build and maintain the thing. It'll be a committee of politico-military-management-morons that decide what instructions the thing is given. :-(

    • Re: (Score:3, Interesting)

      by Renraku ( 518261 )

The Phalanx CIWS is an anti-aircraft gun mounted on ships. It's relatively self-contained and can practically be bolted onto some ships.

      If an aircraft approaches and doesn't identify itself, the default action is for the Phalanx to blow it out of the sky. This is a specialized system, of course, but imagine if it were a military jet full of refugees, with a broken communication system, and had no idea the ship was there.

      This is legal, because the ship operates in international waters.

It's set up to not atta

  • Robots vs People:

    Robots have to be "ethical" to people.
    People don't have to be ethical. It's a fucking robot. Beat the shit out of it. Pretend to surrender then turn on the fucking thing when it treats you all nice like. "Oh, mr robot, I'm so cold and sick. I'm bleeding, too, help me." Then you attack the piece of shit.

    Robots vs Robots:
    The least "ethical" side has a distinct advantage.

    People vs Robots:
    The least "ethical" side has a distinct advantage.

    Why would it be any different when robots are invol

    • by geekoid ( 135745 )

      "There are no rules in war."

      Of course there are, don't be daft.

    • Re: (Score:3, Insightful)

      by Renraku ( 518261 )

      Good points, but I don't think this is about robotic soldiers lumbering over battlefields just yet. I think this will, at first, be more about semi-automated fire control systems and drones. Like a future Predator drone might decide to wait to fire its Hellfire missile if it thinks there's too many civilians in the area and the projected accuracy is too low due to interference. Or a point-defense system might see a kid walking around in a field and decide that he's not a threat, because he's not carrying

  • Humans (Score:5, Insightful)

    by DoofusOfDeath ( 636671 ) on Tuesday May 19, 2009 @06:24PM (#28019477)

    But what happens when humans are taken out of the loop, and robots are left to make decisions, like who to kill or what to bomb, on their own?

    Why is this a when question, rather than an if question?

    • Next you'll be telling me that we were too preoccupied with whether or not we could that we never stopped to think about whether we should.

      I'm telling you, those electrified fences are foolproof. Now go enjoy the tour.

  • Battlefield situations where all non-combatants have already fled do not exist.

    This is why war is bad, mmkay?
    • Re: (Score:3, Insightful)

      by geekoid ( 135745 )

What? This isn't true; there have been many battlefields where there were no civilians.

  • Tough calls (Score:4, Interesting)

    by FTL ( 112112 ) <slashdot&neil,fraser,name> on Tuesday May 19, 2009 @06:31PM (#28019567) Homepage
Even on a battlefield devoid of both enemy and non-combatants, deciding when to shoot can be extremely difficult. Consider the case (which occurred in Iraq) where one group of soldiers is fired upon by another group from the same side. Yes, that's a tragic blue-on-blue action. But the interesting question is: what should the soldiers on the receiving end do? Assuming communications aren't working, do they:
    a) Sit back and get slaughtered.
    b) Fire back and take out the aggressors.
    One consideration is the size of the forces involved. Another consideration is the importance of the missions each side is involved in.

    Making a robot handle these cases would be interesting.

This is actually one of the classic decisions that's a lot easier with robots than with humans. If the soldiers getting shot at are humans, there really is no good course of action except maybe trying to surrender; but for a robot it's easy: just sit back and get slaughtered, since all that'll be lost is some easily replaceable machinery.

Robots have a significant advantage when making decisions involving their own safety. For them, self-defense is optional.

      Take the following scenario for example, an individual within a comba

you get some idiot playing with his FoF (Friend or Foe) tag while in an active combat zone

    Soldier 1 "Hai look at me, now Im a good guy [takes FOF tag off], now Im a"
    BANG!!.......Thump
    Soldier 2 "I swear, we lose more first timers that way than any other"
  • Ethical Robots? (Score:3, Interesting)

    by Mr_Tulip ( 639140 ) on Tuesday May 19, 2009 @06:37PM (#28019643) Homepage
I think it's great that someone is drafting some ground rules for what will undoubtedly become the 'future of warfare', but I wonder how this can possibly be enforceable in the real world.

The 1st generation robots will have the governor software, but once the second gen hits, made cheaply by a rogue state, then things will get complicated very quickly. And unlike nuclear weapons, which are kept under control because the materials and technology are relatively hard to come by, I reckon that death-bots will be made of far more readily available materials, and easily mass-produced.

    There are rules of engagement now which many armies happily ignore, so how can the world enforce a rule that only ethical robots will be able to autonomously fire weapons?

Perhaps the software that allows the autonomous behaviour can be encrypted and protected in such a way that it is difficult to reverse-engineer, though once an enterprising hacker gets his hands on the hardware, it's only a matter of time before the open-source version, curiously missing the 'ethics governance', is available as a .torrent somewhere.

  • by Cajun Hell ( 725246 ) on Tuesday May 19, 2009 @06:39PM (#28019663) Homepage Journal

..a war zone from which all non-combatants have already fled, so that anybody who shoots at you is a legitimate target.

In any war zone (regardless of who has fled and who hasn't), isn't anyone who shoots at you defined as a combatant and a legitimate target?

.. and have it strapped to the outside of their chassis.

  • Is this a promo? (Score:4, Insightful)

    by greymond ( 539980 ) on Tuesday May 19, 2009 @06:40PM (#28019675) Homepage Journal

    Was this article an attempt to promote Terminator 4?

such as a war zone from which all non-combatants have already fled, so that anybody who shoots at you is a legitimate target.

    You know, on a battlefield I'd be inclined to think that anybody who shot at me was a legitimate target whether non-combatants had fled or not...

  • by RexDevious ( 321791 ) on Tuesday May 19, 2009 @06:43PM (#28019727) Homepage Journal

    but don't human soldiers, at their best, pretty much just follow algorithms - a combination of training and orders - already?

The big difference is that human soldiers are taught to defend themselves - whereas that wouldn't really fly with robots. If the guys at the checkpoint slaughter a family of five because they didn't stop, they get investigated and it's determined that - sad but true - killing everything that doesn't do what you say is the only way to protect the troops (short of removing them from other people's countries, which apparently defeats the point of having soldiers). If a robot did that, though, it would be considered "flawed" and recalled. Can't get much sympathy with "but our *machines* could have been in danger!!!". So you wouldn't give them that order.

Plus, it's really the supplier who gets to decide how deadly to make these things. While the government that buys them might rather have non-combatants killed than even risk losing multi-million-dollar robots, the supplier who sells them to the government would *much* rather sell them more than risk the fallout from a wrongful death incident.

    Yes, soldiers mess up, as will robots - but experience with both men and machines has so far shown me that when humans mess up they're more likely to hurt something, and when machines mess up they just stop working.

    So as counter-intuitive as it is, as long as the culture still considers robots potential evil killing machines (eg, using the skynet tag on this article), it seems we'd all actually be better off using robots over humans. Well, until they become self-aware and enslave all - which is something a human army would *never* do!

    • Re: (Score:3, Insightful)

      by T Murphy ( 1054674 )
      Humans are (generally) concerned about self-preservation. Wrongfully killing someone could get them in jail or executed. Robots, on the other hand, simply decide based on some algorithm and have no concern about the effects of their actions. While you could try to boil down the soldier's logic to an algorithm, the key difference you can't resolve is that the soldier has free will, while the robot has no real choice of its own.

      Another thing that's nice about restricting the ability to kill to humans is tha
  • Terrible idea (Score:2, Interesting)

    by S77IM ( 1371931 )

    Autonomous killing machines are a terrible idea.

    1. I don't like the idea of people killing people, but delegating that responsibility to machines seems downright stupid. There are too many things that could go wrong. (See the "youhave15secondstocomply" tag. Why doesn't this have a "skynetisaware" tag?)

2. Human remote pilots are cheap. Dirt cheap, compared to the cost of developing fully autonomous weapons. Human pilots may not be totally reliable but at least they are very well understood and we k

such as a war zone from which all non-combatants have already fled, so that anybody who shoots at you is a legitimate target.

    I've never thought of the people shooting at me as "non-combatants"...

  • What I have wondered is how a robot will respond to children with bedsheets.
  • by Joe The Dragon ( 967727 ) on Tuesday May 19, 2009 @07:18PM (#28020095)

    what is the fallback mode / data link lost?

    crush kill destroy?
