The Military | Robotics | Technology

Human Rights Watch: Petition Against Robots On the Battle Field 275

New submitter KublaCant writes "'At this very moment, researchers around the world – including in the United States – are working to develop fully autonomous war machines: killer robots. This is not science fiction. It is a real and powerful threat to humanity.' These are the first words of a Human Rights Watch petition to President Obama to keep robots from the battlefield. The argument is that robots possess neither common sense, 'real' reason, nor any sense of mercy — nor, most important, the option to disobey illegal commands. With the fast-spreading use of drones et al., we are allegedly a long way from Asimov's famous Three Laws of Robotics being implanted in autonomous fighting machines, or in any (semi-)autonomous robot. A 'Stop the Killer Robots' campaign will also be launched in April at the British House of Commons; it includes many of the groups that successfully campaigned to have international action taken against cluster bombs and landmines, and they hope to win a similar global treaty against autonomous weapons. The Guardian has more about this, including quotes from well-known robotics researcher Noel Sharkey of Sheffield University."
  • Recommended Reading (Score:5, Interesting)

    by smpoole7 ( 1467717 ) on Monday February 25, 2013 @09:01AM (#43002113) Homepage

    http://en.wikipedia.org/wiki/Berserker_(Saberhagen) [wikipedia.org]

    Fred Saberhagen's "Berserker" series.

    Aside from touching on the subject at hand, it's just some crackin' good sci-fi. :)

    I don't know if we'd ever reach that point ourselves, but in that series, an unknown (and now extinct) alien race, losing a war and desperate, created "doomsday" machines that were simply programmed to kill all life. They were self-replicating, self-aware AIs that took their task seriously, too.

    Then again, I ask myself what some jihadist might do, if given half the chance...

    • Re: (Score:3, Informative)

      by will_die ( 586523 )
      Also add in _Second Variety_ by Philip K. Dick
    • by arcite ( 661011 ) on Monday February 25, 2013 @09:32AM (#43002387)

      All indications are that the coming robotic revolution will usher in a new era of human peace and prosperity. Robots have no emotion, no bias. Imagine deploying a few hundred (or thousand) semi-autonomous robotic peacekeepers into a conflict zone. They maintain the peace 24/7, they never tire, they are alert and objective in their duties. War is traditionally an incredibly wasteful and expensive exercise. Look at Iraq and Afghanistan! $1 trillion and thousands of allied casualties. Deploy a robot army and watch the costs come down. No need for living quarters, no need for food or water, logistics becomes cheaper in every aspect.

      Like them or loathe them, drones are incredibly efficient at what they do. They are very lethal, but they are precise. How many innocents died in the decades of embargo on Iraq and the subsequent large-scale bombings under Bush? Estimates run to over 100,000. The use of drones in Libya, Mali, Yemen, and Pakistan has reduced costs by hundreds of millions and prevented thousands of needless casualties. Drones are the future, and the US has an edge that it will not give up.

      • by VAXcat ( 674775 )
        The robots in Jack Williamson's Humanoids stories had a Prime Directive of "to serve and obey, and guard men from harm"... see how well that worked out...
      • by ultranova ( 717540 ) on Monday February 25, 2013 @09:52AM (#43002599)

        Robots have no emotion, no bias. Imagine deploying a few hundred (or thousand) semi-autonomous robotic peacekeepers into a conflict zone. They maintain the peace 24/7, they never tire, they are alert and objective in their duties.

        An autonomous robot needs to form a model of what's happening around it, use that to figure out what its possible long- and short-term actions will be, and finally decide how desirable various outcomes are relative to each other. All of these steps are prone to bias, especially since whoever designed the robot and its initial database is going to have their own biases.

        Also, a robot acting in real life cannot carefully think everything through. There's simply not enough time for that. This necessitates some kind of emotion analogue to provide context for reflexive and simple actions, just as it does in living beings.
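
        For what it's worth, the loop described above (model the world, enumerate actions, score outcomes) fits in a dozen lines, and the bias is visible in every constant. A minimal, hypothetical sketch in Python, with all names, weights, and thresholds invented for illustration:

        ```python
        # Hypothetical sketch of the loop described above: update beliefs,
        # enumerate actions, score predicted outcomes. Every constant below
        # is a designer's choice, which is exactly where bias gets in.

        def update_beliefs(sensor_frame):
            # Collapse raw readings into categories fixed at design time;
            # anything the designers didn't anticipate has no category at all.
            return {"armed": sensor_frame.get("metal_signature", 0.0) > 0.5}

        def score(action, beliefs, weights):
            # The weights encode whose safety and whose mission matter, and how much.
            risk_to_bystanders = 0.8 if action == "engage" else 0.0
            mission_value = 1.0 if (action == "engage" and beliefs["armed"]) else 0.1
            return weights["mission"] * mission_value - weights["bystanders"] * risk_to_bystanders

        def decide(sensor_frame, weights):
            beliefs = update_beliefs(sensor_frame)
            actions = ["hold", "warn", "engage"]  # the action set is itself a design choice
            return max(actions, key=lambda a: score(a, beliefs, weights))

        # Two "objective" robots, same scene, different designers:
        frame = {"metal_signature": 0.7}
        print(decide(frame, {"mission": 1.0, "bystanders": 2.0}))  # hold
        print(decide(frame, {"mission": 2.0, "bystanders": 0.5}))  # engage
        ```

        Neither robot is "wrong"; each simply inherited its designers' utility weights.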

        Look at Iraq and Afghanistan! $1 trillion and thousands of allied casualties. Deploy a robot army and watch the costs come down. No need for living quarters, no need for food or water, logistics becomes cheaper in every aspect.

        So there will be a lot more "interventions", since the cost (to you) is lower. I think that's part of what worries the HRW.

        • Also, a robot acting in real life cannot carefully think everything through. There's simply not enough time for that. This necessitates some kind of emotion analogue to provide context for reflexive and simple actions, just as it does in living beings.

          Why does it need to think at all? It's applying rules, not creating them.

          "Sir, my scans have detected unauthorized weapons. Please put them down or I will apply force."

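          That "applying rules" view is easy to make concrete. A hypothetical sketch (everything invented for illustration); the catch is that the hard judgment lives upstream, in whatever produced the weapon-detected flag:

          ```python
          # Hypothetical sketch of rule application without deliberation:
          # the robot walks a fixed escalation table. All judgment lives
          # upstream, in whatever set the weapon_detected flag.

          ESCALATION = [
              "Sir, my scans have detected unauthorized weapons. Please put them down.",
              "Final warning: comply or I will apply force.",
              "APPLYING FORCE",
          ]

          def respond(weapon_detected, step):
              # Pure table lookup: no thinking, no rule creation.
              if not weapon_detected:
                  return "standing down", 0
              return ESCALATION[min(step, len(ESCALATION) - 1)], step + 1

          step = 0
          for detected in (True, True, True, False):
              utterance, step = respond(detected, step)
              print(utterance)
          ```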
      • by Caffinated ( 38013 ) on Monday February 25, 2013 @09:57AM (#43002687) Homepage
        Well, that raises the "who controls the robots" question, doesn't it? Presuming that they'd be as effective as you outline (I quite doubt it), they'd be great for making it domestically painless to invade and occupy places that one doesn't like for whatever reason, and I doubt that's a good thing (Iraq and Afghanistan only happened, and went on as long as they did, because even with the casualties the pain was almost entirely borne by military families; heck, we didn't even raise taxes to actually pay for them). In short, I'd imagine that you might have a bit of a concern about autonomous foreign peacekeeping robots patrolling your neighborhood, and I'd expect that people in other places feel the same way.
        • by TheLink ( 130905 ) on Monday February 25, 2013 @10:36AM (#43003273) Journal

          It is a red herring. The problem is not robots in war. There's no big difference between using robots and drones in wars and using cruise missiles, and only a slight difference between those and soldiers who have been conditioned/brainwashed to follow orders unquestioningly.

          The real problem is the ease of starting wars that only benefit a very few people. Hence my proposal: http://slashdot.org/~TheLink/journal/208853 [slashdot.org]

          In the old days kings used to lead their soldiers into battle. In modern times this is impractical and counterproductive.

          But you can still have leaders lead the frontline in spirit.

          Basically, if leaders are going to send troops on an _offensive_ war/battle (not defensive war) there must be a referendum on the war.

          If there are not enough votes for the war, those leaders get put on death row.

          At a convenient time later, a referendum is held to redeem each leader. Leaders who do not get enough votes get executed; for example, if too many people stay at home and don't bother voting, the leaders get executed.

          If it turns out later that the war was justified, a fancy ceremony is held, and the executed leaders are awarded a purple heart or equivalent, and you have people say nice things about them, cry and that sort of thing.

          If it turns out later that the leaders tricked the voters, another referendum can be held (you need to gather enough signatories to start one, just to prevent nutters from wasting everyone else's time).

          This proposal has many advantages:
          1) Even leaders who don't really care about those "young soldiers on the battlefield" will not consider starting a war lightly.
          2) The soldiers will know that the leaders want a war enough to risk their own lives for it.
          3) The soldiers will know that X% of the population want the war.
          4) Those being attacked will know that X% of the attackers believe in the war - so they want a war, they get a war. For sufficiently high X, collateral damage becomes insignificant, and they might even be justified in using WMD and other otherwise dubious tactics. If more than 90% of the country attacking you wants to kill you and your families, what is so wrong with using WMD, as long as it does not affect neighbouring countries?

          I think if this was implemented it would be much better than banning robots. I'm biased of course ;).

          • by jythie ( 914043 )
            I think all that would produce is an even more vicious PR battle. Having the mob decide whether someone lives or dies has never been very just, and today it would pretty much come down to whoever has the more effective PR machine and can get enough people riled up deciding who lives and dies.
      • by kannibal_klown ( 531544 ) on Monday February 25, 2013 @10:03AM (#43002793)

        A couple of issues.

        1) Software can be hacked... either partially or totally. Maybe just putz with the friend-or-foe logic, maybe take direct control, etc. Sure, humans can be blackmailed and extorted, but usually on an individual basis. Mass-putz with a regiment or squad and you have serious issues, such as with those drones protecting the US (if they ever become truly robotic).

        2) It does make war a bit more meaningless. If you aren't facing emotional losses, then there's little reason NOT to go to war. If it's not personalized... then who cares? Sure, even now we have sympathy for the other side and protests and such... but the majority of the people that care mostly care because our brothers / sisters / sons / daughters / etc. are out there possibly dying. So that helps push back the question "should we actually GO to war with them?"

        3) There ARE concerns about self-aware armed robots. Make them too self-aware, and maybe they realize that the never-ending violent slaughter of humans is contradictory to their goal of preserving their owners' lives. In which case they take an OVERLY logical approach to preserving the FUTURE "needs of the many" by doing PLOTLINE X. Sure, it sounds like bad sci-fi... but as you say, they have no emotions, only logic. Take away emotion, and we become like cattle, where they cull the herd over a few random mad-cow cases to save the majority.

      • Robots have no emotion, no bias.

        Really? Given modern analysis techniques, how hard do you think it would be to program such a robot to have bias based on factors like skin color, facial structure, attire, presence of RFID/radio-ident/FoF/etc. tags, language or even accent?

        You think people WOULDN'T add this logic in?
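
        Not hard at all. A hypothetical sketch (feature names and weights invented): whatever attributes the designer decides to sense become weighted inputs, and the weights are the bias:

        ```python
        # Hypothetical sketch: a "threat score" is just features times weights.
        # Whatever attributes the designer chooses to sense become features,
        # and the weights assigned to them are bias, baked in before deployment.

        FEATURE_WEIGHTS = {
            "visible_weapon": 5.0,
            "no_friendly_tag": 2.0,   # friend-or-foe transponder absent
            "speech_mismatch": 1.5,   # "wrong" language or accent, per the designer
        }

        def threat_score(features):
            return sum(FEATURE_WEIGHTS.get(name, 0.0) * value
                       for name, value in features.items())

        # Two unarmed people behaving identically, scored differently:
        print(threat_score({"no_friendly_tag": 1.0}))                          # 2.0
        print(threat_score({"no_friendly_tag": 1.0, "speech_mismatch": 1.0}))  # 3.5
        ```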

      • All indications are that the coming robotic revolution will usher in a new era of human peace and prosperity... War is traditionally an incredibly wasteful and expensive exercise.

        So you are making the argument that making war cheaper and easier to launch will result in more peace and prosperity? It's so bad already that we don't even count civilian casualties accurately.

      • by jythie ( 914043 )
        What makes you think they have no bias? Seems like they would have the bias of whoever programmed them, and even less ability to examine it in the field.
    • Then again, I ask myself what some jihadist might do, if given half the chance...

      Take the Soviet Union's place as America's Boogeyman #1, which is a pretty darn impressive accomplishment on their side and just plain sad on America's.

      You are worrying about a bunch of third-world priests and their followers building a high-tech weapon the American Army - or any first-world country - can't out-high-tech. And it got modded +5 Interesting. Come on.

      • > You are worrying about a bunch of third-world priests ...

        Give those "priests," say, a nuclear weapon, or (to stay on topic) a self-replicating doomsday robot programmed to kill all infidels, and yes, I would worry about that. Advanced weaponry is the Great Equalizer(tm). :)

        Besides, I would rather believe that I was modded "interesting" because of my recommendation of Saberhagen. His stories are much better than anything I could say. :)

        -- Stephen

  • As far as I can tell, we need to focus on dealing with the presence of drones and "killer robots," not how to prevent them. Like it or not, 'progress marches on'.
    • I think robots are a great way to go. Sure, in the beginning there would be robots fighting human soldiers, but in time human soldiers would be phased out. Humans are expensive in the sense that it takes 18-20 years (in civilized society) to raise and train a soldier; robots could be built and programmed in a few days with a good manufacturing plant. The future of war will just be machines fighting other machines.

      It'd be like a real-life game of StarCraft, with humans controlling the groups of robots remotely.
      • Re:Deal with it. (Score:4, Insightful)

        by Darth Snowshoe ( 1434515 ) on Monday February 25, 2013 @10:04AM (#43002813)

        You are describing your own fantasy rather than a reasoned prediction.

        Surely once the robots break through the curtain of defenders, they will quite efficiently set upon the civilian population and their infrastructure. How would robots even distinguish between them? (In fact, this is a difficulty for human soldiers today.) Is it not likely that civilians would attempt, at the last, to defend themselves and their families?

        The hope for humanity is not that the winners will somehow be more virtuous than the losers. Our only hope is that, as the consequences of armed conflict escalate, the number and severity of conflicts will dwindle.

      • " with no intentional lost of human life."

        Yeah, as long as the winning side chooses not to wipe out the humans on the losing side, since they'll have no robot protection anymore.

        I'm sure that'll never happen.

  • This message is sponsored by Sarah and John Connor. With special consideration from Morpheus, Trinity, and Neo.

  • by rodrigoandrade ( 713371 ) on Monday February 25, 2013 @09:06AM (#43002147)

    Hey, James Cameron, are you the submitter??

    The autonomous Terminator-style robots the summary refers to are far from becoming a battlefield standard, much to the disappointment of the /. crowd and sci-fi nerds.

    Predator drones et al., like all current robotic devices on the battlefield, still have a human being in charge making all the decisions, so the points raised are completely moot.

    • Are they really just remote-controlled devices rather than autonomous "robots"?

    • by fuzzyfuzzyfungus ( 1223518 ) on Monday February 25, 2013 @09:15AM (#43002225) Journal

      Yes and no: especially sophisticated autonomous robots, whether self-driving vehicles or biomimetic killbots of some sort, are sci-fi stuff; but land mines ("That's 'bi-state autonomous area-denial agent,' sir, to you, cripple!") and more sophisticated devices like the Mark 60 CAPTOR [wikipedia.org] are autonomous killer robots.

      And, so far, they've proven deeply unpopular in bleeding-heart circles. The fancier naval and anti-vehicle mines are still on the table; but the classic land mine enjoys a sense of ethical distaste only slightly less than just hacking off children's limbs yourself...

      • by wren337 ( 182018 )

        Landmines are the perfect example of existing autonomous technology. Next steps would be, I imagine, drones that fly themselves home if jammed. Still pretty innocuous but a step into automation.

        Also imagine a first-generation turret: automated target acquisition based on stereo imaging and stereo microphones. The first models would require an operator to approve the target. But the systems are so much faster than us that soon you'd want to be able to approve a target area, hold down the "OK" button, and have it fire.
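
        That slide from per-target to per-area approval is small enough to write down. A hypothetical sketch, with all names and coordinates invented:

        ```python
        # Hypothetical sketch of the drift described above: per-target approval,
        # where the human is the bottleneck, quietly becoming per-area approval,
        # where the human rubber-stamps a box once.

        def engage(target):
            print("engaging", target["id"])

        def per_target_mode(targets, operator_approves):
            for t in targets:
                if operator_approves(t):   # seconds of human judgment per target
                    engage(t)

        def area_mode(targets, box):
            (x0, y0), (x1, y1) = box       # the operator approved this box once
            for t in targets:              # milliseconds per target, no judgment
                x, y = t["pos"]
                if x0 <= x <= x1 and y0 <= y <= y1:
                    engage(t)

        targets = [{"id": "T1", "pos": (3, 4)}, {"id": "T2", "pos": (9, 9)}]
        per_target_mode(targets, lambda t: t["id"] == "T1")  # stand-in for a human
        area_mode(targets, ((0, 0), (5, 5)))                 # engages T1; nobody looked
        ```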

      • by CODiNE ( 27417 )

        That Mark 60 CAPTOR is quite interesting with its audio detection of submarines.

        How long before someone relaxing in their boat discovers the right song to trigger a false positive on that?

        Then how many more blown up civilians before they figure out which song it is?

        • by plover ( 150551 )

          At least we know definitively that the trigger song isn't Margaritaville. That would have been a disaster.

    • by Hentes ( 2461350 )

      But we shouldn't wait until the autonomous drones arrive.

    • Disappointment or not, the problem is kind of different. In fact, the problem has existed for quite a while, ever since people invented time bombs, remote-controlled bombs, suicide attackers, and such.

      The issue at hand is the following: war is about killing people and destroying stuff. People on the battlefield facing each other turned out to be counter-productive in this regard, as exemplified on many occasions toward the end of the First World Massacre. After a long period of constant threat of death, patriotism, religious fana

    • samson (Score:4, Interesting)

      by nten ( 709128 ) on Monday February 25, 2013 @10:06AM (#43002833)

      http://en.wikipedia.org/wiki/Samson_RCWS [wikipedia.org]

      These turrets count, I think. Israel has at times said they are keeping a man in the loop, but the technology doesn't require it, and at times they have said the turrets are in "see-shoot" mode. This is essentially indiscriminate area denial that is easier to turn off than mines. It has the computer-vision and targeting aspects of a killer robot, just not the path-finding and obstacle-avoidance parts.

  • The drones America uses are piloted by humans. The other robot in use by the military is the one that disables bombs; it also is remote-controlled by a human. I don't think the military has any non-piloted robots deployed in combat. Even a turret would be too dangerous. An automated turret could kill our own troops. The closest thing we have is landmines.
    • by Jawnn ( 445279 )

      The drones America uses are piloted by humans. The other robot in use by the military is the one that disables bombs; it also is remote-controlled by a human. I don't think the military has any non-piloted robots deployed in combat. Even a turret would be too dangerous. An automated turret could kill our own troops. The closest thing we have is landmines.

      Are you sure about that?

    • I don't think the military has any non-piloted robots deployed in combat. Even a turret would be too dangerous.

      Ever hear of the PHALANX/CIWS? [wikipedia.org] Automated turrets placed on aircraft carriers and on bases in the Middle East to shoot down incoming mortars and rockets. Something capable of shooting 4,500 20mm rounds per minute could be very deadly. Because human reaction time is too slow, these turrets DO fire automatically.

      • Something capable of shooting 4,500 20mm rounds per minute could be very deadly.

        But please, only when used by a well-regulated militia.

        • Bought one last week. My well-regulated militia is very interested in not being killed by a Hellfire missile shot by Obama. We are American citizens, after all, and subject to assassination orders by the President.

          And in case anyone is too dense to recognize the sarcasm.... /sarcasm

          • But, but... careful there, brother. It's not the Hellfires shot by Obama personally that are after you. Don't neglect to watch out for the black FEMA/UN helicopters implementing Agenda 21!!!!

            also... /sarcasm
      • Good point. But those turrets are not a human rights issue; missiles and mortars don't have human rights. I suppose they could shoot down the occasional plane that attempts to dive-bomb a carrier. I also suppose they could be repurposed to shoot at people, but I think that would kill more friendly troops than enemy.
        • by plover ( 150551 )

          They are set to guard an area against any radar-detectable objects, and most importantly, they do NOT have IFF. They have only trajectory and min/max target speeds; anything traveling in the area that is heading in the wrong direction within the set speed range is fired upon. I believe they have already shot down one friendly aircraft, which entered the kill zone while towing a target drone.

          They're as close to indiscriminate killing machines as we have. They're self-contained weapon systems.
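
          As described, the whole "decision" is a couple of numeric gates. A hypothetical sketch, with thresholds invented rather than actual CIWS parameters:

          ```python
          # Hypothetical sketch of the engagement gate described above:
          # no IFF, just geometry and a speed window. Thresholds invented.

          MIN_SPEED, MAX_SPEED = 150.0, 1200.0   # m/s window for a "plausible threat"

          def should_engage(track):
              closing = track["radial_velocity"] < 0.0   # heading toward the ship
              in_window = MIN_SPEED <= track["speed"] <= MAX_SPEED
              return closing and in_window               # never asks "who is it?"

          # An incoming missile and a friendly jet towing a target drone both pass:
          print(should_engage({"radial_velocity": -300.0, "speed": 400.0}))  # True
          print(should_engage({"radial_velocity": -200.0, "speed": 200.0}))  # True
          ```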

  • by RobinH ( 124750 ) on Monday February 25, 2013 @09:16AM (#43002239) Homepage
    If you think about a virus for a second, it's the same thing. You can't reason with a virus. It doesn't make moral decisions. It just does what its DNA programs it to do, and it's even more dangerous because it's self-replicating. We need to deal with autonomous robots the same way we deal with bio-warfare.
  • Depending on how one defines "robot", this will be extremely unlikely.

  • by concealment ( 2447304 ) on Monday February 25, 2013 @09:17AM (#43002245) Homepage Journal

    I don't mean to be the dark figure in this conversation, but I think it's inevitable that robots will be used on the battlefield, just like people are going to continue to use cluster bombs, land mines, dum-dum bullets and other horrible devices. The reason is that they're effective.

    War is a measurement of who is most effective at holding territory. It is often fought between uneven sides, for example the Iraqi army in their 40-year-old tanks going out against the American Apaches, which promptly slaughtered them. Sometimes there are seeming upsets, but often there's an uneven balance behind the scenes as well.

    Robots are going to make it to the battlefield because they are effective not as killing machines but as defensive machines. They're an improvement over land mines, actually. The reason is that you can programmatically define "defense" (see the sketch below), whereas offense requires far more complexity.

    South Korea is already deploying robotic machine-gun-equipped sentries on its border [cnet.com]. Why put a human out there to die from sniper fire when you can have armored robots watching the whole border?

    Eventually, robots may make it to offensive roles. I think this is more dubious because avoiding friendly fire is difficult, and using transponders just gives the enemy homing beacons. In the meantime, they'll make it to the battlefield, no matter how many teary people sign petitions and throw flowers at them.
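
    The defense/offense asymmetry above is visible in code. A hypothetical sketch (zone coordinates invented) of why the defensive case is the tractable one:

    ```python
    # Hypothetical sketch: "defense" compresses to a perimeter predicate,
    # while offense has no such one-liner. Coordinates are invented.

    PERIMETER = ((0.0, 0.0), (100.0, 100.0))   # the area nobody may enter

    def should_challenge(pos):
        (x0, y0), (x1, y1) = PERIMETER
        x, y = pos
        return x0 <= x <= x1 and y0 <= y <= y1   # the entire "doctrine"

    print(should_challenge((50.0, 50.0)))    # True: inside the zone, challenge
    print(should_challenge((150.0, 50.0)))   # False: outside, ignore

    # No equivalent predicate exists for "advance, take that building, spare
    # civilians": that needs perception, planning, and judgment.
    ```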

  • by Dr. Tom ( 23206 ) <tomh@nih.gov> on Monday February 25, 2013 @09:23AM (#43002315) Homepage

    How many times must it be said? Asimov's 3 "laws" have nothing to do with real robotics, future or present. They were a _plot device_, designed to make his (fictional) stories more interesting. Even mentioning them in this context implies ignorance of actual robotics. In reality, robot 'brains' are computers programmed with software. Worry more about bugs in that software, and about the lack of oversight of the people controlling them.

    • by ledow ( 319597 )

      Quite.

      The day we get a robot that can understand, interpret, and infallibly carry out the "three laws", we won't need the three laws: it will have surpassed average human ability and could probably reason for itself better than we ever could. We would literally have created a "moral" robot with proper intelligence. At that point, it would be quite capable of providing justification for its actions, and even of deciding that the three laws themselves were wrong (like the "0th law" used as a plot device).

    • Not just that: even if Asimov's laws were real, they were made with the idea of robots being tools for humans to use in their work, hence the need to avoid any harm, just as one doesn't want one's own Craftsman tools to cause injury. But robots used to substitute for battlefield soldiers are different - they are tools meant to harm enemy soldiers, so it makes no sense to conflate them w/ civilian-use robots. The latter were not designed to hurt people, which is why this law, if it existed, w
  • Killer robots can't be a government-only option =D
  • This led to clever people developing submachine guns.

    Give it a couple of decades and you'll be able to download plans for your own battlebot and then create it on your printer.

  • Total Garbage. (Score:5, Interesting)

    by inhuman_4 ( 1294516 ) on Monday February 25, 2013 @09:27AM (#43002339)

    This article is absolute garbage. Almost everything in that Guardian article is misinformed and sensationalist.

    "Fully autonomous war machines"? Care to give an example? I've followed this stuff pretty closely in the news, on top of researching AI myself, and from what I have seen no one is working on this. Hell, we've only just started to crack autonomous vehicles. They cite the X-37 space plane, for God's sake; everything about it is classified, so how do they know it is autonomous?

    My favourite gem has to be this one: "No one on your side might get killed, but what effect will you be having on the other side, not just in lives but in attitudes and anger?". Pretty sure that keeping your side alive while attacking your opponent has been the point of every weapon that has ever been developed.

    • Re: (Score:3, Informative)

      by Anonymous Coward

      A huge amount is known about the X-37 [wikipedia.org], seeing as it's a redirected NASA project. It's capable of autonomous landing, and it's widely assumed that it performed its primary reconnaissance mission autonomously, seeing as it's basically a glorified spy satellite capable of a controlled re-entry.

      We already have fully autonomous combat aircraft that can be pointed at a target and perform complex maneuvers in order to reach and subsequently destroy it. They're called cruise missiles. You're hopelessly naive if you th

    • http://en.wikipedia.org/wiki/Phalanx_CIWS#Operation [wikipedia.org]

      All it needs is power and cooling water. No human interaction required.

  • robots killing robots

    wars settled in a clash of machinery without any humans for miles around

  • Robots are not alive.

    There is no true sacrifice of blood and souls when robots take the place of soldiers in battle. In my opinion, that puts them on par with WMD in terms of being able to inflict loads of casualties with little risk to the aggressor.

    • by medcalf ( 68293 )
      That's not what WMD means. WMD is a term that replaced NBC or CBN or a variety of other acronyms to differentiate chemical, biological and nuclear weapons from other weapons. It's a useful distinction (between weapons that are designed so that their normal use kills over a large area with each use, and those which kill in small increments). By your usage, a knife is a WMD if the population being attacked is unarmed.
  • Only humans should be on the battlefield killing each other, robots killing robots is just so inhumane.

  • Without "autonomous war machines" we've managed to firebomb cities (with a nice 3-hour gap between bombing runs, so that firefighters and so on would be putting out the first run's fires when the second run hit), mass-murder civilians, drop atomic bombs on cities, use chemical weapons, and everything in between. I don't think feelings of mercy and pity, or an ability to not follow illegal orders, make much of a difference.

  • My main problem with using robots (or, more likely, remotely piloted semi-autonomous war machines) on the battlefield is that it makes war too easy. Right now, drones aside, war is a costly matter. You need to put actual lives at risk, and that acts as a check on what generals/politicians would want to use troops for. Want to invade North Korea and Iran to stop them from being a threat once and for all? Well, that's going to wind up costing tons of lives, which is going to make it harder to sell to the public.

  • are coming
    • Daleks were cyborgs; they were organic sentient lifeforms embedded in armoured suits.

      In that respect they were more like the Mobile Infantry in Starship Troopers; they just didn't bother making the suits anthropomorphic.
  • There have been many situations where you've had humans on the ground, one gets killed, and the slain soldier's buddies snap and decide to massacre an entire village. I'm not really sure what part of merciful warfare autonomous robots are threatening.

  • The author gave the following reasons against autonomous war robots:

    || Robots possess neither common sense... ||
    Me: This is true, but isn't that the point? Someone behind the curtain has common sense. For example, the current generation of drones in use aren't intelligent, but the people flying them are making the decisions (or rather, their superiors). We need to separate the ED-209 conversation from the drone conversation, as I'm pro-drone and anti-ED-209-style military robot.

    || 'real' reason ||
    Me: See above. Our current dron

  • ...over humans.
    Here's hoping we eventually get to the point where both sides just deploy robots, and whichever has robots standing at the end wins.
    Let's stop wasting young lives.
  • People keep going on about the robocalypse and the "friendly AI problem", but the real problem has, for millennia, been the "friendly NI problem": the friendly natural intelligence problem. Whether the drones being manipulated by the natural intelligences are made of silicon, metal, and composites powered by electricity, or are flesh-and-blood constructs powered by chemistry, is beside the point.
  • NPR had a piece noting that the same kinds of arguments have been made against every new escalation of military technology, going back to metal swords and the gun itself: that technology increases the soldier's killing capacity and makes him/her more removed from man-to-man combat.

    The real jump would be machine-decided (A.I.) killing. For the most part there is a man in the decision loop, even with the new Israeli "Iron Dome" missiles, where the operator has seconds to decide to launch. (More of a financial decision because
  • I have put a lot of thought into combat robots, particularly airborne ones. I think they're really an inevitable development.

    I don't have a problem with robots maneuvering themselves over a battlefield. I don't have a problem with a robot killing someone. I don't even have a problem with giving it a target, and letting it decide the best way to eliminate it.

    The only provision I would require is that we not have it select its own targets. There should be a human operator somewhere telling it what it should be targeting.
