
Human Rights Watch: Petition Against Robots On the Battle Field

New submitter KublaCant writes "'At this very moment, researchers around the world – including in the United States – are working to develop fully autonomous war machines: killer robots. This is not science fiction. It is a real and powerful threat to humanity.' These are the first words of a Human Rights Watch petition to President Obama to keep robots off the battlefield. The argument is that robots possess neither common sense, 'real' reason, nor any sense of mercy, and, most important, lack the option to disobey illegal commands. With the fast-spreading use of drones et al., we are allegedly a long way from Asimov's famous Three Laws of Robotics being implanted in autonomous fighting machines, or in any (semi-)autonomous robot. A 'Stop the Killer Robots' campaign will also be launched in April at the British House of Commons; it includes many of the groups that successfully campaigned for international action against cluster bombs and landmines. They hope to get a similar global treaty against autonomous weapons. The Guardian has more about this, including quotes from well-known robotics researcher Noel Sharkey of Sheffield University."
  • Recommended Reading (Score:5, Interesting)

    by smpoole7 ( 1467717 ) on Monday February 25, 2013 @10:01AM (#43002113) Homepage

    Fred Saberhagen's "Berserker" series.

    Aside from touching on the subject at hand, it's just some crackin' good sci-fi. :)

    I don't know if we'd ever reach that point ourselves, but in that series, an unknown (and now extinct) alien race, losing a war and desperate, created "doomsday" machines that were simply programmed to kill all life. They were self-replicating, self-aware AIs that took their task seriously, too.

    Then again, I ask myself what some jihadist might do, if given half the chance...

  • Re:I want that! (Score:4, Interesting)

    by jsepeta ( 412566 ) on Monday February 25, 2013 @10:13AM (#43002215) Homepage
    In the 1890s, Tesla staged naval battles in Madison Square Garden where remote-controlled boats did battle against each other. His goal was to have robots fighting in wars as our proxies, so men wouldn't have to die. But eventually, it will be man vs. machine, Terminator-style.
  • by fuzzyfuzzyfungus ( 1223518 ) on Monday February 25, 2013 @10:15AM (#43002225) Journal

    Yes and no: especially sophisticated autonomous robots, either self-driving vehicles or biomimetic killbots of some sort, are sci-fi stuff; but land mines ('That's "bi-state autonomous area-denial agent," sir, to you, cripple!') and more sophisticated devices like the Mark 60 CAPTOR are autonomous killer robots.

    And, so far, they've proven deeply unpopular in bleeding-heart circles. The fancier naval and anti-vehicle mines are still on the table; but the classic land mine enjoys a sense of ethical distaste only slightly less than just hacking off children's limbs yourself...

  • by RobinH ( 124750 ) on Monday February 25, 2013 @10:16AM (#43002239) Homepage
    If you think about a virus for a second, it's the same thing. You can't reason with a virus. It doesn't make moral decisions. It just does what its DNA programs it to do, and it's even more dangerous because it's self-replicating. We need to deal with autonomous robots the same way we deal with bio-warfare.
  • Total Garbage. (Score:5, Interesting)

    by inhuman_4 ( 1294516 ) on Monday February 25, 2013 @10:27AM (#43002339)

    This article is absolute garbage. Almost everything in that Guardian article is misinformed and sensationalist.

    "Fully autonomous war machines"? Care to give an example? I've followed this stuff pretty closely in the news, on top of researching AI myself, and from what I have seen no one is working on this. Hell, we've only just started to crack autonomous vehicles. They cite the X-37 space plane, for God's sake. Everything about that is classified, so how do they know it is autonomous?

    My favourite gem has to be this one: "No one on your side might get killed, but what effect will you be having on the other side, not just in lives but in attitudes and anger?". Pretty sure that keeping your side alive while attacking your opponent has been the point of every weapon that has ever been developed.

  • by kannibal_klown ( 531544 ) on Monday February 25, 2013 @11:03AM (#43002793)

    A couple of issues.

    1) Software can be hacked... either partially or totally. Maybe just putz with the friend-or-foe logic, maybe take direct control, etc. Sure, humans can be blackmailed and extorted, but usually on an individual basis. Mass-putz with a regiment or squad and you have serious issues. Such as, perhaps, those drones protecting the US (if they ever become truly robotic).

    2) It does make war a bit more meaningless. If you aren't facing emotional losses, then there's little reason NOT to go to war. If it's not personalized... then who cares? Sure, even now we have sympathy for the other side, and protests and such... but most of the people who care do so because our brothers / sisters / sons / daughters / etc. are out there possibly dying. That helps push back on the question "should we actually GO to war with them?"

    3) There ARE concerns about self-aware armed robots. Make them too self-aware, and maybe they realize that the never-ending violent slaughter of humans is contradictory to their goal of preserving their owners' lives. In which case they take an OVERLY logical route to preserve the FUTURE "needs of the many" by doing PLOTLINE X. Sure, it sounds like bad sci-fi... but as you say, they have no emotions, only logic. Take away emotion, and we become like cattle... where they cull the herd over a few random mad-cow cases to save the majority.

  • samson (Score:4, Interesting)

    by nten ( 709128 ) on Monday February 25, 2013 @11:06AM (#43002833)

    These turrets count, I think. Israel has at times said they are keeping a man in the loop, but the technology doesn't require it, and at times they have said they are in "see-shoot" mode. This is essentially indiscriminate area denial that is easier to turn off than mines. It does have the computer-vision and targeting aspects of a killer robot, just not the path-finding and obstacle-avoidance parts.

  • by TheLink ( 130905 ) on Monday February 25, 2013 @11:36AM (#43003273) Journal

    It is a red herring. The problem is not robots in war. There's no big difference between using robots and drones in wars and using cruise missiles, and only a slight difference between that and using soldiers who have been conditioned/brainwashed to follow orders unquestioningly.

    The real problem is the ease of starting wars that only benefit a very few people. Hence my proposal:

    In the old days kings used to lead their soldiers into battle. In modern times this is impractical and counterproductive.

    But you can still have leaders lead the frontline in spirit.

    Basically, if leaders are going to send troops on an _offensive_ war/battle (not defensive war) there must be a referendum on the war.

    If there are not enough votes for the war, those leaders get put on death row.

    At a convenient time later, a referendum is held to redeem each leader. Leaders who do not get enough votes get executed. For example, if too many people stay at home and don't bother voting, the leaders get executed.

    If it turns out later that the war was justified, a fancy ceremony is held, the executed leaders are awarded a Purple Heart or equivalent, and people say nice things about them, cry, and that sort of thing.

    If it turns out later that the leaders tricked the voters, a referendum can be held (you'd need enough signatories to start one, just to prevent nutters from wasting everyone else's time).

    This proposal has many advantages:
    1) Even leaders who don't really care about those "young soldiers on the battlefield" will not consider starting a war lightly.
    2) The soldiers will know that the leaders want a war enough to risk their own lives for it.
    3) The soldiers will know that X% of the population want the war.
    4) Those being attacked will know that X% of the attackers believe in the war - so they want a war, they get a war - for sufficiently high X, collateral damage becomes insignificant. They might even be justified in using WMD and other otherwise dubious tactics. If > 90% of the country attacking you want to kill you and your families, what is so wrong about you using WMD as long as it does not affect neighbouring countries?
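    The two-referendum procedure above is concrete enough to sketch as code. A minimal Python sketch follows; the function names and the 50% thresholds are hypothetical illustrations, not anything specified in the comment itself:

    ```python
    # Sketch of the proposal above: a referendum before any offensive war,
    # and a later redemption referendum for leaders put on death row.
    # All names and thresholds here are hypothetical illustrations.

    def war_referendum(votes_for: int, eligible_voters: int,
                       threshold: float = 0.5) -> str:
        """Leaders proposing an offensive war face a referendum first.

        If the 'for' votes don't clear the threshold of all eligible
        voters, the leaders go to death row instead of to war.
        """
        if votes_for / eligible_voters > threshold:
            return "war"
        return "death row"

    def redemption_referendum(votes_to_redeem: int, eligible_voters: int,
                              threshold: float = 0.5) -> str:
        """Later vote to redeem leaders already on death row.

        Abstentions count against the leaders: if too many voters
        stay home, the leaders are executed.
        """
        if votes_to_redeem / eligible_voters > threshold:
            return "redeemed"
        return "executed"

    # Example: only 40 of 100 eligible voters back the war, so the
    # leaders end up on death row rather than getting their war.
    print(war_referendum(votes_for=40, eligible_voters=100))  # death row
    ```

    Note that measuring votes against all eligible voters, rather than votes cast, is what makes abstention count against the leaders, per point (1) of the proposal.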

    I think if this was implemented it would be much better than banning robots. I'm biased of course ;).
