Robotics Prof Fears Rise of Military Robots

An anonymous reader writes "Interesting video interview on silicon.com with Sheffield University's Noel Sharkey, professor of AI & robotics. The white-haired prof talks state-of-the-robot-nation — discussing the most impressive robots currently clanking about on two legs (hello Asimo) and who's doing the most interesting things in UK robotics research (something involving crickets, apparently). He also voices concerns about military use of robots — suggesting it won't be long before armies are sending out fully autonomous killing machines."

  • by Anonymous Coward on Thursday January 14, 2010 @11:47PM (#30775042)

    http://en.wikipedia.org/wiki/The_Secret_War_of_Lisa_Simpson [wikipedia.org]

    "The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots."

  • by Anonymous Coward on Friday January 15, 2010 @12:21AM (#30775228)

    "He also voices concerns about military use of robots — suggesting it won't be long before armies are sending out fully autonomous killing machines."

    This Gloomy Gus overlooks the obvious. These "fully autonomous killing machines" - let's call them, oh I don't know, "killbots" - will almost certainly have a preset kill limit. So right there we'll have an easy way to stop them!

    Hell yeah! The next time we need our military to go blow the shit out of a little nation of brown people that is no threat to us and has no WMDs, at least we don't have to put our own troops into harm's way.

  • Re:"Friendly AI" (Score:2, Insightful)

    by Ethanol-fueled ( 1125189 ) * on Friday January 15, 2010 @12:21AM (#30775232) Homepage Journal
    That depends on whether they started using steroids before or after they joined Blackwater.
  • by Anonymous Coward on Friday January 15, 2010 @12:37AM (#30775322)
    Yeah, like our current crop of soldiers has any problem raping and killing women and children. Or maybe that's why so many of them are committing suicide when they come home.
  • Re:"Friendly AI" (Score:5, Insightful)

    by jollyreaper ( 513215 ) on Friday January 15, 2010 @12:56AM (#30775422)

    This is one of the things that makes me think the concern about "friendly AI" is blown out of proportion. The problem isn't making sure the AIs are "friendly" -- it's making sure the NI (natural intelligence) owners of the AIs are "friendly".
    If half the effort spent on "friendly AI" were spent on examining the ownership of AIs, there might be some hope.

    That's just it -- human nature never changes. The general can order genocide, but it's up to the soldiers to carry it out. The My Lai Massacre was stopped by a helicopter pilot who put his bird between the civilians and the advancing troops, and "told his crew that if the U.S. soldiers shot at the Vietnamese while he was trying to get them out of the bunker that they were to open fire at these soldiers."

    http://en.wikipedia.org/wiki/My_Lai_Massacre [wikipedia.org]

    Robots aren't really the issue -- distancing humans from killing is the problem. Not many of us could kill another human being with our bare hands. A knife might make the task easier in the doing but does nothing to ease the psychological horror of it. Guns let you do it at a distance. You don't even have to touch the guy. And buttons make it easier still. It's like you're not even responsible. You could convince young men to fly bombers over enemy cities and rain down incendiaries but I don't think you could convince many of them to kill even one of those civilians with a gun, let alone a knife.

    This is the strange distinction we make where we find one form of killing a horrible thing, a war crime, terrorism, and another form of killing a regrettable accident with no blame to be assigned. A suicide bomber walks into a pizzeria and blows himself up, we lose our minds. An Air Force bomber drops an LGB into a bunker filled with civilians instead of top brass, shit happens. We honestly believe there's a distinction between the two. "Americans didn't set out to kill civilians," war hawks will huff. Yes, but they're still dead, aren't they?

    Combat robots are simply continuing this process. Right now there is still a man in the loop to order the attack. Hamas kills Israeli targets with suicide bombs; Israelis deliver high explosives via missile into apartment blocks filled with civilians. They're using American-manufactured anti-tank missiles. I think they're still using TOW. Predator drone operators are sitting in the continental US while Israeli pilots are a few miles from the target inside their choppers, but really, what's the difference? And what happens when drones are given the authority to engage targets on their own? A soldier with a gun can at least see what he's shooting at. Those in the artillery corps are firing their shells off into the unseen distance and have no idea who they're killing. Not that much different from laying land mines: indiscriminate killing. Psychologically it's no different from setting a robot to patrol mode, fire-at-will.

    If one extrapolates a little further, the problem of the droid army is similar to that of the tradition of unpopular leaders using corps of foreign mercenaries to protect them from the wrath of the people. The mercenaries did not speak the language, did not know the customs, and were counted as immune to palace intrigues. They could be used against the people, for they would not feel the sympathy for fellow countrymen that a native force might feel. What are droids being used for? Only the people operating them could say for sure. Welcome to the age of the push-button assassination.

  • by Mr. Freeman ( 933986 ) on Friday January 15, 2010 @01:00AM (#30775438)
    "Humans will always be better than machines at killing humans (unfortunately), machines can only simulate our thinking..."

    I disagree. What robots lack in creativity they make up for in the ability to withstand orders of magnitude more damage than humans. I mean, blow a robot's leg clean off and its weapon systems still work. It doesn't pass out from blood loss or pain. Put a few bullets through it and chances are it's still going to be up and running. No human can do that.

    They won't be creative, but everything is going to be directed by human commanders located in a semi-remote facility, so it's a non-issue. Any new threat will be adapted to by the humans controlling the robots.

    Furthermore, humans need to be creative to avoid getting killed. That really isn't an issue with robots. One dead soldier is a very bad thing; 50 dead robots isn't good, but no one is going to lose any sleep over it. If you kill half of a human squad, they're probably not going to advance any further. Wipe out half a fleet of robotic killing machines and they'll keep marching right on in.
  • Re:"Friendly AI" (Score:5, Insightful)

    by Daniel Dvorkin ( 106857 ) * on Friday January 15, 2010 @01:37AM (#30775632) Homepage Journal

    Of course, for thousands of years of recorded history, people did kill each other en masse at arm's length. Alexander's soldiers may have been more honest about what they were doing than somebody today sitting in a bunker pressing a button and killing people on the other side of the globe, but they were no less bloodthirsty. So I don't think you can blame the modern willingness to kill on the impartiality created by modern military technology, because the modern willingness to kill looks remarkably like the ancient willingness to kill, just with different tools.

    OTOH, I agree with you completely about the absurdity of calling some methods of killing heroic and others evil. Dead is dead.

  • Re:"Friendly AI" (Score:3, Insightful)

    by S77IM ( 1371931 ) on Friday January 15, 2010 @01:42AM (#30775648)

    Shouldn't this story have an "ED-209" tag?

    I agree with you that distancing humans from killing is a big problem. We have that problem now with cruise missiles, cluster bombs, nuke-from-orbit, etc.

    But accidental death from robots run amok is not a pleasant thought either. The whole point of an AUTOMATED system is that it runs without a human driving it. This creates a possibility -- however slim -- that the system starts killing people without permission.

    It sucks that we kill each other deliberately. Let's not create more opportunities for accidents.

      -- 77IM, "Guns don't kill people, robot guns kill people."

  • Re:"Friendly AI" (Score:3, Insightful)

    by stdarg ( 456557 ) on Friday January 15, 2010 @01:46AM (#30775664)

    We honestly believe there's a distinction between the two. "Americans didn't set out to kill civilians" war hawks will huff. Yes, but they're still dead, aren't they?

    Are you serious? So to take a personal example, say somebody murdered your mother. How would you want that person punished? Many people would call for the death penalty. Now what if someone killed your mother completely by accident... say your mom ran a red light and got hit by someone. She's still dead, isn't she?

  • Re:"Friendly AI" (Score:4, Insightful)

    by AJWM ( 19027 ) on Friday January 15, 2010 @01:46AM (#30775666) Homepage

    Heck, for thousands of years people have been killing each other with autonomous -- although not intelligent -- devices. The projectile from a trebuchet or ballista can't be recalled or turned off once it's on its way. And the destructive force of long range munitions has only gotten greater since.

    To the extent that battlefield robots can do a better job of telling combatants from non-combatants than lobbed rocks or bombs can, so much the better.

    Just so long as somebody has an "off" switch.

  • Re:"Friendly AI" (Score:3, Insightful)

    by joe_frisch ( 1366229 ) on Friday January 15, 2010 @02:22AM (#30775834)

    I think a lot will depend on the extent to which the robot operator is held responsible for the semi-autonomous robot's actions. If the human is completely responsible, it might make war less deadly. If the human can use the excuse "well, the automatic targeting system mistakenly identified the 5-year-old with a tricycle as an enemy robot - it's a terrible shame, we need to update the recognition system" - then you have problems.

      There is a tendency for large organizations to avoid placing blame on any particular person - so the military might tend to deflect blame from the human operator. In fact the blame IS unclear - is it the operator, or one of the possibly thousands of programmers involved in the pattern recognition algorithms in the robot?

  • Re:"Friendly AI" (Score:4, Insightful)

    by mbone ( 558574 ) on Friday January 15, 2010 @02:29AM (#30775856)

    Well, suppose your Mom was at a restaurant having dinner, and it got blown up, killing her and most of the rest of the clientele, and you learned that the restaurant was bombed without warning because a "high value target" was supposed to have been there, but wasn't. (This has happened, and it was no accident.) I assume, based on the above, that you would feel "them's the breaks." But I can assure you that many people would conclude that the people dropping the bombs don't really care whether civilians are killed or not, and you don't have to dig very deep to learn that many of the people on the receiving end of such incidents do indeed feel that the people behind the bombs deserve punishment.

  • And (Score:3, Insightful)

    by Weaselmancer ( 533834 ) on Friday January 15, 2010 @02:32AM (#30775864)

    Mechanized soldiers can be dangerous, too.
    Consider the following scenario.

    In the early morning of December 7, 2041, one million mechanized soldiers rise from the receding tide onto the shores of China. The robots march relentlessly westward, killing all Chinese soldiers in their path. The final destination is Tibet.

    Fortunately, the Chinese have had state sponsored hackers for decades now. [slashdot.org] It was a simple matter for these hardened pros to return the bots to their creators, with orders to kill.

  • What? (Score:2, Insightful)

    by Tibia1 ( 1615959 ) on Friday January 15, 2010 @04:42AM (#30776418)
    The most society changing robot on the rise is the... vacuum cleaner? Was that a joke?
  • by Mikkeles ( 698461 ) on Friday January 15, 2010 @07:47AM (#30777322)

    "EXCEPT... When perfect machines, with perfect performance, are made to perfectly resemble man - who needs man?"

    To define meaning and purpose where there are none and to set goals to fit.

  • Re:"Friendly AI" (Score:4, Insightful)

    by jollyreaper ( 513215 ) on Friday January 15, 2010 @09:20AM (#30777902)

    Of course, for thousands of years of recorded history, people did kill each other en masse at arm's length. Alexander's soldiers may have been more honest about what they were doing than somebody today sitting in a bunker pressing a button and killing people on the other side of the globe, but they were no less bloodthirsty. So I don't think you can blame the modern willingness to kill on the impartiality created by modern military technology, because the modern willingness to kill looks remarkably like the ancient willingness to kill, just with different tools.

    Part of it is cultural conditioning. People who grow up in times of war like that are more willing to do the whole rape-and-pillage thing. But just look at the problem modern armies have had conditioning soldiers to shoot to kill. The statistics come from WWI, WWII, Korea, and Vietnam. Something like one in ten soldiers was shooting for effect when his life wasn't immediately in danger. I'm not sure exactly how this was determined, but the whole kill drill done in boot camp is about breaking down that resistance until shooting becomes automatic. The studies said it reached 100% by Vietnam.

    There's a desensitization that comes with all of this, of course. Take a normal, sane, caring 18-year-old and put him in a fucked situation like Iraq. The first month in, he doesn't want to hurt civilians. After he loses his best friend to a car bomb driven by what looked like "civilians," he's willing to kill all the motherfucking motherfuckers and doesn't care about arguments of guilt or innocence. They're local, they're all guilty. Of course, there are also the guys who shoot up a car they think is running the blockade only to find out it was just a confused father with his family, and here are the kids dripping life into the street. That's gonna stick with those guys for the rest of their lives. Might even cause them to eat a bullet.

  • Re:"Friendly AI" (Score:3, Insightful)

    by Idiomatick ( 976696 ) on Friday January 15, 2010 @10:28AM (#30778596)
    We kill less now than we did back then... even with the ease of doing so that we have today. That's a good thing.
  • Re:"Friendly AI" (Score:2, Insightful)

    by Toze ( 1668155 ) on Friday January 15, 2010 @11:39AM (#30779362)

    Actually, modern willingness to kill is significantly different from ancient willingness to kill. Rates of death in combat didn't exceed 10% until the Napoleonic wars, and didn't reach 50% until the World Wars. David Grossman wrote a couple of books (On Killing and On Combat) explaining the psychological tools used to increase a soldier's willingness to kill (and his ability to avoid or recover from the severe psychological trauma caused by killing). Physical distance is precisely one of those methods, as is technological distance (button-pushing) and psychological distance (seeing the enemy as inhuman). The tendency of a nation or its troops to refer to the enemy in dehumanizing terms (kraut, hun, sand nigger) is one example of the soldier's attempt to distance himself from the awareness that he's killing another human being. Modern combat training involves a lot of methods (human-shaped targets, training instinctive reaction, training obedience to orders) meant to create a buffer between the soldier and "the enemy."

    If you read the "historical" accounts of most battles, you'll believe that 5,000,000 Persian soldiers invaded ancient Greece, and most of them died. Archeology suggests the numbers were more like 1,000,000 people at most, of which at most 100,000 were combat troops, and only 10,000 of those died before they went back home. War histories where we have each side's records of dead and wounded, and kills attributed to their own soldiers, show that most nations significantly overestimate how many people they killed. Before Napoleon, despite the bloody accounts of even medieval battles, far more people died from dysentery than from sword wounds.

    Today's soldiers are not any more bloodthirsty than Alexander's soldiers were, but they have tools that are much more effective and psychologically much easier to use. The two benefits of robot soldiers are, first, that they will reduce the number of human beings on "our side" who are put in harm's way, and second, that it will be considerably easier for someone to push the button marked "kill" if it looks more like Command & Conquer than Apocalypse Now. We can see attrition rates of 80 or 90% today because we've made it psychologically and technically easy enough to kill 1,000 people with the push of one button. The danger of nukes in the Cold War, for example, was not that nukes were destructive (though they were), but that they were easy to use. Stalin killed way more people by working them to death than died in Hiroshima -- but in Hiroshima they only had to push a button. Killer robots are a lot like that. Easy to use.

"I've seen it. It's rubbish." -- Marvin the Paranoid Android

Working...