Robotics / The Military

The Struggle To Ban Killer Robots

Lasrick (2629253) writes "The Campaign to Stop Killer Robots is a year old; the same month it was founded, the UN's special rapporteur on extrajudicial, summary or arbitrary executions called for a moratorium on the development and deployment of autonomous lethal weapons while a special commission considered the issue. The campaign is succeeding at bringing attention to the issue, but it may already be too late: if governments don't come to a common understanding of what the problems and solutions are, the movement is doomed. As this article points out, one of the most contentious issues is the question of what constitutes an autonomous weapons system: 'Setting the threshold of autonomy is going to involve significant debate, because machine decision-making exists on a continuum.' Another equally important issue, of course, is whether a ban is realistic."
This discussion has been archived. No new comments can be posted.

  • ...easier to stop them if they turn on us. Also, give them a 3-foot cord.
     
    -Dwight Schrute

  • I am pretty sure that Skynet will nip this ban effort in the bud.

  • seen 'em (Score:4, Funny)

    by lophophore ( 4087 ) on Thursday May 08, 2014 @05:49PM (#46954457) Homepage

    I saw the Killer Robots. They opened for the B-52s at the House of Blues in Orlando.

    They were... interesting. Why does the UN want to ban them? I've seen many worse bands.

• Hardly a quality statement. Just as a slap in the face doesn't seem so bad once you've been kicked in the groin, no opening band seems too bad when you have to endure the B-52s afterwards...

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Thursday May 08, 2014 @05:50PM (#46954465)
    Comment removed based on user account deletion
    • One might argue that the "cost effective" part is the sticking point. The more cost-effective the mayhem, and the less chance of constituents' sons and daughters being at risk, the easier it is to make a decision to use aggression. Cost-effective, none of our people get hurt: win!

      Of course, there's a flaw in the argument, but I don't expect the average politician to see it.

      • Comment removed based on user account deletion
      • It is very cost-effective to bomb the fuck out of your enemies with huge nukes that take out entire cities at once (the Hiroshima and Nagasaki bombs had tiny explosive yields compared to presently available weapons, or to what was available before recent dismantlement). During the Cold War a lot of effort went into these cost-effective weapons; the nuclear arms buildup reached the point where some people said we had enough nukes to erase all life on Earth seven times over. Now that might be an
      • Automated armies are best used against one's own citizens. A normal army will not be ruthless in crushing a homeland rebellion, because the people in the army come from the same group as the people in the revolution; this can cause a conflict of feelings in a group of soldiers putting down a revolt. Robots have no problem with a "police action" against the citizens of their own country. The Romans did basically the same thing by absorbing conquered armies and then sending them to other regions where they would
    • Re:Too late. (Score:5, Interesting)

      by Opportunist ( 166417 ) on Thursday May 08, 2014 @06:37PM (#46954791)

      The very LAST thing you want is a cheap war, at least if you value peace a little. If war is cheap, what's keeping you from waging it with impunity when you have the strongest army on the planet?

      Quite seriously, the only thing that keeps the US from simply browbeating into submission everyone who doesn't want to play by its rules is that it's a bit too expensive to wage war against the rest of the world.

      • I thought the Americans' problem was they had not yet figured out "we are your friends" and "we're invading your country" are largely incompatible concepts.

      • "It is well that war is so terrible, otherwise we should grow too fond of it" - Robert E. Lee
    • I just read up on Asimov's rules of robotics. I had read them before, but didn't remember them until rereading. They are very good rules, but laughable and naive. They are good to have when you absolutely must design an AI: they are the basic principles you want to program into the ROM BIOS of the robot. Such situations may arise in many circumstances; for instance, Asimov's rules were erected 13.8 billion years after the creation of the Universe (today is assumed to be 13.8 billion years from the
      • There are a lot of circumstances where you have to weigh injuring, or at least offending, one human being against protecting two human beings from injury or offense: one versus two. It's really hard to apply algebra to ethics. For instance, Pontius Pilate's mistake was to uphold the motto, the guiding principle, "give the people what they want": take over only the external politics of a conquered city, but do not interfere in its internal affairs; so, in view of the whole city requ
        • Should we go ahead and genetically modify the wasp species so it no longer does such a thing? Or can we leave other lifeforms alone and focus on human beings only? Can we even judge other cultures and modify them to what we think is right, as opposed to what they think is right? Should we just allow all kinds of moral behaviors to roam freely? Sometimes I feel like I'm living on a reservation where moral behaviors are allowed to roam freely, with people that come here talking about, hey, imagine there are even st
        • Existentialism is deeply frowned upon by religions, which claim there is a correct morality, and that it's their version that's correct. But it can serve as a warning that Asimov's rules regarding the ethics of a robot may lead to cases where it's impossible to apply those rules; and if the robot can get into situations where it is not forced to apply the rules, then what's holding it back in situations where it should apply them? If it thinks about it, it may abandon the rules altogether i
      • Those are very good rules, but laughable and naive.

        You kind of contradict yourself there. While I initially liked the idea of the 3 laws, problems quickly came up even within Asimov's books. Even in the books it's noted that fulfilling the 3 laws actually took up the MAJORITY of the 'brains' of all three-laws-compliant AIs. The cost to implement the 'laws' was, and would be, enormous.

        I mean, consider the 'through inaction' clause. That means that every robot has to be constantly on the lookout for a human that might be about to be injured, to the limit

    • From my essay: http://www.pdfernhout.net/reco... [pdfernhout.net]
      ====
      Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?

      Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not just use advanced materials as found in nuclear missiles to make renewable energy sources (like windmills or solar panels) to replace oil, or

    • by LWATCDR ( 28044 )

      Frankly, killer robots have been around for at least a century.
      Torpedoes, sea mines, and land mines. Sure, the kill logic started off simple: kill what steps on me, kill the ship that bumps into me, kill what I run into.
      By WWII, sea mines could "decide" to blow up based on the size of the ship passing over them. Torpedoes could find their targets based on the sound they made. And some landmines would kill tanks and trucks but not the men who walked over them.
      By the 70s you had guided missiles of all kinds, an

  • Okay, I'll admit... (Score:2, Interesting)

    by mrxak ( 727974 )

    Okay, I'll admit, when I read the first sentence of TFS, I figured this was some kind of joke campaign or something. I guess my mind is too much in science fiction, and I hadn't really noticed that the future is already here.

    Still, do we really think the governments of the world (at least the ones with the resources to build these robots) are actually going to go for fully autonomous killing machines? I would think all of them would want humans in the loop, if for no other reason than to justify their military

    • > not really noticing that the future is already here

      We should put this on a t-shirt so we don't forget it. The future? The good parts (flying cars, colonies on other planets) are still a long way off. The bad parts (surveillance state, punishment for potential crimes, autonomous robot weapons) are already here. Also (from another article) artificially created alien organisms. (Because in SF, that always ends well...)

    • Yeah. An easily portable automated kill-zone barrier. I see no reason why a general might want one of those. After all, minefields were just a fad. This works just about as well for a "if you step here we will kill you" sort of thing. Plus, no muss, no fuss cleanup. Just disarm the thing and pack up.

      • by mrxak ( 727974 )

        Well, okay, true. I know the military wants those sorts of systems to replace minefields. They don't leave any explosives in the ground after the war is over, and they can be smart enough to choose a weapon system based on the threat (tank, launch an armor-piercing missile, squad of soldiers, launch a fragmentation bomb).

        Still, that's a lot different than say, some kind of mobile automated killing machine.

        • Well, okay, true. I know the military wants those sorts of systems to replace minefields. They don't leave any explosives in the ground after the war is over, and they can be smart enough to choose a weapon system based on the threat (tank, launch an armor-piercing missile, squad of soldiers, launch a fragmentation bomb).

          Still, that's a lot different than say, some kind of mobile automated killing machine.

          How is a machine that automatically kills things not an automated killing machine?

          This is like real-wor

    • Could some of the people arguing for this ban please explain the difference between being on a ship during WWII that was hit by a kamikaze and being on a ship during the Falklands war that was hit by an Exocet? Somehow, being killed is being killed, regardless of whether there was a human pilot or an autonomous robot flying the lethal projectile.

      • What they are trying to address is the decision to release the weapon, i.e. whether that decision is made by a human or a non-human. After that point, automated guidance is a non-issue; it's been around for 60 years and thus does not pose an ethical question (a 2000lb laser-guided bomb taking out a bridge is better than 100 B-17s dropping 50 tonnes of bombs to drop the same bridge; the automated guidance of the LGB means much less collateral damage than area bombing).

        At the moment the point to which

        • by bigpat ( 158134 )

          Take a heat-seeking missile, for instance. It is designed to "decide" to blow up something that matches a certain heat signature. Or a radar-guided missile: it is designed to track, follow, and destroy something that matches a certain radar profile. There is no meaningful technical or ethical difference between firing such a missile and turning on a ground or air robot that is designed to destroy something or someone matching some sort of profile. You are "releasing" the weapon when you turn the robo

        • I see. It's better to have a human decide to bomb a Guernica, Rotterdam, Coventry, Dresden, Hiroshima, Nagasaki, etc. than it is to have cold, soulless, purely analytical robot "decide" whether or not to release lethal force based on some programmed criteria. I'm glad you clarified that for me.

          Cheers,
          Dave

  • Machine logic (Score:5, Insightful)

    by Firethorn ( 177587 ) on Thursday May 08, 2014 @05:55PM (#46954481) Homepage Journal

    because machine decision-making exists on a continuum.'

    No kidding. Depending on how you define it, a cruise missile could be considered a one-use killer robot. It executes its program as set at launch.

    Now consider making it more sophisticated. We provide it with some criteria to apply against its sensors when it reaches the target location: if criterion A is met, dive and explode on the target; if B, pull up and detonate more or less harmlessly in the air. If neither criterion is met, it depends on whether the weapon is set to fail safe or fail deadly.

    This is mixed: on the one hand, properly programmed, it can reduce innocent casualties; on the other, it encourages firing missiles on shakier intelligence. But then again, Predators armed with Hellfires are a heck of a lot more selective than WWII gravity bombs. As long as you presume that at least some violence/warfare can be justified, you have to consider these things.

    On the whole, I like weapons being more selective, tends to cut down on civilian casualties, but I think that it's a topic more deserving of careful scrutiny than a reflexive ban.
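    To make the continuum concrete, here is a minimal Python sketch of the terminal-engagement logic described above. Everything in it (the criteria, the fail-deadly flag, the names) is a hypothetical illustration of the idea, not any real weapon's interface:

        from enum import Enum

        class Action(Enum):
            ENGAGE = "dive and detonate on the target"
            ABORT = "pull up and detonate harmlessly in the air"

        def terminal_decision(sensor_picture, engage_criterion, abort_criterion,
                              fail_deadly=False):
            """Decide what to do on arrival at the target location.

            engage_criterion and abort_criterion are predicates applied to
            the sensor picture; if neither matches, fall back to the
            fail-safe or fail-deadly setting chosen at launch.
            """
            if engage_criterion(sensor_picture):
                return Action.ENGAGE
            if abort_criterion(sensor_picture):
                return Action.ABORT
            return Action.ENGAGE if fail_deadly else Action.ABORT

        # Example: engage only on the briefed profile, abort on civilians.
        decision = terminal_decision(
            {"profile": "armored_vehicle"},
            engage_criterion=lambda s: s["profile"] == "armored_vehicle",
            abort_criterion=lambda s: s["profile"] == "civilian_vehicle",
        )

    The point of the sketch is how little separates this from a "dumb" weapon: the autonomy lives entirely in how rich the criteria are allowed to get.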

    • This strikes me as a false dichotomy. Nobody is going to launch a million-dollar bullet (smart missile) and then tell it to self-destruct. Until smart bullets drop enormously in cost, this scenario is infeasible.

      Assuming the cost of a smart bullet does fall, the initial authorization to fire is still a decision to kill. The fact that something or someone might later reverse the decision does not mean the initial choice to launch was not a kill decision.

      The goal of this controversy is that no machine should ever h

      • Nobody is going to launch a million-dollar bullet (smart missile) and then tell it to self-destruct.

        Current US Tomahawk Tactical Cruise Missile cost, per unit: $1.45 million.

        You were saying?

        • Why is the cost of one of today's (dumb) Tomahawks relevant? It can't order itself to self-destruct. And I can't believe any have ever been ordered (by a human) to self-destruct without *somebody* being busted several ranks.

          What's more, a fully autonomous Tomahawk is going to cost a good deal more than $1.45 million. Nobody below the rank of colonel is going to pop that cork, and certainly not the missile itself.

          No. That scenario still misfires.

      • Nobody is going to launch a million-dollar bullet (smart missile) and then tell it to self-destruct.

        You'd be surprised. To a combatant commander, a million bucks is nothing. It all depends on the tactical circumstances.

        Worst case you make the abort recoverable.

        Heck, what do you think of an AI-type interlock system? Both the machine logic AND a human have to decide that firing is appropriate. Done right, it *should* cut down on mistakes.

        BTW, I'm figuring on having this on 'big boom' weapons, not small arms.

        The goal of this controversy is that no machine should ever have the authority to issue the *first* kill command. That responsibility should always lie with a human. With that, I concur.

        Agreed. Sort of like how casualties, on either side, are on the president's head if he orders troops in
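        A sketch of that interlock in Python, purely to illustrate the AND-gate idea (the threshold and all names are made up, not any real fire-control interface):

            def fire_authorized(machine_confidence: float, human_consent: bool,
                                threshold: float = 0.95) -> bool:
                """Two-key interlock: release only when the targeting logic
                is confident AND a human has independently authorized the
                shot. Neither party alone can fire; either alone can veto."""
                return human_consent and machine_confidence >= threshold

        Requiring both keys means a mistake needs the model and the operator to be wrong at the same time, which is exactly the error reduction being suggested.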

    • On the whole, I like weapons being more selective, tends to cut down on civilian casualties, but I think that it's a topic more deserving of careful scrutiny than a reflexive ban.

      The problem now is that's pretty much who is doing the fighting; there is no Talabanistan or United Al-Qaedian Emirates. Look at the misery the drug cartels and gangs bring to Latin American countries like El Salvador, Honduras, Mexico, and California. Even in Ukraine it's mostly pro-Russian civilian militias and a cadre of Russian Spetsnaz.
      In the old days, any combatant who was un-uniformed or undocumented was a spy and summarily executed, and anyone who became collateral damage was assumed to have been harboring them anyway

    • On the whole, I like weapons being more selective, tends to cut down on civilian casualties, but I think that it's a topic more deserving of careful scrutiny than a reflexive ban.

      Such as a weapon that can think for itself, like this?

      https://www.youtube.com/watch?... [youtube.com]

    • We already have weapons that make the decisions you suggest - the European StormShadow cruise missile for example, or the British ALARM anti-radar missile (launch it in standoff mode, it climbs to a given height and then deploys a parachute and waits until it can see a ground based radar, at which point it releases the parachute and kills the radar).

  • Looks like someone was curious about the protestors in the new season of "24", and started Googling!
  • I gotta say, this whole thing seems a little ridiculous. Unlike in Hollywood, any such weapon would be severely limited by its power source (batteries or burning hydrocarbons) and by limited ammunition. I'd also like to point out that there are numerous ways to disrupt robots, such as EMPs and strong magnets.

    Besides, I'm looking forward to the giant robot spiders that sound like children.

    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Thursday May 08, 2014 @06:14PM (#46954623)
      Comment removed based on user account deletion
      • by cusco ( 717999 )

        It's not going to matter one bit; someone in charge of a Black Budget in the Pentagon is going to think it's a good idea. Remember what the Pentagon did when Commander-in-Chief President Clinton directly ordered the military to stop all work on bio-weapons? Renamed the project, moved it to the Black Budget, and didn't even skip a beat.

      • A mine in the earth or at sea is an autonomous weapon on one possible definition. So is a proximity triggered automatic rifle, as used on the Berlin Wall. The ship has sailed; the question is what parameters can be introduced.
    • Well you could make a robot that is powered by drinking the blood of its enemies.

      But honestly, if I were making a killer robot, I would probably just make it so that it could plug itself into outlets or just grab power lines if it were running low.

    • You can use all the killer robots you want, but it ain't over until there are boots on the ground.

  • Just kidding, it's a pile of shit.

  • Unfortunately, no. (Score:5, Interesting)

    by timeOday ( 582209 ) on Thursday May 08, 2014 @06:11PM (#46954601)
    There are at least 3 different levels of problems here:

    1) Does this even make sense: No. Autonomy is not well-defined. Does a thermostat make "decisions"? etc.

    2) Assuming it makes sense, is it a good idea: No. Firing a cruise missile at a target is better than firing a huge barrage of mortars towards a target, for everybody involved. Any smarter version of a landmine would be better than the current ones that "decide" to blow up whatever touches them 20 years after the war is over.

    3) Assuming it's a good idea, can it be implemented: No. Arms races are often bad for everybody involved. Everybody involved knows this. And yet that universal realization does not provide a way out. Everybody knows that if they don't build these weapons, the other side might well do so anyway.

    • 1) Yes. The decision to fire the weapon and authorize lethal force is discrete and binary. That is indeed well defined. By launching it, arming it, and ordering it to engage the "enemy" you have made the decision to kill. Any human private who kills without prior authorization to engage is in violation of the rules of combat. Authorizing him/her to kill *is* the issue here.

      2) ??? The technique of projecting force is irrelevant. It's the *authorization* of autonomous dispatch of lethal force that's

  • Don't have them.

    First: if the concern is really about automated killing, then we have to establish the following:
    No object capable of generating enough kinetic energy to kill a human may be directly interfaced with electronic circuitry.

    But that would include cars and all kinds of machinery. So the rule above would be 95% insurance that AIs would not be able to kill humans. The other 5% accounts for an AI self-destructing in order to short-circuit and generate enough electromagnetic current to electrocute

  • Selective, efficient killer robots only make sense in the context of limited skirmishes and small wars. For the really BIG wars, killer robots would be horribly inefficient, because the point of big wars is to eliminate as much of your enemy as possible, civilians included. Both the Axis and the Allies were actively involved in targeting each other's civilian populations via total war. In that regard, there isn't anything much cheaper, more effective, or more cost-efficient than nuclear-tipped ICBMs
  • Thou shalt not make a machine in the likeness of the human mind. Done.
  • The consensus around here is that autonomously-driven cars will inevitably establish a better safety record than human-driven cars. I.e., robotic systems will on the whole make better, less-reckless decisions than human drivers.

    A good case could be made that autonomous military systems will likewise make better decisions than fatigued and/or panicky young soldiers.

    Current military tools and techniques certainly result in fewer friendly-fire incidents, collateral damage, etc. than were experienced during WW

  • You know, the Kellogg-Briand Pact to outlaw war [state.gov]. Signed in 1928.

    Didn't work out so well.

    And even if it were signed by a significant number of nations, we could be sure the non-democratic ones would be violating the ban before the ink was even dry.

    Unenforceable treaties are actually worse than worthless: they constrain good actors without deterring bad ones.

    • by bigpat ( 158134 )

      Unenforceable treaties are actually worse than worthless: they constrain good actors without deterring bad ones.

      If I hadn't already commented, then I would mod you up. But the counterpoint is that there still could be some deterrent effect and that deterring good actors will at least let you tell the difference... but I don't buy that argument either. Ultimately it is about who will be charged with a war crime by whichever side wins or how to come up with rules that most people can follow.

      In this case I don't think it is the technology that can or should be banned, but the use case of just indiscriminately unlea

  • by jlowery ( 47102 ) on Thursday May 08, 2014 @06:25PM (#46954701)

    A robot is going to react (or will eventually be able to react) much faster to a threat or other adverse conditions than a human can. If you've got a hypersonic missile heading toward a carrier, are you going to put a human in the loop? Nope.

    There are simply going to be many, many situations where a robot will neutralize a threat faster than a human can, and those situations will multiply when fighting against another autonomous army.

    Is this a good thing? No; it's like atomic weapons. We're heading toward another arms race that will lead us to the brink or over it. We barely survived the MAD era.

    • HUMAN OODA LOOP:
      1. Orient
      2. Observe
      BOOOOOM!!!!

    • by rgbscan ( 321794 )

      This was basically the premise of the book "Kill Decision". A shadowy government/private-contractor apparatus launches a series of attacks on America specifically to get the American public to buy into the logic you've suggested. Dreams of new defense-spending contracts spurred on "The Activity", which was widely supported. Of course, our hero puts a stop to it... but for how long??

  • But I wanted to make killer robots! Now what am I going to do with this libKillerRobot I was working on?!
  • As with all new weaponry, all the countries that don't have it and can't get it panic and agree that it's a horrible idea. They pass UN resolutions banning it, etc.; all the countries that do have it refuse to sign, and so nothing changes, other than that the countries without it start accusing those with it of war crimes and of flouting international law, which they rarely recognize anyway. When some of the countries that signed the ban finally get enough money/science to get the tech, they of course d

  • Skynet and The Terminator are definitely coming. But what about the economics of machine intelligence? This article makes an interesting case: http://hanson.gmu.edu/aigrow.p... [gmu.edu]

  • If the "killer robots" tried to take over the world today they would fail quickly, XKCD seems to have explained why already.
    https://what-if.xkcd.com/5/
  • by BenSchuarmer ( 922752 ) on Thursday May 08, 2014 @07:18PM (#46955035)
    When killer robots are outlawed, only super criminals will have killer robots.
  • ... dictates the answer. Reasoning strictly inside the box that creates: if you then propose that a robot can use its own judgment for everything except firing a weapon, you'll be criticized for hitting the edge of the box and not allowing it to be actually autonomous.

    In fact, the question isn't "how autonomous", it's "autonomous or not".

  • If it chooses what target to select and makes the call on whether to attack the target, it is autonomous.

    If a human chooses the target and makes the strike call, the machine is not autonomous.

    Complete no-brainer.
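    That test reduces to a single predicate. A sketch (with the caveat that deciding who really "chose" the target is the hard part in practice):

        def is_autonomous(machine_picks_target: bool,
                          machine_makes_strike_call: bool) -> bool:
            # Autonomous only if the machine both selects the target and
            # decides to attack it; a human doing either step keeps a
            # human in the loop.
            return machine_picks_target and machine_makes_strike_call

    The mixed cases (machine picks, human approves, or the reverse) are exactly where the "continuum" objection upthread starts to bite.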

  • Robocop, the ultimate law enforcement officer!
  • You simply present them with a paradox, and they'll melt down or blow up trying to solve it. I saw Captain Kirk do it once.

  • Old Glory Insurance. "For when the metal ones decide to come for you. And they will."

    https://screen.yahoo.com/old-g... [yahoo.com]

  • Yeah, let's ban killer robots. Better to let humans do the killing. I'm sure they have a much better track record at discriminating hostiles from innocent civilians.
    After the war, when we bring our killer heroes back home to rejoin their families, everything will be just dandy. Because after daddy has shot three Extremistanis in the face and seen his buddy's leg torn off by an IED, the first thing he'll want to do is hug his little girl and tell her he loves her.
    Killer robots would just be so immoral.

  • You can bet that China will protest martial robots. After all, when it comes to flesh-and-blood soldiers, China has a huge advantage due to their excessive population levels. But with dedication and planning, smaller nations like Norway or Switzerland could invest heavily in reserves of very potent martial robots capable of resisting invasion by much larger nations. Think about it. Russia is doing an expansion right now. If Ukraine and others had a few thousand really good nuclear-equipped cruise m
  • Because an army of robots is less likely to rape civilians after taking over and occupying a city. As a result there's actually less collateral damage.

  • Start developing Robots that are "3 Laws Safe," before you wish you had.
    • by smithmc ( 451373 ) *

      Start developing Robots that are "3 Laws Safe," before you wish you had.

      Nice thought, but I'm not sure how you'd do such a thing. The laws would have to be coded in software, and software can be changed...?

      • You've got to start somewhere. Maybe use AIML to evaluate the different ways physical harm could come to someone?
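        As a toy illustration of the "software can be changed" worry (every name here is hypothetical):

            def first_law_guard(action, predicted_harm):
                """Veto any action predicted to harm a human.

                The catch: this guard is ordinary code. Anyone who can
                re-flash the firmware can patch it out, which is why
                "coded in software" is a weak guarantee on its own."""
                if predicted_harm(action) > 0:
                    raise PermissionError("First Law: action vetoed")
                return action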
  • ...but not till I've got one of my own!
  • Once they ban killer robots, they'll work on taking away your Roomba and the self-autoclaving toilet-washing robots, so you have to clean your own floors and, worse yet, your own toilets.

"An idealist is one who, on noticing that a rose smells better than a cabbage, concludes that it will also make better soup." - H.L. Mencken

Working...