The Struggle To Ban Killer Robots
Lasrick (2629253) writes "The Campaign to Stop Killer Robots is a year old; the same month it was founded, the UN's special rapporteur on extrajudicial, summary or arbitrary executions called for a moratorium on the development and deployment of autonomous lethal weapons while a special commission considered the issue. The campaign is succeeding at bringing attention to the issue, but it's possible that it's too late, and if governments don't come to a common understanding of what the problems and solutions are, the movement is doomed. As this article points out, one of the most contentious issues is the question of what constitutes an autonomous weapons system: 'Setting the threshold of autonomy is going to involve significant debate, because machine decision-making exists on a continuum.' Another, equally important issue, of course, is whether a ban is realistic."
Just make them 3/4 size... (Score:2)
...easier to stop them if they turn on us. Also, give them a 3-foot cord.
-Dwight Schrute
Re: (Score:1)
I would prefer they be given guns loaded with blanks.
Skynet would not approve (Score:2)
I am pretty sure that Skynet will nip this ban effort in the bud.
Rise of the machines. (Score:2)
Read TFA, found an easier-to-read, more informative page. [theregister.co.uk]
seen 'em (Score:4, Funny)
I saw the Killer Robots. They opened for the B-52s at the House of Blues in Orlando.
They were... interesting. Why does the UN want to ban them? I've seen many worse bands.
Re: (Score:2)
Hardly a quality statement. Just like a slap in the face doesn't feel so bad once you've been kicked in the groin, no band seems too bad when you have to endure the B-52s right after them...
Comment removed (Score:5, Insightful)
Re: (Score:2)
One might argue that the "cost effective" part is the sticking point. The more cost-effective the mayhem, and the less chance of constituents' sons and daughters being put at risk, the easier it is to decide to use aggression. Cost effective, none of our people get hurt, win!
Of course, there's a flaw in the argument, but I don't expect the average politician to see it.
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Most likely, if killer robots did get out of control, they would hit some limiting factor and lose the ability to kill all humans before getting the job done.
Ok. That one definitely calls for:
Fry: "I heard one time you single-handedly defeated a horde of rampaging somethings in the something something system"
Brannigan: "Killbots? A trifle. It was simply a matter of outsmarting them."
Fry: "Wow, I never would've thought of that."
Brannigan: "You see, killbots have a preset kill limit. Knowing their weakness, I sent wave after wave of my own men at them until they reached their limit and shut down."
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
So would that include something like a Terramax UGV (http://oshkoshdefense.com/technology-1/unmanned-ground-vehicle/) coupled with a Boomerang anti-sniper system (http://en.wikipedia.org/wiki/Boomerang_%28countermeasure%29)?
This would give a military the ability to send an unmanned vehicle into almost any terrain (rural or urban), which could respond instantly to shots fired at it with its own deadly return fire. And, considering the hell that Marines faced in Helmand with IEDs and snipers while slogging through muddy fields, wouldn't this present a far better option (particularly for the Marines and their families)?
+2 Informative
Re: (Score:2)
Re: (Score:2)
Re:Too late. (Score:5, Interesting)
The very LAST thing you want is a cheap war, at least if you value peace a little. If war is cheap, what's keeping you from waging it with impunity when you have the strongest army on the planet?
Quite seriously, the only thing that keeps the US from simply browbeating everyone who doesn't want to play by its rules into submission is that it's a bit too expensive to wage war against the rest of the world.
Re: (Score:2)
I thought the Americans' problem was they had not yet figured out "we are your friends" and "we're invading your country" are largely incompatible concepts.
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Those are very good rules, but laughable and naive.
You kind of contradict yourself with this. While I initially liked the idea of the 3 laws, problems quickly came up even within Asimov's books. Even in the books it's noted that fulfilling the 3 laws actually took up the MAJORITY of the 'brains' of all 3-laws-compliant AIs. The cost to implement the 'laws' was, and would be, enormous.
I mean, consider the 'through inaction' clause. That means that every robot has to be constantly on the lookout for a human that might be about to be injured, to the limit
Killer robots & nukes are ironic, not cost effec (Score:2)
From my essay: http://www.pdfernhout.net/reco... [pdfernhout.net]
====
Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?
Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not just use advanced materials as found in nuclear missiles to make renewable energy sources (like windmills or solar panels) to replace oil, or
Re: (Score:2)
Frankly, killer robots have been around for at least a century.
Torpedoes, sea mines, and land mines. Sure, the kill logic started off simple for them: kill what steps on me, kill a ship that bumps me, and kill what I run into.
By WWII, sea mines could "decide" to blow up based on the size of the ship that passed over them. Torpedoes could find their target based on the sound it made. And some landmines would kill tanks and trucks but not the men who walked over them.
By the 70s you had guided missiles of all kinds, an
Okay, I'll admit... (Score:2, Interesting)
Okay, I'll admit, when I read the first sentence of TFS, I figured this was some kind of joke campaign or something. I guess my mind is too much in science fiction, and not really noticing that the future is already here.
Still, do we really think the governments of the world (at least the ones with the resources to build these robots) are actually going to go for fully autonomous killing machines? I would think all of them would want humans in the loop, if for no other reason than to justify their military
Re: (Score:2)
> not really noticing that the future is already here
We should put this on a t-shirt so we don't forget it. The future? The good parts -- flying cars, colonies on other planets -- still a long way off. The bad parts -- surveillance state, punishment for potential crimes, autonomous robot weapons -- that's already here. Also (from another article) artificially created alien organisms. (Because in SF, that always ends well...)
Re: (Score:2)
Yeah. An easily portable automated kill-zone barrier. I see no reason why a general might want one of those. After all, minefields were just a fad. This works just about as well for a "if you step here we will kill you" sort of thing. Plus, no muss, no fuss cleanup. Just disarm the thing and pack up.
Re: (Score:2)
Well, okay, true. I know the military wants those sorts of systems to replace minefields. They don't leave any explosives in the ground after the war is over, and they can be smart enough to choose a weapon based on the threat (tank: launch an armor-piercing missile; squad of soldiers: launch a fragmentation bomb).
Still, that's a lot different than say, some kind of mobile automated killing machine.
Re: (Score:2)
How is a machine that automatically kills things not an automated killing machine?
This is like real-wor
This barn door has been open for decades (Score:2)
Likewise (Score:3)
Could some of the people arguing for this ban please explain the difference between being on a ship during WWII that was hit by a kamikaze and being on a ship during the Falklands war that was hit by an Exocet? Being killed is being killed, regardless of whether a human pilot or an autonomous robot was flying the lethal projectile.
Re: (Score:3)
What they are trying to address is the decision to release the weapon - whether that decision is made by a human or a non-human. After that point, automated guidance is a non-issue; it's been around for 60 years and thus does not pose an ethical question (a 2000lb laser-guided bomb taking out a bridge is better than 100 B-17s dropping 50 tonnes of bombs to take down the same bridge - the automated guidance of the LGB means much less collateral damage than area bombing).
At the moment the point to which
Re: (Score:2)
Take a heat-seeking missile, for instance. It is designed to "decide" to blow up something that matches a certain heat signature. Or a radar-guided missile: it is designed to track, follow, and destroy something that matches a certain radar profile. There is no meaningful technical or ethical difference between firing such a missile and turning on a ground or air robot that is designed to destroy something or someone that matches some sort of profile. You are "releasing" the weapon when you turn the robo
Re: (Score:2)
I see. It's better to have a human decide to bomb a Guernica, Rotterdam, Coventry, Dresden, Hiroshima, Nagasaki, etc. than it is to have a cold, soulless, purely analytical robot "decide" whether or not to release lethal force based on some programmed criteria. I'm glad you clarified that for me.
Cheers,
Dave
Machine logic (Score:5, Insightful)
because machine decision-making exists on a continuum.'
No kidding. Depending on how you define it, a cruise missile could be considered a one-use killer robot. It executes its program as set on launch.
Now consider making it more sophisticated. We now provide it with some criteria to apply against its sensors when it reaches the target location. If criterion A is met, dive and explode on target; if B, pull up and detonate more or less harmlessly in the air. If neither criterion is met, it depends on whether it's set to fail safe or fail deadly.
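In rough pseudocode (Python-ish; the function names and criteria here are entirely made up for illustration, not any real guidance system):

    # Toy sketch of the terminal-phase logic described above.
    # Both criterion checks are hypothetical stand-ins.
    def matches_criterion_a(reading):
        return reading == "confirmed_target"     # e.g. signature matches briefing

    def matches_criterion_b(reading):
        return reading == "confirmed_nontarget"  # e.g. clearly not the target

    def terminal_decision(reading, fail_deadly=False):
        if matches_criterion_a(reading):
            return "dive_and_detonate"
        if matches_criterion_b(reading):
            return "pull_up_and_airburst"
        # Neither criterion met: the fail-safe/fail-deadly setting decides.
        return "dive_and_detonate" if fail_deadly else "pull_up_and_airburst"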
This is mixed - on the one hand, properly programmed, it can reduce innocent casualties, but on the other, it encourages firing missiles on shakier intelligence. But then again, Predators armed with Hellfires are a heck of a lot more selective than WWII gravity bombs. As long as you presume that at least some violence/warfare can be justified, you have to consider these things.
On the whole, I like weapons being more selective, tends to cut down on civilian casualties, but I think that it's a topic more deserving of careful scrutiny than a reflexive ban.
Re: (Score:2)
This strikes me as a false dichotomy. Nobody is going to launch a million-dollar bullet (smart missile) and then tell it to self-destruct. Until smart bullets drop enormously in cost, this scenario is infeasible.
Assuming the cost of a smart bullet does fall, the initial authorization to fire it is still a decision to kill. The fact that something or someone might later reverse the decision does not mean the initial choice to launch was not a kill.
The goal of this controversy is that no machine should ever h
Re: (Score:2)
Current US Tomahawk Tactical Cruise Missile cost, per unit: $1.45 million.
You were saying?
Re: (Score:2)
Why is the cost of one of today's (dumb) Tomahawks relevant? It can't order itself to self-destruct. And I can't believe any have ever been ordered (by a human) to self-destruct without *somebody* being busted several ranks.
What's more, a fully autonomous Tomahawk is going to cost a good deal more than $1.45 million. Nobody below the rank of colonel is going to pop that cork, and certainly not the missile itself.
No. That scenario still misfires.
Re: (Score:2)
Nobody is going to launch a million dollar bullet (smart missile) then tell it to self destruct.
You'd be surprised. To a combatant commander, a million bucks is nothing. It all depends on the tactical circumstances.
Worst case you make the abort recoverable.
Heck, what do you think about an AI-type interlock system? Both the machine logic AND a human have to decide that firing is appropriate. Done right, it *should* cut down on mistakes.
BTW, I'm figuring having this on 'big boom' weapons, not small arms.
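As a toy sketch of the interlock idea (Python; the threshold and names are invented for illustration):

    def authorize_fire(machine_confidence, human_approved, threshold=0.95):
        # Two-key interlock: the targeting logic AND a human operator
        # must independently agree before the weapon may fire.
        return machine_confidence >= threshold and human_approved

    authorize_fire(0.99, human_approved=True)   # True: both keys turned
    authorize_fire(0.99, human_approved=False)  # False: the human veto wins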
The goal of this controversy is that no machine should ever have the authority to issue the *first* kill command. That responsibility should always lie with a human. With that, I concur.
Agreed. Sort of like how casualties, on either side, are on the president's head if he orders troops in
Re: (Score:2)
On the whole, I like weapons being more selective, tends to cut down on civilian casualties, but I think that it's a topic more deserving of careful scrutiny than a reflexive ban.
The problem now is that's pretty much who is doing the fighting; there is no Talibanistan or United Al-Qaedian Emirates. Look at the misery the drug cartels and gangs bring to Latin American countries like El Salvador, Honduras, Mexico, and California. Even in Ukraine it's mostly pro-Russian civilian militias and a cadre of Russian Spetsnaz.
In the old days, any combatant who was un-uniformed or undocumented was a spy and summarily executed, and any collateral damage was presumed to have been harboring them anyway.
Re: (Score:2)
On the whole, I like weapons being more selective, tends to cut down on civilian casualties, but I think that it's a topic more deserving of careful scrutiny than a reflexive ban.
Such as a weapon that can think for itself, like this?
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
We already have weapons that make the decisions you suggest - the European StormShadow cruise missile for example, or the British ALARM anti-radar missile (launch it in standoff mode, it climbs to a given height and then deploys a parachute and waits until it can see a ground based radar, at which point it releases the parachute and kills the radar).
Send Jack Bauer (Score:2)
Alarmist much? (Score:1)
I gotta say, this whole thing seems a little ridiculous. Unlike in Hollywood, any such weapon would be severely limited by its power source (batteries or burning hydrocarbons) and limited ammunition. I'd also like to point out that there are numerous ways to disrupt robots, such as EMPs and strong magnets.
Besides, I'm looking forward to the giant robot spiders that sound like children.
Comment removed (Score:5, Insightful)
Re: (Score:2)
It's not going to matter one bit, someone in charge of a Black Budget in the Pentagon is going to think it's a good idea. Remember what the Pentagon did when Commander-In-Chief President Clinton directly ordered the military to stop all work on bio-weapons? Renamed the project, moved it to the Black Budget, and didn't even skip a beat.
Defining autonomous weapons (Score:2)
Re: (Score:2)
Re: (Score:2)
Well you could make a robot that is powered by drinking the blood of its enemies.
But honestly, if I were making a killer robot, I would probably just make it so that it could plug itself into outlets or just grab power lines if it were running low.
Re: (Score:2)
You can use all the killer robots you want, but it ain't over until there are boots on the ground.
The new Robocop explores this in a nuanced fashion (Score:2)
Just kidding, it's a pile of shit.
Unfortunately, no. (Score:5, Interesting)
1) Does this even make sense: No. Autonomy is not well-defined. Does a thermostat make "decisions"? etc.
2) Assuming it makes sense, is it a good idea: No. Firing a cruise missile at a target is better than firing a huge barrage of mortars towards a target, for everybody involved. Any smarter version of a landmine would be better than the current ones that "decide" to blow up whatever touches them 20 years after the war is over.
3) Assuming it's a good idea, can it be implemented: No. Arms races are often bad for everybody involved. Everybody involved knows this. And yet that universal realization does not provide a way out. Everybody knows that if they don't build these weapons, the other side might well do it anyway.
Re: (Score:2)
1) Yes. The decision to fire the weapon and authorize lethal force is discrete and binary. That is indeed well defined. By launching it, arming it, and ordering it to engage the "enemy" you have made the decision to kill. Any human private who kills without prior authorization to engage is in violation of the rules of combat. Authorizing him/her to kill *is* the issue here.
2) ??? The technique of projecting force is irrelevant. It's the *authorization* of autonomous dispatch of lethal force that's
The first law of automated weapons is (Score:2)
Don't have them.
First: If the concern is really about automated killing then we have to establish the following:
No object capable of generating enough kinetic energy to kill a human can be directly interfaced with electronic circuitry.
But that would include cars and all kinds of machinery. So the rule above would provide 95% assurance that AIs would not be able to kill humans. The other 5% accounts for an AI self-destructing to short-circuit and generate enough electromagnetic current to electrocute
These rules only make sense in context (Score:2)
I already got your ban (Score:2)
Might have the opposite of the intended effect (Score:2)
The consensus around here is that autonomously-driven cars will inevitably establish a better safety record than human-driven cars. I.e., robotic systems will on the whole make better, less-reckless decisions than human drivers.
A good case could be made that autonomous military systems will likewise make better decisions than fatigued and/or panicky young soldiers.
Current military tools and techniques certainly result in fewer friendly-fire incidents, collateral damage, etc. than were experienced during WW
As Successful as the Kellogg-Briand Pact (Score:2)
You know, the pact to outlaw war [state.gov]. Signed in 1928.
Didn't work out so well.
And even if it were signed by a significant number of nations, we could be sure the non-democratic ones would be violating the ban before the ink was even dry.
Unenforceable treaties are actually worse than worthless: they constrain good actors without deterring bad ones.
Re: (Score:2)
Unenforceable treaties are actually worse than worthless: they constrain good actors without deterring bad ones.
If I hadn't already commented, then I would mod you up. But the counterpoint is that there still could be some deterrent effect, and that constraining good actors will at least let you tell the difference... but I don't buy that argument either. Ultimately it is about who will be charged with a war crime by whichever side wins, or how to come up with rules that most people can follow.
In this case I don't think it is the technology that can or should be banned, but the use case of just indiscriminately unlea
It's going to be driven by reaction time (Score:5, Insightful)
A robot is going to (or will eventually) react much faster to a threat or other adverse conditions than a human can. If you've got a hypersonic missile heading toward a carrier, are you going to put a human in the loop? Nope.
There are simply going to be many many situations where a robot will neutralize a threat faster than a human can, and those situations will increase if fighting against another autonomous army.
Is this a good thing? No, it's like atomic weapons. We're heading toward another arms race that will lead us to the brink or over. We barely survived the MAD era.
human in the OODA loop! (Score:2)
HUMAN OODA LOOP:
1. Observe
2. Orient
BOOOOOM!!!!
Re: (Score:2)
This was basically the premise of the book "Kill Decision". A shadowy government/private-contractor apparatus launches a series of attacks on America specifically to get the American public to buy into the logic you've suggested. Dreams of new defense-spending contracts spurred on "The Activity", which was widely supported. Of course, our hero puts a stop to it - but for how long??
What what what?! (Score:2)
not going to happen (Score:2)
As with all new weaponry, all the countries that don't have it/can't get it panic and agree that it's a horrible idea. They pass UN resolutions banning it, etc... All the countries that do have it refuse to sign, and so nothing has changed, other than the countries that don't have it will start accusing those that do of war crimes and of flouting international law, which they rarely recognize anyway. When some of the countries that signed the ban finally get enough money/science to get the tech, they of course d
The economics of machine intelligence (Score:2)
Skynet and The Terminator are definitely coming. But what about the economics of machine intelligence? This article makes an interesting case: http://hanson.gmu.edu/aigrow.p... [gmu.edu]
I'm surprised no one commented about this yet. (Score:1)
https://what-if.xkcd.com/5/
When Killer Robots are illegal... (Score:4, Insightful)
The person who frames the question... (Score:2)
... dictates the answer. Reasoning strictly inside the box that creates, if you then try to propose that a robot can use its own judgment for everything but firing a weapon, you'll get criticized for hitting the edge of the box and not allowing it to actually be autonomous.
In fact, the question isn't "how autonomous", it's "autonomous or not".
Nothing difficult about the autonomy issue. (Score:2)
If the machine chooses the target and makes the call on whether to attack it, it is autonomous.
If a human chooses the target and makes the strike call, the machine is not autonomous.
Complete no brainer.
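In code terms it's one boolean (a toy illustration; the names are invented):

    def is_autonomous(target_chosen_by, strike_called_by):
        # Autonomous only if the machine both selects the target AND
        # makes the strike call; a human in either step means it isn't.
        return target_chosen_by == "machine" and strike_called_by == "machine"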
They're about 30 years too late... (Score:2)
Easy to stop killer robots (Score:2)
You simply present them with a paradox, and they'll melt down or blow up trying to solve it. I saw Captain Kirk do it once.
Time to get an insurance plan that covers robots (Score:2)
Old Glory Insurance. "For when the metal ones decide to come for you. And they will."
https://screen.yahoo.com/old-g... [yahoo.com]
Killer Robots are so messed up... (Score:2)
Yeah, let's ban killer robots. Better let humans do the killing. I'm sure they have a much better track record at discriminating hostiles from innocent civilians.
After the war, when we bring our killer heroes back home to rejoin their families, everything will be just dandy. Because after daddy has shot three Extremistanis in the face and seen his buddy's leg torn off by an IED, the first thing he wants to do is hug his little girl and tell her he loves her.
Killer robots would just be so immoral.
We Need Killer Bots (Score:2)
Killer robots could be a good thing... (Score:2)
Because an army of robots is less likely to rape civilians after taking over and occupying a city. As a result there's actually less collateral damage.
Simple Solution (Score:2)
Re: (Score:2)
Start developing Robots that are "3 Laws Safe," before you wish you had.
Nice thought, but I'm not sure how you'd do such a thing. The laws would have to be coded in software, and software can be changed...?
Re: (Score:2)
Sure, ban the robots... (Score:2)
We all know where this leads (Score:1)