US Seeks To Allay Fears Over Killer Robots (bbc.com) 67
Humans will always make the final decision on whether armed robots can shoot, the US Department of Defense said today. From a report: The statement comes as plans emerge for gun platforms that can choose their own targets on the battlefield. The plans seek to upgrade existing aiming systems, using developments in machine intelligence. The US said rules governing armed robots still stood and humans would retain the power to veto their actions. The defense department's plans seek to upgrade the current Advanced Targeting and Lethality Automated System (Atlas) used on ground combat vehicles to help human gunners aim. The military is seeking commercial partners to help develop aiming systems to "acquire, identify, and engage targets at least three times faster than the current manual process."
How many humans? (Score:1)
I fear a president or a single 'general' issuing orders and obliterating a school/city/country. I feel the order should be passed down a chain of command OF HUMANS, ultimately with human eyes on the target for final determination.
Huh? (Score:5, Interesting)
"The US said rules governing armed robots still stood and humans would retain the power to veto their actions."
Uh, say WHAT? That line says something completely different from 'requiring human finger on every trigger'.
"Yeah, I coulda - probably shoulda - vetoed that Predator bombing on the wedding, but I was in the can at the time."
Re: (Score:2, Interesting)
"Yeah, I coulda - probably shoulda - vetoed that Predator bombing on the wedding, but I was in the can at the time."
Like we need an amoral AI to do that.
Obama signed orders personally on several occasions to bomb weddings or funerals filled with known-innocent people because one of them kinda sorta looked like Osama Bin Laden from 10,000 feet.
I did a paper and presentation about it for my second degree. He knew exactly what he was doing and did it anyway.
Re: Huh? (Score:3)
You're inserting your own assumptions into that phrase. The concept of "veto" does not imply that the default is for the action to proceed. The word "veto" essentially just means "reject". In this case, the targeting system would select an object, target it, and present a prompt to the human requesting permission to engage. The human can either authorize or reject.
It seems you're thinking of a word more along the lines of "interrupt" but, again, that's not what veto means.
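As a toy sketch (all names and strings here are hypothetical, not anything from the actual Atlas program), the difference is whether the default path fires or holds:

```python
def engage(target, operator_decision):
    """Human-in-the-loop gate: the system selects and tracks a target,
    then waits for explicit authorization. Holding fire is the default."""
    if operator_decision == "authorize":
        return f"engaging {target}"
    # any other response (reject, timeout, no answer) means no shot
    return f"holding fire on {target}"

print(engage("object-42", "reject"))     # -> holding fire on object-42
print(engage("object-42", "authorize"))  # -> engaging object-42
```

A true "interrupt" model would invert this: the shot proceeds unless the human acts in time, which is exactly the worry raised upthread.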
Re:Huh? (Score:4)
It all happened so fast.
I definitely vetoed the followup strike on the smouldering ruins though!
Indiscriminate killing (Score:1)
Right, because if there's one thing the US military isn't known for, it's indiscriminate killing at a distance, so let's trust them when they promise that these are mindful, ethical kill-bots they're talking about.
Re: (Score:2)
Would you like to know more?
Not what you think it says (Score:3)
It is in fact saying that systems will be developed to allow an override, but if they can't be engaged (e.g. due to interference, weather, or hacking) the mission will continue.
Once the bot swarm is live, it's weapons-free in the target zone; it will complete the mission and return to the marshalling point (which is frequently an airplane where they rendezvous and are then disabled).
The Command and Control units have kill switches, but they're basically Abort The Mission signals. A human decides the mission is go, arms the flight, and then it's like any other missile or remote-control device: it completes the firing pattern.
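A rough sketch of that flow (purely illustrative; nothing here reflects a real C2 interface): once a human authorizes launch, the only remaining control is a whole-mission abort, not a per-shot veto.

```python
class Mission:
    """Toy model of the arm/abort flow described above."""
    def __init__(self):
        self.state = "staged"

    def authorize(self):
        # a human decides the mission is go and arms the flight
        if self.state == "staged":
            self.state = "live"

    def abort(self):
        # the C2 "kill switch": recalls the swarm to the marshalling
        # point, but can't undo anything already fired
        if self.state == "live":
            self.state = "returning"

m = Mission()
m.authorize()   # human go decision
m.abort()       # abort = recall, not a per-shot veto
print(m.state)  # -> returning
```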
Re: (Score:3)
To be frank, once it's sent, you can't always stop it, whether we're talking about smart bombs (active once they leave the plane), cruise missiles (active once they're in a specified region), or drone swarms. Combat is always messy.
I know in TV series and movies there's always a kill switch that works right up to the end, but that's not how things actually work.
Uh (Score:5, Funny)
I, for one, proclaim that their decision to call it a lethality system has allayed my fears.
Re: (Score:2)
Killer robots? More like Kuddle robots!
Re: (Score:2)
But that's reason enough to be wary in my book.
Re: Uh (Score:2)
And they have hard-coded into them a kill limit, so we just need to send wave after wave of our own people against them until they reach their quota and shut down.
How long will this directive last? (Score:1)
In a total war scenario where both sides have killer robots and AI on par with today's self-driving cars, how long would this directive last? Obviously at least one side will stand a risk of losing... can anyone imagine, in this scenario, that a desperate party would NOT give the order to drop these "ethics" and send out fully-autonomous killer droids?
It's so easy to take the apparent moral high ground when you have the military high ground.
Re: How long will this directive last? (Score:1)
No, at that point they use nuclear weapons. Killer robots pale by comparison. And yeah, the US military will let 'em loose & blame it on the "fog of war" as usual.
Fortunately (Score:2)
There's no chance these can be hacked by the enemy forces, so they will never be reprogrammed to attack and kill US military field personnel.
Been playing Horizon: Zero Dawn recently (Score:5, Interesting)
For anyone not familiar, the entire premise of the game is that you're in a post-apocalyptic world about 1000 years after our war robots went out of control, with exactly the sorts of results you'd expect. I found it interesting when, in a moment of self-awareness, the main character discovers a recording circa 2065 of an engineer who worked on the war robots lamenting the fact that they didn't pay attention to the warnings that were everywhere in the science fiction material of the day. More or less, we already had a good notion of how this would end, so why, oh why, did we go along with it?
Honestly, I do wonder how we can avoid a bad outcome. After all, if we don't build them, our opponents will (for whatever definition of "opponent" you want to pick), since taking the human out of the loop will eventually confer a large tactical advantage. It's one of those horrible things where no one wants it, but everyone seems to be forced to do it anyway. So, how to avoid it in the long-term?
Re:Been playing Horizon: Zero Dawn recently (Score:4)
It's one of those horrible things where no one wants it, but everyone seems to be forced to do it anyway. So, how to avoid it in the long-term?
We probably can't. But it's a very far leap from an autonomous drone or turret to an autonomous war machine. Guns need bullets and machines need fuel; until a Terminator-like AI takes control over the whole supply chain, down to factories and refineries, a rouge robot army would fizzle. As for humans thinking war would be winnable again, we still have ICBMs with nukes as the ultimate "fuck you too", with much better bang for the buck.
The best thing we could do is build the peace. No matter what the guys with the doomsday clock say, I don't feel like we're anywhere near 1939-45 or 1962. Ethnic/racial tension isn't anywhere near what it was in the US or Europe or Japan 50 years ago. Even Africa is fairly peaceful compared to the past; the Middle East is once more a cluster fuck, but overall it's pretty quiet [ourworldindata.org]. From Washington to Beijing it seems cash is king, and capitalism doesn't care what your skin color is, where your parents come from, or what's between your legs and how you use it.
Which is not to say it's your friend, but it's an equal-opportunity exploiter. Mega-corporations don't need war; it's bad for business unless you're one of the few whose business is war. Sure, you'll probably have some more civil wars where shitty countries tear themselves apart. I think straight-up invasions are going to be very rare though, unless it's some strategic grab like Crimea. Not that that's not a big deal, but compared to the Soviet Union rolling the tanks over half of Europe it's barely a nibble.
The reason I say it's inevitable is that this isn't a particular technology like ABC weapons. All the building blocks are generic and will be built even if they're not used to build kill bots at present, so it's not really a question of whether they'll be forced into service under duress. When your freedom is quite literally under fire, you're lucky if they stick to the Geneva Conventions, much less consider the long-term ramifications of militarizing this technology. If what you need to end the war is a nuke, you build nukes.
Re: (Score:2)
...until a Terminator-like AI takes control over the whole supply chain down to factories and refineries a rouge robot army would fizzle.
Well naturally. They'll run out of cosmetics fairly quickly. There are only so many Walgreens stores.
Right! the US and Eu are more (Score:2)
And why? Because in their eyes it reduces their risk, cost, and losses.
The effect on culture/sociability/everything else is not relevant to them.
Just my 2 cents
Oppression through Milgram's Experiment (Score:5, Insightful)
Re: Oppression through Milgram's Experiment (Score:2)
And Nazi death camps show us how it goes without a button.
XKCD nailed it (Score:4, Insightful)
I'm not worried about sci-fi scenarios where robots kill all humanity. I _am_ worried about the ruling class using killer robots to usher in an endless age of dystopian oppression. Right now about the only thing keeping them a _little_ in check is having to balance the Military and Working Classes. If they go too far, the working class lets the military class form a junta and we get a change of masters with the old order's heads on pikes. Killer robots eliminate the Military class. All that's left is a tiny group of engineers who'll get bought off with an OK life.
If you're a member of the working class you should be doing everything in your power to put the kibosh on this crap. Fast.
Laugh it up (Score:2)
They even make it before! (Score:2)
The humans making the decision will just send the robots in there, the ones killed will have done _something_ to deserve it, right?
Re: (Score:2)
3 times faster, cause we just can't kill them fast enough?
Power of veto != control (Score:3)
The US said rules governing armed robots still stood and humans would retain the power to veto their actions.
Veto power by humans is important, but what's more important is a proven track record of reliably identifying enemies. Someone should need to authorize each kill order.
I believe the short-term objective here is to alleviate the stress put on the soldier in charge. If you see a robot kill someone, your mind can be convinced that you didn't really kill anyone. However, if you have to press the button that activates the sequence to kill someone, then your mind registers that as you killing them. This is important because PTSD is higher in drone pilots.
The long-term objective of a killer machine army is obvious and terrifying, because it's merely the extension of the will of a small number of individuals. There will be nobody to say "this is wrong" or "these people are not our enemies", because machines do not feel, do not object, and do not think.
Never create a weapon that you wouldn't want to fall into the hands of your worst enemy.
Re: Power of veto != control (Score:2)
This is important because PTSD is higher in drone pilots.
I don't know why this nonsense keeps getting repeated. Where the fuck did it even come from?
https://www.livescience.com/47... [livescience.com]
"About 1,000 United States Air Force drone operators took part in the study, and researchers found that 4.3 percent of them experienced moderate to severe PTSD. In comparison, between 10 and 18 percent of military personnel returning from deployment typically are diagnosed with PTSD, the researchers wrote."
As an added bonus:
"The percentage of drone operators in the study who had PTSD
Don't worry folks (Score:2)
Have the peace of mind to know that if we need your oil, you'll be murdered by a fellow human being in as sensitive a manner as possible [youtube.com] :D
have no fear! (Score:2)
those robots will shoot only bad guys!
Contradiction (Score:2)
The military is seeking commercial partners to help develop aiming systems to "acquire, identify, and engage targets at least three times faster than the current manual process."
Humans will always retain the veto? Over a system that is literally designed to be three times faster than human reaction speeds are physically capable of being? Uh, no. It's in the damn spec that no human will be able to react fast enough to veto a kill. That sentence has three verbs: "acquire, identify, and engage". Engage means pulling the trigger. And that's going to be a kill when it's a bot doing the aiming. Aimbots have been doing headshots on virtual heads for two decades now. Ones capable of