
US Seeks To Allay Fears Over Killer Robots (bbc.com) 67

Humans will always make the final decision on whether armed robots can shoot, the US Department of Defense said today. From a report: The statement comes as plans emerge for gun platforms that can choose their own targets on the battlefield. The plans seek to upgrade existing aiming systems, using developments in machine intelligence. The US said rules governing armed robots still stood and humans would retain the power to veto their actions. The defense department's plans seek to upgrade the current Advanced Targeting and Lethality Automated System (Atlas) used on ground combat vehicles to help human gunners aim. The military is seeking commercial partners to help develop aiming systems to "acquire, identify, and engage targets at least three times faster than the current manual process."
  • by Anonymous Coward

    I fear a president or single 'general' issuing orders and obliterating a school/city/country... I feel it should be passed down a chain of command OF HUMANS, ultimately with human eyes on target for final determination.

  • Huh? (Score:5, Interesting)

    by lrichardson ( 220639 ) on Monday March 11, 2019 @05:13PM (#58256406) Homepage

    "The US said rules governing armed robots still stood and humans would retain the power to veto their actions."

    Uh, say WHAT? That line says something completely different from 'requiring human finger on every trigger'.

    "Yeah, I coulda - probably shoulda - vetoed that Predator bombing on the wedding, but I was in the can at the time."

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      "Yeah, I coulda - probably shoulda - vetoed that Predator bombing on the wedding, but I was in the can at the time."

      Like we need an amoral AI to do that.

      Obama signed orders personally on several occasions to bomb weddings or funerals filled with known-innocent people because one of them kinda sorta looked like Osama Bin Laden from 10,000 feet.

      I did a paper and presentation about it for my second degree. He knew exactly what he was doing and did it anyway.

    • You're inserting your own assumptions into that phrase. The concept of "veto" does not imply that the default is for the action to proceed. The word "veto" essentially just means "reject". In this case, the targeting system would select an object, target it, and present a prompt to the human requesting permission to engage. The human can either authorize or reject.

      It seems you're thinking of a word more along the lines of "interrupt" but, again, that's not what veto means.

    • by ceoyoyo ( 59147 ) on Monday March 11, 2019 @07:50PM (#58257426)

      It all happened so fast.

      I definitely vetoed the followup strike on the smouldering ruins though!

  • by Anonymous Coward

    Right, because if there's one thing the US military isn't known for, it's indiscriminate killing at a distance, so let's trust them when they promise that these are mindful, ethical kill-bots they're talking about.

  • by WillAffleckUW ( 858324 ) on Monday March 11, 2019 @05:22PM (#58256478) Homepage Journal

    It is in fact saying that override systems will be developed, but if they cannot be engaged (e.g. due to interference, weather, or hacking) the mission will continue.

    Once the bot swarm is live, it's on live fire in the target zone, will complete the mission, and return to the marshalling point (which is frequently an airplane where they rendezvous and then disable).

    The Command and Control units have kill switches, but they're basically Abort The Mission signals. A human decides the mission is go, arms the flight, and it's kind of like any other missile or remote control device, it completes the firing pattern.

    • The issue with killer robots isn't that they kill people, it's that they are automated in any way shape or form. They will end up being used against citizens eventually because the only thing keeping the government from having already curb-stomped the population into submission is that the population is armed to the teeth and even the military value freedom. As it stands the military wouldn't help subvert freedom in the US on a large scale, the moment automated killer robots exist in any capacity the rule
      • To be frank, once it's sent, you can't always stop it, whether we're talking smart dumb bombs (active once they leave the plane), cruise missiles (active once they're in a specified region), or drone swarms. Combat is always messy.

        I know in TV series and movies there's always a kill switch that works right up to the end, but that's not how things actually work.

        • There's a difference between a dumbfire weapon post-deployment and a Human-seeking weapon post-deployment. The latter variety is a threat to even have that software developed because the software is the threat, not the deployment process. The deployment process is always going to be open to manipulation to the extent that 1 guy could potentially control an entire nation's military forces without checks and balances at every level. The thing which makes Human warriors better than AI warriors (from the sta
  • Uh (Score:5, Funny)

    by backslashdot ( 95548 ) on Monday March 11, 2019 @05:28PM (#58256524)

    I, for one, proclaim that their decision to call it a lethality system has allayed my fears.

  • by Anonymous Coward

    In a total war scenario where both sides have killer robots and AI on par with today's self-driving cars, how long would this directive last? Obviously at least one side will stand a risk of losing... can anyone imagine, in this scenario, that a desperate party would NOT give the order to drop these "ethics" and send out fully-autonomous killer droids?

    It's so easy to take the apparent moral high ground when you have the military high ground.

    • by Anonymous Coward

      No, at that point they use nuclear weapons. Killer robots pale by comparison. And yeah, the US military will let 'em loose & blame it on the "fog of war" as usual.

  • There's no chance these can be hacked by the enemy forces, so they will never be reprogrammed to attack and kill US military field personnel.

  • by Anubis IV ( 1279820 ) on Monday March 11, 2019 @05:34PM (#58256590)

    For anyone not familiar, the entire premise of the game is that you're in a post-apocalyptic world about 1000 years after our war robots went out of control, with exactly the sorts of results you'd expect. I found it interesting when, in a moment of self-awareness, the main character discovers a recording circa 2065 of an engineer who worked on the war robots lamenting the fact that they didn't pay attention to the warnings that were everywhere in the science fiction material of the day. More or less, we already had a good notion of how this would end, so why, oh why, did we go along with it?

    Honestly, I do wonder how we can avoid a bad outcome. After all, if we don't build them, our opponents will (for whatever definition of "opponent" you want to pick), since taking the human out of the loop will eventually confer a large tactical advantage. It's one of those horrible things where no one wants it, but everyone seems to be forced to do it anyway. So, how to avoid it in the long-term?

    • by Kjella ( 173770 ) on Monday March 11, 2019 @07:23PM (#58257256) Homepage

      It's one of those horrible things where no one wants it, but everyone seems to be forced to do it anyway. So, how to avoid it in the long-term?

      We probably can't. But it's a very far leap from an autonomous drone or turret to an autonomous war machine. Guns need bullets, machines need fuel; until a Terminator-like AI takes control over the whole supply chain down to factories and refineries, a rouge robot army would fizzle. As for humans thinking war would be winnable again, we still have ICBMs with nukes as the ultimate "fuck you too" with much better bang for the buck.

      The best thing we could do is build the peace; no matter what the guys with the doomsday clock say, I don't feel like we're anywhere near 1939-45 or 1962. Ethnic/racial tension isn't anywhere near the same in the US or Europe or Japan as 50 years ago. Even Africa is fairly peaceful compared to the past; the Middle East is once more a cluster fuck, but overall it's pretty quiet [ourworldindata.org]. From Washington to Beijing it seems cash is king, and capitalism doesn't care what your skin color is or where your parents come from or what's between your legs and how you use it.

      Which is not to say it's your friend, but it's an equal opportunity exploiter. Mega-corporations don't need war; it's bad for business unless you're one of the few whose business is war. Sure, you'll probably have some more civil wars where shitty countries tear themselves apart. I think straight-up invasions are going to be very rare though, unless it's some strategic grab like Crimea. Not that it's not a big deal, but compared to the Soviet Union rolling in the tanks over half of Europe it's barely a nibble.

      The reason I say it's inevitable is that this isn't a particular technology like ABC weapons. All the building blocks are generic and will be built even if they're not used to build kill bots at present; it's not really a question of whether they'll be forced into service under duress. When your freedom is quite literally under fire, you're lucky if they stick to the Geneva Convention, much less consider the long-term ramifications of militarizing this technology. If what you need to end the war is a nuke, you build nukes.

      • ...until a Terminator-like AI takes control over the whole supply chain down to factories and refineries a rouge robot army would fizzle.

        Well naturally. They'll run out of cosmetics fairly quickly. There are only so many Walgreens stores.

  • likely to enact rules and obey them when it is easy. But, like everyone else in the world with military power, when push comes to shove there will be automated killer everything being used everywhere, by everyone (criminals, governments, groups, terrorists, individuals). Everyone!

    And why? Because in their eyes it reduces their risk, cost, and losses.

    The effect on culture/sociability/others is not relevant.

    Just my 2 cents ;)
  • by DallasTruaxxx ( 4880195 ) on Monday March 11, 2019 @05:46PM (#58256672)
    Armed robots enable oppression by reducing violence to the 'press of a button'. Milgram's Experiment on Obedience to Authority shows us how that one goes.
  • XKCD nailed it (Score:4, Insightful)

    by rsilvergun ( 571051 ) on Monday March 11, 2019 @05:51PM (#58256708)
    See here [xkcd.com]

    I'm not worried about sci-fi scenarios where robots kill all humanity. I _am_ worried about the ruling class using killer robots to usher in an endless age of dystopian oppression. Right now about the only thing keeping them a _little_ in check is having to balance the Military and Working Classes. If they go too far, the working class lets the military class form a Junta and we get a change of masters with the old order's heads on a pike. Killer robots eliminate the Military class. All that's left is a tiny group of engineers who'll get bought off with an OK life.

    If you're a member of the working class you should be doing everything in your power to put the kibosh on this crap. Fast.
  • The humans making the decision will just send the robots in there, the ones killed will have done _something_ to deserve it, right?

  • by Gravis Zero ( 934156 ) on Monday March 11, 2019 @06:09PM (#58256834)

    The US said rules governing armed robots still stood and humans would retain the power to veto their actions.

    Veto power by humans is important, but what's more important is having a proven track record of being able to reliably identify enemies. Someone should need to authorize each kill order.

    I believe the short-term objective here is to alleviate the stress put on the soldier in charge. If you see a robot kill someone then your mind can be convinced that you didn't really kill anyone. However, if you have to press the button that activates the sequence to kill someone then your mind registers that as you killing them. This is important because PTSD is higher in drone pilots.

    The long-term objective of a killer machine army is obvious and terrifying because it's merely the extension of a small number of individuals. There will be nobody to say "this is wrong" or "these people are not our enemies" because machines do not feel, do not object, and do not think.

    Never create a weapon that you wouldn't want to fall into the hands of your worst enemy.

    • This is important because PTSD is higher in drone pilots.

      I don't know why this nonsense keeps getting repeated. Where the fuck did it even come from?

      https://www.livescience.com/47... [livescience.com]

      "About 1,000 United States Air Force drone operators took part in the study, and researchers found that 4.3 percent of them experienced moderate to severe PTSD. In comparison, between 10 and 18 percent of military personnel returning from deployment typically are diagnosed with PTSD, the researchers wrote."

      As an added bonus:

      "The percentage of drone operators in the study who had PTSD

  • Have the peace of mind to know that if we need your oil, you'll be murdered by a fellow human being in as sensitive a manner as possible [youtube.com] :D

  • those robots will shoot only bad guys!

  • The military is seeking commercial partners to help develop aiming systems to "acquire, identify, and engage targets at least three times faster than the current manual process."

    Humans will always retain the veto? Over a system that is literally designed to be three times faster than human reaction speeds are physically capable of being? Uh, no? It's in the damn spec that no human will be able to react fast enough to veto a kill. That sentence has three verbs, "acquire, identify and engage". Engage means pulling the trigger. And that's going to be a kill when it's a bot doing the aiming. Aimbots have been doing headshots on virtual heads for two decades now. Ones capable of
