Robotics | The Military

Killer Robots In Plato's Cave

Lasrick writes: Mark Gubrud writes about the fuzzy definitions used to differentiate autonomous lethal weapons from those classified as semi-autonomous: "After all, if the only criterion is that a human nominates the target, then even The Terminator...might qualify as semi-autonomous." Gubrud wants a ban on autonomous hunter-killer weapons like the Long-Range Anti-Ship Missile and the canceled Low-Cost Autonomous Attack System, and objects to the vague definitions of autonomous and semi-autonomous weapons that would permit systems that should be classified as autonomous but aren't. Existing definitions draw a "distinction without a difference" and "will not hold against the advance of technology." Gubrud prefers a definition that reduces autonomy to a simple operational fact, an approach he calls "autonomy without mystery." In the end, Gubrud writes, "Where one draws the line is less important than that it is drawn somewhere. If the international community can agree on this, then the remaining details become a matter of common interest and old-fashioned horse trading."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by account_deleted ( 4530225 ) on Monday April 13, 2015 @12:16PM (#49464277)
    Comment removed based on user account deletion
    • by garyisabusyguy ( 732330 ) on Monday April 13, 2015 @12:36PM (#49464409)

      As explored in Dark Star (1974): a ship is on a mission to destroy unstable planets, and its intelligent bomb has decided to detonate while still attached to the ship.

      [Doolittle convinces the bomb not to explode]
      Doolittle: Hello, Bomb? Are you with me?
      Bomb #20: Of course.
      Doolittle: Are you willing to entertain a few concepts?
      Bomb #20: I am always receptive to suggestions.
      Doolittle: Fine. Think about this then. How do you know you exist?
      Bomb #20: Well, of course I exist.
      Doolittle: But how do you know you exist?
      Bomb #20: It is intuitively obvious.
      Doolittle: Intuition is no proof. What concrete evidence do you have that you exist?
      Bomb #20: Hmmmm... well... I think, therefore I am.
      Doolittle: That's good. That's very good. But how do you know that anything else exists?
      Bomb #20: My sensory apparatus reveals it to me. This is fun.

      • by Anonymous Coward

        I only saw that movie once, and I was a child too young to really appreciate it, but I seem to recall that the bomb's introduction to solipsism reached a fittingly narcissistic conclusion:

        Bomb #20: Let there be light.

  • https://medium.com/the-physics-arxiv-blog/the-face-recognition-algorithm-that-finally-outperforms-humans-2c567adbf7fc

    If people are willing to trust autonomous cars to do a better job than people, why not target recognition? There seems to be a cognitive disconnect.

    • An autonomous car's job is "don't hit those people, other cars, or other obstacles in the road." It doesn't need to know that Person A is fine to hit but Person B must be avoided at all costs. Autonomous weapons need to make this decision and might decide that the wrong person is OK to kill.

      • Autonomous weapons need to make this decision and might decide that the wrong person is OK to kill.

        So? The important criterion is whether they will make MORE mistakes than humans. Human soldiers make lots of mistakes: they become fatigued, angry about their best friend's leg getting blown off, etc. A robot would not have massacred civilians at My Lai, or No Gun Ri.

        • The problem domains are different, and current AI is far better at "calculate a trajectory for this physical object that avoids other physical objects and follows a set of rules" than "identify a random human or group of humans, assess their level of involvement (both now and likely future level) in a given conflict, and determine whether they are legitimate combatants."
      • True, but let's not make the perfect an implacable enemy of the good enough. Consider that we're already willing to launch a Hellfire missile at terrorist leaders and count the 10-15 "maimed and also deads" as collateral damage. Shooting only one wrong person in the head every fifty kills would be a huge improvement over the missile.
    • Don't crash into anything while moving from point A to point B is a fairly unambiguous goal which computers should be able to handle, even if the details in reality are fairly complicated. Only kill bad people is not the same thing at all.

      • by dougmc ( 70836 )

        Don't crash into anything while moving from point A to point B is a fairly unambiguous goal which computers should be able to handle, even if the details in reality are fairly complicated.

        Given the number of computer games I've played with horrible pathfinding ... I'm guessing that this must be an even more complicated concept than we are aware of. (Scott Adams had something to say about that ... [dilbert.com])

    • I agree. I'd much rather see a large drone release a little drone that homes in on one guy and shoots him than I would see that same drone release an explosive missile that blows up the guy and the seven people standing next to him. If we're going to kill people with drones (and, I'm not saying we should, but it sure looks like that's going to be happening), we should use all the technology we can to reduce collateral damage as much as possible.
  • by bughunter ( 10093 ) <{ten.knilhtrae} {ta} {retnuhgub}> on Monday April 13, 2015 @12:29PM (#49464351) Journal

    I once worked on the camera portion of a semi-autonomous weapon which, once a target was designated, would continually analyze the live image to maintain, track, and intercept that target. A key part of the system was a human-in-the-loop abort, which would cause the system to veer off target before impact should the operator see something he or she didn't like: not the intended target, high probability of collateral damage, etc.

    The point is, all judgements about selecting the target and aborting the mission or changing targets were in the hands of a human. The automated parts were vehicle operations, corrections for terrain and weather, tracking an operator-designated object, etc. — all things that required no risk assessment, moral judgment, ethical considerations, etc.

    That's the difference between autonomous and semi-autonomous: A human identifies the target, and monitors the system to issue a stand down order as new information becomes available.

    (It's also the only weapon system I ever worked on, and it caused me great conflict. Though the intended use had merit, the possible unintended uses made me very uncomfortable. No, I can't be more specific.)

    • by Nidi62 ( 1525137 )

      (It's also the only weapon system I ever worked on, and it caused me great conflict. Though the intended use had merit, the possible unintended uses made me very uncomfortable. No, I can't be more specific.)

      Shouldn't every weapon have a moral conflict inherent in its use? Whether it is wondering for a fraction of a second if you should pull the trigger of a rifle (am I aiming at a target or a civilian), or deliberating for a week on whether or not to launch a strike on a compound (good intel, collateral damage, etc.), there should always be a period of reflection and wondering if the weapon needs to be employed. The act of taking a life is not a decision to be taken lightly, and if when killing becomes second

      • For a second, I thought you were arguing that the moral conflict should be built into the weapon. I envisioned a weapon version of Clippy. "It looks like you are trying to kill someone. Do you want me to help?"

        On the plus side, building Clippy into every weapon would ensure that they are never used. (On the minus side, using the weapons as clubs until the weapons were destroyed would increase a thousand fold.)

    • This is a key point. No military in the world is going to want a weapon system that they have zero control over. Limited control, maybe - but we've had that for decades in the form of long range guided cruise/ballistic missiles, and even then there's a human "in the loop" (in the decision to launch/fire). Some of those may also have a self-destruct/abort, but the early ones certainly didn't.

      Furthermore, trying to draw an artificial line between a present-day cruise missile that gets launched from a ship, fl
      • by ceoyoyo ( 59147 )

        The problem isn't with a drone that flies to a set of GPS coordinates, drops a bomb, and flies back. It's with a drone that flies to a set of GPS coordinates, waits around until it sees something in the general vicinity it wants to blow up, drops its bomb, and flies back. The issue is with the "something it wants to blow up" part.

        • No, the problem is in how you:

          A) Define it clearly enough to include one and exclude the other
          B) Make it sufficiently in the interest of all countries to want to do so.

          It's B that's really going to be the hard part. Weapons generally don't get banned because they're morally horrifying or repugnant, they get banned because countries come to the conclusion that using them really just isn't worth it, and that we'd be better off agreeing to not do so, EVEN IF SOMEONE ELSE DECIDES TO VIOLATE THAT.

          Consider Che
        • Not terribly different than the non-intelligent weapons we already deploy.

          Take mines (both the land and sea variety) for example. A human deployed them (or made the decision to deploy them) and they pretty much just sit around until someone crosses paths with them. At least the autonomous version can have some logic built into it to discriminate among its targets.

      • I still do not know what is wrong with zone defense automated systems. Sometimes, you WANT segregation as a tactical diplomacy method, and we're to the point of "If it moves and is in the zone, kill it" technology far in excess of the low tech minefields of yesteryear.

      • This is a key point. No military in the world is going to want a weapon system that they have zero control over.

        Militaries? No.

        Powerful despots who want armies who not only won't, but literally can't disobey orders? No matter how incomprehensibly immoral? Oh, very much yes.

    • by TiggertheMad ( 556308 ) on Monday April 13, 2015 @01:51PM (#49465003) Journal
      It seems to me that these weapons are morally equivalent to a land mine. A land mine is an autonomous weapon that has the following logic: 'Is the trigger depressed? If so, detonate.'

      Putting more complicated logic on a robot armed with machine guns is pretty much the same thing. If you have moral problems with land mines, you probably should have the same problems with killbots. (Also, expect the exact same classes of problems to occur.)

      Most civilized countries are realizing that landmines are rather deplorable weapons; it seems interesting that they would be OK with robotic weaponry...
      • Perhaps the moral equivalent. But a landmine will remain lethal for decades, if not longer, and as far as I know none that have been deployed can be easily turned off. Nor did those who placed them keep any real record of where they were for retrieval later.

        There's little chance of a couple hundred killbots being left in place and active after a conflict ends. And hopefully they won't default into a kill children, puppies, and anything that moves mode. Plus they won't be as cheap a

      • by sjames ( 1099 )

        In some ways, the autonomous weapon is far worse. At least the landmine stays put. Imagine landmines roving randomly around the countryside.

        • That would mean they'd have a power source that would be rapidly depleted, rendering the mine inert. Considering the difficulty of effectively hiding such a mobile mine, it'd also be more easily detected, allowing for proper cleanup once the conflict is resolved.

          In some ways, a randomly-roving land mine is far better than a stationary one.

        • I think they are called 'cruise missiles'.
      • by khallow ( 566160 )

        Most civilized countries are realizing that landmines are rather deplorable weapons; it seems interesting that they would be OK with robotic weaponry...

        It's because landmines have limited value, but robotic weaponry is a game changer. For example, we may be a few decades away from obsolescence of traditional human piloted fighter aircraft due to higher cost per seat, lower acceleration tolerance, and possibly slower reaction speeds.

        Sure, you can ban the weapons, but then the initiative for their development and use will just go to those who break the rules.

        • by s.petry ( 762400 )

          I personally don't see it as a game changer. Radars are detecting them more easily, and jammers are bringing them down more easily. Iran has brought down quite a few from the US and Israel.

          It is a real moral dilemma having to kill someone, and especially if your life is not in danger. It is that dilemma which is leading to the desire for autonomous systems by people in power. No risk of guys like Manning or Snowden being disgusted with the morality of the situation and dumping information to the public. Immoral polit

          • by khallow ( 566160 )

            I personally don't see it as a game changer. Radars are detecting them more easily, and jammers are bringing them down more easily. Iran has brought down quite a few from the US and Israel.

            Easier than what? There is nothing else in the role these current drones are being used for.

            It is a real moral dilemma having to kill someone, and especially if your life is not in danger. It is that dilemma which is leading to the desire for autonomous systems by people in power. No risk of guys like Manning or Snowden being disgusted with the morality of the situation and dumping information to the public. Immoral politicians will push the button themselves, or tell the immoral military guys they allow to stay on staff to do the work.

            And you're telling me that's not a game changer either?

            • by s.petry ( 762400 )

              First part, drones were game changing when they were immune to detection and shutdown. No longer the case.

              Second part: no, there is nothing new here either. History is full of people holding power trying to use all kinds of tricks to "wipe out those other guys". Drones are no different than aircraft currently. They require a human to pilot and shoot, so morality still gets involved. Autonomous is the push because it breaks that, and I gave the logic for why people holding power want it. You seem to be

              • by khallow ( 566160 )

                First part, drones were game changing when they were immune to detection and shutdown.

                Drones were never immune to detection and shutdown. Nor is that their draw at present.

                Drones are no different than aircraft currently.

                Aircraft that are many times more expensive than drones and which contain a human pilot.

                They require a human to pilot and shoot, so morality still gets involved.

                The same reasons that morality would get involved in a weapon system with a human pilot, would get involved with any other weapons system. We see it with landmines, for example. The cost/benefit of remote or autonomous systems is different, but your morality should apply equally.

                And humans would still be involved. It's not like they'

                • by s.petry ( 762400 )

                  Drones were never immune to detection and shutdown. Nor is that their draw at present.

                  BS to both of those. Drones could not be seen or detected, hence were used as assassination devices. Iran is successfully killing drones; they are no longer immune to detection. As a guess, you are going to attempt to claim that "cost" is the main factor. That is extremely wrong on every possible level. Study up on DOD and military expenses; money has never been an object, ever, in the history of the military.

                  To the last part, I think we are close to agreeing except for where you claim autonomous systems wou

                  • by khallow ( 566160 )

                    Drones could not be seen or detected, hence were used as assassination devices. Iran is successfully killing drones; they are no longer immune to detection.

                    The US wasn't using drones to assassinate people in Iran. And so what if Iran can do it? It's not the same as someone elsewhere achieving the same feat, particularly without creating a military target in the process. Keep in mind that the US strategy is to always have drones in the air. So it's not that useful to be able to detect drones, because you will always be able to detect drones. Merely detecting drones tells you nothing about whether the controllers of those drones know enough to commit an effectiv

      • by Kiuas ( 1084567 )

        A land mine is an autonomous weapon that has the following logic: 'Is the trigger depressed? If so, detonate.'

        A land mine is not autonomous any more than a hole covered with leaves and a sharp stick at the bottom is "autonomous". A land mine is a mechanism, a trigger, which will do one thing if acted upon, i.e. if stepped on. The landmine will not suddenly move on its own, or decide that it will not explode if the person stepping on it isn't an adult, etc.

        Autonomy implies the capability of a weapon to affect its
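        The distinction the two posts above are arguing over can be sketched in a few lines. This is a purely illustrative toy, not any real weapon system's logic; the function names, the dict key, and the 0.9 threshold are all hypothetical:

        ```python
        # Toy contrast: fixed-mechanism trigger vs. autonomous target selection.
        # Everything here is hypothetical and for illustration only.

        def landmine_fires(trigger_depressed: bool) -> bool:
            """A mine is a fixed mechanism: one input, one unconditional response."""
            return trigger_depressed

        def autonomous_weapon_fires(sensor_report: dict) -> bool:
            """An autonomous weapon inserts a classification step between sensing
            and firing, and inherits every error mode of that classifier."""
            confidence = sensor_report.get("combatant_confidence", 0.0)
            return confidence > 0.9  # the threshold itself is a policy decision

        print(landmine_fires(True))                                    # True
        print(autonomous_weapon_fires({"combatant_confidence": 0.5}))  # False
        ```

        The point of the sketch: the mine's behavior is fully determined the moment it is placed, while the autonomous version's behavior depends on a judgment made at engagement time, which is where both the appeal and the objections lie.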

  • Giving a clearcut definition of "autonomy" that is inclusive of all its uses is downright impossible. Authors in engineering argue that the term is at least context-dependent (things are autonomous with respect to a task, an environment, etc.). Perhaps the best way forward is to stop using "autonomy" and invent new terms.

  • I KNEW it!
    The Illuminati are controlling our new robot Overlords!

  • But you want an end run around that, don't you?

  • Every time there is a better weapon, someone will seek to ban it. It started at least as long ago as the 12th century, when Pope Innocent II banned the use of crossbows (1139).

    It is futile... And, with the particular example of precision weapons, it is also foolishly immoral, because precision helps reduce fatalities. If you no longer need to flatten a village to destroy an artillery battery, or demolish a high-rise to get at that sniper, you kill fewer bystanders and cause less mayhem...

    • by gfxguy ( 98788 )
      The biggest problem with banning anything, especially weapons, is that only the people who feel morally obligated to follow the ban will do so - leaving them unprotected from those who don't.
    • Counterpoint: the more "safe" and sure a weapon is perceived to be, the more likely it is to be used.

      You'll notice over the past forty years the US has been moving the moral repercussions of warfare further and further from public view. When you could be drafted to go kill foreigners and maybe get killed yourself, moral outrage was high. Protests in the streets, the burning of draft cards, fleeing the country. So they moved from a draft to an all-volunteer army. Now when soldiers die, or get PTSD, well, the

      • by mi ( 197448 )

        When you could be drafted to go kill foreigners and maybe get killed yourself, moral outrage was high. Protests in the streets, the burning of draft cards, fleeing the country.

        Except there was none of that during the Korean War just a few years earlier, when weapons were worse, nor during WW2 even earlier.

        No, the protests you are alluding to were due simply to enemy action [wikipedia.org] and little else.

        Your premise is wrong — the US, for better or worse, still fights plenty of wars. They are just far less devastati

        • Your premise is wrong — the US, for better or worse, still fights plenty of wars. They are just far less devastating for both sides — because we have better weapons.

          Well that's kind of my point. Do the better weapons mean they're more likely to be used? When there's a conflict with a foreign party (over anything. Resources, ideology, whatever) you have many options, with different trade-offs. You've got economic, diplomatic, or military solutions to the problem. And the "cost" of an option is of course not just measured in dollars, but political capital at home, diplomatic credibility abroad, etc.

          But better weapons make the "cost" of choosing a military option lower an

          • by mi ( 197448 )

            Well that's kind of my point. Do the better weapons mean they're more likely to be used?

            Yes, I understood your question — and the answer is "No". The US is not demonstrably more/less eager to enter into a shooting war now, than it was during the 20th century, for example.

            There's a lot less public scrutiny, then

            The protests against the Iraq war were the largest ever [time.com]; public "scrutiny" (or hysteria, rather) was immense. We went in anyway.

            Killing Saddam seems like a no-brainer, but then you wind up w

  • If I booby trap my house to kill intruders, is that autonomous?

    • Based on the definition in the article, it would seem "no" would be the answer, since no target recognition or decision-making is occurring. The government's definitions are roughly that "semi-autonomous machines" are instructed to engage a target and can then do so without subsequent human interaction, whereas "autonomous machines" are those that are simply let loose and make their own decisions about who or what to engage. Currently, no one has autonomous machines, based on those definitions, though we'll

    • If I booby trap my house to kill intruders, is that autonomous?

      The Engineer's answer is yes, most certainly!
      It is a "horrible example" of everything that is dangerous about autonomous weapons, to the extent of having no target recognition at all.

      That opens the question: Do the existing laws banning mines, in at least some areas, also ban robot weapons?
      I would think so...

  • I think the issue isn't really autonomous robots. The problem is the declared and clearly defined battlefield. Inside the battlefield, autonomous and semi-autonomous systems are already at work; there is not much you can do about that. Ships, for example, have anti-missile systems that are completely autonomous. And decisions to kill or not to kill are often made on the spot, quickly. Humans err a lot in these situations, leading to lots of horrible mistakes.

    Outside the declared battlefield, e.g. around the

  • Why does this topic remind me of Dark Star [wikipedia.org]

    In particular this bit https://www.youtube.com/watch?... [youtube.com]

    The majority of media coverage of this topic is just bullshit.

  • Hasn't anyone developing these weapons read any science-fiction? Is Fred Saberhagen so far out of vogue that no one has read *any* of the Berserker novels or stories?
    How about Philip K. Dick? He's been pretty popular with Hollywood recently, and his story Second Variety was not only about this very thing, but was made into a movie starring Peter Weller called "Screamers". You can read it for free via Project Gutenberg: http://www.gutenberg.org/ebook... [gutenberg.org]
