Government | Robotics | The Military

UN to Debate Use of Fully Autonomous Weapons, New Report Released

concertina226 (2447056) writes "The United Nations will debate the use of killer robots for the first time at the UN Convention on Certain Conventional Weapons (CCW) this week, but human rights activists are calling for the robots to be banned. Human Rights Watch and Harvard Law School's International Human Rights Clinic have published a new report entitled 'Shaking the Foundations: The Human Rights Implications of Killer Robots', which calls for killer robots to be banned to prevent a potential arms race between countries. Killer robots, or fully autonomous weapons, do not yet exist but would be the next step after remote-controlled armed drones used by the US military today. Fully autonomous weapons would have the ability to identify and fire on targets without human intervention, putting compliance with international humanitarian law in doubt. Among the problems the report highlights is accountability: a military officer, programmer or weapons manufacturer could face criminal liability only for creating or using an autonomous weapon with intent to kill. If a robot killed arbitrarily, it would be difficult to hold anyone accountable."
  • by srussia ( 884021 ) on Monday May 12, 2014 @08:57AM (#46978289)
    Don't mines qualify as "autonomous weapons"?
    • by kruach aum ( 1934852 ) on Monday May 12, 2014 @09:05AM (#46978325)

      Not according to the definition used in the summary, which specifies that fully autonomous weapons have the ability to identify targets. Mines fire indiscriminately whenever they're triggered, whether they're stepped on, something falls on them, they fall on something else, whatever.

      • What if you attached an IFF [wikipedia.org] system to the mine?
        • Then they would have the ability to discriminate, but I would still hesitate to call them robots, because they don't exhibit agency. They passively trigger, they don't actively kill the way RoboCop does.

      • They identify targets just fine: it moves == target.

        Not how they define it I'm sure but whatever.
        • That's not identification. Identification entails a differentiated response to a stimulus (in the case of mines, the pressure plate being triggered). When a mine is triggered, it can only explode, it cannot differentially not explode.
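
A minimal Python sketch of the distinction this sub-thread draws between triggering and identification; the pressure threshold and the "FRIENDLY" reply string are invented for illustration, not taken from any real fuzing or IFF specification:

```python
from typing import Optional

TRIGGER_THRESHOLD_KG = 7.0  # hypothetical pressure-plate threshold

def plain_mine(pressure_kg: float) -> str:
    """A plain mine: one stimulus, one possible response. No identification."""
    return "detonate" if pressure_kg >= TRIGGER_THRESHOLD_KG else "idle"

def iff_gated_mine(pressure_kg: float, transponder_reply: Optional[str]) -> str:
    """A hypothetical IFF-gated mine: the same stimulus can yield
    different responses depending on who supplied it."""
    if pressure_kg < TRIGGER_THRESHOLD_KG:
        return "idle"
    if transponder_reply == "FRIENDLY":  # invented protocol token
        return "hold"  # the differentiated response: it can choose not to explode
    return "detonate"
```

The first function can map a trigger only to "detonate"; the second can map the same trigger to two different outcomes, which is the minimal sense of "identification" used above - though, as the earlier reply notes, neither exhibits anything like agency.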

    • by ShanghaiBill ( 739463 ) on Monday May 12, 2014 @09:29AM (#46978507)

      Don't mines qualify as "autonomous weapons"?

      Most countries have already agreed to ban landmines, by signing the Ottawa Treaty [wikipedia.org].

      • by jfengel ( 409917 )

        Except, of course, for the countries that make huge sums of money producing land mines, and the countries (and non-country actors) with a grudge against somebody and a disposition to not care who else it blows up.

        So according to this site [rascan.com], land mine usage is nearly flat despite the treaty.

        It would be great to get the US to give up making land mines, but unfortunately China and Russia would almost certainly ramp up production to fill any shortfall. That's not a good enough reason for us to keep doing it, but

        • "It would be great to get the US to give up making land mines, but unfortunately China and Russia would almost certainly ramp up production to fill any shortfall. That's not a good enough reason for us to keep doing it"

          How about: we keep making/using them because they work? They are a very effective, low-cost solution that saves the lives of those using them.

          I love land mines. Much better than throwing soldiers at the problem.

        • It would be great to get the US to give up making land mines, but unfortunately China and Russia would almost certainly ramp up production to fill any shortfall.

          That is a misleading statement. America has not sold or exported landmines since 1992. America also has not used any landmines during the wars in Iraq and Afghanistan (although American troops have been killed by landmines in both wars). The vast majority of landmines placed by the US are along the DMZ in Korea. I believe the only other place in the world where the US currently uses landmines is the perimeter of Guantanamo Bay.

      • Don't mines qualify as "autonomous weapons"?

        Most countries have already agreed to ban landmines, by signing the Ottawa Treaty [wikipedia.org].

        Yes. But any country likely to start a big war (USA, China, Russia, Iran, Israel, India, Saudi Arabia, etc.) did not sign the treaty. The ones that did reserved the right to keep them around for "training purposes". And the treaty did not ban anti-vehicle mines, claymores, cluster munitions, or pretty much anything else that a layperson would call a mine.

    • They have some autonomy, but no more than barbed wire or a cloud of mustard gas, and we have legal frameworks for those sorts of autonomous weapons. The autonomy at issue lately is in target acquisition. Ironically, the sort of thing that previous autonomous weapons lacked - the ability to distinguish a target from a non-target - is exactly the thing that raises ethical questions.

  • by EmagGeek ( 574360 ) on Monday May 12, 2014 @08:58AM (#46978291) Journal

    Bans will not prevent these weapons from being developed - probably not even by a technologically advanced state that is a signatory to the treaty - and they will not prevent their use by rogue or puppet states that don't care about bans, or that use them at the behest of a signatory state looking to have its dirty work done.

    • by MozeeToby ( 1163751 ) on Monday May 12, 2014 @09:18AM (#46978405)

      What will happen is that the defense contractors will develop autonomous less-lethal robots that can scout, identify targets, and engage with less-lethal weapons. But you know... for flexibility purposes... we'll just make sure the weapon hardpoints are as modular as possible. Hey! I know! We'll make them be adaptable to any standard infantry fir... errrrr, less-lethal weapon.

      • by bitt3n ( 941736 )

        What will happen is that the defense contractors will develop autonomous less-lethal robots that can scout, identify targets, and engage with less-lethal weapons. But you know... for flexibility purposes... we'll just make sure the weapon hardpoints are as modular as possible. Hey! I know! We'll make them be adaptable to any standard infantry fir... errrrr, less-lethal weapon.

        They'll just install a remote attack-authorization button so the thing isn't technically autonomous, and then someone at Quantico will put his coffee cup on top of the button. Problem solved.

      • by cusco ( 717999 )

        And hold in reserve a firmware update that will reconfigure the device in a few seconds. I suspect that this treaty, if the US bothers to even consider it (much less sign it, much less ratify it), will be treated in much the same manner as the chemical and biological weapons treaties. "We need to develop this capability to practice defending against it, but it's only for training purposes! Honest!"

    • by buchner.johannes ( 1139593 ) on Monday May 12, 2014 @09:24AM (#46978465) Homepage Journal

      Bans will not prevent these weapons from being developed - probably not even by a technologically advanced state that is a signatory to the treaty - and they will not prevent their use by rogue or puppet states that don't care about bans, or that use them at the behest of a signatory state looking to have its dirty work done.

      Any state today is dependent on trade with the international community. If the US and the EU (or any other large fraction of the international community) decide not to trade with a country, and not to grant bank transfers to that country, that has a huge effect on its economy. The countries able to withstand this can be counted on one hand. Of course, trade sanctions by themselves are not a plan; they are the lack of one.

      It is always better, though, to help a country address its actual problems rather than to endorse its approach. For example, perceived threats can be thwarted by establishing a neutral buffer zone controlled by a third party.

      So no, contrary to the common opinion on Slashdot, I think collectively agreeing to not use a certain, dangerous technology can be useful, and is also enforceable.

      • So no, contrary to the common opinion on Slashdot, I think collectively agreeing to not use a certain, dangerous technology can be useful, and is also enforceable.

        Last I checked, the Slashdot community was more likely to be on the side of supporting a ban. Regardless, how enforceable is such a ban? We can look for signs that a country is developing nuclear capability because of the unique nature of the technology involved. Autonomous lethal robots, however, are made up of relatively benign, unremarkable parts, so we would have to rely on direct observation to determine whether a country was developing such technology.

      • Because it's working so well against North Korea, Iran, Cuba, Syria, Russia...
      • by njnnja ( 2833511 )

        It is always better, though, to help a country address its actual problems rather than to endorse its approach

        What if a country's "actual problem" is that the head of the country wants the land, money, resources, or extermination of a neighbor? These seem to have been much more common reasons for war throughout history than perceived threats or misunderstandings.

      • How did our red lines on chemical weapon usage work out in Syria? How about those red lines in Crimea?

        • by cusco ( 717999 )

          The problem in Syria is that it was our al Qaeda allies who used the chemical weapons. Whoops.

    • That's okay. I'll just defeat the killbots by sending wave after wave of my own men into battle. Killbots have a preset kill maximum before they shut down.

    • That doesn't eliminate the moral imperative for those nations that actually do want to act humanely.

  • When I read things like this I wonder how these people even function in daily life without eating pebbles and glue sandwiches. The fact that the law is not currently equipped to assign guilt in the case of the malfunction of an autonomous robot is not a good enough incentive to stop scientific progress. First of all, robots can't kill arbitrarily; they can only kill whom their programming specifies they should kill, even if that programming encounters a bug or operates in a manner unforeseen by its programmers. "Arbitrarily" would mean without reference to a standard - randomly, like an earthquake or lightning. Second, banning killer robots will not prevent an arms race. It will simply hamper the combat effectiveness of the side that holds itself to the treaty. Third, it would be much more effective if the money spent on ethicists worrying about how scary science is to them went to the scientists instead, so that it could go into development and research of the very thing the ethicists are so afraid of, to make it better understood and less scary.

    • AFAIK any mandate wouldn't restrict the development of these weapons, just the deployment. It's not like the complete irrelevance of a technology in a battlefield setting ever stopped DARPA before.

      • Uh...what?

        I'm pretty sure everything DARPA works on has huge battlefield relevance. It's not like cold-fusion powered tanks wouldn't be a huge game-changer.

    • Beyond that, I see another problem with the idea of banning development of autonomous weapons: most of the technology involved would probably be developed anyway because it would be widely applicable.

      Think about it. If you were going to make a killer robotic soldier, what technology would be hard to develop? It's difficult to make a robot that can easily traverse diverse terrain, but we'll work on that for other reasons. Making an AI that can accurately identify people by facial features, clothing, and

      • by radtea ( 464814 )

        most of the technology involved would probably be developed anyway because it would be widely applicable.

        Furthermore, most of the tech has already been developed. It just hasn't been packaged conveniently (yet.) This has been the case for some years now, and the only thing that's surprising is that no one has deployed this kind of thing for domestic purposes, which is to say: assassination.

        There are two canonical limits on assassination as a means of political expression: retaliation and the death of the assassin. Retaliation (if you assassinate our leader we'll assassinate yours) doesn't apply to terrorist gr

        • I'd imagine that nobody has come up with an assassination robot because of the need for stealth. It'd be hard enough to have a robot target a specific individual, but to do it without alerting anyone ahead of time would be much trickier. I would think that poison or a bomb would be easier.

          Now if you're talking about a cruise missile, then I'm not sure what's gained by having it be completely autonomous. You may as well have someone sitting in a bunker someplace selecting the target and deciding whether

    • I would rather have an autonomous lawn mower for less than $1k than cylons.

    • by cusco ( 717999 )

      You forget, politics is run by lawyers, and lawyers think that passing a law (or in this case a treaty) will magically fix everything. My dad, a remodeler, actually had a lawyer tell him that almost all the houses destroyed by hurricanes and tornadoes could be saved if the law required that houses be built with hurricane clips on the roof trusses. For a lot of these bozos the law is their religion, which is one of the reasons why groups like the Innocence Project run into so much obstructionism.

  • by Anonymous Coward

    Auto-targeting weapons are only a matter of time. If a college student can make a gun that spits out paintballs with high accuracy, then the best and brightest likely have items far superior.

    Yes, the UN will debate it, but it will be like the debate on land mines. A lot of hand wringing, but nothing really getting done, and the belligerent parties will still make them.

    Right now, it is only a matter of perfecting manufacturing. I wouldn't be surprised to see in 5-10 years that sentry robots, which shoot a

    • To be honest, civilian sentry robots with non-lethal weapons would be cool. Rubber bullets, bean bags, paintballs, whatever.

      • The preferred term is "less-lethal".

      • The real goal is to build a robot which can outrun a human and then just hold onto them until the authorities arrive. The magic of robotics is really going to be the ability to let the robot take the first, second and subsequent shots and keep going.

      • I know at least rubber bullets and paintballs can still maim the shit out of you.

    • As I posted earlier, they are not a matter of time, they are a matter of already built. The USA may have no desire to build or use them, but South Korea sits on the border with North Korea and has built them and installed Super Aegis II armed robots on the border.
    • Auto-targeting weapons exist and have for some time. CIWS like the Phalanx, for starters.

  • by swb ( 14022 ) on Monday May 12, 2014 @09:15AM (#46978387)

    I don't know how robot soldiers identify targets, but presuming they have some mechanism whereby they only kill armed combatants, it's not hard to see some advantages over human soldiers, at least with respect to civilian noncombatants.

    More accurate fire -- ability to use the minimal firepower to engage a target due to superior capabilities. Fire back only when fired upon -- presumably robots would be able to withstand some small arms fire and thus wouldn't necessarily need to shoot first and wouldn't shoot civilians.

    Emotionally detached -- they wouldn't get upset when Unit #266478 is disabled by sniper fire from a village and decide to kill the villagers and burn the village. You don't see robots engaging in a My Lai-type massacre.

    They also wouldn't commit atrocities against civilians - wanton destruction, killing livestock, rape, beatings, etc. Robots won't rape and pillage.

    • by Sibko ( 1036168 )

      You don't see robots engaging in a My Lai-type massacre.

      They also wouldn't commit atrocities against civilians - wanton destruction, killing livestock, rape, beatings, etc. Robots won't rape and pillage.

      Well... You won't see them independently decide to do something like that. But orders are literally orders to a robot. You tell them to burn a city to the ground, shoot anyone who tries to flee, and they will burn that city to the ground and shoot everyone who flees. Without remorse, without second guessing orders, without a moment of any hesitation.

      Which frankly, worries me a bit more. Because the upper levels of command in just about every model of human hierarchy always seems to have worrying numbers

      • The point is that massacres like My Lai are caused when soldiers go a bit nutty due to the emotional stress of seeing their friends cut down by an insurgent resistance. That stress isn't going to be there if you're a remote operator, and you're generally going to be much better supervised by pencil-pushers as a drone operator than as an infantryman in a hostile country.

        Just that, humans are capable of refusing to do these things. Robots aren't.

        And as history shows us, humans don't. Forget about the Holocaust, the Cultural Revolution, the Soviet purges? Humans go with the crowd, es

  • The difficulties are only in people's minds -- especially those who seek justification to push projects. If someone deploys a weapon, they are responsible for all foreseeable consequences. Whether that weapon is a slug of dumb lead, smart missile or robot.

    Even if the weaponeer did not intend the effects, they are still responsible - perhaps for manslaughter rather than murder. The capabilities and risks are hardly concealed. OTOH, if they were careless or negligent, then their responsibility increases. N

  • by xxxJonBoyxxx ( 565205 ) on Monday May 12, 2014 @09:21AM (#46978433)

    I expect this will be as successful as the UN's 1990s-era anti-mine treaty (the Ottawa Treaty - http://en.wikipedia.org/wiki/O... [wikipedia.org]), with over a hundred signatories, but not Russia, China or the United States. http://en.wikipedia.org/wiki/L... [wikipedia.org]

    • Possibly because it's asinine? Mines are cheap and effective weapons. They would have been far better off requiring that mines have some form of self-destruct when not used in a designated area.

      • The problem is the self-destruct on the mines was expected to have about a 3% failure rate (these were actually developed). Leaving 3% of your mines in the ground and potentially active after the conflict means you're still left with minefields about as large as they were during the war.

        • Reducing the number of mines to be cleared by 97% is a huge improvement. There was research into biodegradable explosives as well.

          PS: the treaty only covers antipersonnel mines; antitank mines are also dangerous to the civilian population after a war.

          • Reducing the number of mines to be cleared by 97% is a huge improvement.

            But you *haven't* reduced the number of mines to be cleared by 97%, because you can't tell which ones have failed to deactivate (until they explode). So you still have to clear all of them. (See the back-of-the-envelope sketch at the end of this sub-thread.)

            • Depends on how they deactivate. Detonating the mine does not leave much to question whether or not it's still active.

          • Biodegradable mines are a red herring. They're almost certain to be more expensive and less reliable, and when you're in an armed conflict you tend not to care about such things as "biodegradable" or "what happens after the war".
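
Since the disagreement above is partly arithmetic, here is a quick back-of-the-envelope in Python; the numbers are purely illustrative, built around the 3% failure figure quoted in this sub-thread:

```python
mines_laid = 100_000
self_destruct_failure_rate = 0.03  # the ~3% figure quoted above

# The optimistic reading: far fewer live mines left after the conflict.
live_leftovers = mines_laid * self_destruct_failure_rate
print(f"Live mines remaining: {live_leftovers:,.0f}")  # 3,000 instead of 100,000

# The pessimistic reading: failed self-destructs are indistinguishable from
# successful ones, so the entire original minefield still has to be surveyed
# before the ground can be certified safe.
fraction_of_field_to_clear = 1.0
print(f"Fraction of minefield still to clear: {fraction_of_field_to_clear:.0%}")  # 100%
```

Both readings can be true at once: a 97% reduction in live mines plausibly means far fewer post-war casualties, while doing much less for the cost of clearance, which is driven by area rather than by the count of live mines.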

      • They would have been far better off requiring that mines have some form of self-destruct when not used in a designated area.

        ... and it would be equally as effective as requiring that mines be made from rainbows and unicorn farts.

        Let's face reality here: The sort of people who start wars, plant mines, and want armies of automated killing machines don't really give a shit how many children they cripple over the next couple of decades, because they know it won't be their children getting crippled.

      I expect this will be as successful as the UN's 1990s-era anti-mine treaty (the Ottawa Treaty - http://en.wikipedia.org/wiki/O... [wikipedia.org]), with over a hundred signatories, but not Russia, China or the United States. http://en.wikipedia.org/wiki/L... [wikipedia.org]

      Don't worry. I've been playing Minesweeper almost every waking moment since the signing of the Ottawa Treaty. By my calculations, the earth should be mine free in another decade or two.

    • by DougF ( 1117261 )
      Maybe because 50,000 of them separate North from South Korea, are much cheaper than 50,000 soldiers in their place, and you don't have to send body bags and letters home to widows?
      • and you don't have to send body bags and letters home to widows?

        Yes you do, just later when the border is dissolved and civilians run across any mines that weren't removed (which, if the mines were deployed in a shithole, will be all of them).

        Well technically you could be sending the letters to parents or widowers, but you get the idea.

        • and you don't have to send body bags and letters home to widows?

          Yes you do, just later when the border is dissolved and civilians run across any mines that weren't removed (which, if the mines were deployed in a shithole, will be all of them).

          Well technically you could be sending the letters to parents or widowers, but you get the idea.

          My guess is that by the time that border is dissolved, many of them will have gone off already.

  • by GrpA ( 691294 ) on Monday May 12, 2014 @09:27AM (#46978485)

    Taking such action really is a bad idea. An autonomous killing machine could be as complicated as a military drone with Hellfire missiles, or as simple as a car loaded with autonomous weapons designed to engage anything that moves - with a pre-determined GPS route and self-driving capability - sitting like a mobile minefield in an abandoned house long after the occupants have left, waiting to be activated.

    I think the appropriate course of action would be to foster international condemnation of such tactics, until the international community responds ruthlessly to anyone involved in the use of such weapons, for any infraction. Just like the use of chemical weapons should have been handled...

    Autonomous weapons are far more frightening than WMDs... and nowhere is safe.

    Then again, I wrote a book on the creation of a universal standard for determining if an autonomous weapon could be trusted with the decision to kill, so perhaps I am somewhat hypocritical there.

    GrpA

  • In a ground combat scenario, autonomous weapons could be a good thing. Right now, soldiers are tasked with protecting others as well as themselves, and in most situations the safest resolution is to kill the antagonist. A machine or robot wouldn't suffer from emotional lapses in judgment (anger, hostility). A robot might have better weapons skills, so instead of a kill shot, it may only need to wound. A robot would be more willing to put itself in harm's way to protect a living person.

    The programming required for su

  • by argStyopa ( 232550 ) on Monday May 12, 2014 @09:52AM (#46978697) Journal

    ...as with most technological weapon issues, those with them, or with a reasonable chance of developing them will defend the idea.

    Those without will roundly condemn it using a great deal of moral and ethical language, but their base issue is that they cheerfully condemn the use of any weapons that they cannot yet field.

    The UN as a clearinghouse organization for multinational efforts does a massive amount of good that would otherwise be difficult to enable.
    The UN's general chambers are worthless talking shops where inconsequential states get to criticize significant, powerful states for acting in their own narrow self-interest ... for reasons based entirely on their OWN narrow self-interests. (Not to mention its main actual value: a way for the favored scions of grubby tinpot regimes to be prostitute-frequenting scofflaws in a place far nicer than their own pestilential capitals.)

  • Now if you were a kamikaze pilot and everybody was sleeping on the job and you went for a final dive against a modern battleship or aircraft carrier, wouldn't you already be blasted to bits by an autonomous defense system? I imagine the same goes for tanks, planes, helicopters and even individual robot-soldiers; you'll never wait until you're blasted to bits to say "yup, that was an enemy". Even if they don't go on their own search & destroy missions I doubt they'll avoid being used as sentries, convoy es

  • by gurps_npc ( 621217 ) on Monday May 12, 2014 @10:10AM (#46978881) Homepage
    My understanding is that South Korea has robotic guns set up on the border with North Korea. While they can be overridden by a human operator, when fully activated they fire at anything that attempts to cross the border.

    The Super Aegis II has a 12.7 mm machine gun and a grenade launcher, plus laser and infrared sensors that can see 3 km in the day and 2 km at night. But the gun probably can't shoot that far - it just sees that far.

  • Killer robots, or fully autonomous weapons, do not yet exist but would be the next step after remote-controlled armed drones used by the US military today.

    Weapons contractors make their living imagining new weapons, sharing their visions with the public, then advocating that the US military develop those weapons to keep "the enemy" from making them first. Then once the weapon is invented, new weapons need to be created to defend against the weapon that already exists. Wash, rinse, repeat.

    And people wonde

  • We need all of this why? Wanting Weapons and Wars just shows how non-advanced and uncivilized Humans are and how we haven't even moved out of our caves yet.

    If we are to survive well into the future, we need to learn to disregard our primitive ways and start thinking about others instead of only ourselves. The whole idea of separate Countries and separate people disgusts me deeply. People, we are not separate, we all live on a tiny Blue Planet in the middle of an unexplored ocean of awesomeness. If we can't rid ourselves of our primitive nature, maybe we are long overdue for extinction.

    • by clovis ( 4684 )

      We need all of this why? Wanting Weapons and Wars just shows how non-advanced and uncivilized Humans are and how we haven't even moved out of our caves yet.

      If we are to survive well into the future, we need to learn to disregard our primitive ways and start thinking about others instead of only ourselves. The whole idea of separate Countries and separate people disgusts me deeply. People, we are not separate, we all live on a tiny Blue Planet in the middle of an unexplored ocean of awesomeness. If we can't rid ourselves of our primitive nature, maybe we are long overdue for extinction.

      Well said.
      All we need to do is put everyone who refuses to disregard their primitive ways into re-education camps where they can grow into rational modern humans.
      But, in the past this has caused some hostility between the people being rounded up and those who have been tasked with finding them.
      But, it has a simple solution. All we need to do is make numbers of autonomous robots to do the gathering.
      It will be easy to identify the primitive humans, because they'll be the ones trying to resist re-education. Sa

    • by xtal ( 49134 )

      If you do not have men with guns to protect your freedom, money, women, *insert thing here*, other (bad) men with guns will come take it from you.

      That's human nature, and I have no problem with my way of life being protected under threat of planetary annihilation.

      Forget that lesson at your peril.

  • People condemn autonomous killing robots because they might screw up and kill something that shouldn't be killed.

    How is this any different from what humans in charge of deciding who to kill do? (exhibit A: the Iraq war)

  • I'm outraged that anyone is even considering building soldiers that have no intrinsic sense of self-preservation, adrenaline, aggression, revenge. Imagine a soldier that would allow itself to be destroyed rather than fire when ordered not to, e.g. if there would be civilian casualties or if the target is not definitively identified. Obviously that sort of thing can't be allowed, because if we don't kill innocent bystanders how can we spawn new enemies to fight? Sadly, I suspect that robot soldiers won't actua

  • > it would be difficult to hold anyone accountable.

    Whoever built the hardware and programmed the software would be accountable.

    In shootings we hold the shooter responsible because it was a human who committed the crime. Since there is no human on the end, we can simply go back up the "chain" to find the people who ARE responsible.

    This isn't rocket science.

  • Think about the size of an army that China can deploy. We have zero ability to even dream of a one on one combat situation with a traditional Chinese army. Fielding mechanical warriors of various types would be our real hope, other than using nuclear bombs or other weapons of mass destruction. Or the US could put two million soldiers on the line and watch them be swarmed over as a trivially small force. Then there is the problem of cost. And it is not just for the full military responses. An
    • Comment removed based on user account deletion
    • by cusco ( 717999 )

      We have zero ability to even dream of a one on one combat situation with a traditional Chinese army.

      OK, I'll bite . . . Why the hell would we even WANT to? Are you actually frightened that the Chinese are going to swim across the Pacific and invade? Or maybe they'll wander through Russia and build a bridge across the Bering Strait, marching through Alaska? Sorry, but the entire scenario of a direct China/US war is absurd; there isn't even any reason to pretend to prepare for such a thing.

  • If a robot killed arbitrarily, it would be difficult to hold anyone accountable.

    Not like now, when every time that an Afghan peasant is killed in error, heads roll.

  • by gman003 ( 1693318 ) on Monday May 12, 2014 @11:28AM (#46979511)

    Automated weapons are already deployed - the Korean DMZ is guarded, in part, by autonomous sentry drones. If it moves, they shoot it - and they're armed with machine guns or automatic grenade launchers.

    That's a good model. Don't try to make a drone that can distinguish targets from non-targets - make something that treats everything as a target, and deploy it only when you don't have non-targets to worry about. Or, to allow your own forces to operate in the area, provide an IFF transmitter to designate them as non-targets (civilians are still fucked though - so don't use it anywhere near civilians). Works fine for air, land and sea - we already have an established concept of "shoot to kill zones", this just replaces the soldiers under orders to shoot anything that moves with robots under programming to shoot anything that moves.

    For automated weapons deployed outside such areas (or even ones within), I would say that a human still has to give the fire order. The automated system can identify targets, track them, pursue them, prioritize them - do basically everything but pull the trigger - but it has to request permission to fire from a human operator (a rough sketch of such a gate follows below). And for all legal and ethical purposes, that human operator can be considered the one who pulled the trigger. It's still massive force multiplication even compared to modern drones, so I don't see why the military would have much problem with it.

    Let's wait until after we get true AI before we try to give machines the responsibility to decide whether or not to kill someone.
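
A minimal Python sketch of the human-in-the-loop gate described in the comment above; every name here (Target, FireControl, ManualConsole) is invented for illustration and is not any real fire-control API:

```python
from dataclasses import dataclass

@dataclass
class Target:
    track_id: int
    threat_score: float  # produced by the autonomous detection/tracking stack

class ManualConsole:
    """Stands in for a human operator's terminal."""
    def request_authorization(self, target: Target) -> bool:
        reply = input(f"Authorize fire on track {target.track_id} "
                      f"(threat {target.threat_score:.2f})? [y/N] ")
        return reply.strip().lower() == "y"

class FireControl:
    """Autonomy handles detection, tracking, and prioritization;
    only a human may authorize weapon release."""
    def __init__(self, console: ManualConsole):
        self.console = console

    def engage(self, targets: list[Target]) -> None:
        # The machine may rank targets by threat...
        for target in sorted(targets, key=lambda t: t.threat_score, reverse=True):
            # ...but the trigger pull is gated on an explicit human decision,
            # which is also where legal and ethical accountability attaches.
            if self.console.request_authorization(target):
                self.fire(target)

    def fire(self, target: Target) -> None:
        print(f"Weapon released on track {target.track_id}")

# Example: FireControl(ManualConsole()).engage([Target(1, 0.9), Target(2, 0.4)])
```

The design point being argued for is that request_authorization blocks on a person, so every round fired maps back to a named operator's decision rather than to "the algorithm".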

  • If a robot killed arbitrarily, it would be difficult to hold anyone accountable.

    Whereas currently there is no indiscriminate killing with drones going on without any accountability whatsoever? What's the current body count for innocent civilians murdered by the US and its allies in Iraq, Pakistan, Afghanistan, etc.? A few tens of thousands to hundreds of thousands?

    I doubt it would make much difference in practice...

  • At what point will we all admit that the UN is an experiment, with results that indicate the need to stop pumping cash into this failed debating body? Can anyone cite a single positive accomplishment for which the UN was/is responsible? Not really. Even if you want to point to the "climate change studies"... that was simply a compendium of other people's (flawed) work on the subject. Is it time to admit that the UN is useless? Iran on the human rights council? Seriously?
  • 2004: Why is that website URL at the top of Google for that keyword? Answer: "The algorithm"

    2019: Why did that fully autonomous weapon kill that person? Answer: "The algorithm"
