
Examining the Ethical Implications of Robots in War

Posted by ScuttleMonkey
from the come-with-me-if-you-want-to-live dept.
Schneier points out an interesting (and long, at 117 pages) paper on the ethical implications of robots in war [PDF]. "This report has provided the motivation, philosophy, formalisms, representational requirements, architectural design criteria, recommendations, and test scenarios to design and construct an autonomous robotic system architecture capable of the ethical use of lethal force. These first steps toward that goal are very preliminary and subject to major revision, but at the very least they can be viewed as the beginnings of an ethical robotic warfighter. The primary goal remains to enforce the International Laws of War in the battlefield in a manner that is believed achievable, by creating a class of robots that not only conform to International Law but outperform human soldiers in their ethical capacity."
This discussion has been archived. No new comments can be posted.

  • What's the point? (Score:5, Interesting)

    by ccguy (1116865) * on Monday January 28, 2008 @02:55PM (#22210830) Homepage
    Obviously a country that can send robots instead of soldiers to fight is way more likely to become 'war happy' - so I'm not sure this robot thing is a good idea at all.

Besides, if your enemy expects your robots to defeat their army, what would be the point of fighting them in the first place? Attacking civilians seems a more logical step (I don't think it's reasonable to demand that a country at war attack only military targets when those targets can be replaced easily).

    (and no, I didn't read the whole 117 pages, but after a quick glance I reached the conclusion that whoever wrote the title didn't either, so I'm sharing my thoughts on the title, not the PDF)
    • by The Aethereal (1160051) on Monday January 28, 2008 @03:02PM (#22210926)

      Obviously a country that can send robots instead of soldiers to fight is way more likely to become 'war happy'
      Of equal concern to me is the fact that a country with a robot army can use them against their own citizens with no chance of mass mutiny.
      • Re: (Score:3, Insightful)

        Depends on the robots. What about the people who build and maintain the robots? They can mutiny. Also I'd bet you need some sort of networking to coordinate the robots. Probably wireless. Sure you can set the right failure modes for jamming, but what about signal intrusion? You could make the robots mutiny for you.
        • Re: (Score:3, Funny)

          by Dancindan84 (1056246)

          Also I'd bet you need some sort of networking to coordinate the robots. Probably wireless.
I agree. Let's do it centrally. We can name the hub Skynet. Or V.I.K.I.
        • Re: (Score:2, Insightful)

          by Applekid (993327)
          Talk about human support for mutiny is moot when even today in the United States our rights are being stripped out one thread at a time and nobody so much as blinks or turns away from American Idol or Walmart.

          One worker might talk about it and wind up turned in (because he's a terrorist, obviously) and those that betray will be rewarded with coupons to McDonalds.
        • Re: (Score:2, Insightful)

          by smussman (1160103)

          Depends on the robots. What about the people who build and maintain the robots? They can mutiny. Also I'd bet you need some sort of networking to coordinate the robots. Probably wireless. Sure you can set the right failure modes for jamming, but what about signal intrusion? You could make the robots mutiny for you.

          But I don't think you really want that, because if the maintenance people can make the robots mutiny, how would you prevent your opponent from making them mutiny? Even if it requires very specialised knowledge, all it takes to get the secret is one converted/planted maintenance person.

        • Re: (Score:3, Insightful)

          by Kjella (173770)

          Depends on the robots. What about the people who build and maintain the robots? They can mutiny. Also I'd bet you need some sort of networking to coordinate the robots. Probably wireless. Sure you can set the right failure modes for jamming, but what about signal intrusion? You could make the robots mutiny for you.

They can mutiny with what, sticks and stones? Whoever makes the robots will surely put in digital signatures and kill switches so that they can reclaim control from the operators as well as prevent the robots from being used against themselves. Hell, it's difficult enough to run your own code on a game console; try breaking WPA's 128-bit encryption if you can. After the first attempts are quickly rounded up by a special ops division operated by devout fanatics, it won't happen again.

      • by Arkham (10779) on Monday January 28, 2008 @03:43PM (#22211568)
        I don't think automatic robots will ever be a smart plan. The chance of malfunction is just too great, and the consequence would be too serious. There've been a million sci-fi movies to that effect, from "Terminator" to "I, Robot".

        What would be interesting though would be robots as a shell to the humans they represent. Think "Quake" with a real robot proxy in the real world. Soldiers with hats on showing wide angle camera views of their area and a quake-like interface that would allow them to attack or assist as needed. Limited automation, but case-hardened soldiers being run by trained humans would present a powerful adversary. Heck, every army recruit would already know 80% of how to operate one on signing day if the UI was good.

I know I'd be a lot less upset with "Four robots were blown up by a roadside bomb today. They should be operational again by tomorrow." than to see more soldiers die.
        • Re: (Score:3, Interesting)

          by williamhb (758070)

          I know I'd be a lot less upset with "Four robots were blown up by a roadside bomb today. They should be operational again by tomorrow." than to see more soldiers die.

          Hmm, I worry that this could indirectly make attacks on civilians seem legitimate, and turn every war into an insurgency or terrorist scenario. Think about the case with rockets: the soldier is not the rocket, it is the person that launched the rocket. In the same manner, the enemy will not see the robots as "soldiers" but as "smart bullets" -- they will see the technicians who make, build, and commission the robots as being the soldiers they should target. And the caterers and managers and universities

      • Re: (Score:3, Insightful)

        by Amorymeltzer (1213818)

        a country with a robot army can use them against their own citizens with no chance of mass mutiny.
        You don't know the American Army! Our boys are so well trained they wouldn't think twice before firing upon the innocent masses. Hell, they wouldn't think once!
    • by KublaiKhan (522918) on Monday January 28, 2008 @03:03PM (#22210952) Homepage Journal
      I'd think that it'd be more effective to attack infrastructure--things like power stations, traffic control systems, that manner of thing--than to go after civilians directly.

      For one thing, what's the point of taking over a territory if there's nobody there to rebuild and to use as a resource?

      For another, it looks a -lot- better on the international PR scene if your robots decidedly ignore the civilians and only go after inanimate strategic targets--at least, up until the point that they get attacked. With that sort of programming, you could make the case that you're "seeking to avoid all unnecessary casualties" etc. etc.

      Mowing down a civilian populace does sow terror, of course, but keeping the civilians intact (if in the dark and without water) can be argued to be more effective.
      • Re: (Score:3, Insightful)

        by ccguy (1116865) *

        Mowing down a civilian populace does sow terror, of course, but keeping the civilians intact (if in the dark and without water) can be argued to be more effective.
When desperate enough, civilians can become soldiers. In fact, some can be willing to die (being willing to die and accepting a certain risk are totally different things). This is proven day after day in the Gaza Strip, for example.
        • Re: (Score:3, Funny)

          by KublaiKhan (522918)
          At which point, once they take up arms, they're conveniently reclassifying themselves as irregular enemy combatants. If they had only stayed calm and awaited further instruction from our occupying forces, robotic or otherwise, this sad scene could have been avoided. We're just trying to be as humane as possible; is it our fault if they aren't going to follow directions?

          Think like an evil overlord, man!
      • by Dr Tall (685787)
        I'd think that it'd be more effective to attack infrastructure.... For one thing, what's the point of taking over a territory if there's nobody there to rebuild and to use as a resource?

        But on the flip side, what's the point of taking over a territory if its entire infrastructure is ruined? How can you provide the civilians (whose lives you spared) the resources needed to repair the infrastructure that you destroyed?
      • Re: (Score:3, Interesting)

        by Sylver Dragon (445237)
        For one thing, what's the point of taking over a territory if there's nobody there to rebuild and to use as a resource?

        It depends on the goals of the war. If it is a war of conquest, you are right that you want to keep the infrastructure as intact as possible, and enough civilians alive to make it useful.

        On the other hand, if the war is over land or resources, an indigenous population may be counterproductive to the goal. Ultimately, you may not want the local people to interfere with the collection
      • by Kingrames (858416)
        Because they don't have to be robots.

        You can construct a bioengineered organism that eats away at specific wires and circuitry and set back an entire country about 300 years.

        Robots aren't going to be made because they're practical, they'll be made because they're scary.

        The day a robot army is announced, someone will be selling robot insurance.
    • Re: (Score:3, Insightful)

It sounds like this is proposing something along the lines of Asimov's Three Laws of Robotics, only instead of not being able to harm humans at all, they're able to harm humans only in an ethical manner.

Instead of sending human soldiers into Iraq who are able to go crazy and kill civilians, you could send in a robot that wouldn't have emotional responses. Instead of having VA hospitals filled with injured people, you could have dangerous assignments handled by replaceable robots.

      However,
    • I don't know. I think robotic armies would completely eliminate the horrors of war. Either you go to war with another country with a robot army (in which case you have a protracted war of production, same as any war between world powers since 1914 except with no human lives lost in the process) or you totally overpower the enemy (meaning they immediately surrender). Now, it would suck if the wrong people had robots, but war would be a remarkably tidy business.
      • Re:What's the point? (Score:5, Interesting)

        by ccguy (1116865) * on Monday January 28, 2008 @03:28PM (#22211336) Homepage
This assumes that once you have destroyed your opponent's robotic army you are done. However, most likely, after the robots will come humans, so in the end you are going to lose both.

        Besides, I still fail to see why a country which is likely to lose in the robotic war would accept these rules, when it makes a lot more sense to attack the other country's civil population - which in turn might reconsider the whole thing.

        Fighting from the sofa is one thing, having bombs exploding nearby is quite different.
I always liked the opening scene in the movie Troy where each army puts up its best fighter to decide the outcome. This would be far more civilized. Just extend this to two robot warriors, eh? Nobody gets killed and it's much cheaper.
        • by hazem (472289)

          Besides, I still fail to see why a country which is likely to lose in the robotic war would accept these rules, when it makes a lot more sense to attack the other country's civil population - which in turn might reconsider the whole thing.


          Of course, there is an Original Star Trek episode that takes this to the extreme. Two planets have been fighting a war for centuries. However, instead of sending real bombs, their computers collaborate to determine damage done in a "war game"... then a list of casualties
        • Re: (Score:3, Insightful)

          by kabocox (199019)
          Besides, I still fail to see why a country which is likely to lose in the robotic war would accept these rules, when it makes a lot more sense to attack the other country's civil population - which in turn might reconsider the whole thing.

          Fighting from the sofa is one thing, having bombs exploding nearby is quite different.


          Um, cause they may be terrified that the robots would switch from ethical mode to genocide on populations found to be training terrorists or recently conquered populations found to be ter
        • by Kingrames (858416)
          You don't want your robots attacking the enemy's robots.
It'll take forever. One side yelling "IDENTIFY YOURSELVES!" and the other yelling "You will identify first!"
          http://www.youtube.com/watch?v=QuFuTah4UXw [youtube.com]
    • Re: (Score:2, Insightful)

      by SkelVA (1055970)

      Besides, if your enemy expects your robots to defeat their army, what would be the point of fighting them in the first place? Attacking civilians seems a more logical step (I don't think it's reasonable to demand any country at war not to attack only military targets where there's none that can't be replaced easily).

      I think we just saw the thought process that bred guerrilla warfare (or terrorism, depending on your point of view). I'll make the logical leap.

      Besides, if your enemy expects your highly-tra

      • by hazem (472289)
        As per the logical process in the quotes though, you don't necessarily have to destroy the other side's army (or robots).

        Precisely! One's military power is actually the least important of the 3 necessities for war fighting. Much more important are the will to fight a war and the economic resources to wage war.
    • by kabocox (199019)
      Besides, if your enemy expects your robots to defeat their army, what would be the point of fighting them in the first place? Attacking civilians seems a more logical step (I don't think it's reasonable to demand any country at war not to attack only military targets where there's none that can't be replaced easily).

Well, given that you have the tech to make soldier death bots, let's also add in the tech to make police bots. You may not be able to properly man customs and police stations with moral upright
    • by infonography (566403) on Monday January 28, 2008 @03:30PM (#22211360) Homepage
Consider that robots cost money; the country with more economic power is likely to be the winner in such a conflict. A large part of the U.S.A.'s success in WW2 was the sheer capacity of its factories, which were well defended against attack, if by nothing but distance. European nations were under constant attack on their military infrastructure, while American factories were never bombed; even the concept of saboteurs blowing up factories in the States was a ridiculous notion to the Axis. Sure, blow up the Pittsburgh bomb factory; then you still have 20 more scattered about the US.

Robots won't be used simply because a robot can't discriminate between who to attack and who not to. Despite Orwellian fantasies, the practical upshot is that you would suffer too much friendly fire from such weapons, and an intense PR backlash. Sorry, I don't see it happening.

      Telepresence weapons are far more likely, as we have already seen in use.

      Japan's Ministry of Agriculture [animenewsnetwork.com] has been denying their work on this. America is full of fully trained pilots for these crafts (Wii, Xbox, Playstation etc).

Suggested reading: Robert A. Heinlein's Starship Troopers and Robert Asprin's The Cold Cash War.
      • A large part of the U.S.A.'s success in WW2...

        It was not just the United States; it was the Allies. Don't forget that the Japanese did not decide to attack the Soviet Union. This enabled the Soviets to build shit-loads of tanks and hold troops near their Eastern Coast just in case Nippon decided to attack them (the Russians and the Japanese have a long history of mutual animosity).

When Stalin finally decided he had less to fear from the East than what he was facing from Deutschland, he had shit-loads of

    • Re: (Score:3, Funny)

      by TheNarrator (200498)
      The generals will also get to blame collateral damage on bugs in the software.
      For instance:
      "Oh yeah the flame thrower robot went crazy and torched the entire village because some guy at Lockheed put a semicolon on the end of a for loop. Oops, we'll have to fix that in the next rev".
    • Re: (Score:2, Insightful)

      by MozeeToby (1163751)
      It is well that war is so terrible -- lest we should grow too fond of it.

      Robert E. Lee
    • by TheRaven64 (641858) on Monday January 28, 2008 @03:42PM (#22211542) Journal

      Obviously a country that can send robots instead of soldiers to fight is way more likely to become 'war happy' - so I'm not sure this robot thing is a good idea at all.
Not necessarily. One of the big reasons the USA lost in Vietnam was that it became politically unacceptable to have body bags coming home. The current administration found a solution to that: ban news crews from the areas of airports where the body bags are unloaded.

      Beyond that it's just a question of economics. It costs a certain amount to train a soldier. Since the first world war, sending untrained recruits out to fight hasn't been economically viable since they get killed too quickly (often while carrying expensive equipment). A mass-produced robot might be cheaper, assuming the support costs aren't too great. If it isn't then the only reason for using one would be political.

      • by Sleepy (4551)
        >>Obviously a country that can send robots instead of soldiers to fight is way more likely to become 'war happy' - so I'm not sure this robot thing is a good idea at all.

        >Not necessarily. One of the big reasons the USA lost in Vietnam was that it became politically unacceptable to have body bags coming home.

        Almost right.
        The US public turned against the Vietnam war when MIDDLE CLASS kids came home in body bags.

This, if for no better reason, is why the US military went "volunteer".

        Whenever a war hawk gets
    • Obviously a country that can send robots instead of soldiers to fight is way more likely to become 'war happy' - so I'm not sure this robot thing is a good idea at all.

      Don't worry there is still the nuclear option.

      Seriously, I think the same ethics behind nuclear warfare applies to robotic warfare. Both kill people from a distance, just one of them is slower at it than the other.

As for becoming 'war happy', this is where the theory of Mutual Assured Destruction (MAD) applies. A country would not want to at

      • A country would not want to attack another country with robots (or anything else) for fear of retribution from the target or its allies using similar methods.

        Unless that country is made up of a significant number of religious folk who believe suicide attacks are condoned as part of their religious beliefs.
    • I'm not even going to try answering that question.

      Obviously a country that can send robots instead of soldiers to fight is way more likely to become 'war happy' - so I'm not sure this robot thing is a good idea at all.

Hold on a minute, we're a tremendously long way away from replacing all, most, or any significant portion of soldiers on the field with robots. Even if it were the case, when the hell did robotic killing machines start coming cheap, and in infinite supply? The ability to fight a war will always depend on the amount of resources, efficiency, and a strong will (not only on the front lines).

      Loss of life alone on the battlefield doesn't slow war. Loss of a

    • Re: (Score:3, Insightful)

      by mi (197448)

      Obviously a country that can send robots instead of soldiers to fight is way more likely to become 'war happy' - so I'm not sure this robot thing is a good idea at all.

      This is not robot-specific — it is true about any superiority in weapons...

      Besides, if your enemy expects your robots to defeat their army, what would be the point of fighting them in the first place? Attacking civilians seems a more logical step

      Again, nothing robot-specific here either. Unable to take on our military directly, Al Q

  • Obligatory (Score:5, Funny)

    by Anonymous Coward on Monday January 28, 2008 @02:56PM (#22210848)
    I for one welcome our new robotic overlords.
  • As a registered misanthrope, I support anything that kills more people.

    "I am KillBot. Please insert human."

  • by KublaiKhan (522918) on Monday January 28, 2008 @02:56PM (#22210858) Homepage Journal
    If you've got battlebots, why not have one against another to resolve international conflicts, rather than destroy infrastructure and the like?

    It'd probably take a mountain of treaties and the like, and of course any organization used to judge the battlebot contest would be rife for corruption and whatnot, but it couldn't be that much worse than what happens around the World Cup and the Olympics...
    • by moderatorrater (1095745) on Monday January 28, 2008 @03:02PM (#22210932)

      If you've got battlebots, why not have one against another to resolve international conflicts, rather than destroy infrastructure and the like?
      We've already built structures to solve international conflicts, and it works extremely well when the two sides are willing to work through those structures. The US doesn't need battlebots to deal with European powers, because both sides are willing to talk it through instead. However, when Iraq refuses to cooperate, or the Arabs in Israel refuse to cooperate, the procedures break down and you're left with two countries that can't reach an agreement without raising the stakes.

      In other words, for those countries willing to abide by a mountain of treaties, the problem's already solved. It's the other countries that are the problem, and they're unlikely to resolve their differences like this anyway.
      • Re: (Score:2, Insightful)

        by sammyF70 (1154563)
How well this works, and how willing the US is to talk to .. well .. ANYBODY .. can be seen in archive footage of the UN meetings prior to the latest Iraq invasion.
If you decide to resolve wars using only bots (or even by playing out a virtual video-game-like war), my bet is that one of the sides will realize it can actually physically attack its opponent while the opposing side is arguing that the random number generator used is unfair.
        Add to that that what you want are generally the natural ressource
      • Re: (Score:2, Troll)

        by meringuoid (568297)
        However, when Iraq refuses to cooperate, or the Arabs in Israel refuse to cooperate,

        The Arabs in Israel? I thought it was the Arabs outside Israel who were the problem. Hamas causing bother in the Occupied Territories and all that. The Arabs in Israel itself, I haven't heard that they're such a big problem.

        Unless of course you have an unusually broad definition of what constitutes Israel?

      • Re: (Score:2, Insightful)

        by Splab (574204)
You must be American to have such a screwed-up view of what's going on in Iraq and Israel.

        And talk it through? Since when did Americans start to respect any treaty that didn't put them in a favorable view? Building a robot army is just the next logical step in alienating the rest of the world.
        • Re: (Score:3, Insightful)

          by JonWan (456212)
          Since when did any country start to respect any treaty that didn't put them in a favorable view?

          There I fixed it for you.

    • Take it one step further and virtualize the whole thing.

      Hell, you can use software from the 1960's. [wikipedia.org]

    • Enter the horrendous movie Robot Jox [imdb.com]. But hell, if settling international and territorial disputes means I get to pilot one of those bad boys, sign me up!
    • War is what happens when treaties stop working. You can't have a treaty for some other competition to replace war--if that was the case, FIFA would have replaced the UN by now and Brazil would be a superpower. The purpose of war is to use force in order to impose your will on the enemy, whoever those people may be. The idea is, after your robots destroy the enemy's robots, they will continue to destroy the enemy's infrastructure and population until they give up.
      • by imgod2u (812837)
        While I agree that it will not be a purely robot vs robot war, the idea is that since robots are expendable, less collateral damage will be necessary. That is, you won't have a "shoot-and-ask-questions-later" mentality because you can afford to have some robots get blown up by the other side if it meant not shooting innocent civilians.

        The robots would often have to subdue humans, of course, but this can be done through non-lethal means. What battlebots gives is the ability to selectively use non-lethal fo
    • I would suggest that it will work out a little differently. Once battlebots become superior to human soldiers in warfighting ability, most battles will be between bots, with relatively few humans involved. This is simply because the bots will be the superior fighting force, and deployed preferentially by both sides. Only once one side's bot army is defeated would the war become bots against humans, and in that case the losing side would typically surrender rather than face a massacre of its population.
    • If you've got battlebots, why not have one against another to resolve international conflicts, rather than destroy infrastructure and the like?

      It'd probably take a mountain of treaties and the like, and of course any organization used to judge the battlebot contest would be rife for corruption and whatnot, but it couldn't be that much worse than what happens around the World Cup and the Olympics...


      Um, we'd use the existing way. That would mean that we'd go to war and find out, which side has the better/best
    • why not have one against another to resolve international conflicts

      Instead of robots... why don't we just have the leaders play a game of checkers.
  • by Nursie (632944)
    "creating a class of robots that not only conform to International Law but outperform human soldiers in their ethical capacity"

    Sounds easy to me.

    Rule 1 - Don't abuse prisoners.

    There, we already have a machine that outperforms humans.
    • Sounds easy to me. Rule 1 - Don't abuse prisoners. There, we already have a machine that outperforms humans.

      Actually, I think that's the hardest part. Programming a robot to go out and blow shit up isn't such a difficult problem. Programming a robot to recognise when a human adversary is surrendering and to take him prisoner - I don't really know where you'd begin. It's the ED-209 problem: the shooting works fine, the trouble is deciding whether or not you actually ought to do so.

      I'd guess what they're

    • Same problem... (Score:4, Interesting)

      by C10H14N2 (640033) on Monday January 28, 2008 @03:20PM (#22211212)

      I've always wondered how HAL or Joshua would interpret:

      Rule 1: Kill enemy combatants.
      Rule 2: Do not kill or abuse prisoners.

      "Take no prisoners, kill everything that moves" would be the most efficient means of satisfying both, especially after friendly-fire ensues.

  • Political Ethics... (Score:5, Interesting)

    by RyanFenton (230700) on Monday January 28, 2008 @03:01PM (#22210920)
Yes. Superior robotic ethics. A regular Gandhi-bot, saving only those who are threatened, willing to die rather than kill in doubt.

That's all well and good... but what of the men who send these robots into battle? What happens to their sense of ethics? Do they begin to believe that sending troops in to pacify a landscape over political differences is a morally superior action? Do they begin to believe that death-by-algorithm is a morally superior way of dealing with irrational people?

There's an endless array of rationalizations man can make for war, and for the subjugation of those who disagree with him. Taking the cost of friendly human lives out of the equation of war and replacing it with an autoturret enforcing your wishes doesn't make for a 'morally superior' political game. For many, it would make for an endgame in terms of justifying a military police as the default form of political governance.

    Ryan Fenton
    • Re: (Score:2, Interesting)

      by drijen (919269)
      If I recall, one of the Gundam Wing Anime Series, dealt with the questions of robots in war. It pointed out the most critical question of all:

      War is about sacrifice, cost, and essentially fighting for what you believe in, hold dear, and WILL DIE to preserve. If you remove the *human* cost from war, then where is the cost? What will it mean if no-one dies? Will anyone remember what was fought for? Will they even recognize why it was so important in the first place?

      Also, if we have mass armies of r
      • That sounds exactly as insightful as I thought it would once you said "Gundam".

        Also, if we have mass armies of robots, won't the victor simply be the one with the most natural resources (metal, power, etc) to waste?

        War is already based on production and logistics, and has been since the Industrial Revolution.

      • by meringuoid (568297) on Monday January 28, 2008 @03:32PM (#22211406)
        War is about sacrifice, cost, and essentially fighting for what you believe in, hold dear, and WILL DIE to preserve. If you remove the *human* cost from war, then where is the cost? What will it mean if no-one dies? Will anyone remember what was fought for? Will they even recognize why it was so important in the first place?

        Bullshit. War is about taking orders, fighting for what someone else believes in, and then getting blown up. Dulce et decorum est pro patria mori and all that shite. That poetic nonsense you spout there is just part of the cultural lie that sells war as romantic and idealistic to every generation of young fools who sign up and go out there to put their lives on the line for the sake of the millionaires. You got it from anime, too... how sad is that? You're buying the same line of bullshit that inspired the damn kamikaze! Clue: Bushido is a lie. Chivalry is a lie. War is about nothing but power.

        Also, if we have mass armies of robots, won't the victor simply be the one with the most natural resources (metal, power, etc) to waste? (Better weapons technology aside)

        Yes. How does that differ from the present situation?

      • How long has it been since there was a war in which the people who decided to go to war paid any human price? A hundred years? War has been about economics for a long time - is it cheaper to obey a treaty or take what you want by force? Perhaps an international body that executes the leaders of whichever country wins a war would be a good idea...
  • by Sciros (986030) on Monday January 28, 2008 @03:03PM (#22210960) Journal
    My Apple comp00tar will just upload a virus wirelessly to them and they will all shut down! I've seen it done!
  • by jockeys (753885) on Monday January 28, 2008 @03:09PM (#22211044) Journal
    Do these killbots have a preset kill limit? Can they be defeated by sending wave after wave of your own men at them?
  • It seems sort of oxymoronic.

    I mean if you program the robots with Asimov's Laws of Robotics [wikipedia.org], then what's the problem.

    Robot on Robot violence?

    Conscientiously objecting robots?

    Or - the horror - formulation of a "Zeroth Law [wikipedia.org]"?

  • by letchhausen (95030) <letchhausenNO@SPAMyahoo.com> on Monday January 28, 2008 @03:15PM (#22211130) Homepage
    When are they going to stop using robots for evil and start using them for good? I want a Natalie Portman "pleasure model" robot and I want it now! Science has lost its way.....
  • by Overzeetop (214511) on Monday January 28, 2008 @03:18PM (#22211186) Journal
    Wars are won by those who do not follow the "rules." There are no rules in war. If there were, then there would be a third party far more powerful than either side who could enforce said rules. If there were, then that power could enforce a solution to the conflict that started the war, and there would be no need for war. Said power would also not need to answer to anyone, and would be exempt from said rules (having no one capable of enforcing them).

    • by Bagels (676159)
      So, basically we need an actively intervening God (or a god-like entity), is what you're saying. That could help us in lots of ways, but I've still always been a fan of free will.
  • Inching ever closer to the inevitable Robot Jox [imdb.com]

  • I don't see how a robot will ever be able to solve an ethical dilemma on the battlefield. It's such a subjective, context-sensitive issue that we're centuries away from being able to handle that digitally. Another thing, I don't think the planet will ever agree to some "let the robots fight it out" type of warfare either. When a country's robot force is beaten they're not going to just throw up their arms and say "good match sir, here take our land/resource you won it fair and square". They'll still resist
  • Kill all humans !
  • by fuzzyfuzzyfungus (1223518) on Monday January 28, 2008 @03:23PM (#22211268) Journal
    The question of making lethal robots act ethically is far easier in some ways than doing so with humans and far harder in others. On the plus side, robots will not be subject to anger, fear, stress, desire for revenge, etc. So they should be effectively immune to the tendency toward taking out the stress of a difficult or unwinnable conflict on the local population. On the minus side, robots have no scruples, probably won't include whistleblowing functions, and will obey any order that can be expressed machine-readably.

    The real trick, I suspect, will not be in the design of the robots; but in the design of the information gathering, storage, analysis, and release process that will enforce compliance with ethical rules by the robot's operators. As the robots will need a strong authentication system, in order to prevent their being hijacked or otherwise misused, the technical basis for a strong system of logging and accountability will come practically for free. Fair amounts of direct sensor data from robots in the field will probably be available as well. From the perspective of quantity and quality of information, a robot army will be the most accountable one in history. No verbal orders that nobody seems to remember, the ability to look through the sensors of the combatants in the field without reliance on human memory, and so on. Unfortunately, this vast collection of data will be much, much easier to control than has historically been the case. The robots aren't going to leak to the press, confess to their shrink, send photos home, or anything else.

    It will all come down to governance. We will need a way for the data to be audited rigorously by people who will actually have the power and the motivation to act on what they find without revealing so much so soon that we destroy the robots' strategic effectiveness. We can't just dump the whole lot on youtube; but we all know what sorts of things happen behind the blank wall of "national security" even when there are humans who might talk. Robots will not, ever, talk; but they will provide the best data in history if we can handle it correctly.
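    The logging-and-accountability scheme the parent comment describes could, in principle, look something like a tamper-evident audit log. The sketch below is purely hypothetical (the key handling, record format, and `append_entry`/`verify_log` helpers are all invented for illustration, not from TFA): each entry is HMAC-signed with a key held by the oversight body and chained to the previous entry's digest, so editing or deleting an earlier order or sensor record breaks verification.

    ```python
    import hashlib
    import hmac
    import json

    # Hypothetical: shared key held by the auditing body, not the operators.
    AUDITOR_KEY = b"example-shared-auditor-key"

    def append_entry(log, record):
        """Append a record, chaining it to the digest of the previous entry."""
        prev_digest = log[-1]["digest"] if log else "genesis"
        payload = json.dumps({"record": record, "prev": prev_digest},
                             sort_keys=True)
        digest = hmac.new(AUDITOR_KEY, payload.encode(),
                          hashlib.sha256).hexdigest()
        log.append({"record": record, "prev": prev_digest, "digest": digest})
        return log

    def verify_log(log):
        """Recompute every chained HMAC; any edit or deletion breaks the chain."""
        prev_digest = "genesis"
        for entry in log:
            payload = json.dumps({"record": entry["record"],
                                  "prev": prev_digest}, sort_keys=True)
            expected = hmac.new(AUDITOR_KEY, payload.encode(),
                                hashlib.sha256).hexdigest()
            if entry["prev"] != prev_digest or entry["digest"] != expected:
                return False
            prev_digest = entry["digest"]
        return True

    log = []
    append_entry(log, {"order": "patrol sector 7", "issuer": "operator-12"})
    append_entry(log, {"sensor": "contact, unarmed", "action": "hold fire"})
    assert verify_log(log)

    # Retroactively rewriting an order is detected on verification.
    log[0]["record"]["order"] = "engage sector 7"
    assert not verify_log(log)
    ```

    Of course, this only makes tampering *detectable*; the governance problem of who gets to run `verify_log` and act on a failure is exactly the part the parent says is hard.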
  • I see a lot of implementation problems before even getting involved with the ethical issues. I mean, there's the usual friend-or-foe IDing issues. Then there's the problem of getting the software to recognise a weapon. If you program it to recognise the shape of an AK, it'll pick up replicas or toys or, heck, lots of stuff that looks vaguely gun-shaped. And the enemy will simply resort to distorting the shape of the weapon, which can't be hard to do. Given that it will be a while before AI technology will i
  • WOPR [wikipedia.org] wants to know if you'd like to play a nice game of Chess.

    The Matrix [wikipedia.org] is reported as coming on-line shortly to service your needs.

    SkyNet [wikipedia.org] remains unavailable for comment.
  • I hope we run them on Linux or some other software much more difficult to compromise. Then again human error is ever present in code so complex.

    Few movie plots come to mind on commandeered war robots, but far too many movie plots focus on robotic AIs wising up and going after their human overlords. Blade Runner, T1-T3, Runaway, Red Planet, A.I. (to a lesser degree), Lost In Space, etc... Help me out here if you want as I am sure I missed many others.
  • by Fuzzums (250400)
    Maybe those robots know not to attack when non-existing WMD are an excuse for war... :s
  • So, why not settle all disputes using video games? That's what it'll come down to, unless the 'bots literally do all the fighting without any human interaction. In a way, Ender's Game [answers.com] gets at this point.
  • The primary goal remains to enforce the International Laws of War in the battlefield in a manner that is believed achievable, by creating a class of robots that not only conform to International Law but outperform human soldiers in their ethical capacity.

    These people need to read. A basic background in Asimov would tell you exactly what's going to happen here.

    • General: go kill people.
    • Robot: due to my ethical outperformance, peace is the best option. I cannot comply.
    • General: I order you to comply wi
  • I imagine someone wrote a treatise about the ethical implications of tanks, despite the fact that they saved lives (at least, the lives of the men in the tanks). Or machine guns when they first came out, or carbines, or muskets, or arrows.

    If the enemy shoots down a robotic plane, nobody gets killed.

    What of unintended deaths? Well, there is "friendly fire" and dead civilians with nonrobotic weapons, too.

    How about the ethical implications of war itself? What about the moral implications? After all, "ethics"
  • Leaving people we care about (whether friends, family, neighbors, or fellow citizens) out of the fighting, replaced by robots, lets us ignore the need for war to be more ethical in how it's fought and what it's fought for. It dehumanizes the enemy even more than do the reasons for war and the following propaganda. People will claim the rewards of "bravery" for pushing a button, without having to pull a trigger in the face of return fire.

    Robots will make war more likely. Which is unethical.

    "The Bravery of Be [lyricsdepot.com]
  • I'm not so concerned with bots in warfare as much as I am about placing really frickin unfair spawn points in the battlefield. Isn't it unethical to be able to telefrag enemy soldiers with a cheap bot before they even had a chance to see the enemy? Without solving both ethical issues surrounding telefragging and aimbots, I'm afraid warfare will remain an unpopular and unengaging endeavor.
  • ... the primary goal remains to enforce the International Laws of War in the battlefield in a manner that is believed achievable, by creating a class of robots that not only conform to International Law...

    Yes, and when you send in the robots to fight insurgents who do not honor the law, the insurgents will win every time. If it comes to all-out-war the first thing that will be tossed is the international rule book.

    When two parties engage in battle, the party that does not abide by all laws will inevitably win.

  • A very advanced AI robot would fight a war by sending statistical odds to the opposing army of robots, upon which time they would realize which side should surrender.
  • It won't happen. You cannot factor death and destruction out of war. No matter how 'ethical' your robots are, no matter how many international treaties you establish, no matter which party you elect, so long as human beings are (a) imperfect and (b) stopped by death, you will need to use deadly force and you will have wars.

    And God help us if human beings are no longer stopped by death.

    That's the part that really worries me about robots in war: by eliminating the need for human beings, you make it almost cer
