Robotics / The Military / Technology

Weapons Systems That Kill According To Algorithms Are Coming. What To Do?

Lasrick writes "Mark Gubrud has another great piece exploring the slippery slope we seem to be traveling down when it comes to autonomous weapons systems: 'Autonomous weapons are robotic systems that, once activated, can select and engage targets without further intervention by a human operator. Advances in computer technology, artificial intelligence, and robotics may lead to a vast expansion in the development and use of such weapons in the near future. Public opinion runs strongly against killer robots. But many of the same claims that propelled the Cold War are being recycled to justify the pursuit of a nascent robotic arms race. Autonomous weapons could be militarily potent and therefore pose a great threat.'"
This discussion has been archived. No new comments can be posted.

  • Skynet (Score:5, Insightful)

    by ackthpt ( 218170 ) on Wednesday January 08, 2014 @06:25PM (#45902517) Homepage Journal

    Yet another predictor.

    Bring on the Terminators.

    • Re:Skynet (Score:5, Funny)

      by Anonymous Coward on Wednesday January 08, 2014 @06:51PM (#45902783)

      The easiest way to avoid being vaporized is to wear a shirt that reads "/dev/null". No intelligent system will send anything your way.

      captcha: toasted (damn, /dev/null has never failed me before)

    • Re:Skynet (Score:5, Interesting)

      by khasim ( 1285 ) <brandioch.conner@gmail.com> on Wednesday January 08, 2014 @07:31PM (#45903139)

      That's pretty much it.

      These are only a problem if they are built and used.

      We cannot stop anyone from building them (in secret). But we can get updates added to the Geneva Conventions. And we can choose how we deal with anyone who uses these.

      Although at the moment it looks like we (USA! USA!) will be the ones using them. So contact your Congress Critters and make sure they know that you'll support them if they vote to ban our usage of these.

      • But we can get updates added to the Geneva Conventions. And we can choose how we deal with anyone who uses these.

        I think countries would need to sign the revised Convention before they would become liable for violation.

        Although at the moment it looks like we (USA! USA!) will be the ones using them.

        I doubt that other major powers are ignoring such technology. I think other powers have a more closed procurement process and greater control over their design/development bureaus. We are less likely to hear about their designs until they are fielded or made available for export.

      • Re:Skynet (Score:4, Insightful)

        by farble1670 ( 803356 ) on Wednesday January 08, 2014 @10:03PM (#45903997)

        These are only a problem if they are built and used.

        how do you know that smart weapons won't result in fewer deaths, and fewer deaths of non-combatants?

        humans have a pretty poor track record and it wouldn't take much to improve upon. if you think the man in the trenches is making good judgments about when and who to kill, you should talk to a Vietnam vet.

        • humans have a pretty poor track record and it wouldn't take much to improve upon.

          So how do you program them without said humans' intervention? Humans are fallible, so all things produced by them are also fallible. While "I, Robot" and "RoboCop" are fiction, they address very real concerns. It is not a matter of if but when these systems will make incorrect/inaccurate choices and kill innocents. And corruptible humans will sell these things under the table to less than scrupulous individuals for protection/collections purposes. Tanks and missiles go missing from the military all the time.

        • by Lumpy ( 12016 )

          "how do you know that smart weapons won't result in fewer deaths, and fewer deaths of non-combatants?"
          We don't need smart weapons for that. All we need is someone to deem that everyone who died in the attack was a combatant.

          It seems to work well for the government so far.

      • Although at the moment it looks like we (USA! USA!) will be the ones using them. So contact your Congress Critters and make sure they know that you'll support them if they vote to ban our usage of these.

        And while you're at it, land mines kill something like 70 civilians every day, including a lot of children. Remind your congressman that land mines are barbaric, that the US should be opposed to children's legs getting blown off. Urge them to sign the Ottawa treaty. Other non-signatories are the usual countries we think we're morally superior to: Russia, China, Myanmar, United Arab Emirates, Cuba, Egypt, and Iran. (Israel too, but they might think that Israel is also a good guy and take that as a sign t

    • by Grismar ( 840501 )

      Skynet? Really? That's the one thing /.-readers can think of that could go wrong with this technology?

      So, as long as we don't develop self-aware AI that somehow decides to rise against its creator, we're fine with having weaponry that can acquire and engage human targets autonomously? We're fine with armies of these devices at the direction of a few mad men, with just a single conscience deciding the fate of thousands instead of having a human at every trigger?

      We should oppose this type of weapon for the sa

      • Re:Skynet (Score:5, Insightful)

        by YttriumOxide ( 837412 ) <yttriumox@COUGARgmail.com minus cat> on Thursday January 09, 2014 @07:38AM (#45905633) Homepage Journal

        because all evidence shows that the weak point always lies with the soldier that has to pull the trigger and decide to kill a fellow human being.

        All evidence that I've seen shows that a large number - possibly even the majority - of soldiers have been brainwashed into following orders unconditionally and will commit the most horrendous crimes against humanity when ordered to do so. And - even when not ordered - that same brainwashing includes training in not thinking of 'the enemy' as human, because that causes you to delay in the critical moment. So they dehumanise the enemy to the point that further atrocities can be committed even when not under orders to do so.

        Note that I don't blame the soldiers themselves in a lot of these situations - they are often good people who, given time to think and reason it through, would not behave that way, but their training has so messed with them that some actions they'll take don't reflect the person they are.

        Also note that I did say "a large number of soldiers" and not all. There are plenty of cases you can find of soldiers going against orders they believe to be morally reprehensible, but the fact that OTHER soldiers then do it is a testament for the argument and not against it.

  • Select targets? Really? Wait until the system realizes ALL humans are targets.
  • by jjeffries ( 17675 ) on Wednesday January 08, 2014 @06:28PM (#45902539)

    They're not "coming" as if from space. We just need to choose for them not to exist and they won't. These things will (or won't) be made by individuals who can make moral decisions.

    Don't be a terrible individual; don't make or participate in the making of terrible things.

    • Re: (Score:2, Insightful)

      by geekoid ( 135745 )

      Except looking at history, they will probably lead to fewer soldier deaths, fewer bystander deaths, and more accurate targeting.

      I don't know why people think they are bad.

      • by Opportunist ( 166417 ) on Wednesday January 08, 2014 @06:52PM (#45902797)

        We have more accurate weapons than ever. Compare the average cruise missile to the average arrow and tell me:

        1. Which one is more accurate?
        2. Which one causes more deaths?

        You will notice that they are NOT mutually exclusive. Quite the opposite.

      • by Anonymous Coward on Wednesday January 08, 2014 @07:14PM (#45902987)

        Except looking at history, they will probably lead to fewer soldier deaths, fewer bystander deaths, and more accurate targeting.

        I don't know why people think they are bad.

        Extra-judicial killings of US citizens.

        • by zaft ( 597194 ) on Wednesday January 08, 2014 @07:27PM (#45903117) Homepage

          Except looking at history, they will probably lead to fewer soldier deaths, fewer bystander deaths, and more accurate targeting.

          I don't know why people think they are bad.

          Extra-judicial killings of US citizens.

          Let's call it what it is: murder of innocent US citizens.

          (don't think they are innocent? They are innocent until proven guilty!)

          • by Anonymous Coward on Wednesday January 08, 2014 @08:17PM (#45903445)

            How about: Murder of innocent citizens.

            95% of them aren't Americans (me included). Why would the distinction be important?

            • by dryeo ( 100693 ) on Wednesday January 08, 2014 @09:28PM (#45903835)

              How about: Murder of innocent citizens.

              95% of them aren't Americans (me included). Why would the distinction be important?

              Americans don't seem to think that non-Americans are people, and therefore not deserving of rights.

      • by Charliemopps ( 1157495 ) on Wednesday January 08, 2014 @07:25PM (#45903097)

        Or... and I know this sounds crazy... we could just not kill people anymore. I know we like to be the super heroes of the world, running around fighting everyone's wars and everything... hell, I used to think that way too. But at a certain point you just have to stand back and say "you know what? Fuck it. I'm done blowing 1/3rd of our budget dropping bombs on people I don't know for a cause I barely understand just to have any and all progress erased in a few years because the real problems in other parts of the world have little to do with their totalitarian leaderships."

        • Or... and I know this sounds crazy... we could just not kill people anymore. I know we like to be the super heroes of the world, running around fighting everyone's wars and everything... hell, I used to think that way too. But at a certain point you just have to stand back and say "you know what? Fuck it. I'm done blowing 1/3rd of our budget dropping bombs on people I don't know for a cause I barely understand just to have any and all progress erased in a few years because the real problems in other parts of the world have little to do with their totalitarian leaderships."

          You think that's why we go to war? We don't go because we need to save and help people. We go because it is in the nation's perceived interest. We go to maintain and extend US hegemony. We go to make the climate friendly to US businesses. Sure, we could stop killing people. But without the threat of force, how do we get people to do what we want them to do?

          The reasons given publicly for war are almost never the actual reasons. If it seems ridiculous to you that we go to war and don't achieve the obje

      • by sjames ( 1099 ) on Wednesday January 08, 2014 @07:57PM (#45903343) Homepage Journal

        Far too easy for all humans involved to disavow any responsibility when the thing shoots up a busload of children. No ability to decide the CO has gone nutsy cuckoo and report up the chain of command. No ability to decide the CO's order is just plain illegal and refuse.

        Nobody to report back home about how ugly and unnecessary it all is. Killing people, especially lots of people, should NOT be cost effective.

        Other than that, it's just great.

    • by timeOday ( 582209 ) on Wednesday January 08, 2014 @06:51PM (#45902775)

      We just need to choose for them not to exist and they won't.

      I disagree. At some point a civilian smartphone, or self-driving car, will contain practically all the technology to be weaponized. (E.g. "avoid people" becomes "pursue people"!) Once you have the sensors, pattern recognition, and mobility, there's no way to control all the possible applications.
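
      To make that concrete, here's a toy sketch (purely hypothetical code, assuming a simple potential-field planner of the kind hobby robots use): the only difference between "avoid people" and "pursue people" is the sign of one weight.

      import numpy as np

      # Toy potential-field navigation: the robot steps downhill on a cost surface
      # built from detected people. Purely illustrative; names and numbers are made up.
      def step_toward_lower_cost(robot_xy, people_xy, person_weight, step=0.1):
          """person_weight > 0: people add cost, the robot steers away (avoid).
          person_weight < 0: people lower cost, the robot steers toward them (pursue)."""
          robot_xy = np.asarray(robot_xy, dtype=float)
          gradient = np.zeros(2)
          for p in np.asarray(people_xy, dtype=float):
              diff = robot_xy - p
              dist_sq = max(float(diff @ diff), 1e-6)            # guard against division by zero
              gradient += -person_weight * diff / dist_sq**1.5   # gradient of person_weight / ||diff||
          return robot_xy - step * gradient                      # one gradient-descent step

      people = [(2.0, 1.0), (4.0, 3.0)]        # positions from some person detector
      start = (0.0, 0.0)
      print(step_toward_lower_cost(start, people, person_weight=+1.0))   # steps away from people
      print(step_toward_lower_cost(start, people, person_weight=-1.0))   # steps toward people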

    • One guy'll be making a computer vision system to recognize faces "to make it easier to log in to your cellphone".

      Another guy'll be making a robot painting system that aims its spraypaint at cars "to make a more profitable assembly line".

      Yet another'll make a self-driving car "so you won't have to worry about drunk drivers anymore".

      Once those pieces are all there (hint: today), it doesn't take much for the last guy to glue the 3 together; hand it a gun instead of spraypaint; and load it with a database of faces you don't like.

      • by TiggertheMad ( 556308 ) on Wednesday January 08, 2014 @07:15PM (#45903007) Journal
        I think that there is a difference, though. It is one thing to create unrelated technology that is dangerous when linked together. It is another thing to just create technology that doesn't have an application outside of killing people. By your argument, every invention all the way back to using flint and tinder to create fire is nothing but a weapon, and why should we even have bothered?

        My prediction is that this technology will float about the edge of popular awareness, until an unbalanced individual sets up a KILLMAX(tm) brand 'smartgun perimeter defense turret' in an elementary school and murders a bunch of children and escapes because he didn't have to be on the scene. Then national outrage will lead to mass bans on such weapons.

        Should we be making such weapons? I don't know; I suppose the argument can be made that they fill the same role as land mines, but have the upside that there is less of a problem getting rid of them when the fighting stops. I find the glee we as a species have in building better ways of killing each other to be really depressing on the whole.
        • I find the glee we as a species have in building better ways of killing each other to be really depressing on the whole.

          The very existence of flamethrowers proves that sometime, somewhere, someone said to themselves, "You know, I want to set those people over there on fire, but I'm just not close enough to get the job done." ~George Carlin

    • by Hatta ( 162192 ) on Wednesday January 08, 2014 @07:40PM (#45903209) Journal

      No. These things *will* be made, by people who make immoral decisions. The people who get to make those sorts of decisions are already mostly terrible people.

  • by bobbied ( 2522392 ) on Wednesday January 08, 2014 @06:29PM (#45902541)

    Problem solved!

  • Sci-Fi to watch... (Score:2, Insightful)

    by Anonymous Coward

    Terminator

    ST TNG: Arsenal of Freedom

    Etc...

  • by Mikkeles ( 698461 ) on Wednesday January 08, 2014 @06:30PM (#45902561)

    Hack the system with an algorithm that kills the deployers, of course!

  • While the first thing that comes to mind is a machine that instantly targets and destroys, I wonder if this could be something more methodical. Since "friendly" human lives aren't on the line for the decision maker, these could be used to slow down the process of determining whether or not to use lethal force.

    For example, much larger sets of data could be used than just "Looks like a bad guy with a gun and I think he might want to shoot me." With facial recognition, individual enemy combatants could be tr

    • I can think of some situations where you don't even have to use facial recognition per se. If you're in a vehicle and the system detects an RPG fired at you, it's pretty easy to distinguish "RPG" from background noise. It should also be relatively easy to detect the 'source' and immediately return fire.

      If firing an RPG is a guaranteed way to get hit with several belts of radar/IR guided 50 caliber machine gun fire--you might have a really hard time finding people willing to pull the trigger. Similarly

  • Easy (Score:5, Funny)

    by tool462 ( 677306 ) on Wednesday January 08, 2014 @06:35PM (#45902601)

    Wear a t-shirt with a message written in a carefully formatted font so it causes a buffer overflow, giving your t-shirt root privileges.
    Mine would have the DeCSS code on it, so the drone starts shooting pirated DVDs at everybody. The RIAA will make short work of the problem at that point.

  • I thought that the Aegis weapon system has had something called the Auto-Special mode for quite some time? Basically, you sit back and watch the targets getting destroyed.
    • 2 points. First, while there is no human “in the loop”, there is a human “on the loop”. They have discretion here. Second, it is basically a defensive system to shoot down incoming cruise missiles – its range of targets is pretty limited. This would be very different from an offensive autonomous system that hunts and kills on its own.

      • will keep control at the top where it belongs (till the systems at the top take control and there is no one in the missile silos to stop the launch)

      • The problem with remotely piloted vehicles is that the up- and down-links are the weak point. If you take out your opponent's comm links with jamming, or by shooting down their relays, you take out their entire drone capability, at least until you can restore the comm links.

        If you are going to depend on drones, the only solution is that they have to be autonomous. The only other solution is that they have to be manned, and introducing pilots increases cost, lowers mission duration, and increases risk of loss of life and capt

  • by aardvarkjoe ( 156801 ) on Wednesday January 08, 2014 @06:36PM (#45902617)

    What To Do?

    "Endeavor to be one of the people writing the algorithms" would probably be a good idea.

  • Trying to make people forget what "Blue Screen of Death" originally meant
  • by medv4380 ( 1604309 ) on Wednesday January 08, 2014 @06:39PM (#45902645)
    I want Killer Robots that Kill only Killer Robots. Having an army of Killer Robots that kill people is just asking for someone to run the "kill all humans" command while logged in as root in the All Countries Directory by accident, or on purpose by an anarchist wanting a lot of death.
  • I thought humans need to be in the loop at all times, so AIs can select, but humans need to pull the trigger. "Selected target image displayed above. [cancel] [OK]"
  • Devices which can engage targets without human intervention are fairly common: landmines.
    We do know that they kill hundreds of innocents every year.

    Put some cameras and algorithms on them, and you may kill/maim fewer innocents, but you won't get to zero. You can't get to zero even when you put a human brain behind the trigger, so how do you make a machine decide which teenager is a bad guy?

    Actually, let me offer a simple solution to that last question:
    Connect the machine to a massive database which contains data about every

  • by mi ( 197448 ) <slashdot-2017q4@virtual-estates.net> on Wednesday January 08, 2014 @06:44PM (#45902687) Homepage Journal

    Weapons Systems That Kill According To Algorithms Are Coming

    I don't get this... Aren't human soldiers killing based on algorithms too? Or is it that their implementations are coded in vague human languages, and that makes them feel somehow warm and fuzzy? Well, the Pentagon's Ada may be considered similar, but only in jest...

    I'd say, whether such systems are bad or good is still up to the algorithms, not the hardware (nor pinkware), that executes them.

    • On the bright side, algorithm-driven machines are unlikely to pull their guns just because they have an attitude problem like some cops do.

      • On the bright side, algorithm-driven machines are unlikely to pull their guns just because they have an attitude problem like some cops do.

        It also wouldn't have to worry about any of those pesky emotions like compassion or remorse slowing down its murder spree.

    • by sociocapitalist ( 2471722 ) on Thursday January 09, 2014 @06:07AM (#45905335)

      Weapons Systems That Kill According To Algorithms Are Coming

      I don't get this... Aren't human soldiers killing based on algorithms too? Or is it that their implementations are coded in vague human languages, and that makes them feel somehow warm and fuzzy? Well, the Pentagon's Ada may be considered similar, but only in jest...

      I'd say, whether such systems are bad or good is still up to the algorithms, not the hardware (nor pinkware), that executes them.

      For me the big difference is that if you activate the military to suppress your own populace when it demonstrates, the soldiers can at least choose not to follow orders.

      The idea of the US (for example), with its ever-increasing suppression of constitutional rights, having robots that kill whomever they're activated against is terrifying.

  • by jamiefaye ( 44093 ) <jamie@ f e n t o n i a . c om> on Wednesday January 08, 2014 @06:46PM (#45902715) Homepage

    ... both land and naval. They have become more sophisticated in that they can be triggered by target characteristics, and in the naval case, maneuver.

    • by LWATCDR ( 28044 )

      Yep, you also have anti-ship missiles that you can fire along a vector and that will pick their own target. Anti-radar missiles that will hang from a chute waiting for the radar to come on.... And so on.

    • by dryeo ( 100693 ) on Wednesday January 08, 2014 @09:31PM (#45903841)

      Most of the world has outlawed mines, with the exception of America. I wonder, if the Geneva Conventions were being drawn up now, whether the Americans would boycott them.

  • what side do you want?

    1. United States
    2. Russia
    3. United Kingdom
    4. France
    5. China
    6. India
    7. Pakistan
    8. North Korea
    9. Israel

  • by LocalH ( 28506 ) on Wednesday January 08, 2014 @06:50PM (#45902757) Homepage

    Shit just got real.

  • by steveha ( 103154 ) on Wednesday January 08, 2014 @06:51PM (#45902773) Homepage

    David's Sling, a novel by Marc Stiegler, is about the first "information age" weapons systems. These are autonomous robotic weapons that use algorithms to decide which targets to hit, and the algorithms are designed to take out enemy communications and decision-making. The weapons would try to identify important comm relays and take them out, and would analyze comm traffic to decide who is giving orders and take them out.

    The book was written before the fall of the Soviet Union, and the big finale of the book involves a massive Soviet invasion of Europe and the automated weapons save the day.

    Unlike some portrayals of technology, this book covers project planning, testing, and plausible software development. It contains tense scenes of QA testing, where the team makes sure their hardware designs are adequate and that their software mostly works. (They can remote-update the software but of course not the hardware.)

    Mostly they left the weapons autonomous, but there was a memorable scene where a robot was having trouble deciding whether to kill someone, and the humans overrode the robot and had it leave the guy alone. (The guy was injured, lying there but moving a little bit, and the robot was not sure whether the guy was already killed or should be killed again. Hmm, now that I think about it, this seems rather implausible, but it was a nifty scene in the book.)

    http://www.goodreads.com/book/show/3064877-david-s-sling [goodreads.com]

    P.S. I bought the book when it first came out, and there was an ad for a forthcoming hypertext edition that never came out. I think it was never actually made, but I wish it had been.

  • by istartedi ( 132515 ) on Wednesday January 08, 2014 @06:52PM (#45902789) Journal

    Hack in. Make military-industrialists fit the target profile. Problem solved.

  • The BETA testers of this system....

  • by bziman ( 223162 ) on Wednesday January 08, 2014 @06:54PM (#45902817) Homepage Journal
    It's good in principle, but I oppose it because implementations are never foolproof, and when the result is death, there's no way to change your mind later.
  • From what I understand, Aegis already does this - and it did it a long time ago. Where has subby been, in the basement?

  • by camperdave ( 969942 ) on Wednesday January 08, 2014 @06:57PM (#45902849) Journal
    Haven't we had them for a long time already? I remember reading a couple of years ago about some DIY hobby guy putting together an Aliens-style sentry gun out of an old camera and a paintball gun. And if a DIY hacker can do it, the military has it. Also, don't Predator drones already have autonomous kill capability?
  • by maliqua ( 1316471 ) on Wednesday January 08, 2014 @07:02PM (#45902889)

    and they can just fight among themselves. It could be televised live for everyone, and war would suddenly become wholesome entertainment

  • False Positives (Score:4, Interesting)

    by Nyder ( 754090 ) on Wednesday January 08, 2014 @07:10PM (#45902953) Journal

    I'm sure the DMCA has shown you what automated systems can do.

  • by TheloniousToady ( 3343045 ) on Wednesday January 08, 2014 @07:11PM (#45902963)

    We developers have been killing software bugs for decades. Why can't software bugs start killing us?

  • It's not so much killer Terminators in the classic sense. A trifecta of air/sea/land operations is what's being done. Autonomous drones across the three game surfaces, eliminating the massive expense of physically present wetware (even remotely operated wetware), are the long-term benefit. Being able to classify, analyze, and respond accordingly allows continuous intelligence and strike operations to be maintained 24/7 in any theater we need to be in. You want to be able to move your troops in the area, send a signal to st
  • We ONLY need these weapons to defend ourselves from foreign attack and invasion .....

    Or when protecting our 'strategic interests' becomes very important. For instance, in order to protect Israel, a nation we cannot live without.

    Oh, and also in case anyone pisses us off and does anything we do not like.
  • by nurb432 ( 527695 ) on Wednesday January 08, 2014 @08:36PM (#45903561) Homepage Journal

    Die, mostly.

  • by Anonymous Coward on Wednesday January 08, 2014 @08:42PM (#45903595)

    To all the engineers working on this: you're responsible. You are doing this. You are a terrible person.

    • To all the engineers working on this: you're responsible. You are doing this. You are a terrible person.

      Well, you made me feel bad. To make amends, just send me your picture and I'll make sure it's on the do-not-kill roster.

  • by GodfatherofSoul ( 174979 ) on Wednesday January 08, 2014 @10:19PM (#45904069)

    I can't remember the documentary; maybe Fog of War, starring Satan's favorite child Robert McNamara. But they figured out that in combat 25% of soldiers weren't actually shooting at other people. They were intentionally shooting up in the air to avoid killing. So part of the Army's training post-WWII was to get soldiers to fire without thinking. The outcome was that soldiers were more effective in battle. The consequence was that soldiers weren't evaluating the act of taking lives until AFTER they'd done it, which contributed to the increased mental issues Vietnam-era soldiers endure.

    • The 25% figure was proposed by S.L.A. Marshall in "Men Against Fire". He claimed it was from extensive interviews with US soldiers. That, at least, was a lie: he had no time to conduct all those interviews, no records have turned up, and no veterans remember such interviews. This doesn't mean the conclusion is wrong, but the claimed support doesn't exist. David Grossman in "On Killing" claimed that it was reasonably accurate (providing some evidence), and that the number had been boosted to near 100% by

  • by Any Web Loco ( 555458 ) on Wednesday January 08, 2014 @11:26PM (#45904309) Homepage
    Most of the comments on this article seem to be against this, which is interesting, because every time an article about gun control gets posted, the highest-rated comments are overwhelmingly from gun advocates, often with the argument that "guns don't kill people, people kill people". What's the difference here? Surely robots don't kill people, people kill people?
    • by MtHuurne ( 602934 ) on Thursday January 09, 2014 @01:35AM (#45904675) Homepage

      To kill someone with a knife, you have to stand very close to them and thrust the weapon into their body. To kill them with a gun, they have to be in your line of sight and you pull the trigger. To kill them with a drone, you need them on a live camera and you push a button. To kill them with an autonomous robot, you need a description of what they look like and what area they are in, and you program that into the robot. Every step becomes more indirect, more emotionally detached.

      "Guns don't kill people" is just a slogan. A gun is a tool. For killing people. The real questions include "Do guns deter crime or make it more violent?" and "Does home gun ownership help prevent a government from turning on its own people?", but those have no simple answers, so they are not as useful in propaganda.

  • by DarthVain ( 724186 ) on Thursday January 09, 2014 @02:19PM (#45909531)

    Already exist. In our human behavior, for one, as part of instinct. As part of a learned moral code. As part of operational orders such as rules of engagement. Simply codifying them and allowing a machine to apply them isn't necessarily a bad thing. For one, it takes away the negative mental effects it must have on human operators to have to make such life-and-death decisions.

    What we are really talking about is A) how well it can be coded, and B) avoiding potential mistakes, like "Kill all Humans!" or "All Humans must Die", or, more seriously, making a distinction between soldier and non-combatant (assuming there is such a thing in the distant future).

    If war has taught us anything (and apparently it hasn't), it's that humans are perfectly capable of making mistakes and fucking that up all by themselves. Friendly fire happens all the time, and I can't give you a statistic, but it is a significant issue and always has been. Civilian casualties, particularly in urban centers, have also been an issue for as long as urban centers have existed.

    At least if a machine is doing it, it will do it in a consistent and discoverable way that is hopefully correctable, and not because some soldiers got mentally messed up by all the stress that putting people in those situations is bound to produce (or by trying to desensitize them by making the enemy appear subhuman).

    Hopefully in the future all wars will be fought by autonomous robots fighting other autonomous robots, who, once they kill off all the opposing robot forces, simply send a C-3PO-type representative to the defeated leadership to tell them they lost the war. I would imagine it would even make for pretty good TV (and betting opportunities: Go 23rd Fighting Heavy Mech Robot Battalion!).
