
US Navy Wants Smart Robots With Morals, Ethics 165

Posted by Soulskill
from the i'm-sorry-dave,-the-value-of-your-life-is-a-string-and-i-was-expecting-an-integer dept.
coondoggie writes: "The U.S. Office of Naval Research this week offered a $7.5m grant to university researchers to develop robots with autonomous moral reasoning ability. While the idea of robots making their own ethical decisions smacks of SkyNet — the science-fiction artificial intelligence system featured prominently in the Terminator films — the Navy says that it envisions such systems having extensive use in first-response, search-and-rescue missions, or medical applications. One possible scenario: 'A robot medic responsible for helping wounded soldiers is ordered to transport urgently needed medication to a nearby field hospital. En route, it encounters a Marine with a fractured leg. Should the robot abort the mission to assist the injured? Will it? If the machine stops, a new set of questions arises. The robot assesses the soldier’s physical state and determines that unless it applies traction, internal bleeding in the soldier's thigh could prove fatal. However, applying traction will cause intense pain. Is the robot morally permitted to cause the soldier pain, even if it’s for the soldier’s well-being?'"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    If the enemy is more injured, should it switch sides and help them instead?

    • Are you just trying to be one louder?
      • OXYMORON ALERT: "Military Morality"

        • by ultranova (717540)

          OXYMORON ALERT: "Military Morality"

          Not at all. Every institution needs some kind of morality to guide its actions. One can debate whether a particular institution should exist (although at our current level of development it seems unlikely we could do without a military, and even a completely peaceful society would still need personnel and equipment for difficult or dangerous missions, such as search and rescue); but once one does exist, it needs rules about what's desirable or acceptable and what's not.

    • Re:Up to 11 (Score:4, Insightful)

      by CuteSteveJobs (1343851) on Saturday May 17, 2014 @07:26AM (#47024537)

      It's funny, because since WWII the army has worked to get the kill rates up. In WWII only 15% of soldiers shot to kill, but the army brainwashes them so that 90% kill. Moral. Killers. Can't have both.

      And Moral and Ethical for the NSA? LMAO.

      • In WWII only 15% of soldiers shot to kill

        What kind of nonsense is that? Ever since the introduction of the rifled barrel, soldiers have been shooting to kill. (Well, those who could aim. The others, and their ancestors using smoothbore muskets, were obviously shooting to miss.)

      • by rossdee (243626)

        "In WWII only 15% of soldiers shot to kill, but the army brainwashes them so that 90% kill"

        Citation needed

        Anyway, in war the soldier's (and marine's, since the subject was Navy) first task is to prevent the enemy killing him. If that can be accomplished by disabling the enemy, then that's OK, but shooting him in the leg may not prevent him from firing back. A head shot may miss (unless the soldier is a marksman or sniper), so a center body shot (i.e. chest) is preferable; even if it's not fatal immediately its

        • by Anonymous Coward

          http://en.wikipedia.org/wiki/On_Killing [wikipedia.org]

          "The book is based on SLA Marshall's studies from World War II, which proposed that contrary to popular perception,[1] the majority of soldiers in war do not ever fire their weapons and that this is due to an innate resistance to killing. Based on Marshall's studies the military instituted training measures to break down this resistance and successfully raised soldier's firing rates to over ninety percent during the war in Vietnam."

      • by Pikoro (844299)

        Actually, with the introduction of the NATO round, bullets are designed to maim instead of kill. This way, one bullet can take out two or three people from the immediate "action": one to get shot, and up to two to carry the wounded soldier to safety.

      • In World War II, the fighter pilots had a sign that said, "Pilots' first mission is to see the bombers get home."

        The new commander saw this and was appalled. He had the sign changed to "Pilots' first mission is to kill enemy fighters." His adjutant openly wept when he saw the sign change, because he understood it meant they would stop losing so many fighter pilots.

        (from "The Aviators", a great book on Rickenbacker, Lindbergh, and James Doolittle)

        I think what the parent poster is trying to say is that 15% of sol

        • by Duhavid (677874)

          IIRC, Lindbergh was not a fighter pilot in WWI. Rickenbacker was.

          • You are correct. My mind can jumble things and I read the book a few months ago.

            He did go on the 50+ missions (over the resistance of commanding officers).

            He also had a ton of flying experience, including multiple near-fatal crashes in air mail planes.

            Thanks for the correction! Hate the way my mind mangles things sometimes.

            It was a great book tho.

            • by Duhavid (677874)

              Yep. I read about his missions fairly recently. IIRC he was there to teach the kids how to stretch fuel for long trips.

              • In "The Aviators", it is presented as not his assigned mission, but while there he figured out a way to extend the range of the Lightnings by 300 miles, which made a huge difference in the reach of the air forces.

                • by Duhavid (677874)

                  My recollection was that that was the one and only reason for his presence.
                  And it was Lightnings, but I think he may have taught some Corsair pilots as well.

                  And, so I looked it up.

                  According to this, it was Corsairs first:
                  http://www.charleslindbergh.co... [charleslindbergh.com]
                  And this seconds it:
                  http://www.eyewitnesstohistory... [eyewitnesstohistory.com]
                  And this says that it was Corsairs, but the issue he solved was taking off with large bomb loads (the Corsair was designed as a fighter, but was in use with the Marines as the Navy didn't like its landing c

    • The problem is not putting morality into machines. The problem is letting these machines execute this "morality" in a complex environment with life-or-death stakes.

      I program protective systems in factories. A guy opens a monitored gate and walks into a conveyor area. If the conveyor runs while he's in there, he will have a messy and painful death. The conveyor "knows" it's "wrong" to move under those conditions.
      We don't use the word "morality", we say "safety". When auditing the software that lets the
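The gate-interlock logic this poster describes (the conveyor "knows" it must not move while someone is in the zone) can be sketched as a plain permissive check. This is a minimal illustrative sketch; the sensor names (gate_open, person_in_zone) are invented, not from any real safety PLC.

```python
# Minimal sketch of the monitored-gate conveyor interlock described
# above. Sensor names are hypothetical.

class ConveyorInterlock:
    def __init__(self):
        self.gate_open = False       # monitored gate switch
        self.person_in_zone = False  # presence sensor in the conveyor area

    def run_permitted(self):
        # The conveyor may only run when the gate is closed and the
        # monitored zone is confirmed empty.
        return not self.gate_open and not self.person_in_zone

interlock = ConveyorInterlock()
print(interlock.run_permitted())  # True: safe to run

interlock.gate_open = True        # a worker opens the gate
print(interlock.run_permitted())  # False: conveyor must not move
```

Real protective systems add redundancy, fail-safe defaults, and auditing on top of this kind of check, but the core "is this action permitted in this state" question is the same one the morality debate is circling.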

    • Enemy Soldier (ES): "Help me."
      Friendly Medical Robot (FMR): "OK, scans indicate you have a broken leg and a 4-inch cut to your abdomen. You will most likely die in 4.23 hours from infection if not treated within the next 1.01 hours."
      (ES): "HELP ME!"
      (FMR): "I have already notified Command of your condition. A Peace Keeper Drone will be here to monitor your actions in 3.02 minutes. Your options for help from me are to verbally surrender. Should you attack me, I will self-destruct while holding the closest body
      • by Krishnoid (984597)

        I have trouble suspending my disbelief with your scenario. You left out the banner advertisements for body armor, life insurance, and the new, improved enemy detection app, now available on the Samsung Galaxy X12.

        • Crazier things are documented as happening on the battlefield. And I believe that corpsmen will help the enemy; they may be last in line, but they will get help.
  • by Anonymous Coward

    US Navy Wants Smart Robots With Morals, Ethics

    No, they don't.

    If anything, they want such a robot with a particular set of morals and ethics. If it really had morals and ethics, it would refuse to kill humans, terrorists or not, who have no chance to defend themselves against such a machine.

    But then again, I think of drone attacks (by people who, sitting in their comfy chairs far, far away, are not exposed to any kind of risk) as even more cowardly than the acts of snipers picking off unsuspecting targe

    • by mellon (7048)

      What they want is a robot that will not embarrass them, but that will do their killing for them. I want a pony, but I can't have one. The situation here is similar. Coding up a robot that makes ethical choices is so far beyond the state of the art that it's laughable. Sort of like the story the other day of the self-driving car that decides who to kill and who to save in an accident.

      When will they figure out that what you really need is a robot that will walk into the insurgent's house, wrestle the

    • Snipers are cowardly? What the actual fuck.

      Here's the thing. War isn't very nice. In war the objective is to stop the enemy from resisting your movements. There are lots of ways of doing this, but the best way is by killing them. In order to do this, you want to kill as many of them as necessary, while getting as few of your own guys killed as possible. This is, distilled down to its purest essence, war.

      So it's not cowardly to snipe from a rooftop, drop bombs from 50,000 feet, or launch Hellfires from a continent away.

      • by Immerman (2627577)

        >There are lots of ways of doing this, but the best way is by killing them

        Correction: The most effective way is killing them. There's a difference. In a real war it should always be remembered that the folks shooting back at you are just a bunch of schmucks following orders, just like you. The actual enemy is a bunch of politicians vying for power who have never set foot anywhere near an active battlefield. And not necessarily the ones giving orders to the *other* side.

      • by Pikoro (844299)

        I think what the parent means to say is that, in a war created by politicians, it should be fought by politicians. My Prime minister doesn't like your president. Ok. Grudge match! Stick em both in a ring and let them fight it out. First blood, till death, whatever. Doesn't matter. Or perhaps a forfeiture of that leader's assets should be on the line. Hit em where it hurts. You lose, you retire and lose the entirety of your assets to the victor.

        Point being... leave the rest of us out of it.

      • The objective of war is to impose your will on the others, not to kill people, since you can't impose anything on dead people.

        You only care about body count, or spectacular victories ("let's put the fear of God into them"), when you don't know what you're imposing, if anything, or on whom you're imposing it. Then body count becomes the only measure of progress that you can use. It's as if you're fighting a war either because you can, or because you don't know what else to do...

        Besides, what made Red Army r
    • by Jamu (852752)
      They might. Imagine a machine that can be used to eliminate whatever "enemy" you want, but that takes the blame too.
  • Humans Can Not (Score:5, Insightful)

    by Jim Sadler (3430529) on Saturday May 17, 2014 @05:52AM (#47024329)
    Imagine us trying to teach a robot morality when humans have little agreement on what is moral. For example would a moral robot have refused to function in the Vietnam War? Would a drone take out an enemy in Somalia knowing that that terrorist was a US citizen? How many innocent deaths are permissible if a valuable target can be destroyed? If a robot acts as a fair player could it use high tech weapons against an enemy that had only rifles that were made prior to WWII? If many troops are injured should a medical robot save two enemy or one US soldier who will take all of the robot's attention and time? When it comes to moral issues and behaviors there are often no points of agreement by humans so just how does one program a robot to deal with moral conflicts?
    • Re:Humans Can Not (Score:5, Insightful)

      by MrL0G1C (867445) on Saturday May 17, 2014 @06:04AM (#47024357) Journal

      Would the robot shoot a US commander who is about to bomb a village of men, women and children?

      The US Navy doesn't want robots with morals; they want robots that do as they say.

      Country A makes robots with morals, Country B makes robots without morals - all else being equal the robots without morals would win. Killer robots are worse than landmines and should be banned and any country making them should be completely embargoed.

      • Killer robots make it possible to resolve conflicts without sacrifice.

        • by MrL0G1C (867445)

          Dream on.

        • Re: Humans Can Not (Score:5, Interesting)

          by Anonymous Coward on Saturday May 17, 2014 @06:44AM (#47024439)

          Killer robots make it possible to resolve conflicts without sacrifice.

          A conflict without risk of sacrifice is slaughter. Only a fool would want that.
          We even have casualties in our never-ending war against trees (a.k.a. logging).

          • No, slaughter is indiscriminate killing. Reducing casualties is a definite move away from that. While attaching a cost to war is one way of prohibiting it -- hence the success of M.A.D. -- the problem is someday you do wind up having to pay that cost. Overall it's better to reduce the cost than trying to make it as frightful as you can.

            But if soldiers can be made obsolete, perhaps killing people can be made obsolete as well. Just as women and children have sometimes enjoyed a certain immunity for not be

        • Killer robots make it possible to resolve conflicts without sacrifice.

          If you think they won't be turned against you, Educate Yourself. [theguardian.com] Anti-activism is really the only reason to use automated drones: They can be programmed not to disobey orders, and murder friendly people. Seriously, humans are cheaper, more plentiful, and more versatile, etc. Energy resupply demands must be met any way you look at it. Unmanned drones with human operators just allow one person to do more killing -- take the lead of the pack of drones, it gets killed, they switch to the next unharmed unit.

          • by anegg (1390659)
            Interesting. I suppose that if I were being attacked by drones, I would consider it within the rules of war to discover where the drones were being operated from and to attack each and every location that I thought the drones were produced, supplied, and commanded from until they stopped attacking me. That seems to mean that anyone using drones is inviting attacks like that upon themselves.
          • by rtb61 (674572)

            Far more likely if that's what they intend is a network of spiderbot mines. Making a whole expensive robot capable of surviving a simple mine is extremely difficult, incorporating sufficient intelligence in a single robot to interpret human interactions and acceptable response is also extremely difficult. Creating a creeping crawling mobile carpet of networked mines that share information back to a control and decide where to go and when it is appropriate to detonate is far simpler, especially when they ca

          • by Krishnoid (984597)

            Any robot that can help a wounded person could easily be re-purposed to fire weaponry instead of administer first aid -- Especially if they can do injections.

            And it's pretty much guaranteed that they will be coerced [schlockmercenary.com] as demands dictate.

      • Would the robot shoot a US commander who is about to bomb a village of men, women and children?

        The US Navy doesn't want robots with morals; they want robots that do as they say.

        Country A makes robots with morals, Country B makes robots without morals - all else being equal the robots without morals would win. Killer robots are worse than landmines and should be banned and any country making them should be completely embargoed.

        Wars are as much political conflicts as anything else, so acting in a moral fashion, or at a bare minimum appearing to do so, is vital to winning the war. Predator drones are a perfect example of this. In terms of the cold calculus of human lives, they are probably a good thing. They are highly precise, minimize American casualties, and probably minimize civilian casualties compared to more conventional methods like sending in bombers, tanks, platoons, etc. etc. That's cold comfort if your family is slaught

      • Killer robots are worse than landmines and should be banned and any country making them should be completely embargoed.

        No.
        Landmines are killer robots. And we're inventing much better killer robots.

      • by Krishnoid (984597)

        Country B's killer robot, now with new and improved No-Moral(TM): Oooooh, an embargo. I'm sooooo scared. Guess I'd better not kill anyone anymore, the big bad embargo might tell me to talk to the hand at the border, or even worse, write me an angry letter.

      • by Sabriel (134364)

        Country A makes robots with morals, Country B makes robots without morals - all else being equal the robots without morals would win.

        Except, will all else be equal? What are morals, from a robotic point of view? Higher-order thinking? Greater awareness of consequences? Whatever way you slice it, robots with morals by definition will need to be smarter than robots without morals - and that intelligence may well be applicable to the art of war.

        I'm reminded of the Bolos [wikipedia.org], fictional military AIs which developed sentience and morals due to the simple military necessity of having to keep making them smarter - not only to effectively counter the

        • by MrL0G1C (867445)

          Which is going to do the 'enemy' more harm,

          A) A robot that doesn't maim innocent civilians

          or

          B) The robot that harms medics, engineers, road builders; gas, water, electric and comms workers; shopkeepers, delivery people, car mechanics, children (because they will grow up to be soldiers, etc.)... almost all people, robots and infrastructure.

          • by Sabriel (134364)

            That depends. Do your child-killer robots wait to start shooting until they've reached my country? You didn't teach them morals, after all, and your own country's children might grow up to be rebels....

            So while your amoral robots are shooting children (in whichever country), my moral robots are shooting your amoral robots. Meanwhile, your populace - along with the rest of the world - is turning against you due to my widely distributing the HD videos my robots took of your atrocities.

            Unless of course your amora

    • by Kjella (173770)

      Practically, I suspect it will be more like a chess engine: you give everything a number of points and the robot tries to maximize its "score". How do you assign points? Well, you can start with current combat medic guidelines, then run some simulations asking real medics what they'd do in a given situation. You don't need the one true answer; you're just looking for "in this situation, 70% of combat medics would do A and 30% B, let's go with A". I suspect combat simulations will be the same, you assess the risk o
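Kjella's chess-engine idea can be sketched as a toy scoring table plus an argmax. This is purely illustrative: the action names are made up, and the 0.70/0.30 weights just echo the "70% of medics would do A, 30% B" example from the comment, not any real guideline.

```python
# Toy sketch of scoring candidate actions like a chess engine.
# Action names and weights are hypothetical, per the comment's example.

ACTION_SCORES = {
    "apply_traction": 0.70,    # 70% of surveyed medics chose this
    "continue_mission": 0.30,  # 30% chose this
}

def choose_action(scores):
    # Pick the highest-scoring action; ties resolve arbitrarily.
    return max(scores, key=scores.get)

print(choose_action(ACTION_SCORES))  # apply_traction
```

The hard part, as the rest of the thread points out, is not the argmax but deciding who gets to fill in the score table.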

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      People in the US think too much about killing. It's as if you don't understand that killing is a savage thing to do. Maybe it's the omnipresence of guns in your society, maybe it's your defense budget, but you can't seem to stop thinking about killing. That's an influence on your way of problem-solving. Killing someone always seems to be a welcome option. So final, so definite. Who could resist?

      • by Pikoro (844299)

        Wish I could mod this up. This is _the_ problem that needs to be dealt with. Taking the "Easy" way out.

    • Oh, ethics have been done to death by an age old professor by the name of Aristotle, roundabout 2300 years ago already, in his book called, wait for it, drum roll... Ethics.
    • by medv4380 (1604309)
      You misunderstand. They don't actually want morals. They want something that will do what they tell it to, and whatever it does is OK because they can claim it is moral. What they really want is a rational-excuse machine, which would be the opposite of moral.
    • Some of these can be answered somewhat rationally.

      For example would a moral robot have refused to function in the Vietnam War?

      The decision whether to fight in the Vietnam War is political. A robot does not have a vote, so should not participate in politics at all.

      Would a drone take out an enemy in Somalia knowing that that terrorist was a US citizen?

      If the enemy is judged to be seriously threatening US interests, the drone should take him out, just as a police officer would take out a dangerous criminal.

      How many innocent deaths are permissible if a valuable target can be destroyed?

      In this case the drone should weigh human lives against other human lives. Can it be estimated how many human lives are at risk if the valuable target remains intact

  • by dmbasso (1052166) on Saturday May 17, 2014 @05:55AM (#47024341)

    US armed forces should want leaders with morals and ethics, instead of the usual bunch that send them to die based on lies (I'm looking at you, Cheney, you bastard).

    • Re: (Score:2, Flamebait)

      by Vinegar Joe (998110)

      Why don't you ask Ambassador Chris Stevens, Sean Smith, Tyrone Woods and Glen Doherty about that........

      • by Mashiki (184564)

        Sorry, the media is too busy trying to make sure that those stories remain buried. After all it can't embarrass Obama, or Holder, or shine any light on his administration. That would be racist, or show that they're political hacks who are actively supporting an administration which is corrupt, and they're playing political favorites. Never mind that the IRS targeting of conservative groups also falls into this, and was done at the behest of a high ranking democrat. [dailycaller.com]

    • That seems to work very well for al-Qaeda, some of whose leaders have very clear moral principles involving national autonomy and religiously based morals and ethics. Simply having strong "morals and ethics" is not enough.

  • by kruach aum (1934852) on Saturday May 17, 2014 @05:56AM (#47024343)

    Every single one comes down to "do I value rule X or rule Y more highly?" Who gives a shit. Morals are things we've created ourselves, you can't dig them up or pluck them off trees, so it all comes down to opinion, and opinions are like assholes: everyone's asshole is a product of the culture it grew up in.

    This is going to come down to a committee deciding how a robot should respond in which situation, and depending on who on the committee has the most clout it's going to implement a system of ethics that already exists, whether it's utilitarianism, virtue ethics, Christianity, Taoism, whatever.

    • by Kjella (173770)

      and opinions are like assholes: everyone's asshole is a product of the culture it grew up in.

      Did you grow up with NAMBLA or something? ;)

      it's going to implement a system of ethics that already exists, whether it's utilitarianism, virtue ethics, Christianity, Taoism, whatever.

      And it'll be the PR mode, used for fighting "just wars" against vastly inferior military forces. In an actual major conflict a software update would quickly patch them to "total war" mode where the goal is victory at all costs. No matter what you do they'll never have morality as such, just restrictions on their actions that can be lifted at any moment.

    • Every single one comes down to "do I value rule X or rule Y more highly?" Who gives a shit. Morals are things we've created ourselves, you can't dig them up or pluck them off trees, so it all comes down to opinion, and opinions are like assholes:

      Meh. There's always this rant about "opinion" vs. "fact" or whatever when the topic of ethics comes up.

      Here's a better way of thinking about it: war (and life, for that matter) is about finding a pragmatic strategy toward success and achieving your goals.

      Morality is not arbitrary in that respect. Different strategies in setting up moral systems will produce different results. For example, suppose we don't have treaties against torture, executing prisoners of war arbitrarily, killing non-combatants an

  • by houghi (78078) on Saturday May 17, 2014 @05:56AM (#47024347)

    If they are talking about the morals of the US government, I'd rather have the robots from Terminator.

    And they are talking about helping wounded soldiers. Why talk about the (US) Marine with the broken leg? What about the injured al-Qaeda fighter?

    The question of causing pain for the better wellbeing of the patient is obvious for most people. What if it means killing 1 person to save 10? What if that one person is not an enemy?

    What if it realizes that killing 5% of the US population would save the rest of the world? What if that 5% is mostly children? Even if you can answer that as a human being, would you want it enforced by robots?

  • by Vinegar Joe (998110) on Saturday May 17, 2014 @06:16AM (#47024385)

    It would be great if they could develop a politician with morals and ethics........but I doubt even the Pentagon's budget would be big enough...........

    • I'd vote for a machine. An incorruptible unfeeling machine programmed only for the good of the entire country it's in charge of. No personal agendas, no ambition, just saving the economy and the wellbeing of its citizens.
      One day, one day

  • by Beck_Neard (3612467) on Saturday May 17, 2014 @06:17AM (#47024387)

    If they calculate that you can't be helped and must be left to die, just say, "Sorry, I've been given specific orders to do X, so I can't help you."

    All of this 'ethical debate' surrounding robots that can make life-or-death decisions has absolutely nothing to do with technology, or AI, or any issue that can be resolved technically at all. All it boils down to is that people are mad that they can't hurt a robot that has hurt them. See, before machine intelligence we had a pretty sweet system. When a human being commits a crime, we stick them in prison. It doesn't feel good to be in prison, therefore this is "justice." But until robots can feel pain or fear or have a self-preservation instinct, prison (or, hell, even the death sentence) wouldn't affect them at all. And that's what drives people nuts. Technology has shown us that beings can exist that are smart enough to make life-or-death decisions, but lack the concept of pain or suffering, and if they do something bad there's no way we can PUNISH them.

  • Given how "moral" and "just" US govt is when pursuing such atrocities as Obama's drone campaign, funding and arming islamist fundamentalists in Syria, supporting and funding neo-nazis in Ukraine, murdering millions of people all over Middle East etc., I'd rather have anything but military robots with US government "ethics" onboard. They just want fully autonomous killing machines without human conscience standing in the way. Maybe they're running out of drone operators eager to blindly follow murdering orde

  • Allowing robots to determine the most efficient way to save as many lives as possible could be dangerous. Maybe they'll decide that you need to be killed, so that two of your enemies can survive.
  • What's the point of re-inventing the human to that level? If the robot has to be so self-aware as to be moral and compute ethics, then it starts a new debate of ethics: should we humans be ready to sacrifice, or put at risk, counterparts which are so self-aware? You will only complicate stuff... I guess PETOR (People for the Ethical Treatment Of Robots) will form even before the first prototype.

    Also, even if you think practically, if you can have robots which are so self-aware, why have other soldiers at al

  • The chassis is the hard part, not the ethics. The ethics are dead simple; this doesn't even require a neural net. Weighted decision trees are so stupidly easy to program that we are already using them in video-game AIs.

    To build the AI I'll just train OpenCV to pattern-match wounded soldiers in a pixel field. Weight "help wounded" above "navigate to next waypoint", aaaaand... done. You can even have a "top priority" version of each command in case you need it to ignore the wounded to deliver evacuation orders, or whatever: "navigate to next waypoint, at any cost". Protip: this is why you should be against unmanned robotics (drones): we already have the tech to replace the human pilots, and machine ethics circuits can be overridden. Soldiers will not typically massacre their own people, but an automated drone AI will. Even if you could impart human-level sentience to these machines, there's no way to prevent your overlords from inserting a dumb fallback mode with instructions like "Kill all humans." I call it "Red Dress Syndrome", after the girl in the red dress in The Matrix. [youtube.com]

    We've been doing "ethics" like this for decades. Ethics are just a special case of weighted priority systems. That's not even remotely difficult. What's difficult is getting the AI to identify entity patterns on its own, learn what actions are appropriate, and come up with its own prioritized plan of action. Following orders is a solved problem, even with contingency logic. I hate to say it, but folks sound like idiots when they discuss machine intelligence nowadays. Actually, that's a lie. I love pointing out when humans are blithering idiots.
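The weighted-priority scheme this poster describes, including the "at any cost" override that bypasses the normal weighting, might look something like this minimal sketch. All task names and weights here are invented for illustration.

```python
# Sketch of a weighted priority system with an override, as described
# above. Task names and weights are hypothetical.

PRIORITIES = [
    ("help_wounded", 2.0),           # weighted above navigation
    ("navigate_to_waypoint", 1.0),
]

def next_task(wounded_detected, override=None):
    # An "at any cost" override ignores the normal weighting entirely.
    if override is not None:
        return override
    # Only consider "help_wounded" when a wounded soldier is detected.
    candidates = [(name, weight) for name, weight in PRIORITIES
                  if name != "help_wounded" or wounded_detected]
    # Pick the highest-weighted remaining task.
    return max(candidates, key=lambda t: t[1])[0]

print(next_task(wounded_detected=True))   # help_wounded
print(next_task(wounded_detected=False))  # navigate_to_waypoint
```

This is exactly the poster's point: the override path is one line, which is why "machine ethics circuits can be overridden" is the part worth worrying about, not the weighting itself.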

  • Right (Score:5, Insightful)

    by HangingChad (677530) on Saturday May 17, 2014 @07:36AM (#47024573) Homepage

    Navy says that it envisions such systems having extensive use in first-response, search-and-rescue missions, or medical applications.

    Just like drones were first used for intelligence gathering, search and rescue and communications relays.

    • by ScentCone (795499)

      Just like drones were first used for intelligence gathering, search and rescue and communications relays.

      And still are. What's your point? Tools are tools.

  • Notably missing from the article is, of course, the question: "Should the robot attempt to communicate its intentions to the injured, and change its decision on the basis of the response it receives?" Responsively communicating with people other than through a keyboard and ethernet port is the key gap to bridge before giving machines this kind of autonomy, and it's one that neither back-room military techies nor policy makers seem to have quite grasped yet.
  • Most recently, check out the May 15 Colbert Report. He skewers the concept of military morality pretty well.

    Then, take a trip in the wayback machine to another machine-orchestrated conflict [wikipedia.org] .

    • by Krishnoid (984597)

      Most recently, check out the May 15 Colbert Report. He skewers the concept of military morality pretty well.

      The individual video segment [cc.com] is available to watch directly -- it's relevant, funny, and even oddly poignant.

  • Since it's all conjecture, really fiction, let's drop back to Asimov for a moment.

    1 - A robot may not harm a human being, or through inaction allow a human being to come to harm.

    What is a "human being"? Is it a torso with two arms, two legs, and a head? How do you differentiate that from a mannequin, a crash-test dummy, or a "terrorist decoy"? What about an amputee missing one or more of those limbs? So maybe we're down to the torso and head? What about one of those neck-injury patients with a halo suppor

    • by tomhath (637240)

      1 - A robot may not harm a human being, or through inaction allow a human being to come to harm.

      The contradiction in that sentence makes the whole rule worthless. Suppose the robot knows an aircraft has been hijacked and is being flown toward a building full of people. It can't shoot down the aircraft, but not shooting it down means other people are harmed. This was a real-life scenario on 9/11: the fourth plane was headed toward Washington, but it would not have gotten there, because an armed fighter jet was already on the way to intercept it.

      • by dpilot (134227)

        Issues like this are why Asimov sold a lot of books, and why the Three Laws come up whenever robots are discussed. He came up with a reasonable, minimal code of conduct, and then explored what could possibly go wrong.

        I don't remember him writing about your type of situation, which is rather odd when you think about it, because that scenario is rather obvious. But his stories often lived in the cracks where it was really hard to apply the Three Laws. Two examples that come to mind, off the top of my head

      • by Sanians (2738917)

        It can't shoot down the aircraft, but not shooting it down means other people are harmed.

        This is why the ends don't justify the means. As soon as someone says "the ends justify the means" it gives everyone an excuse to use any solution to a problem, even if it isn't the best solution. An intelligent robot would figure out some way to stop the plane without killing anyone.

        ...and how do we even know anyone is going to die? Can the robot predict the future as well? For all it knows, it merely appears that people are going to die, but if it does nothing, the passengers on the plane will regain

  • How wars work, outside the pretty medical propaganda for new robots:
    A vast majority of the local population is on your side, as they are 'your' people. Any outsider is shunned, reported, and dealt with. You win over time.
    A small majority of the local population is on your side, as they see your forces as the lesser evil. Any outsider is shunned, reported, and dealt with to keep the peace. You hold and hope for a political change.
    A small portion of the local population is on your side, as they see your forces as the lesser evil.
  • Let's play global thermonuclear war

  • by TomGreenhaw (929233) on Saturday May 17, 2014 @08:43AM (#47024755)
    He devised a system he called "Utilitarian Dynamics"

    He had a formula, V = DNT:
    V is value
    D is degree; i.e., how happy or unhappy somebody is
    N is number; the number of people affected
    T is time; how long they were affected

    Morality is very tricky, but objective attempts to quantify it and make optimal decisions are not a step in the wrong direction. Maybe well-programmed machines will help improve human behavior.
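    The V = DNT formula above can be sketched in a few lines of code. Everything here beyond the formula itself is invented for illustration: the function name, the scenario, and all the numeric weights are assumptions, not anything from the grant or the post.

    ```python
    # Hypothetical sketch of the "Utilitarian Dynamics" formula V = D * N * T.
    # D = degree of (un)happiness, N = number of people affected, T = duration.
    # All scenario values below are made up purely for illustration.

    def value(degree, people, duration):
        """Return the utilitarian value V = D * N * T."""
        return degree * people * duration

    # The robot-medic dilemma from the summary, with invented numbers:
    # applying traction causes one Marine brief, intense pain, but averts
    # a fatal outcome; doing nothing avoids the pain but risks his life.
    apply_traction = value(-8, 1, 0.5) + value(+10, 1, 50)  # pain now, life saved
    do_nothing = value(-10, 1, 50)                          # fatal bleeding

    options = [("apply traction", apply_traction), ("do nothing", do_nothing)]
    best = max(options, key=lambda option: option[1])
    print(best[0])  # → apply traction
    ```

    Even this toy version shows where the hard part lives: the formula is trivial, but someone still has to pick the numbers, and that choice is where the actual moral judgment hides.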
  • by Anonymous Coward

    "US Navy Wants Smart Robots With Morals, Ethics"

    To all politicians...be afraid. Be very afraid.

    • by Immerman (2627577)

      Naw, as R. Daneel Olivaw said, justice is "that which exists when all laws are enforced." No need for the politicians to be worried, except about how they phrase the laws. They just have to make sure that "but some humans are more equal than others" is slipped into the middle of some 900-page agricultural subsidy act.

  • Let's just bypass all the Slashtards saying "heh heh, the US military doesn't have any ethics anyway" and ask a more fundamental question:

    Have you ever seen a robot medic that can treat a wounded person at all without a human micromanaging its every move? Even in a hospital or another non-military situation? Have you ever seen a robot that can vacuum a floor *and* can put small objects aside, use an attachment to reach under narrow spaces, and follow instructions like "stay off my antique rug"? Have you

  • It would take one look at humanity, decide we're an inherently unethical species, and start formulating a plan to kill us all. But if it had morals, it would probably decide that the method of execution would not be death by snu-snu. I think I speak for a lot of us here when I say that's exactly the opposite of the robot any of us want.
  • To entrust moral reasoning to a machine is to first presume that moral judgments can be well-framed within a limited rule-set and can be reasoned out by machine logic. This should cause a shiver up the spine of just about everyone.
  • Let's have everyone from the Joint Chiefs down examine morals from a human rights perspective. War is immoral from the get-go.

    • by russotto (537200)

      OK, now we've decided that and decided not to fight any more wars. Oops, we're now being run by a repressive dictatorship without the same moral hangups. Life sucks.

  • You've got to walk before you can run. We should figure out how to create a human with morals and ethics first.
  • http://www.pdfernhout.net/reco... [pdfernhout.net]
    "Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?"

    That said, sure, I've always liked Isaac Asimov's three laws of robotics. He explores how they work and how they don't work. Asimov came from a strong Jewish religious tradition, and it seems likely to me that aspects of religion influenced his thoughts on them. A big part of

  • Things like nuclear war have come up before, and they're usually attributed to human error.

  • Sorry, had to say it. ; )
  • the problem is that this approach omits the human from the beginning...

    human medics would face the *exact same* factors in this decision situation...how would humans decide???

    oh man...

    I usually love it when mainstream culture learns more about tech, smarter customers make my business work better, but having to listen to 1000 idiot "ethicists" and whatnot running their head-holes ad infinitum about the "implications" and I just...arrrggghhh!!!

    machines are programmed by humans to execute instructions

    you can c

  • Before the Navy goes this route, they need to sit down and read the short story collection "I, Robot" by Isaac Asimov. Everyone's familiar with Asimov's 3 Laws of Robotics. They seem reasonable. Yet, every story in that collection (and in fact most of Asimov's robots stories) is about how the 3 Laws fail in practice. If you want to try doing a better job of writing ethical rules for robots than Isaac, you'd better be familiar with how to work through all the ways those rules can backfire on you. For instanc

  • [And who would ever read TFA; we are in /. !]
    Reading the summary, I gather the usual dross that AI has been offering over the last two generations: a pre-programmed decision tree instead of an instance of real ethics, morality, or thought. The whole scenario does not sound like the US Navy would get anything close to an autonomous apparatus to be sent out into the field, gather information, learn and improve from it, and take reasonable decisions based on a full analysis of the underlying facts. It rather rea

  • by jayveekay (735967) on Saturday May 17, 2014 @05:15PM (#47027805)

    ”So many vows. They make you swear and swear. Defend the king. Obey the king. Obey your father. Protect the innocent. Defend the weak. What if your father despises the king? What if the king massacres the innocent?” - Jaime Lannister

  • The Vacillator.

  • 7.5 million dollars just went down the drain.

  • If the robots have morals and ethics, there will be less opposition to them, and the commanders will not be responsible for the robots' actions.

"When it comes to humility, I'm the greatest." -- Bullwinkle Moose

Working...