Robotics / The Military

Halting Problem Proves That Lethal Robots Cannot Correctly Decide To Kill Humans

KentuckyFC writes: The halting problem is to determine whether an arbitrary computer program, once started, will ever finish running or whether it will continue forever. In 1936, Alan Turing famously showed that there is no general algorithm that can solve this problem. Now a group of computer scientists and ethicists have used the halting problem to tackle the question of how a weaponized robot could decide to kill a human. Their trick is to reformulate the problem in algorithmic terms by considering an evil computer programmer who writes a piece of software on which human lives depend.

The question is whether the software is entirely benign or whether it can ever operate in a way that ends up killing people. In general, a robot could never decide the answer to this question. As a result, autonomous robots should never be designed to kill or harm humans, say the authors, even though various lethal autonomous robots are already available. One curious corollary is that if the human brain is a Turing machine, then humans can never decide this issue either, a point that the authors deliberately steer well clear of.
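
For readers who want the gist of Turing's result, here is a minimal sketch of the diagonal argument (in Python, purely illustrative; the halts() function is hypothetical, which is the whole point): if a general halting decider existed, you could build a program that does the opposite of whatever the decider predicts about it, and no answer is consistent.

    # Hypothetical general decider -- Turing's argument shows it cannot exist.
    def halts(program, argument):
        """Pretend this returns True iff program(argument) eventually halts."""
        raise NotImplementedError("no general algorithm exists")

    def contrary(program):
        # Do the opposite of whatever halts() predicts about running
        # 'program' on its own source.
        if halts(program, program):
            while True:      # predicted to halt, so loop forever
                pass
        else:
            return           # predicted to loop, so halt immediately

    # Feeding contrary to itself makes any answer from halts() wrong,
    # so no correct implementation of halts() can exist:
    # contrary(contrary)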
  • I think (Score:5, Insightful)

    by fyngyrz ( 762201 ) on Wednesday November 19, 2014 @12:55PM (#48418443) Homepage Journal

    I'm just going to reformulate the problem, by considering idiots who use unrealistic, not-supported-by-evidence premises to make general statements, as one that calls for sending killer robots after said idiots.

    • branch to the HCF opcode on any error.

      (newbies, that is the Halt and Catch Fire instruction)

      • by fisted ( 2295862 )
        Would've been a good joke if you hadn't ruined it by explaining it right away.
        0/10 didn't even chuckle.
    • Re:I think (Score:5, Insightful)

      by ShanghaiBill ( 739463 ) on Wednesday November 19, 2014 @01:42PM (#48418947)

      The premise of TFA is that killer robots need to be perfect. They don't. They just need to be better than humans.

      Which is more likely to shoot a civilian:
      1. A carefully programmed and thoroughly tested robot.
      2. A scared, tired, and jumpy 18-year-old soldier who hasn't slept in two days and who saw his best friend get his legs blown off by a booby trap an hour ago.

      • Re:I think (Score:5, Insightful)

        by Anonymous Coward on Wednesday November 19, 2014 @02:06PM (#48419207)

        1, when ordered to shoot civilians.

        • by fyngyrz ( 762201 )

          In that case, the robot didn't make the decision. So, no.

          • Re: (Score:3, Insightful)

            by Anonymous Coward

            That which a bunch of ethicists say should happen, and that which does happen, are often at odds.

            The military will build killer robots. The primary concern for behavior development won't so much be safety mechanisms against killing innocents or children as safety mechanisms against losing control. They don't want their robots turning on them. Beyond that, they want the robots to obediently slaughter whatever they are pointed at. This level of obedience is precisely what makes them useful as a weap

      • Both are pretty likely. Let's start by defining civilian. Is the farmer who supports the militants' cause and brings them goat cheese and steel a civilian? What about the farmer who is afraid of them and does the exact same thing? What if the farmer knows the danger level and carries a gun for personal defense?

        You can't compute us and them in an analogue world where the real value is never actually 0 or 1 but always a shifting value in between and usually multiple shifting values in between. YOU can't, and n
        • Surely, all of those are definitions of a civilian.

      • Which is more likely to shoot a civilian...

        That's not entirely the right question. You need to account for which is more predictable for another human. If you are in the middle of a war zone with soldiers getting blown up by booby traps then you might expect a human soldier to be rather nervous and so you would approach them with extreme caution or get out of the way. However if you have a robot wandering down a street in a peaceful area and the right set of circumstances just happen to cause it to misidentify a random, innocent person as a target

      • by rtb61 ( 674572 )

        Now which is more likely to occur:
        1. A carefully programmed and thoroughly tested robot.
        or
        2. The lowest-tender robot, with just barely sufficient buggy code to get past the tender process, generating a huge bonus for the psychopathic executive team when they produce 10,000 of them.

        Bad luck: under capitalism, corporations will never ever produce "a carefully programmed and thoroughly tested robot". It just won't happen, it can't happen, corporate greed controlled by psychopaths guarantees it won't happen, that

    • Excellent, sending a drone for you now.
  • By the same logic (Score:5, Insightful)

    by Anonymous Coward on Wednesday November 19, 2014 @12:56PM (#48418449)

    By the same logic, computers should not be allowed in any life-critical situation. That includes hospital equipment, airplanes, traffic control, etc. etc.

    Fortunately, we don't judge the reliability of computers based on the ability to mathematically prove that nobody has put evil code in on purpose.

    • Re: (Score:2, Interesting)

      by i kan reed ( 749298 )

      And let's not forget the "Better at it than humans" heuristic. As long as "Jaywalking under the influence of melanin" is sometimes a capital crime, that's not a hard target to hit.

    • Re:By the same logic (Score:4, Interesting)

      by tlhIngan ( 30335 ) <slashdot.worf@net> on Wednesday November 19, 2014 @01:31PM (#48418819)

      By the same logic, computers should not be allowed in any life-critical situation. That includes hospital equipment, airplanes, traffic control, etc. etc.

      Fortunately, we don't judge the reliability of computers based on the ability to mathematically prove that nobody has put evil code in on purpose.

      In your examples, there are humans in the loop.

      In this case, you have a robot trying to autonomously decide "kill" or "don't kill" when it encounters a human.

      Hospital equipment - it's generally observed by personnel who after failures can decide to not use the equipment further (see Therac 25), or that changes need to be made in order to use the equipment. The equipment never hooks itself up to a patient automatically and provides treatment without a human involved. Sure there are errors that kill people unintentionally, but then there's a human choice to simply take the equipment out of service. E.g., an AED is mostly autonomous, but if a model of AED consistently fails in its diagnosis, humans can easily replace said AED with a different model. (You can't trust said AED to take itself out of service).

      Airplanes - you still have humans "in the loop" and there have been many a time when said humans have had to be told that some equipment can't be used in the way it was used. Again, the airplane doesn't take off, fly, and land without human intervention. In bad cases, the FAA can issue a mandatory airworthiness directive that says said plane cannot leave the ground without changes being made. In which case human pilots check for those changes before they decide to fly it. The airplane won't take off on its own.

      Traffic control - again, humans in the loop. You'll get accidents and gridlock when lights fail, but the traffic light doesn't force you to hit the gas - you can decide that because of the mess, to simply stay put and not get involved.

      Remember, in an autonomous system, you need a mechanism to determine if the system is functioning normally. Of course, said system cannot be a part of the autonomous system, because anomalous behavior may be missed (it's anomalous, so you can't even trust the system that's supposed to detect the behavior).

      In all those cases, the monitoring system is external and can be made to halt an anomalous system - equipment can be put aside and not used, avoiding hazardous situations by disobeying, etc.

      Sure, humans are very prone to failure; that's why we have computers, which are far less prone to failure. But the fact that a computer is far less prone to making an error doesn't mean we have to trust it implicitly just because we're more prone to making a mistake. It's why we don't trust computers to do everything for us - we expect things to work, but when indications are that they don't, we have measures to try to prevent a situation from getting worse.
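
      To make the "monitor must live outside the system" point concrete, here is a minimal sketch (assuming Python; the heartbeat file path and worker PID are hypothetical placeholders): an external watchdog that halts the autonomous process when it stops reporting, rather than trusting the process to police itself.

        import os
        import signal
        import time

        # Hypothetical placeholders: the monitored process is assumed to
        # touch HEARTBEAT_FILE periodically while it believes it is healthy.
        HEARTBEAT_FILE = "/var/run/robot.heartbeat"
        WORKER_PID = 12345
        TIMEOUT = 5.0  # seconds of silence before the external watchdog acts

        def watchdog_loop():
            while True:
                try:
                    age = time.time() - os.path.getmtime(HEARTBEAT_FILE)
                except OSError:
                    age = float("inf")  # no heartbeat file at all
                if age > TIMEOUT:
                    # The decision to halt the system is made here, outside it.
                    os.kill(WORKER_PID, signal.SIGTERM)
                    return
                time.sleep(1.0)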

      • by plover ( 150551 )

        So how many humans have to die before recognizing the AED is faulty? If it's a subtle fault, it might be delivering a barely effective treatment, with the deaths written off as unsaveable patients. The Therac-25 failure was a bit more dramatic, but it still killed several patients.

        Would we accept the same levels of failure from the Kill-O-Bot 2000? We already fire missiles into crowds of people or convoys in order to take out a single high value target. If the Kill-O-Bot was more specific than a missile, but less tha

      • And somehow there is no system in place for killer robots?

    • by Dutch Gun ( 899105 ) on Wednesday November 19, 2014 @01:48PM (#48419015)

      Agreed. The authors set up a nearly impossibly complex ethical dilemma that would freeze even a human brain into probable inaction, let alone a computer one, and then claim "See? Because a computer can't guarantee the correct outcome, we can therefore never let a computer make that decision." It seems to be almost the very definition of a straw man to me.

      The entire exercise seems to be a deliberate attempt to reach this conclusion, which they helpfully spell out in case anyone missed the not-so-subtle lead: "Robots should not be designed solely or primarily to kill or harm humans."

      I'm in no hurry to turn loose an army of armed robots either, but saying that you can "prove" that an algorithm can't make a fuzzy decision 100% of the time? Well, yeah, no shit. A human sure as hell can't either. But what if the computer can do it far more accurately in 99% of the cases, because it doesn't have all those pesky "I'm fearing for my life and hopped up on adrenaline so I'm going to shoot now and think later" reflexes of a human?

      • by jandrese ( 485 )
        To be fair, the Halting Problem has always confused me because the counterexample to it is highly contrived and it seems like you could reword the problem slightly to avoid the issue. I assume that the description I got in school was incomplete and that it's really the tip of the iceberg of some enormous mathematical model that may or may not be applicable to real life.
  • by Anonymous Coward on Wednesday November 19, 2014 @12:56PM (#48418457)

    Exhibit A, the human skull: Not enough room for an infinite tape.

    • Exhibit B, God (or Gods), generally regarded as being infinite/omnipresent/omnipotent/otherwise not subject to laws of physics - hence plenty of room for an infinite tape.

      God does the complicated bit of deciding whether puny humans should kill or not - the "why" - leaving the humans to decide the simple bits like "when / who" (goes first), "how" (which bits to cut / shoot / throttle / stone), and which way up to hold the camera.

    • The problem with Turing machines is that they are by definition a deterministic system. A certain input will give a specific output. That is why they can't make a "judgment" call.

      The universe as a whole is NOT deterministic, as Quantum Mechanics proves. QM is based on true randomness (obviously a simplification, but go with it for this conversation). Our 'machines' deal with this randomness and even incorporate it into some operations. So a specific input will NOT always generate the same response.

      It is th

      • I'm not sure I'd describe a "judgement call" as being non-deterministic. It's really better described as fuzzy logic, and computers do it all the time, such as in spam filters. The difference is that humans have a lifetime of learning and context for them to help make those judgments, where most computer algorithms don't have that extended context to draw from.

        I don't see how true randomness has anything to do with these sorts of decision-making processes or with quantum mechanics in general.
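
        As a small illustration of that point, here is a toy spam scorer (a sketch assuming Python; the words and weights are made up): the code is completely deterministic, yet its output is a graded judgement against a threshold rather than a hard-coded yes/no rule.

          # Toy spam scorer: deterministic, yet the decision is a graded score
          # against a threshold, not a fixed rule. Weights and words are made up.
          WEIGHTS = {"viagra": 3.0, "winner": 1.5, "invoice": 0.5, "meeting": -1.0}
          THRESHOLD = 2.0

          def spam_score(text):
              return sum(WEIGHTS.get(w, 0.0) for w in text.lower().split())

          def is_spam(text):
              return spam_score(text) >= THRESHOLD

          print(is_spam("You are a winner claim your viagra now"))           # True
          print(is_spam("Agenda for the meeting and last month's invoice"))  # False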

        • by clonan ( 64380 )

          My original post was simply pointing out that the human brain is NOT and can never be a Turing machine due to the fundamental randomness of the universe. This means that no study of Turing Machines can make any claim on human judgment calls.

          I am not sure the random nature of the universe is sufficient to allow for true 'judgement' but it MIGHT.

          • by radtea ( 464814 )

            My original post was simply pointing out that the human brain is NOT and can never be a Turing machine

            This is true but it has exactly nothing to do with quantum mechanics or randomness. To see this, understand that we can't tell if QM is "truly" (metaphysically) random or just mocking it up really cleverly. Or rather, we can tell, but using inferences so indirect that they make no difference to the operation of the human brain, which is an extremely strongly coupled environment that is completely unlike the areas where "true" quantum randomness exhibits itself. No process in the brain depends in any way on

  • by jlv ( 5619 ) on Wednesday November 19, 2014 @12:56PM (#48418461)

    Does that mean we have to file a bug report if they decide to kill a human?

  • by Galaga88 ( 148206 ) on Wednesday November 19, 2014 @01:00PM (#48418493)

    Presuming that this proof reached via impressively tortured logic does have merit: Does it mean that it is also impossible to build a purely evil robot that would always kill maliciously?

    • by phantomfive ( 622387 ) on Wednesday November 19, 2014 @01:13PM (#48418631) Journal
      The proof essentially involves saying, "there is no way to build an automated process that will determine that the source code of the robot works correctly. Therefore it is impossible to build the source code of the robot correctly." By requiring that the code be tested automatically, they can invoke the halting problem.

      Of course, there are ways to make sure it will halt: you can show that a program is making progress towards its goal at each step; in other words, a huge subset of programs will indeed halt.

      You may need to manually test or formally prove that the code works by hand, instead of using an automated code prover to show that your code works. Really, I wonder what journal publishes this stuff; it's more like a joke paper. Oh, arXiv.
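
      A minimal illustration of the "progress at each step" idea above, assuming Python: if you can exhibit a non-negative integer measure that strictly decreases on every iteration, the loop must terminate, and no general halting oracle is needed.

        def gcd(a, b):
            # Termination argument: b is a non-negative integer, and a % b < b,
            # so b strictly decreases on every iteration. The loop therefore
            # halts -- shown by a decreasing variant, not a halting oracle.
            assert a >= 0 and b >= 0
            while b != 0:
                a, b = b, a % b
            return a

        print(gcd(1071, 462))  # 21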
    • Isn't an atomic bomb just a very, very simple robot?

      while (altitude() > TARGET_ALTITUDE)
              sleep(1);
      explode();

      And yes, it is impossible to determine if that algorithm will ever terminate.

      • by plover ( 150551 )

        Isn't an atomic bomb just a very, very simple robot?

        while (altitude() > TARGET_ALTITUDE)
                sleep(1);
        explode();

        And yes, it is impossible to determine if that algorithm will ever terminate.

        A "good" compiler should throw an error and refuse to compile it, because the function's return can never be reached. An "evil" compiler will spit out an ignorable warning, but let you build your bomb. That implies we need to use evil compilers to program the Kill-O-Bots.

        • Oh, nonsense!

          It's not like the compiler can tell that explode() precludes further processing.

          Any more than the killbot-2100 can tell whether it has killed the last human on Earth, thus leaving its programming eternally unfulfilled....

        • Only if the compiler knows that explode() prevents execution past that point, which it won't, unless specifically designed to look for the case of explode() twiddling an I/O register that it knows is going to cause a catastrophic fission chain reaction.
      • No, because atomic bombs don't have computers in them.

      • by Bengie ( 1121981 )
        Does your atomic bomb have a multi-tasking OS? "sleep(1);" Afraid to tie up the CPU for other processing to run?
    • If it turns out to be impossible to build a purely evil robot that would always kill maliciously, does that mean that a purely evil robot would occasionally kill for the sheer joy of watching someone die?

  • It's just wrong (Score:3, Insightful)

    by Anonymous Coward on Wednesday November 19, 2014 @01:01PM (#48418507)

    Englert and co say a robot can never solve this conundrum because of the halting problem. This is the problem of determining whether an arbitrary computer program, once started, will ever finish running or whether it will continue forever.

    This is simply incorrect. The conundrum (RTFA for details) doesn't involve an arbitrary computer program. It involves a computer program that performs a specific known function. It is perfectly possible for an automated system to verify any reasonable implementation of the known function against the specification. If such a system fails it is because byzantine coding practices have been used - in which case, guilt can be assumed. The Halting problem doesn't apply unless you HAVE to get a correct answer for ALL programs. In this case you just have to get a correct answer for reasonable programs.
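
    One way to read that claim, sketched in Python with made-up functions: when the required behaviour is fully specified and the relevant input space is bounded, an automated checker can compare the implementation against the specification exhaustively and never touch the general halting problem.

      # Hypothetical spec: the module under review must report whether a target
      # is on a small, fixed do-not-engage list. Both functions are made up.
      DO_NOT_ENGAGE = frozenset(range(0, 100, 2))  # stand-in for the real list

      def spec(target_id):
          return target_id in DO_NOT_ENGAGE

      def implementation(target_id):
          return target_id % 2 == 0 and 0 <= target_id < 100

      def verify(input_space=range(-10, 200)):
          # Bounded, exhaustive check: no need to decide halting for arbitrary
          # programs, only to compare outputs over the inputs that can occur.
          return all(implementation(i) == spec(i) for i in input_space)

      print(verify())  # True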

    • Actually you only need a correct answer for 1 program: the one running on the firmware of the robot. At which point the question is simply "over the input space, can the program provide outputs which end human lives?"

      Of course depending how you define that, you can take this right into crazy town - plenty of cellphones have ended human lives over the possible input space.

    • Re:It's just wrong (Score:4, Insightful)

      by sexconker ( 1179573 ) on Wednesday November 19, 2014 @01:21PM (#48418717)

      This.
      The halting problem is about determining whether a general program will terminate or not.
      When you already have a defined program (and machine in this case) in front of you for review, then you can determine whether or not it will halt, whether or not it works, and whether or not it is evil. You have to actually test and inspect it, though. You can't run it through a pre-built automated test and be sure. That is the only consequence of the halting problem.

      The authors make the following leaps:

      We can't know if a program will ever terminate.
      (False - you can, you just can't do so with a general algorithm written before the program.)

      Therefore we can't know all of the things a program can do.
      (False - you know all inputs and outputs and their ranges. You can't know all possible sequences if the program runs forever, but you can know each individual state.)

      Therefore we can't trust that a program isn't malicious.
      (False - you can trust it to a degree of confidence based on the completeness of your testing.)

      Therefore programs shouldn't be given the capability to do harmful things.
      (Stupid - this isn't a logical conclusion. What if we want to build malicious programs? We can and do already. Further, if our goal is to not create malicious programs, then simply having a higher confidence level than we do when giving humans the same capabilities is already an improvement.)
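
      For what "a degree of confidence based on the completeness of your testing" can mean in practice, here is a small sketch (assuming Python; trial_passes() is a stand-in): the statistical "rule of three" says that if N independent random tests all pass, the true failure rate is below roughly 3/N at 95% confidence.

        def trial_passes():
            # Stand-in for one randomized test of the robot's decision code.
            return True

        def estimated_failure_bound(num_trials=3000):
            # "Rule of three": if num_trials independent random tests all pass,
            # the true failure rate is below 3/num_trials with ~95% confidence.
            for _ in range(num_trials):
                if not trial_passes():
                    return None  # a failure voids the confidence argument
            return 3.0 / num_trials

        print(estimated_failure_bound())  # 0.001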

      • Theoretically yes, you may be able to determine if a particular program will halt by testing and inspecting.

        Practically, you may not be able to determine if a program will halt. See the Collatz conjecture [wikipedia.org]. Assume a program that accepts as input a positive integer n and returns the number of steps before the first time the Collatz iteration reaches 1. Does that program halt for all possible legal input values?
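
        For concreteness, that program might look like the sketch below (assuming Python); whether the loop terminates for every positive n is exactly the open Collatz conjecture.

          def collatz_steps(n):
              # Number of steps before the Collatz iteration first reaches 1.
              # Nobody has proved this loop halts for every positive integer n.
              assert n >= 1
              steps = 0
              while n != 1:
                  n = n // 2 if n % 2 == 0 else 3 * n + 1
                  steps += 1
              return steps

          print(collatz_steps(27))  # 111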

        As another point, regardless of whether or not a program or robot can _choose_ to kill a human, Asim

  • No, they can't, and it shows. Furthermore, humans aren't qualified to rule over other humans either. *Might makes right* will always come out on top. That is how nature works.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      *Might makes right* will always come out on top. That is how nature works.

      That's not how nature works. Ever seen a badger chase away a bear? Underdogs win on a regular basis because the stakes are asymmetrical. The weaker side is fighting for survival while the stronger side is fighting for a cheap dinner.

      Might only allows one to destroy an opponent's ability to fight, but that's not how the vast majority of battles are decided. Most battles end when one side loses the desire to fight. Domination at all costs is not a trait that survives and gets selected for.

  • by Jaime2 ( 824950 ) on Wednesday November 19, 2014 @01:05PM (#48418545)
    What the paper said is that computers can't provably always make the right choice. Neither can we. I'll bet computers are capable of doing a lot better than humans, especially given how quickly computers are gaining new abilities compared to how quickly humans are (aren't) gaining them.
    • Re: (Score:3, Interesting)

      by medv4380 ( 1604309 )
      That's how you read it? I read it as: if you create a robot that tries to evaluate whether or not it should kill someone based on ethics, the program will never complete. You can certainly make one that can always kill what you tell it to, but not one that can choose whether or not a given human is a rebel to be shot on sight, or a human that is a part of the new world order. However, I'm more likely to have it kill all humans not holding an IFF tag.
  • by ZombieBraintrust ( 1685608 ) on Wednesday November 19, 2014 @01:05PM (#48418547)

    that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist

    The article misunderstands the halting problem. You could replace robots with humans and murder with any decision involving other people and come to the same conclusion. AI does not try to create perfect solutions. Instead you try to create solutions that work most of the time - approaches that can evolve with trial and error. Ethically, you weigh the positive benefits of success against the negative consequences of your failures.

  • One curious corollary is that if the human brain is a Turing machine, then humans can never decide this issue either

    Well, since thousands of years of human society haven't produced a definitive, objective, and universal bit of moral reasoning, I'm going with a big fat "DUH!" here.

    There's an awful lot of people who think killing is a terrible sin, unless you're doing it to someone in the form of punishment.

    Or that abortion is murder, and murder is bad, unless you happen to be bombing civilians as collateral dama

    • Or that abortion is murder, and murder is bad, unless you happen to be bombing civilians as collateral damage while looking for terrorists.

      You don't even need to expand to terrorism in this example. There are people who think that abortion is murder and murder is bad, but killing doctors who perform abortion is ethically valid behavior.

    • by TheCarp ( 96830 )

      This right here. We can't even agree, and the actual problem is so nuanced that it's almost laughable that a robot as we understand them today could even begin to evaluate the situation.

      For example: if someone is coming at you brandishing a gun, and pointing it at you, is it correct to shoot and possibly kill him?

      On its face, this is simple, of course you can defend yourself. Can a robot? Is a robot's continued operation worth a human life? (I may argue it could be with the imaginary Hollywood-style AI, but

  • "One curious corollary is that if the human brain is a Turing machine, then humans can never decide this issue either, a point that the authors deliberately steer well clear of."

    Of course they go away from that, because otherwise their foolishness and hypocrisy would be exposed.

  • I know they were looking at this in a very theoretical way, but in the real world there are of course physical constraints. We already have "robots" that kill people autonomously, in the form of guided missiles, cruise missiles and smart bombs. However, I think in this case we're talking about identifying an object as a human, and then killing that human. The most simple form of this, which is what we're likely to see in use next, is an autonomous gun turret. However, with any sort of weapon of this kin

  • by netsavior ( 627338 ) on Wednesday November 19, 2014 @01:06PM (#48418567)
    John: Just put up your hand and say, 'I swear I won't kill anyone.'
    Terminator: [Raises hand] I swear I will not kill anyone.
    [stands up and shoots the guard on both knees]



    He'll live.
  • by raymorris ( 2726007 ) on Wednesday November 19, 2014 @01:12PM (#48418623) Journal

    What a silly article, and a waste of three minutes to read it. What they actually showed is that it's possible to construct a scenario in which it's impossible to know for certain what the best decision is, due to lack of information.

    That fact, and their argument, is true whether it's AI making the decision or a human. Sometimes you can't know the outcome of your decisions. So what, decisions still must be made, and can be made.

    Their logic also falls down completely because the logic is basically:

    a) It's possible to imagine one life-and-death scenario in which you can't be sure of the outcome.
    b) Therefore, no life-and-death decisions can be made.
    (wrong, a) just means that _some_ decisions are hard to make, not that _all_ decisions are impossible to make).

    Note the exact same logic is true without the "life-and-death" qualifier:
    a) In some situations, you don't know what the outcome of the decision will be.
    b) Therefore, no decisions can be made (/correctly).

    Again, a) applies to some, not to all. Secondly, just because you can't prove ahead of time which decision will have the best outcome doesn't mean you can't make a decision, and even know that it is the correct decision. An example:

    I offer to make a bet with you regarding the winner of this weekend's football game.
    I say you have to give me a 100 point spread, meaning your team has to win by at least 100 points or else you have to pay me.
    It's an even-money bet.

    The right decision is to not make the bet, because you'd almost surely lose. Sure, it's _possible_ that your team might win by 150 points, so it's _possible_ that taking the bet would have the best outcome. That's a very unlikely outcome, though, so the correct decision _right_now_ is to decline the bet. What happens later, when the game is played, has no effect on what the correct decision was today.
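
    If you want the arithmetic behind "you'd almost surely lose", here is a back-of-the-envelope sketch (assuming Python; the probability is a made-up illustration, not a real model):

      # Toy expected-value check for the 100-point-spread, even-money bet.
      stake = 100.0   # dollars wagered
      p_cover = 1e-4  # assumed chance your team wins by 100+ points (made up)

      expected_value = p_cover * stake - (1 - p_cover) * stake
      print(expected_value)  # about -99.98: declining the bet is correct today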

    • by geekoid ( 135745 )

      " due to lack of information."
      I would say:
      due to lack of infinite information.

      Let's say the spread on that game is even.
      So if we both pick a team and bet, it's even money.
      But you say 'Give me 3 points and it's 2 to 1. Every dollar I bet, you will give me two.'

      Should I take the bet? 3 to 1? 4 to 1?
      If you wanted me to give you 100 points, but you would pay a million to 1, I would probably bet a buck. Not with any hope of winning, but with the hope that I'll have a great story about the time I made a mill

    • Well there's the crux of their whole flawed argument. They're conflating "correct decision" with "best outcome" possible. Human judgement and morals don't work on what will result in the best outcome, but what will result in the most reasonable outcome.

  • by c ( 8461 ) <beauregardcp@gmail.com> on Wednesday November 19, 2014 @01:12PM (#48418625)

    "Robots don't kill people. Robot programmers kill people."

  • Artificial Intelligence doesn't work like this. Instead, AI will test a number of outputs and then adjust its attempts at getting a 'right' answer as the process converges on being right more frequently. And so when faced with a question about killing humans, it boils down to finding out whether killing humans is one of the most likely responses to achieve the desired outcome. That desired outcome can be quite abstract, too. It doesn't have to be something like "There's a bad guy in front of you wit

    • AI not required. If movement detected in object of predetermined size within weapon range, shoot it until it stops moving. Example [gamesradar.com].

      Reserve the AI effort for hunting/gathering ammunition.
      We're all gonna die.

  • Wow, that's a really convoluted path they take to get to "we don't like autonomous kill bots".
    Hey, that's great and everything. Very noble of you. I'm sure people like you also lamented the invention of repeating rifles, the air force, and ICBMs. But it REALLY doesn't change much of anything. An academic paper on how killing is, like, BAD duuuuuude, just doesn't impact the people wanting, making, buying, selling, or using these things.

    Let me put it this way: You can tell the scorpion not to sting you.

    • by geekoid ( 135745 )

      It has nothing to do with its nature and everything to do with the scorpion's inability to understand you.
      I always hated that saying, and doubly so for the fable. Why doesn't anyone note that the frog acted outside its nature?

  • A Turing machine requires an infinite memory. The human brain is, at best, a linear bounded automaton.

  • by Anonymous Coward

    Fry: "I heard one time you single-handedly defeated a horde of rampaging somethings in the something something system"
    Brannigan: "Killbots? A trifle. It was simply a matter of outsmarting them."
    Fry: "Wow, I never would've thought of that."
    Brannigan: "You see, killbots have a preset kill limit. Knowing their weakness, I sent wave after wave of my own men at them until they reached their limit and shut down."

  • It is certainly interesting that deciding whether or not to kill some fleshy humans can be demonstrated to be circumscribed by the halting problem; but it's always a bit irksome to see another proof-of-limitations-of-Turing-complete-systems that (either by omission, or in more optimistic cases directly) ignores the distinct possibility that humans are no more than Turing complete.

    Humans certainly are enormously capable at approximate solutions to brutally nasty problems (e.g. computational linguistics vs.
  • One curious corollary is that if the human brain is a Turing machine, then humans can never decide this issue either, a point that the authors deliberately steer well clear of.

    It's not curious at all. The goal was to determine if a computer can decide with certainty whether another agent intends to do harm. This is obviously unsolvable, even for humans. Of course, we don't require humans to be absolutely certain in all cases before pulling the trigger, we just expect reasonable belief that oneself or others

  • By this logic, computers couldn't do anything, since there are conditions in which the machine must do a thing and not do a thing. And they are generally pretty reliable, once properly set up, at not doing certain things they've been programmed to not do.

    What this argument is saying is that despite the fact that computers are known to be reliable in many situations we can't rely upon them to do this specific thing.

    Because.... ?

    Now, am I a fan of using robots to kill people? No. I'd rather prefer not to have that ha

  • The halting problem says that you cannot determine if any completely arbitrary program will necessarily end, and this can be generalized to show that the output of programs cannot always be predicted. It does not say that the behavior of a program in a set bounded by certain restrictions cannot be predicted.

    Take an X-ray machine for example. We know these can kill people (look up Therac-25). However, if we write an overall program that calls a supplied program to calculate the treat
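
    A minimal sketch of that wrapper idea (assuming Python; the dose limit and the supplied routine are hypothetical): the outer program treats the supplied calculation as untrusted and enforces a hard safety envelope regardless of what it returns, or whether it returns at all.

      import multiprocessing as mp

      MAX_SAFE_DOSE = 2.0  # hypothetical hardware limit, in grays
      CALC_TIMEOUT = 1.0   # seconds the supplied routine gets before we give up

      def untrusted_dose_calculation(patient_data, result):
          # Stand-in for the vendor-supplied routine; assume we cannot verify it.
          result.value = patient_data.get("prescribed_dose", 0.0) * 1.07

      def safe_dose(patient_data):
          result = mp.Value("d", -1.0)
          worker = mp.Process(target=untrusted_dose_calculation,
                              args=(patient_data, result))
          worker.start()
          worker.join(CALC_TIMEOUT)
          if worker.is_alive():   # the supplied code may never halt...
              worker.terminate()  # ...so the wrapper halts it instead
              worker.join()
              return 0.0          # refuse to treat rather than guess
          if result.value < 0.0:
              return 0.0
          return min(result.value, MAX_SAFE_DOSE)  # clamp into the safe envelope

      if __name__ == "__main__":
          print(safe_dose({"prescribed_dose": 1.5}))   # 1.605
          print(safe_dose({"prescribed_dose": 50.0}))  # clamped to 2.0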

  • The problem isn't usually the "halting problem", it's lack of complete information, errors, and a whole host of other limitations of humans and the real world.

    We have a way of dealing with that in the real world: "when in doubt, avoid killing people" and "do the best you can".

    It's no different for robots. Even the best robots will accidentally kill people. As long as they do it less than humans in similar situations, we still come out ahead.

    • We have a way of dealing with that in the real world: "when in doubt, avoid killing people" and "do the best you can".

      Unfortunately the US solves this with "use a bigger bomb and classify all the dead innocent bystanders as combatants".

  • The "researchers" did not prove anything to do with what the article claims. What the article really proved is that it is impossible for a robot to make an ethical decision, if that ethical decision is based on analyzing source code.

    They created a scenario where the "robot" must determine if a computer program was written correctly or not. An ethical decision hinges on that. If the program is written correctly, it must do one thing, and if the program is written maliciously then it must do another. Then

  • by jd.schmidt ( 919212 ) on Wednesday November 19, 2014 @01:51PM (#48419035)
    The flaw in their logic is this: we don't really care if it works every time, just most of the time. So if the robot can do the right thing more often than not, rather like people, to such a degree that we view it as being a net benefit, we are willing to accept the occasional mistake or failure for a net overall good. So they would have to prove the program would fail more often than succeed, which they probably can't. That said, I DO wish it were possible to enforce Asimov's laws of robotics. Maybe some day..
  • Behold how the blind lead each other
    The philosopher
    You know so much about nothing at all

  • by istartedi ( 132515 ) on Wednesday November 19, 2014 @02:04PM (#48419177) Journal

    When theory conflicts with observation, you have two choices. You can modify your theory to fit the observation, or your observations to fit your theory. The first choice is what we generally regard as science. The second choice occurs in a number of circumstances including, but by no means limited to: religion, politics, mental illness, and general stupidity.

    Note, checking to make sure that your observations are accurate is not the same thing as modifying them. "Did I fail to see the gorilla?" is valid when theory indicates gorillas should be present. "I saw a gorilla because my guru said I should" isn't.

  • Or use the current method ... "Kill them all and let $DEITY sort it out."

  • A well-programmed police bot will not fire 3 shots into the back of a fleeing teenager. It may well only fire shots when innocent humans are in immediate danger and permit its own destruction otherwise, as more bots can always be sent to complete the arrest non-lethally.

    The same bot might roll over a toddler hiding under the blanket because its programming doesn't cover this exact case and it doesn't have imagination. However, these mistakes will be rarer than with a human police officer or soldier. And after they happen once,

  • One curious corollary is that if the human brain is a Turing machine, then humans can never decide this issue either, a point that the authors deliberately steer well clear of.

    Instead of considering an 'Evil Programmer'..... consider 'Evil Judge', 'Evil Military General', 'Nazi', or 'Evil Dictator'

    And instead of just deciding this issue; add the problem of surviving this issue together with the problem of deciding how to maximize your chances at survival and happiness in concert with previous issue

  • The real problem is that the actions of people, in some circumstances, are considered beyond good and evil, and all the silly hypothetical situations in the world don't begin to capture this. In the heat of the moment, with only seconds to decide, people can't be relied on to make a choice that conforms to some explicit moral code. On account of that, when faced with passing judgement on the actions of people in emergency situations, we don't pass judgement; rather, we forgive them.

    Robots, however, are pr

  • by Rinikusu ( 28164 ) on Wednesday November 19, 2014 @02:55PM (#48419715)

    I just finished my ISIS-killing robot and it's doing just fine. It hasn't killed any ISIS members yet, but it does seem to be doing a fine job killing hipsters. I might not fix that for a while...
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    (I've totally got an ISIS beard.. Please don't kill me, robot.)

  • Sorry, but that is not what the halting problem says.

    The halting problem states that for any interesting property (in this example: "Is this robot code safe to run?") there exist programs with this property for which you cannot prove that the program has the property.

    That is: there exist robot programs which are safe to run, but for which we can never prove that they are safe.

    And the general solution is to only run programs where we can prove that they are safe. This means that we do reject safe programs because we can't prove that they are safe*, but it does not in any way change the programs which we can express. That is: for any program which is safe, but where safety can't be proved, there exists a program which behaves in exactly the same way for all inputs, but which can be proved safe.**

    *If we can't prove that a program is safe, then it is either because no such proof exists, or because we are not good enough to prove it.

    **No, this does not contradict the halting problem, due to the assumption that the program is safe. If the program is not safe, then the transformation will convert the program to a safe program which obviously will not do the same
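
    A minimal sketch of "only run programs where we can prove that they are safe", assuming Python and a toy definition of provable: accept a program only if every loop is a for-loop over a constant range. The check is sound but deliberately incomplete, so it rejects plenty of perfectly safe code, which is exactly the trade-off described above.

      import ast

      def provably_bounded(source):
          # Deliberately conservative: accept a program only if every loop is
          # "for ... in range(<constant>)". Sound but incomplete -- it rejects
          # plenty of perfectly safe programs it cannot prove anything about.
          tree = ast.parse(source)
          for node in ast.walk(tree):
              if isinstance(node, ast.While):
                  return False  # cannot bound it, so reject
              if isinstance(node, ast.For):
                  it = node.iter
                  if not (isinstance(it, ast.Call)
                          and isinstance(it.func, ast.Name)
                          and it.func.id == "range"
                          and all(isinstance(a, ast.Constant) for a in it.args)):
                      return False
          return True

      print(provably_bounded("for i in range(10):\n    print(i)"))  # True
      print(provably_bounded("while x != 1:\n    x = step(x)"))     # False, even if safe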

    • by Animats ( 122034 )

      Mod parent up.

      That's correct. The best known demonstration of this is the Microsoft Static Driver Verifier [microsoft.com], which every signed driver since Windows 7 has passed. It's a proof of correctness system which checks drivers for buffer overflows, bad pointers, and bad parameters to the APIs drivers use. It works by symbolically tracing through the program, forking off a sub-analysis at each branch point. It can be slow, but it works.

      Microsoft Research reports that in about 5% of the cases, the Verifier canno
