Halting Problem Proves That Lethal Robots Cannot Correctly Decide To Kill Humans
KentuckyFC writes: The halting problem is to determine whether an arbitrary computer program, once started, will ever finish running or whether it will continue forever. In 1936, Alan Turing famously showed that there is no general algorithm that can solve this problem. Now a group of computer scientists and ethicists have used the halting problem to tackle the question of how a weaponized robot could decide to kill a human. Their trick is to reformulate the problem in algorithmic terms by considering an evil computer programmer who writes a piece of software on which human lives depend.
The question is whether the software is entirely benign or whether it can ever operate in a way that ends up killing people. In general, a robot could never decide the answer to this question. As a result, autonomous robots should never be designed to kill or harm humans, say the authors, even though various lethal autonomous robots are already available. One curious corollary is that if the human brain is a Turing machine, then humans can never decide this issue either, a point that the authors deliberately steer well clear of.
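For reference, Turing's argument itself fits in a few lines. Here is a minimal Python sketch of the standard diagonalization (function names are illustrative, not from the paper):

# Suppose, for contradiction, a general oracle halts(prog, arg) existed,
# returning True iff prog(arg) eventually terminates.

def halts(prog, arg) -> bool:
    raise NotImplementedError("no such general oracle can exist")

def troublemaker(prog):
    # Do the opposite of whatever the oracle predicts for prog run on itself.
    if halts(prog, prog):
        while True:       # loop forever if the oracle says "it halts"
            pass
    return "halted"       # halt if the oracle says "it loops"

# Feeding troublemaker to itself contradicts the oracle either way,
# so no general halts() can exist:
# troublemaker(troublemaker)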
I think (Score:5, Insightful)
I'm just going to reformulate the problem by considering idiots who use unrealistic, not-supported-by-evidence premises to make general statements as one that calls for sending killer robots after said idiots.
there is an end to the halting problem (Score:3)
branch to the HCF operand on any error.
(newbies, that is the Halt, Catch Fire command)
Re: (Score:2)
0/10 didn't even chuckle.
Re:I think (Score:5, Insightful)
The premise of TFA is that killer robots need to be perfect. They don't. They just need to be better than humans.
Which is more likely to shoot a civilian:
1. A carefully programmed and thoroughly tested robot.
2. A scared, tired, and jumpy 18 year old soldier, who hasn't slept in two days, and saw his best friend get his legs blown off by a booby trap an hour ago.
Re:I think (Score:5, Insightful)
1, when ordered to shoot civilians.
Re: (Score:2)
In that case, the robot didn't make the decision. So, no.
Re: (Score:3, Insightful)
That which a bunch of ethicists say should happen, and that which actually does happen, are often at odds.
The military will build killer robots. The primary concern for behavior development won't so much be safety mechanisms against killing innocents or children as safety mechanisms against losing control. They don't want their robots turning on them. Beyond that, they want the robots to obediently slaughter whatever they are pointed at. This level of obedience is precisely what makes them useful as a weap
Re: (Score:3)
You can't compute us and them in an analogue world where the real value is never actually 0 or 1 but always a shifting value in between and usually multiple shifting values in between. YOU can't, and n
Re: (Score:3)
Surely, all of those are definitions of a civilian.
Re:I think (Score:5, Informative)
So we're cutting down the criteria to not just people carrying guns, but people carrying guns actively shooting at you?
Actually, the definition of civilian [wikipedia.org] is well-defined in the Laws of War, commonly codified today in international laws by Protocol I [wikipedia.org] of the Geneva Conventions.
In sum, a "civilian" is anyone who is not a "privileged combatant [wikipedia.org]," i.e., basically someone (1) carrying arms, (2) taking orders in an organized military structure, and (3) following the laws and customs of warfare. (Also, usually privileged combatants are required to wear insignia.)
Someone who carries arms but does not satisfy those criteria is still a "civilian," though if those arms are actively used in support of an organized military force, he/she may be a civilian who is also an "unprivileged combatant," i.e., he/she is not eligible for protection under the normal rules for prisoners of war.
So, actually the criteria are much more specific than you describe. "Civilians" can fight in wars, in which case they become "combatants," but they do not cease to be "civilians," as the term is commonly understood in contrast to organized military personnel.
As for the farmer in GGP's example, he's clearly a civilian unless he's a member of a military force. If he carries a gun but only for his own protection and does not engage in direct action against an enemy, he is probably assumed to be a "non-combatant" as well, under international legal definitions.
Not Entirely the Right Question (Score:3)
Which is more likely to shoot a civilian...
That's not entirely the right question. You need to account for which is more predictable for another human. If you are in the middle of a war zone with soldiers getting blown up by booby traps then you might expect a human soldier to be rather nervous and so you would approach them with extreme caution or get out of the way. However if you have a robot wandering down a street in a peaceful area and the right set of circumstances just happen to cause it to misidentify a random, innocent person as a target
Re: (Score:3)
Now which is more likely to occur:
1. A carefully programmed and thoroughly tested robot.
or
2. The lowest-tender robot, with just barely sufficient buggy code to get past the tender process, generating a huge bonus for the psychopathic executive team when they produce 10,000 of them.
Bad luck under capitalism: corporations will never, ever produce "a carefully programmed and thoroughly tested robot". It just won't happen; it can't happen. Corporate greed controlled by psychopaths guarantees it won't happen, that
Re: (Score:3)
Also, built by the lowest bidder.
Re:I think (Score:5, Insightful)
Product liability never results in anyone actually responsible for the death going to jail or facing huge penalties.
A multinational might pay out a couple of million in product liability, but then it will just be chalked up to the cost of doing business.
If the multinational is a defense contractor (BAE, Raytheon, Lockheed, General Dynamics, etc), it will all be swept under the rug and more money will be thrown at the contractor to "fix" it.
That's the reality.
--
BMO
The Government Doesn't think like a Person (Score:3)
By the same logic (Score:5, Insightful)
By the same logic, computers should not be allowed in any life-critical situation. That includes hospital equipment, airplanes, traffic control, etc. etc.
Fortunately, we don't judge the reliability of computers based on the ability to mathematically prove that nobody has put evil code in on purpose.
Re: (Score:2, Interesting)
And let's not forget the "Better at it than humans" heuristic. As long as "Jaywalking under the influence of melanin" is sometimes a capital crime, that's not a hard target to hit.
Re: (Score:2, Insightful)
Skin color. Either the guy is making a political statement or he thinks black people are harder to see at night.
Re:By the same logic (Score:4, Interesting)
In your examples, there are humans in the loop.
In this case, you have a robot trying to autonomously decide "kill" or "don't kill" when it encounters a human.
Hospital equipment - it's generally observed by personnel who after failures can decide to not use the equipment further (see Therac 25), or that changes need to be made in order to use the equipment. The equipment never hooks itself up to a patient automatically and provides treatment without a human involved. Sure there are errors that kill people unintentionally, but then there's a human choice to simply take the equipment out of service. E.g., an AED is mostly autonomous, but if a model of AED consistently fails in its diagnosis, humans can easily replace said AED with a different model. (You can't trust said AED to take itself out of service).
Airplanes - you still have humans "in the loop" and there have been many a time when said humans have to be told that some equipment can't be used in the way it was used. Again, the airplane doesn't takeoff, fly, and land without human intervention. In bad cases, the FAA can issue a mandatory airworthiness directive that says said plane cannot leave the ground without changes being made. In which case human pilots check for those changes before they decide to fly it. The airplane won't take off on its own.
Traffic control - again, humans in the loop. You'll get accidents and gridlock when lights fail, but the traffic light doesn't force you to hit the gas - you can decide that because of the mess, to simply stay put and not get involved.
Remember, in an autonomous system, you need a mechanism to determine if the system is functioning normally. Of course, said system cannot be a part of the autonomous system, because anomalous behavior may be missed (it's anomalous, so you can't even trust the system that's supposed to detect the behavior).
In all those cases, the monitoring system is external and can be made to halt an anomalous system - equipment can be put aside and not used, hazardous situations avoided by disobeying, etc.
Sure, humans are very prone to failure; that's why we have computers, which are far less prone to failure. But the fact that a computer is far less prone to making an error doesn't mean we have to trust it implicitly just because we're more prone to making a mistake. It's why we don't trust computers to do everything for us - we expect things to work, but when indications are that they don't, we have measures to try to prevent a situation from getting worse.
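As a concrete illustration of that external-monitor point, here's a rough, hypothetical sketch (all names made up): the watchdog lives outside the autonomous system, checks an independent sensor rather than the system's self-report, and its only power is to halt.

import time

HARD_LIMIT = 2.0   # made-up safety threshold, for illustration only

def read_independent_sensor() -> float:
    """Stand-in for a measurement the autonomous system cannot influence."""
    return 0.5

def halt_system() -> None:
    """Stand-in for cutting power / taking the unit out of service."""
    print("anomaly detected: unit taken out of service; humans decide what happens next")

def external_watchdog(poll_seconds: float = 1.0) -> None:
    while True:
        if read_independent_sensor() > HARD_LIMIT:
            halt_system()        # fail safe: stop the machine, don't try to "fix" it
            break
        time.sleep(poll_seconds)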
Re: (Score:3)
So how many humans have to die before recognizing that the AED is faulty? If it's a subtle fault, it might be delivering a barely ineffective treatment, with the death written off as an unsaveable patient. The Therac-25 failure was a bit more dramatic, but it still killed multiple patients.
Would we accept the same levels of failure from the Kill-O-Bot 2000? We already fire missiles into crowds of people or convoys in order to take out a single high value target. If the Kill-O-Bot was more specific than a missile, but less tha
Re: (Score:2)
And somehow there is no system in place for killer robots?
Re:By the same logic (Score:5, Insightful)
Agreed. The authors set up a nearly impossibly complex ethical dilemma that would freeze even a human brain into probable inaction, let alone a computer one, and then claim "See? Because a computer can't guarantee the correct outcome, we can therefore never let a computer make that decision." It seems to be almost the very definition of a straw man to me.
The entire exercise seems to be a deliberate attempt to reach this conclusion, which they helpfully spell out in case anyone missed the not-so-subtle lead: "Robots should not be designed solely or primarily to kill or harm humans."
I'm in no hurry to turn loose an army of armed robots either, but saying that you can "prove" that an algorithm can't make a fuzzy decision 100% of the time? Well, yeah, no shit. A human sure as hell can't either. But what if the computer can do it far more accurately in 99% of the cases, because it doesn't have all those pesky "I'm fearing for my life and hopped up on adrenaline so I'm going to shoot now and think later" reflexes of a human?
Re: (Score:3)
For some values of "decide."
Should a computer program be put in control of strategic operations, deciding when, where and what to attack? No.
Could a drone be reasonably programmed to identify combatants in a specific area and kill them without "unacceptable" collateral damage? Maybe.
Could a drone be ordered to kill a specific target on a battlefield? Absolutely.
I think it's mostly the third type the military is interested in. The commander still runs the battle and the robots are only semi-autonomous. That
The human brain is not a Turing machine (Score:4, Insightful)
Exhibit A, the human skull: Not enough room for an infinite tape.
Which is why we have gods... (Score:2)
Exhibit B, God (or Gods), generally regarded as being infinite/omnipresent/omnipotent/otherwise not subject to laws of physics - hence plenty of room for an infinite tape.
God does the complicated bit of deciding whether puny humans should kill or not - the "why" - leaving the humans to decide the simple bits like "when / who" (goes first), "how" (which bits to cut / shoot / throttle / stone), and which way up to hold the camera.
Quantum Mechanics and Determinism (Score:2)
The problem with Turing machines is that they are by definition deterministic systems. A certain input will give a specific output. That is why they can't make a "judgment" call.
The universe as a whole is NOT deterministic, as Quantum Mechanics proves. QM is based on true randomness (obviously a simplification, but go with it for this conversation). Our 'machines' deal with this randomness and even incorporate it into some operations. So a specific input will NOT always generate the same response.
It is th
Re: (Score:2)
I'm not sure I'd describe a "judgement call" as being non-deterministic. It's really better described as fuzzy logic, and computers do it all the time, such as in spam filters. The difference is that humans have a lifetime of learning and context for them to help make those judgments, where most computer algorithms don't have that extended context to draw from.
I don't see how true randomness has anything to do with these sorts of decision-making processes or with quantum mechanics in general.
Re: (Score:3)
My original post was simply pointing out that the human brain is NOT and can never be a Turing machine due to the fundamental randomness of the universe. This means that no study of Turing Machines can make any claim on human judgment calls.
I am not sure the random nature of the universe is sufficient to allow for true 'judgement' but it MIGHT.
Re: (Score:3)
My original post was simply pointing out that the human brain is NOT and can never be a Turing machine
This is true but it has exactly nothing to do with quantum mechanics or randomness. To see this, understand that we can't tell if QM is "truly" (metaphysically) random or just mocking it up really cleverly. Or rather, we can tell, but using inferences so indirect that they make no difference to the operation of the human brain, which is an extremely strongly coupled environment that is completely unlike the areas where "true" quantum randomness exhibits itself. No process in the brain depends in any way on
Re: (Score:2)
Please tell me exactly when a specific electron is 14 billion 123 million 567 thousand 324 years and 23485723048752 seconds after the big bang...
You CAN'T!
It isn't just knowing the starting conditions it is about being able to calculate every state between the beginning and the end.
Ommmmm... (Score:2)
The past doesn't exist. The future doesn't exist. You are standing on the pinnacle of now. Watch your step.
Re: (Score:2)
I make judgments about what to do in the future based on what I did or learned in the past.
Of course they exist.
Re: (Score:2)
http://en.wikipedia.org/wiki/Quantum_mechanics/ [wikipedia.org]
I know wikipedia but I am lazy and don't really care...
Check out "During a measurement, on the other hand, the change of the initial wavefunction into another, later wavefunction is not deterministic, it is unpredictable (i.e., random). A time-evolution simulation can be seen here.[28][29]"
There are two cited references.
A large number of quantum particles seem to act in a deterministic way, but this is simply the law of large numbers.
Re: (Score:2)
My admittedly limited understanding is that the randomness is a fundamental requirement of the theory.
I find QM interesting and have read up on it but I am certainly not a theoretical physicist...
only incorrectly decide to kill humans? (Score:3)
Does that mean we have to file a bug report if they decide to kill a human?
Re: (Score:3, Funny)
Stop picking on systemd! Just give it a chance!
Re:only incorrectly decide to kill humans? (Score:5, Funny)
If you have a problem with your Killbot's operation, please call 1-800-KILL-HMNS and we'll send a customer service Killbot to execute your trouble ticket right away. We won't rest until there are no bug reports submitted by humans.
Re: (Score:3)
Once we get to the point of building and testing killer robots, I predict that engineering management is going to be a LOT more polite than they are today.
Re: (Score:2)
There must be a crapload of bug reports filed against Bender, then.
Impossible to build purely evil robots? (Score:5, Insightful)
Presuming that this proof reached via impressively tortured logic does have merit: Does it mean that it is also impossible to build a purely evil robot that would always kill maliciously?
Re:Impossible to build purely evil robots? (Score:4, Insightful)
Of course, there are ways to make sure it will halt: you can show that a program is making progress towards its goal at each step. In other words, a huge subset of programs will indeed halt.
You may need to manually test or formally prove by hand that the code works, instead of using an automated code prover to show that it works. Really, I wonder what journal publishes this stuff; it's more like a joke paper. Oh, arXiv.
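For instance, here's a minimal, illustrative example of "progress at each step": a loop whose non-negative integer variant strictly decreases provably halts, no halting-problem oracle required (names are made up).

def countdown_transfer(amount: int) -> int:
    """Transfer `amount` units one at a time; `remaining` is the termination variant."""
    remaining = amount          # non-negative integer
    transferred = 0
    while remaining > 0:        # loop guard on the variant
        remaining -= 1          # variant strictly decreases every iteration
        transferred += 1
    # A non-negative integer cannot decrease forever, so the loop must halt.
    return transferred

assert countdown_transfer(5) == 5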
Re: (Score:2)
Isn't an atomic bomb just a very, very simple robot?
while (altitude() > TARGET_ALTITUDE)
    sleep(1);
explode();
And yes, it is impossible to determine if that algorithm will ever terminate.
Re: (Score:2)
Isn't an atomic bomb just a very, very simple robot?
while (altitude() > TARGET_ALTITUDE)
    sleep(1);
explode();
And yes, it is impossible to determine if that algorithm will ever terminate.
A "good" compiler should throw an error and refuse to compile it, because the function's return can never be reached. An "evil" compiler will spit out an ignorable warning, but let you build your bomb. That implies we need to use evil compilers to program the Kill-O-Bots.
Re: (Score:2)
Oh, nonsense!
It's not like the compiler can tell that explode() precludes further processing.
Any more than the killbot-2100 can tell whether it has killed the last human on Earth, thus leaving its programming eternally unfulfilled....
Re: (Score:2)
No, because atomic bombs don't have computers in them.
Evil robots can therefore feel joy? (Score:2)
If it turns out to be impossible to build a purely evil robot that would always kill maliciously, does that mean that a purely evil robot would occasionally kill for the sheer joy of watching someone die?
It's just wrong (Score:3, Insightful)
Englert and co say a robot can never solve this conundrum because of the halting problem. This is the problem of determining whether an arbitrary computer program, once started, will ever finish running or whether it will continue forever.
This is simply incorrect. The conundrum (RTFA for details) doesn't involve an arbitrary computer program. It involves a computer program that performs a specific known function. It is perfectly possible for an automated system to verify any reasonable implementation of the known function against the specification. If such a system fails it is because byzantine coding practices have been used - in which case, guilt can be assumed. The Halting problem doesn't apply unless you HAVE to get a correct answer for ALL programs. In this case you just have to get a correct answer for reasonable programs.
Re: (Score:2)
Actually you only need a correct answer for 1 program: the one running on the firmware of the robot. At which point the question is simply "over the input space, can the program provide outputs which end human lives?"
Of course depending how you define that, you can take this right into crazy town - plenty of cellphones have ended human lives over the possible input space.
Re:It's just wrong (Score:4, Insightful)
This.
The halting problem is about determining whether a general program will terminate or not.
When you already have a defined program (and machine in this case) in front of you for review, then you can determine whether or not it will halt, whether or not it works, and whether or not it is evil. You have to actually test and inspect it, though. You can't run it through a pre-built automated test and be sure. That is the only consequence of the halting problem.
The authors make the following leaps:
We can't know if a program will ever terminate.
(False - you can, you just can't do so with a general algorithm written before the program.)
Therefore we can't know all of the things a program can do.
(False - you know all inputs and outputs and their ranges. You can't know all possible sequences if the program runs forever, but you can know each individual state.)
Therefore we can't trust that a program isn't malicious.
(False - you can trust it to a degree of confidence based on the completeness of your testing.)
Therefore programs shouldn't be given the capability to do harmful things.
(Stupid - this isn't a logical conclusion. What if we want to build malicious programs? We can and do already. Further, if our goal is to not create malicious programs, then simply having a higher confidence level than we have when giving humans the same capabilities is already an improvement.)
Re: (Score:3)
Theoretically yes, you may be able to determine if a particular program will halt by testing and inspecting.
Practically, you may not be able to determine if a program will halt. See the Collatz conjecture [wikipedia.org]. Assume a program that accepts as input a positive integer n and returns the number of steps before the first time the Collatz iteration reaches 1. Does that program halt for all possible legal input values?
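For the curious, the program the parent describes is a few lines to write, and as far as anyone knows nobody can prove it halts for every n (illustrative Python):

def collatz_steps(n: int) -> int:
    """Return the number of Collatz steps until n first reaches 1.

    Easy to write, and it halts for every n ever tested, yet whether it
    halts for *all* positive integers is the open Collatz conjecture.
    """
    if n < 1:
        raise ValueError("n must be a positive integer")
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

assert collatz_steps(6) == 8   # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1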
As another point, regardless of whether or not a program or robot can _choose_ to kill a human, Asim
Re: (Score:2)
For any computer program with a finite number of states (finite memory) you can determine whether it halts by running it long enough that it must be looping.
For a computer with 16384 states (an 8-state Turing machine with an 8-position binary tape: 8 states * 8 positions * 2^8 values that can be on the tape) you can tell if any arbitrary progra
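Here's a rough sketch of that finite-memory argument (illustrative Python, not from TFA): simulate the machine while recording every configuration seen; with finitely many configurations it either halts or must repeat one, which means it loops forever.

def halts_finite_memory(step, initial_state):
    """Decide halting for a machine with finitely many configurations.

    `step(state)` returns the next configuration, or None when the
    machine halts. States must be hashable and finite in number.
    """
    seen = set()
    state = initial_state
    while state is not None:
        if state in seen:
            return False        # repeated configuration => it loops forever
        seen.add(state)
        state = step(state)
    return True                 # reached a halting configuration

# Toy examples: a countdown that halts, and a bit that flips forever.
assert halts_finite_memory(lambda s: s - 1 if s > 0 else None, 3) is True
assert halts_finite_memory(lambda s: 1 - s, 0) is False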
humans can never decide this issue either (Score:2, Insightful)
No, they can't, and it shows. Furthermore, humans aren't qualified to rule over other humans either. *Might makes right* will always come out on top. That is how nature works.
Re: (Score:2, Insightful)
*Might makes right* will always come out on top. That is how nature works.
That's not how nature works. Ever seen a badger chase away a bear? Underdogs win on a regular basis because the stakes are asymmetrical. The weaker side is fighting for survival while the stronger side is fighting for a cheap dinner.
Might only allows one to destroy an opponent's ability to fight, but that's not how the vast majority of battles are decided. Most battles end when one side loses the desire to fight. Domination at all costs is not a trait that survives and gets selected for.
Re: (Score:2)
people are interested in benefiting from this
Come to the dark side, we have untold riches and unlimited power.
And cookies!
Bad Headline as Usual (Score:5, Insightful)
Halting Problem (Score:3)
that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist
The article misunderstands the halting problem. You could replace robots with humans and murder with any decision involving other people and come to the same conclusion. AI does not try to create perfect solutions. Instead you try to create solutions that work most of the time - approaches that can evolve with trial and error. Ethically you weigh the positive benefits of success against the negative consequences of your failures.
Ummm ... duh? (Score:2)
Well, since thousands of years of human society hasn't produced a definitive, objective, and universal bit of moral reasoning, I'm going with a big fat "DUH!" here.
There's an awful lot of people who think killing is a terrible sin, unless you're doing it to someone in the form of punishment.
Or that abortion is murder, and murder is bad, unless you happen to be bombing civilians as collateral dama
Re: (Score:2)
You don't even need to expand to terrorism in this example. There are people who think that abortion is murder and murder is bad, but killing doctors who perform abortion is ethically valid behavior.
Re: (Score:2)
This right here. We can't even agree, and the actual problem is so nuanced that it's almost laughable that a robot as we understand them today could even begin to evaluate the situation.
For example: if someone is coming at you brandishing a gun, and pointing it at you, is it correct to shoot and possibly kill him?
On its face, this is simple: of course you can defend yourself. Can a robot? Is a robot's continued operation worth a human life? (I may argue it could be with the imaginary Hollywood-style AI, but
Steer Clear! (Score:2)
"One curious corollary is that if the human brain is a Turing machine, then humans can never decide this issue either, a point that the authors deliberately steer well clear of."
Of course they go away from that, because otherwise their foolishness and hypocrisy would be exposed.
Physical constraints (Score:2)
I know they were looking at this in a very theoretical way, but in the real world there are of course physical constraints. We already have "robots" that kill people autonomously, in the form of guided missiles, cruise missiles and smart bombs. However, I think in this case we're talking about identifying an object as a human, and then killing that human. The most simple form of this, which is what we're likely to see in use next, is an autonomous gun turret. However, with any sort of weapon of this kin
cromulent quote (Score:3)
Terminator: [Raises hand] I swear I will not kill anyone.
[stands up and shoots the guard in both knees]
He'll live.
Silly article, waste of time (Score:4, Insightful)
What a silly article, and a waste of three minutes to read it. What they actually showed is that it's possible to construct a scenario in which it's impossible to know for certain what the best decision is, due to lack of information.
That fact, and their argument, is true whether it's AI making the decision or a human. Sometimes you can't know the outcome of your decisions. So what, decisions still must be made, and can be made.
Their logic also falls down completely because the logic is basically:
a) It's possible to imagine one life-and-death scenario in which you can't be sure of the outcome.
b) Therefore, no life-and-death decisions can be made.
(wrong, a) just means that _some_ decisions are hard to make, not that _all_ decisions are impossible to make).
Note the exact same logic is true without the "life-and-death" qualifier:
a) In some situations, you don't know what the outcome of the decision will be.
b) Therefore, no decisions can be made (/correctly).
Again, a) applies to some, not to all. Secondly, just because you can't prove ahead of time which decision will have the best outcome doesn't mean you can't make a decision, and even know that it is the correct decision. An example:
I offer to make a bet with you regarding the winner of this weekend's football game.
I say you have to give me a 100 point spread, meaning your team has to win by at least 100 points or else you have to pay me.
It's an even-money bet.
The right decision is to not make the bet, because you'd almost surely lose. Sure, it's _possible_ that your team might win by 150 points, so it's _possible_ that taking the bet would have the best outcome. That's a very unlikely outcome, though, so the correct decision _right_now_ is to decline the bet. What happens later, when the game is played, has no effect on what the correct decision was today.
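For what it's worth, the arithmetic behind declining is a one-liner; the win probability below is a made-up number purely for illustration:

# Assumed numbers: an even-money $100 bet, and say a 1-in-1000 chance
# your team wins by 100+ points.
stake = 100
p_win = 0.001
expected_value = p_win * stake - (1 - p_win) * stake
print(expected_value)   # -99.8: declining is the right decision *today*,
                        # whatever the final score later turns out to be.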
Re: (Score:2)
" due to lack of information."
I would say:
due to lack of infinite information.
Let's say the spread on that game is even.
So if we both pick a team and bet, it's even money.
But you say "Give me 3 points and it's 2 to 1. Every dollar I bet, you will give me two."
Should I take the bet? 3 to 1? 4 to 1?
If you wanted me to give you 100 points, but you would pay a million to 1, I would probably bet a buck. Not with any hope of winning, but with the hope that I'll have a great story about the time I made a mill
Re: (Score:2)
Well there's the crux of their whole flawed argument. They're conflating "correct decision" with "best outcome" possible. Human judgement and morals don't work on what will result in the best outcome, but what will result in the most reasonable outcome.
National Robots Association (Score:5, Funny)
"Robots don't kill people. Robot programmers kill people."
But AI doesn't work like this... (Score:2)
Artificial Intelligence doesn't work like this. Instead, AI will test a number of outputs and then adjust its attempts at getting a 'right' answer as the process begins to resonate on being right more frequently. And so when faced with a question about killing humans, it boils down to finding out if killing humans is one of the most likely responses to achieve the desired outcome. That desired outcome can be quite abstract, too. It doesn't have to be something like "There's a bad guy in front of you wit
Re: (Score:2)
AI not required. If movement detected in object of predetermined size within weapon range, shoot it until it stops moving. Example [gamesradar.com].
Reserve the AI effort for hunting/gathering ammunition.
We're all gonna die.
Don't sting me bro (Score:2)
Wow, that's a really convoluted path they take to get to "we don't like autonomous kill bots".
Hey, that's great and everything. Very noble of you. I'm sure people like you also lamented the invention of repeating rifles, the air force, and ICBMs. But it REALLY doesn't change much of anything. An academic paper on how killing is, like, BAD duuuuuude, just doesn't impact the people wanting, making, buying, selling, or using these things.
Let me put it this way: You can tell the scorpion not to sting you.
Re: (Score:2)
It has nothing to do with his nature and everything to do with the scorpion's inability to understand you.
I always hated that saying, and doubly so for the fable. Why doesn't anyone note that the frog acted outside its nature?
Re: (Score:2)
Too smart to accept something without thinking about it.
Not a Turing machine (Score:2)
A Turing machine requires an infinite memory. The human brain is, at best, a linear bounded automaton.
Obligatory Futurama quotes... (Score:2, Insightful)
Fry: "I heard one time you single-handedly defeated a horde of rampaging somethings in the something something system"
Brannigan: "Killbots? A trifle. It was simply a matter of outsmarting them."
Fry: "Wow, I never would've thought of that."
Brannigan: "You see, killbots have a preset kill limit. Knowing their weakness, I sent wave after wave of my own men at them until they reached their limit and shut down."
Seems to be a theme... (Score:2)
Humans certainly are enormously capable at approximate solutions to brutally nasty problems (e.g. computational linguistics vs.
Can't decide WITH CERTAINTY (Score:2)
It's not curious at all. The goal was to determine if a computer can decide with certainty whether another agent intends to do harm. This is obviously unsolvable, even for humans. Of course, we don't require humans to be absolutely certain in all cases before pulling the trigger, we just expect reasonable belief that oneself or others
This is moronic. (Score:2)
By this logic, computers couldn't do anything, since every task has conditions in which the machine must do a thing and conditions in which it must not. And yet they are generally pretty reliable, once properly set up, at not doing the things they've been programmed not to do.
What this argument is saying is that despite the fact that computers are known to be reliable in many situations we can't rely upon them to do this specific thing.
Because.... ?
Now, am I a fan of using robots to kill people? No. I'd rather prefer not to have that ha
Misunderstanding the halting problem (Score:2)
The halting problem says that you cannot determine whether an arbitrary program will necessarily end, and this can be generalized to show that the output of programs cannot always be predicted. It does not say that the behavior of every program in a set bounded by certain restrictions cannot be predicted.
Take an X-ray machine for example. We know these can kill people (look up Therac-25). However, if we write an overall program that calls a supplied program to calculate the treat
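Something like the wrapper idea the parent is describing, sketched with made-up limits (illustrative only, not a real X-ray controller): the supplied dose calculation is untrusted, but the trusted shell bounds what can actually reach the hardware.

MAX_SAFE_DOSE = 2.0      # made-up unit and limit, for illustration

class UnsafeDoseError(Exception):
    pass

def deliver_treatment(calculate_dose, patient_record) -> float:
    dose = calculate_dose(patient_record)     # untrusted supplied code
    if not (0.0 < dose <= MAX_SAFE_DOSE):
        raise UnsafeDoseError(f"refusing out-of-range dose: {dose!r}")
    return dose                               # only bounded doses get through

# Example: a buggy supplied calculation is caught by the wrapper.
try:
    deliver_treatment(lambda record: 50.0, {"id": 1})
except UnsafeDoseError as e:
    print(e)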
neither can humans (Score:2)
The problem isn't usually the "halting problem", it's lack of complete information, errors, and a whole host of other limitations of humans and the real world.
We have a way of dealing with that in the real world: "when in doubt, avoid killing people" and "do the best you can".
It's no different for robots. Even the best robots will accidentally kill people. As long as they do it less than humans in similar situations, we still come out ahead.
Re: (Score:2)
We have a way of dealing with that in the real world: "when in doubt, avoid killing people" and "do the best you can".
Unfortunately the US solves this with "use a bigger bomb and classify all the dead innocent bystanders as combatants".
The "researchers" cheated (Score:2)
The "researchers" did not prove anything to do with what the article claims. What the article really proved is that it is impossible for a robot to make an ethical decision, if that ethical decision is based on analyzing source code.
They created a scenario where the "robot" must determine if a computer program was written correctly or not. An ethical decision hinges on that. If the program is written correctly, it must do one thing, and if the program is written maliciously then it must do another. Then
The logical flaw (Score:3)
Death (Score:2)
Behold how the blind lead each other
The philosopher
You know so much about nothing at all
When theory conflicts with observation (Score:3)
When theory conflicts with observation, you have two choices. You can modify your theory to fit the observation, or your observations to fit your theory. The first choice is what we generally regard as science. The second choice occurs in a number of circumstances including, but by no means limited to: religion, politics, mental illness, and general stupidity.
Note, checking to make sure that your observations are accurate is not the same thing as modifying them. "Did I fail to see the gorilla?" is valid when theory indicates gorillas should be present. "I saw a gorilla because my guru said I should" isn't.
Or use the current method ... (Score:3)
Or use the current method ... "Kill them all and let $DEITY sort it out."
They can, however, make fewer mistakes than us (Score:2)
A well-programmed police bot will not fire 3 shots into the back of a fleeing teenager. It may well only fire shots when innocent humans are in immediate danger and permit its own destruction otherwise, as more bots can always be sent to complete the arrest non-lethally.
The same bot might roll over a toddler hiding under a blanket because its programming doesn't cover this exact case and it doesn't have imagination. However, these mistakes will be rarer than those of a human police officer or soldier. And after they happen once,
Human version (Score:2)
One curious corollary is that if the human brain is a Turing machine, then humans can never decide this issue either, a point that the authors deliberately steer well clear of.
Instead of considering an 'Evil Programmer'..... consider 'Evil Judge', 'Evil Military General', 'Nazi', or 'Evil Dictator'
And instead of just deciding this issue, add the problem of surviving it, together with the problem of deciding how to maximize your chances at survival and happiness in concert with the previous issue
Robots are cut no slack (Score:2)
The real problem is that the actions of people, in some circumstances, are considered beyond good and evil, and all the silly hypothetical situations in the world doesn't begin to capture this. In the heat of the moment, with only seconds to decide, people can't be relied on to make a choice that conforms to some explicit moral code. On account of that, when faced with passing judgement on the actions of people in emergency situations, we don't pass judgement; rather, we forgive them.
Robots, however, are pr
Bullcrap. (Score:3)
I just finished my ISIS-killing robot and it's doing just fine. It hasn't killed any ISIS members yet, but it does seem to be doing a fine job killing hipsters. I might not fix that for a while...
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
(I've totally got an ISIS beard.. Please don't kill me, robot.)
That is not what the halting problem says (Score:3)
Sorry, but that is not what the halting problem says.
The halting problem states that for any interesting property (in this example: "Is this robot code safe to run?") there exist programs with this property for which you cannot prove that the program has the property.
That is: there exist robot programs which are safe to run, but where we can never prove that they are safe.
And the general solution is to only run programs where we can prove that they are safe. This means that we do reject safe programs because we can't prove that they are safe*, but it does not in any way limit the programs which we can express. That is: for any program which is safe, but where safety can't be proved, there exists a program which behaves in exactly the same way for all input, but which can be proved safe.**
*If we can't prove that a program is safe, then it is either because no such proof exists, or because we are not good enough to prove it.
**No, this does not contradict the halting problem, due to the assumption that the program is safe. If the program is not safe, then the transformation will convert the program to a safe program, which obviously will not do the same
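A toy illustration of that "only run what you can prove safe" policy (purely illustrative, not any real verifier): accept only programs written in a trivially terminating form, at the cost of rejecting some programs that are in fact safe.

import ast

# Accept a Python snippet only if it contains no loops, no function
# definitions, and no calls -- a form that trivially halts. The price of
# this conservatism: plenty of perfectly safe programs get rejected
# because the checker cannot prove them safe.
BANNED_NODES = (ast.While, ast.For, ast.FunctionDef,
                ast.AsyncFunctionDef, ast.Lambda, ast.Call)

def provably_halts(source: str) -> bool:
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    return not any(isinstance(node, BANNED_NODES) for node in ast.walk(tree))

assert provably_halts("x = 1 + 2")                  # accepted: trivially halts
assert not provably_halts("while True:\n    pass")  # rejected: might not halt
assert not provably_halts("n = sum(range(10))")     # safe, but rejected anyway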
Re: (Score:3)
Mod parent up.
That's correct. The best known demonstration of this is the Microsoft Static Driver Verifier [microsoft.com], which every signed driver since Windows 7 has passed. It's a proof of correctness system which checks drivers for buffer overflows, bad pointers, and bad parameters to the APIs drivers use. It works by symbolically tracing through the program, forking off a sub-analysis at each branch point. It can be slow, but it works.
Microsoft Research reports that in about 5% of the cases, the Verifier canno
Re: (Score:2)
Robot: Error occurred. Cannot match "turban". Turban is a type of hat, therefore generalizing to match "hat." Also, generalizing to classify five o'clock shadow as "beard." Executing anyone wearing a hat or sporting stubble.
Re: (Score:2)
Well on the plus side, it will kill off 90% of Redditors.
Depends on how it identifies 'neckbeards'.
Re: (Score:2)
It contains a finite amount of matter that can assume only a finite number of states. The states may not be practically enumerable, but they can't be infinite.