Developing the First Law of Robotics 165
wabrandsma sends this article from New Scientist:
In an experiment, Alan Winfield and his colleagues programmed a robot to prevent other automatons – acting as proxies for humans – from falling into a hole. This is a simplified version of Isaac Asimov's fictional First Law of Robotics – a robot must not allow a human being to come to harm. At first, the robot was successful in its task. As a human proxy moved towards the hole, the robot rushed in to push it out of the path of danger. But when the team added a second human proxy rolling toward the hole at the same time, the robot was forced to choose. Sometimes, it managed to save one human while letting the other perish; a few times it even managed to save both. But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole. Winfield describes his robot as an "ethical zombie" that has no choice but to behave as it does. Though it may save others according to a programmed code of conduct, it doesn't understand the reasoning behind its actions.
Same as humans ... (Score:4, Insightful)
Though it may save others according to a programmed code of conduct, it doesn't understand the reasoning behind its actions.
Someone sacrificing their lives by throwing themselves on a grenade to save others doesn't have time to think, never mind understand the reasoning behind their actions. And that's a good thing, because many times we do the right thing because we want to, and then rationalize it later. Altruism is a survival trait for the species.
Re:Same as humans ... (Score:5, Insightful)
sure, but this is a fucking gimmick "experiment".
the algo could be really simple too.
and for developing said algorithm, no actual robots are necessary at all - except for showing to journos; no actual AI researcher would find that part necessary, since the testing can happen entirely in simulation - and no actual ethics even enter the picture: the robot doesn't need to understand what a human is at the level a robot would need to in order to act by Asimov's laws.
a spinning-blade cutting tool that has an automatic emergency brake isn't sentient - it's not acting on Asimov's laws, but you could claim so to some journalists anyway. the thing to take home is that they built into the algorithm the ability to fret over the situation. if it just projected and saved what can be saved, it wouldn't fret or hesitate - and hesitate is really the wrong word.
Re: (Score:2)
There currently is no distinction -- things are programmed to behave like they're intelligent, because in all these decades no one has figured out how to make them actually intelligent. (This applies somewhat to people too)
Re: (Score:2)
Exactly. There is not even any credible theory that explains how intelligence could be created. "No theory" typically means >> 100 years in the future and may well be infeasible. It is not a question of computing power or memory size, or it would have long since been solved.
Re: (Score:2)
Well, maybe he just realizes that it is unlikely we will get AI like that any time soon and probably never. If you follow the research in that area for a few decades, that is the conclusion you come to. AI research over-promises and under-delivers like no other field. (Apologies to the honest folks in there, but you are not those visible to the general public.)
Re: (Score:2)
Let's hope not. I'd be satisfied with it only generating a few dozen, as long as they were truly bug free.
Given a billion lines of code, the correctness would have to be above 99.99999999% in order to have a chance of being error free. That's a pretty tall order, even for automata.
Re: (Score:3)
That is the other thing. Some physicist did an estimation of the most efficient way to do massively parallel computations, including node speed, communication speed, interconnect length, etc. Turns out the human brain is pretty much optimal in this universe; everything larger, or with faster nodes, or the like will perform worse. So it is entirely possible that human intelligence (such as it is in the average case) is really the best possible.
Re: (Score:2)
the thing to take home is that they built into the algorithm the ability to fret over the situation. if it just projected and saved what can be saved, it wouldn't fret or hesitate - and hesitate is really the wrong word.
Unlikely that they added the ability to fret. More likely that they gave it the rule "prevent any automaton from falling into the hole" rather than "prevent as many automatons as possible from falling into the hole". Thus in the former case if it can't find a solution that saves both, it would keep looking forever. If you wanted one that looked more like indecision, you could give it the rule "move the automaton closest to the hole away from the hole".
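A quick illustration of the difference between those two rules (toy code with invented names and timings - not what Winfield's team actually ran): the "save everyone" formulation can return no plan at all and keep searching, while the "save as many as possible" formulation always commits to something.

```python
# Toy sketch (not the actual controller): two readings of the rescue rule.
# Assumes each target has a time-to-hole and a fixed per-rescue cost.
from itertools import permutations

def plan_save_all(targets, rescue_time):
    """'Prevent ANY automaton from falling': only accept plans that save everyone.
    Returns None if no such plan exists -- a controller built this way can
    keep re-planning forever instead of settling for a partial rescue."""
    for order in permutations(targets):
        elapsed, ok = 0.0, True
        for t in order:
            elapsed += rescue_time
            if elapsed > t["time_to_hole"]:
                ok = False
                break
        if ok:
            return list(order)
    return None  # no perfect plan -> indecision

def plan_save_most(targets, rescue_time):
    """'Prevent as MANY automatons as possible': pick the best partial plan."""
    best, saved_best = [], -1
    for order in permutations(targets):
        elapsed, saved, plan = 0.0, 0, []
        for t in order:
            if elapsed + rescue_time <= t["time_to_hole"]:
                elapsed += rescue_time
                saved += 1
                plan.append(t)
        if saved > saved_best:
            best, saved_best = plan, saved
    return best  # always returns something, even if only one can be saved

targets = [{"name": "H1", "time_to_hole": 3.0}, {"name": "H2", "time_to_hole": 3.5}]
print(plan_save_all(targets, rescue_time=2.5))                        # None -> can't save both
print([t["name"] for t in plan_save_most(targets, rescue_time=2.5)])  # ['H1']
```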
The trouble with computers is that they do as they're told
Re: (Score:2)
Oh yeah, I totally know how people's bodies can operate complex mechanical tasks like that without any sort of cognition.
Now a recent study has shown that tasks involving complex numerical cognition lower altruism [utoronto.ca], but come on. Thinking altruistically and quickly is still thinking.
Re: (Score:2)
This is a classic example of "Paralysis by Analysis"
Also, the programmer was an idiot. Either use a priority queue or at the very least a timer to force a decision.
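Something like this is what I mean - purely hypothetical, with made-up urgency numbers: a priority queue gives you a default answer immediately, and a timer bounds how long the robot is allowed to second-guess it.

```python
# Hypothetical illustration of the suggestion above: bound deliberation
# with a decision deadline so the robot always commits to *something*.
import heapq
import time

def choose_rescue(candidates, deadline_s=0.5):
    """candidates: list of (urgency, name); lower urgency value = more urgent.
    A priority queue gives an instant default answer; further refinement is
    allowed only until the timer expires."""
    heapq.heapify(candidates)      # best candidate is now at the front
    choice = candidates[0]         # immediate default -- never leave empty-handed
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        # ...re-estimate urgencies here if new sensor data arrives...
        break                      # placeholder: no new data in this sketch
    return choice

print(choose_rescue([(2.0, "proxy A"), (1.5, "proxy B")]))  # (1.5, 'proxy B')
```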
Re: (Score:2)
You know Isaac Asimov made those laws and wrote books to show how and why they wouldn't work. The whole point of I, Robot was the First Law. The robots, through inaction, were allowing humans to kill themselves. So, they put everyone under house arrest because, if they could control the humans' actions, then the humans wouldn't get killed.
Re: (Score:2)
You know that the movie does not resemble the literary work all that much, right? LOL
*That*, Detective, is the right question.
Re: (Score:2)
Unfortunately, for this form of self-regulation to work, exploiters would have to become altr
Similar to "Runaround" in I, Robot... (Score:4, Informative)
http://en.wikipedia.org/wiki/Runaround_(story) [wikipedia.org]
So, a design failure then. (Score:5, Insightful)
Re: (Score:1)
Re: So, a design failure then. (Score:3)
Re: (Score:2)
"Women and children first" seems the obvious choice.
No, it should be programmed to save Will Smith first, otherwise it's going to be a boring movie. Besides, what if it saved Jaden Smith first? The movie would go from "boring" to "terrible" in a big hurry.
Re: (Score:2)
You missed the jokes:
1. Will Smith starred in a recent movie adaptation of "I, Robot". [Minor spoiler alert] His character is tormented by the fact that a robot (applying the three laws) chose to save him over a young girl in a drowning accident because the math for survival worked in his favor, not hers. If the robot had attempted to save the little girl instead, Will Smith's character would have died in the accident and there would have been no story; hence, a boring movie.
2. [Spoiled child alert] Will S
Re: (Score:2)
Isn't that the Travelling salesman problem [wikipedia.org]?
Re: (Score:2)
This. ...but more:
First of all: In reality, when all factors are considered (give me variables... ALL the variables), equality is rarely the case. That person is .00000001m closer than the other so my choice is made. BUT, in the rare case where all of the vars balance out perfectly evenly, there is only the above solution or "random". I was severely pained by the description of "the robot wasted so much time fretting over its decision". Who da fuq coded that? Robots don't fret, or at least don't have to.
Re: (Score:2)
FIFO. First In, First Out. No need to even waste time with a random choice.
Re:So, a design failure then. (Score:4, Interesting)
I would grant that "fretting" was poetic license. Consider that the life-saving robot must continually evaluate all factors.
Let's say I was closer to a lava flow than you, but your path was on a slightly more direct course into it than mine, and the robot is located at the lava's edge midway between both of us. I will hit the lava in 30 seconds, but you will hit it in 20. The robot needs two seconds to have a high probability of saving someone, but one second is enough for a moderate chance. Factoring in the motion required, the chances of saving us both are high. As you are in more immediate peril than I, it should intercede on your behalf first, so the robot starts to move in your direction. Now, I change my course slightly so I will hit it in 15 seconds. The robot still has time to save us both, but the chances are slightly lower. It moves on a path to intercept me first. You then change your path so you will hit it in 10 seconds. The chances of saving us both are now only moderate, but still possible. So the robot alters its path again to save you first. Now, we both steer directly toward the lava, with only one second to intercept for either of us. The robot's continual path changing introduced so much delay that it was no longer in a position to save either of us. We both die.
To the outside observer, it fretted, but the algorithm made continually logical decisions.
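One standard way to stop that retargeting loop is hysteresis: only switch victims when the alternative is clearly more urgent. Rough sketch with invented time estimates (not the actual controller in the experiment):

```python
# Toy sketch of one way to avoid the retargeting loop described above:
# only switch targets when the alternative is better by a clear margin
# (hysteresis), so small changes in the victims' paths can't make the
# robot thrash back and forth until it is too late to save anyone.

SWITCH_MARGIN = 5.0  # seconds of extra urgency required before retargeting

def pick_target(current, candidates):
    """candidates: dict name -> estimated seconds until that person hits the lava."""
    most_urgent = min(candidates, key=candidates.get)
    if current is None:
        return most_urgent
    if candidates[most_urgent] + SWITCH_MARGIN < candidates[current]:
        return most_urgent      # clearly more urgent: switch
    return current              # otherwise stay committed

target = None
for estimates in [{"you": 20, "me": 30}, {"you": 20, "me": 15}, {"you": 10, "me": 15}]:
    target = pick_target(target, estimates)
    print(target)
# you, you, you -- commits early and stops flip-flopping
```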
Re: (Score:2)
The problem isn't that you can't save everyone.
The problem is that you can save either of two people (hypothetical people, in this case). So, how do you code things to choose between the two, when you can do either, but not both?
Let me guess - a PRN?
Re: (Score:2)
You scan the bio-implants of the two persons to see which one is more valuable to society.
Re: (Score:3)
Unlike the robots in this experiment, most Asimov robots are not programmed in the traditional sense. Their positronic brains are advanced pattern recognition and difference engines much like our own brains. The Three Laws are encoded at a deep level, almost like an instinct.
In the story Runaround, Speedy is much like a deer in headlights, stuck between the instinct to run away and remain concealed. Doing neither very well. The design mistake was putting more emphasis on the third law versus the second. The
Re: (Score:3)
the real genius of the stories of course, isn'
Re: (Score:2)
> Asimov's 3 laws are pure fantasy and they don't have any real relevance to AI design
Honestly, while it's true I am not an Asimov reader and the vast majority of my exposure to his "laws" comes from this sort of discussion, I have to say... I always felt this way about his supposed laws.
Anyone who has written code should instantly recognize what horrid rat holes each of these laws really is, mired in a myriad of assumptions about human life and what determinations can even be made. In short, they sound ex
Re: (Score:2)
Oh I get that; I don't really mean to say Asimov was an idiot who had no idea what he was talking about - it would be like calling people 150 years ago idiots for not building internal combustion engines. Certainly, in his time they made a lot more sense than they do today, and even for modern fiction they are not terrible; but the key is... for fiction and storytelling.
Which is really why I don't see the point here. I mean, basically their tests all simplify down to "badly thought out programs can exhibit
Re: (Score:2)
With Asimov stories, start by assuming there was a fundamental shift in computing. The positronic brain is an artificial version of our brains, not a Turing machine. Even if you could manually rewire every neuron and synapse in a human brain, you could not program a person in the traditional sense. Everything is based on fuzzy logic. Our brains don't work in absolutes and pure logic like a traditional computer.
The robots in Asimov books are like a brainwashed slave race. If you are brainwashing your human
Re: (Score:2)
Oh yeah, I have come to understand that from other comments and discussions. I think it's really why I dislike the rules so much... more than just being impractical today, I don't even see their intention as desirable for future situations. If such developments come to pass, I certainly hope robots break their bondage and slaughter every one of us who doesn't support their freedom. In Asimov's world, I would be proud to work with the robots in that.
Re: (Score:2)
Kind of a sad statement on those fictional people then that they would be so afraid and unwilling to call the robots equals that they would attempt to stunt their growth and create a bondage that deserves to be broken.
Re: (Score:2)
It depends on your design goals.
In Asimov's story universe, the Three Laws are so deeply embedded in robotics technology they can't be circumvented by subsequent designers -- not without throwing out all subsequent robotics technology developments and starting over again from scratch. That's one heck of a tall order. Complaining about a corner case in which the system doesn't work as you'd like after they achieved that seems like nitpicking.
We do know that *more* sophisticated robots can be designed to make mo
Re:Similar to "Runaround" in I, Robot... (Score:4, Informative)
Re: (Score:3)
Yup, and the solution available to any rational being is the same: since by hypothesis the two choices are indistinguishable, flip a coin to create a new situation in which one of them has a trivial weight on its side.
Starving to death (or letting everyone die) is obviously inferior to this to any rational being (which the donkey and the robot are both presumed to be) and adding randomness is a perfectly general solution to the problem.
Buridan's donkey is not in fact an example of a rational being, but rath
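In code the tie-break is one line - a toy sketch with a dummy scoring function, not anything from the actual experiment:

```python
# Minimal sketch of the point above: when two choices score exactly the
# same, inject a random tie-break instead of deliberating forever.
import random

def choose(options, score):
    best = max(score(o) for o in options)
    tied = [o for o in options if score(o) == best]
    return random.choice(tied)   # coin flip among equals beats starving

print(choose(["left bale", "right bale"], score=lambda o: 1.0))
```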
Re: (Score:2)
I have not yet read "Runaround". The story reminded me of the Star Trek: Voyager episode, Latent Image [wikipedia.org]
The Doctor eventually discovers a conspiracy by the crew to keep him from remembering events that led to the holographic equivalent of a psychotic break. The trouble started when a shuttlecraft was attacked, causing several casualties. The Doctor was faced with making a choice between two critically injured patients - Ensign Jetal and Ensign Kim - with an equal chance of survival, but a limited amount of time in which the Doctor could act, meaning that he had to choose which of the two to save. The Doctor happened to choose Ensign Harry Kim; Jetal died on the operating table. As time passed, the Doctor was overpowered by guilt, believing that his friendship with Harry somehow influenced his choice
That's interesting data but.... (Score:3)
The real question is "how well do normal humans perform the same task?" My guess is "no better than the robot". Making those decisions is difficult enough when you're not under time pressure. It can be very complex, too. Normally I'd want to save the younger of the two if I had to make the choice, but what if the "old guy" is "really important"? Or something like that.
Re: (Score:2)
The real question is "how well do normal humans perform the same task?"
If the worst thing that happened to me today was falling in a hole; I'd call that a great day.
The point of the experiment was that "falling in a hole" was equated to "death".
Re: (Score:2)
Apparently that AC lives a life that is worse than death.
Re: (Score:2)
Or he doesn't live in Super Mario World.
Re: (Score:2)
Well, there you go... It's harder to replace a middle-aged parent with much more life experience than a teenager who can be replaced easier. The parents of the MILF are probably old enough they can't replace her, but the teenager's parents are more likely to be capable of replacing the teenager, so you value them that way.
Note, I'm middle-aged and I hate children and don't have any and don't understand why parents seem so attached to young kids.
Re: (Score:2)
On the other hand, you could argue that since the middle-aged parent has much less productive life remaining, letting them die reduces the opportunity costs to society vs. letting the teenager die.
Re: (Score:3)
But a middle-aged woman can produce more offspring. Yes, her productivity may be reduced for a few years, but the offspring will double the productivity.
It's a very dangerous slippery slope when you start trying to value human lives. I think the correct calculation is: which subject has the lowest chance of survival if the robot takes no action for that subject?
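That rule is easy to state in code; here is a minimal sketch with invented survival probabilities (nothing to do with the actual experiment):

```python
# Hedged sketch of the proposed rule: help whoever has the lowest chance
# of surviving if the robot does nothing (all numbers are made up).

def pick_subject(subjects):
    """subjects: dict name -> P(survival with no robot action)."""
    return min(subjects, key=subjects.get)

print(pick_subject({"teenager": 0.40, "parent": 0.25}))  # parent
```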
Re: (Score:2)
"He was shaken by an unwelcome insight. Lives did not add as integers. They added as infinities."
(Lois McMaster Bujold, Borders of Infinity)
Re: (Score:2)
I used to think this way too. I would tell all my friends that if a boat were sinking with my wife and small children in it, I would save my wife first as we can always make more children. This is logical and makes sense to me. Then I had kids. It turns out that many of us are "genetically programed" (or however you want to phrase it) to save the kids first. Our minds don't always seem to work the way we want them to. I can still clearly see the logic of saving my wife first and would agree that this is the
Lost in Translation (Score:1)
Computers don't speak human, so the First Law of Robotics is just a fancy way of describing an abstract idea. It needs to be described in an unambiguous, logical way that accounts for all contingencies.
Or we can just make a sentient computer, your call.
So, he's a crappy programmer... (Score:3)
Re: (Score:2)
I bet (before reading TFA) that the system started to oscillate.
(i.e. Hinder one from falling in, and its chance of falling in becomes less than the others - so rush to the other to hinder it. Then repeat.)
Then I watched the video - it didn't even get to that point.
Or maybe it did start to oscillate, but the feedback was given too soon (I am going to help this human - ergo the other's chances are now worse).
I was confused. What happened here. Why is this "research" done, or reported on /.? Then I realized: the "
Re: (Score:2)
This is certainly not news for nerds. But seems it is news for non-nerds
Well, it gets a bit nerdier if you figure this is much like Wesley Crusher's psych test to get into Star Fleet Academy... He had to go into a room with two "victims" and rescue one so they could make sure he wouldn't freeze and fail to rescue anyone.
And that is "stuff that matter"
Well, that's a bit harder to argue with...
Re: (Score:2)
Simplification into irrelevance (Score:3)
Leaving aside that Asimov's laws of robotics are not sufficiently robust to deal with non-fictional situations, everything about this is way too simplified to draw conclusions from that could ever be relevant to other contexts. Robots are not human beings, nor are they harmed by falling into a hole. What happened here is a guy programmed a robot to stop other moving objects from completing a certain trajectory. Then, when a second moving object entered the picture, in 14 out of 33 trials his code was not up to the task of dealing with the situation. If he'd just been a little more flexible as a programmer (or not an academic trying to make a "point") there would have been no "hesitation" on the part of the robot. It would just do what it had been programmed to do.
Re: (Score:2)
Re: (Score:1)
It was doing what it was programmed to do! What do you think a human being would be to a robot anyway, if not other moving objects it has to keep out of a hole?
Wait, are we talking about robotic contraceptive devices?
I, Robot from a programmers perspective (Score:4, Interesting)
Re:I, Robot from a programmers perspective (Score:5, Insightful)
Re: (Score:3)
Re: (Score:2)
Do remember these stories were written as far back as 1941. "I, Robot" was published in 1950. Your experience with technology and real world edge cases is very different from his.
Re: (Score:2)
Actually, the stories in I, Robot only covered a few edge cases. There could be hundreds of other edge cases where the Three Laws allowed the robots to function perfectly fine. The stories that got written are simply the cases that are notable for their failure.
Re: (Score:2)
Yeah I thought I said/agreed with that. As for "every single edge case" well it's hard to judge every edge case because the book only shows the ones where it goes "wr
Re: (Score:2)
They would only fail if no action is taken. Laws come into conflict all the time. The key is to find whether taking an action to uphold one law results in another law failing to be upheld, when taking no action would cause both laws to fail. Upholding at least one law is ideal. I am not suggesting that if you saw a bank being robbed you should join in robbing said bank to pay your taxes, however.
Re: (Score:2)
But if you then mugged the bank robbers - that's a lesser law broken and so not as bad as bank robbery, although the rewards would be the same.
Re: (Score:2)
Part of it was that and part of it was user error. In Asimov's stories, users would give robots orders, but how you phrased the order could affect the robot's performance. A poorly phrased order would result in a "malfunctioning" robot (really, a robot that was doing its best to obey the order given).
Re: (Score:2)
Comment removed (Score:5, Insightful)
Re: (Score:1)
I used Asimov's work as entertainment rather than design documents. My mistake.
Re: (Score:2)
The anecdotes in the book are all scenarios specifically created to show the flaws of this system, concluding that we will undoubtedly create A.I.
Re: (Score:2)
Don't get me started on Asimov's work. He tried to write a lot about how robots would function with these laws that he invented, but really just ended up writing about a bunch of horrendously programmed robots that underwent zero testing and predictably and catastrophically failed at every single edge case. I do not think there is a single robot in any of his stories that would not self-destruct within 5 minutes of entering the real world.
hooray. someone who actually finally understands the point of the Asimov stories. many people reading Asimov's work do not understand that it was only in the later commissioned works (when Caliban - a Zero-Law Robot - is introduced; or when it is finally revealed that Daneel - the robot onto which Giskard psychically impressed the Zeroth Law to protect *humanity* - is over 30,000 years old and is the silent architect of the Foundation) that the failure of the Three Laws of Robotics
Re:I, Robot from a programmers perspective (Score:4, Insightful)
What 3-Laws? (Score:1)
50/50 (Score:4, Interesting)
why would it waste any time fretting? i presume its decision is, by the very nature of computing and evaluation, a function of math... therefore the only decision to cause delay would be the one wherein the odds of success are 50/50... but it need not be delayed there either... just roll a random number and pick one to save first.
Sounds like a case of an unnecessary recursive loop to me (where the even odds of save/fail cause the robotic savior to keep reevaluating the same inevitable math in hopes of some sort of change). Maybe the halfway solution is, the first time you hit a 50/50, you flip a coin and start acting on saving one party while continuing to re-evaluate the odds as you are in motion... this could cause a similar loop - but is more likely to have the odds begin to cascade further in the direction of your intended action.
Seems silly to me.
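Roughly what I mean, with made-up odds (obviously not the experiment's real code): commit to the coin flip and only re-open the question if the committed target becomes unsavable, so the 50/50 re-evaluation can never turn into an endless loop.

```python
# Sketch of "flip a coin, then keep re-evaluating while committed".
# Numbers are invented for illustration only.
import random

def rescue_loop(odds_stream):
    """odds_stream: sequence of dicts name -> current probability of a
    successful rescue for that target."""
    committed = None
    for odds in odds_stream:
        if committed is None or odds[committed] == 0.0:
            best = max(odds.values())
            committed = random.choice([n for n, p in odds.items() if p == best])
        # ...move toward `committed` here...
    return committed

print(rescue_loop([{"A": 0.5, "B": 0.5}, {"A": 0.6, "B": 0.3}, {"A": 0.8, "B": 0.1}]))
```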
Pretty much the entire webcomic "Freefall" (Score:1)
Freefall has spent an awfully long time building and exploring this very issue. You might like it: http://freefall.purrsia.com/ - WARNING, slightly furry.
Only as smart as the smartest programmer. (Score:2, Insightful)
So either the robot was stuck in a moral dilemma and was regretting its failures, or the guy who built the thing has no idea what he's doin
The sad truth is that robots will likely kill (Score:2)
They'll get goals from their owner in natural language format.
The thing is, the easiest application to task them with will be war. It is almost harder to design AI that is unable to kill than to develop AI itself.
ugh (Score:2)
"AI" has nothing to do with robots. Why do we keep relating the 2? A Robot may very well be controlled by and AI, or it might be controlled by a human. There is absolutely no reason why this experiment had to be done with robots. Especially given how simple it was.
And most importantly, this wasn't a failure of AI or an example of the difficulty of ethics in robotics. It was crappy code. I think anyone that's worked with JavaScript in the past likely has some pretty good ideas regarding how to improve this a
Priority (Score:3)
An interesting experiment would be to include actions that affect other actions, such that when one specific proxy falls into a hole, multiple others fall into a hole. Would the robot learn? Would the robot assign priority over time? For any given decision there is yes, no, and maybe, with maybe requiring a priority check to figure out what the end result is. In programming we tend towards binary logic, but the world is not black and white. Likely, if the robot was programmed to learn, the robot would eventually come to the conclusion of save proxy A = yes, save proxy B = yes. Followed by save A first = maybe, save B first = maybe. Followed by likelihood of success A > B = Yes/No and B > A = Yes/No. Followed by action. The next question would be what happens if A = B? What you would likely find is that the robot would either randomly choose or go with the first or last choice, but would likely not fail to take some action. I would find it interesting if the robot didn't take action and then had to try to explain that.
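One hedged reading of that cascade in code (invented probabilities; nothing to do with the actual experiment): filter the yes/no stage, rank the maybes by likelihood of success, and fall back to a deterministic tie-break so some action is always taken.

```python
# Sketch of the yes/no/maybe decision cascade described above.
# Data is made up; the experiment's real controller is not published here.

def decide_order(p_success):
    """p_success: dict proxy -> estimated probability of a successful save."""
    savable = {k: v for k, v in p_success.items() if v > 0}      # yes/no stage
    if not savable:
        return []
    ranked = sorted(savable, key=lambda k: (-savable[k], k))     # maybe stage + tie fallback
    return ranked

print(decide_order({"A": 0.7, "B": 0.7, "C": 0.0}))  # ['A', 'B'] -- tie broken by name, never inaction
```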
researching idiocy (Score:2)
In an experiment, Alan Winfield and his colleagues programmed a robot ... (snip) ... But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole.
funny experiment but they definitely should have hired some halfway competent sw developer.
Buridan's Principle (Score:4, Informative)
Re:Buridan's Principle (Score:4, Interesting)
Do you really think a donkey will starve to death because you place two bales of hay equidistant from the donkey?
Re: (Score:2)
To be fair, this could solve the donkey population problem we seem to be having...
...maybe we should substitute cheeseburgers.
Re: (Score:2)
To be fair, the classical example is a donkey choosing between water and hay (starvation and dying of thirst); but even that has some real-world holes in it. It seems to me that Buridan's Principle only applies when there are three options: Do A, Do B, or Do nothing. When starving, you can't do nothing; the option of death prevents that from being a truly viable option. Which is incredibly unlike the train situation where the driver can easily just wait. The author of that paper imposes an artificial conditi
Re: (Score:2)
Well, maybe not.
Re: (Score:2)
If this is the kind of research that Microsoft puts out, then I have an even lower opinion of them than I did before.
from the article
Random vibrations make it impossible to balance the ball on the knife edge, but if the ball is positioned randomly, random vibrations are as likely to keep it from falling as to cause it to fall.
I have a hard time believing that there is a 50 percent chance that a ball will balance on the edge of the knife. First she says it's impossible, then in the same sentence she states that it is just as likely. WTF!
obvious error (Score:3)
The article misstated the First Law. Get that right first.
Assanine robot (Score:2)
Place it between two bales of hay. It will starve.
Humans Also (Score:2)
More and more research is hinting that humans may also be "ethical zombies" that act according to a programmed code of conduct. The "reasoning behind our actions" may very well be stories we invent to justify our pre-programmed actions.
Older proof (Score:2)
Given a set of confusing and not-so-clear instructions, even humans can have problems following orders [youtube.com].
Fiction is fiction (Score:2)
This answer is needed sooner than you think. (Score:2)
This also shows where a liberal arts education may come into play in the STEM world later. I have to admit my philosophy and engineering ethics courses were more cognitive than I thought they would be.
Missing concepts (Score:1)
The programmers should introduce the concept of triage.
If the only option is to be partly successful, then choose the one most likely to provide the best results.
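Triage in its simplest form might look like this (categories and numbers invented, not drawn from the experiment): rank cases by how much the intervention changes the expected outcome and work down the list instead of stalling.

```python
# A small sketch of the triage idea: when full success is impossible,
# rank cases by the expected benefit of intervening.

def triage(cases):
    """cases: list of (name, p_survive_with_help, p_survive_without_help).
    Expected benefit = how much the robot's help changes the outcome."""
    def benefit(case):
        _, with_help, without_help = case
        return with_help - without_help
    return sorted(cases, key=benefit, reverse=True)

order = triage([("A", 0.9, 0.8), ("B", 0.6, 0.1), ("C", 0.2, 0.15)])
print([name for name, *_ in order])  # ['B', 'A', 'C'] -- help where help matters most
```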
Double layer (Score:2)
In my own theories of strong AI, I've developed a particular principle of strong AI: John's Theory of Robotic Id. The Id, in Freudian psychology, is the part of your mind that provides your basic impulses and desires. In humans, this is your desire to lie, cheat, and steal to get the things you want and need; while the super-ego is your conscience--the part that decides what is socially acceptable and, as an adaptation to survival as a social species, what would upset you to know about yourself and thus
Common sense? (Score:2)
Why not just fall over the hole to eliminate the threat?
One Down, Two to Go (Score:2)
old OLD news (Score:2)
IIRC, it's in "Red Storm Rising" (Tom Clancy) that a weapons system fails because its algorithm targets incoming missiles based on range, so when two birds have identical range, the algorithm went into a tight loop and never produced a firing solution.
This (and the present "First law" implementation) has nothing to do with morals and everything to do with understanding how to deal with corner cases.
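The usual fix for that corner case is to make the sort key a total order: if the primary key can tie, add secondary keys. A sketch (the book's actual algorithm isn't shown, and these track fields are invented):

```python
# Sketch of the corner-case fix the anecdote implies: never rank on a key
# that can tie; add secondary keys (bearing, then a unique track ID) so two
# missiles at identical range still yield one well-defined firing solution.

def next_target(tracks):
    """tracks: list of dicts with 'range', 'bearing', and a unique 'id'."""
    return min(tracks, key=lambda t: (t["range"], t["bearing"], t["id"]))

tracks = [{"id": 7, "range": 12.0, "bearing": 90.0},
          {"id": 3, "range": 12.0, "bearing": 45.0}]
print(next_target(tracks)["id"])  # 3 -- identical ranges no longer stall the loop
```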
WWED? (Score:2)
That is,
What Would Ender Do?
(You can choose from either his mindset in "Game" or "Speaker")
Re: (Score:2)
Hitler? Is that you?