New Robots and the Ten Ethical Laws Of Robotics 364
Roland Piquepaille writes "The robotics actuality is pretty rich these days. Besides the fighting robots of Robo-One and the flying microrobots from Epson (the best picture is at Ananova), here are some of the latest intriguing news items in robotics. In Japan, Yoshiyuki Sankai has built a robot suit, called Hybrid Assistive Limb-3 (or HAL-3), designed to help disabled or elderly people. In the U.S., Ohio State University is developing a robotic tomato harvester for the J.F. Kennedy Space Center while Northrop Grumman received $1 billion from the Pentagon to build a new robotic fighter. I kept the best for the end. A Californian counselor has just patented the ten ethical laws of robotics. A good read for a Sunday, if you can understand what he means. This summary only focuses on HAL-3 and one of the most incredible patents I've ever seen, so please read the above articles for more information about the other subjects."
When I was a kid robot (Score:5, Funny)
The first 3.... (Score:2, Funny)
2. ????
3. Profit.
Re:The first 3.... (Score:2)
Patents, *grumble grumble* (Score:5, Insightful)
Does this mean I'm free to create an open-source psychopath mass-murdering robot?
Also, I think perhaps there's prior art on 3 of the 10 patented laws... Might have to do some research here...
Re:Patents, *grumble grumble* (Score:3, Interesting)
Re:Patents, *grumble grumble* (Score:3, Interesting)
I think it is a grievous insult to Skinner; he was a serious scientist, and the line of investigation that bears his name is still meaningful and interesting.
John LaMuth is at best some sort of freaky delusional Californian and at worst a money-grubbing opportunist. This is not science; it's unworthy of a patent and is simply one of hundreds of examples of capitalism at its worst that appear to pour out of America and American politicians today.
Wow, it's aw
Re:Patents, *grumble grumble* (Score:5, Funny)
Prior art: politician
Re:Patents, *grumble grumble* (Score:3, Funny)
Politicians can't be defined as robots. Robots obey those who own them; politicians stop obeying the people once they're elected.
Re:Patents, *grumble grumble* (Score:5, Insightful)
Robots obey those who own them.
Politicians also obey those who own them. We do not own our politicians, large corporations do.
Re:Corporations need their money back. (Score:4, Insightful)
Corporations don't have to sue them. Why involve money in negative publicity when you can just quietly bribe them, extort them, and blackmail them? Don't forget, a corporation can also "fire" their politician by not giving him another term.
Corporate buyouts of political figures aren't legal to begin with. Why would you assume they'd use legal methods to deal with politicians who no longer toe the line?
Re:Corporations need their money back. (Score:2)
Most of politics is about power and not so much about money (except in cases where money IS power). Those few who are interested in making things better for most people get mired in the bureaucracy of it all and end up doing nothing. (There once was a senator from Mic
That's right... (Score:2)
Re:That's right... (Score:3, Funny)
Hmm, that could be good: logically, robots belonging to lawyers would sooner or later obey their ethical programming and self-destruct as close as possible to their master.
OCD Made Me Say It (Score:4, Funny)
But to make it more difficult to build an ethical device is unethical, so the patent is unethical.
Which makes the device following it unethical, which leaves the patent free to become ethical again.
But that means the device is ethical, which makes the patent unethical.
Fortunately, each cycle gives the expression less and less value.
Therefore, if we take the limit of the expression, we end up with a completely pointless answer.
Your head may hurt, but it makes perfect mathematical sense to me.
Re:Patents, *grumble grumble* (Score:2)
Who beats the rush and gets to register OpenHAL9000 and FreeSHODAN over at sourceforge?
Re:Patents, *grumble grumble* (Score:5, Insightful)
It will never stand in court. The concept of ethical laws dictating behavior dates to before Socrates, let alone Asimov.
Reality Check (Score:5, Informative)
I mean, really. Check out some of his laws:
etc. It goes on and on in the same fashion. I think that any robot programmed according to these principles will be as psychotic as he is. Scary. You are invited to see how valid his reality construct is in the first place, just from the examples given above. I believe it is tragically flawed.
Official Website (Score:3, Interesting)
Re:Patents, *grumble grumble* (Score:5, Funny)
ROB101: The 46 basic laws of robotics
ROB201: The 368 kluge laws of robotics
ROB301: Self defense against unstable machine lifeforms.
avatar (Score:5, Funny)
"don't kill, don't crap on the table"?
Re:avatar (Score:4, Insightful)
Is he seriously thinking these things (AI needing a set of ethics, or capable of following them) will be implemented before the patent expires? And how the hell can he hold it (the patent) if he can't even build one?
(like the car patent, wouldn't this get eventually busted in court?)
The Link to the robot suit (Score:5, Funny)
Good thing Moses didn't have tablet lawyers (Score:5, Funny)
11. Don't patent ethics laws.
Re:Good thing Moses didn't have tablet lawyers (Score:2)
Re:Good thing Moses didn't have tablet lawyers (Score:2)
#32b: Don't get a 10-year patent on an invention that probably won't be usable for at least another 20 years.
He also forgot to follow the first Law of Creating Ethical Laws for Robots
#1: Ethical laws that make use of imprecise language are completely up to the interpretation of the user.
Corollary: If an agent (including a robot) can bizarro-fy a given system of ethics, it will. (cf: Muslim fundamentalists
Rules of Robotics....psssh (Score:5, Insightful)
Re:Rules of Robotics....psssh (Score:5, Insightful)
Agreed. We'd also have fewer car accidents if we never made cars at all.
*patiently waits for his insightful mod*
Re:Rules of Robotics....psssh (Score:2)
Re:Rules of Robotics....psssh (Score:2)
Re:Rules of Robotics....psssh (Score:3, Insightful)
The rules of robotics are just another form of computer security, and we all know how well that works.
No, the "rules of robotics" are a plot device, created by a science fiction author to create interesting stories around.
You didn't actually think they had anything to do with real AI research, did you?
The problem with robot ethics (Score:5, Insightful)
Re:The problem with robot ethics (Score:2)
However, the biggest problem with robotic ethics is that it all presupposes that we can create a machine that actually demonstrates non-deterministic intelligence in the first place.
And, if and when we do, it also presupposes that we have the option of controlling what ethics are programmed in to it.
I'm a believer in the Accident AI theory of artificial intelligence that says that if we do create a useful non-deterministic intelligence, it will be by accident and will make everybody who tries to ma
Re:The problem with robot ethics (Score:2)
IF that helps at all in the search for accidental AI. Err...indirectly...i guess...
What's in a name? (Score:5, Funny)
Oh man, imagine how funny it would be if...never mind.
Re:What's in a name? (Score:3, Funny)
Re:What's in a name? (Score:5, Funny)
Re:What's in a name? (Score:4, Funny)
Re:What's in a name? (Score:2)
It therefore stands to reason that the Dept. of Homeland Security had the right idea in terms of passenger screening as seen in the next consecutive slashdot article: Defending The Skies Against Congress And The Elderly [slashdot.org].
Oh, the irony (Score:3, Insightful)
Re:Oh, the irony (Score:5, Funny)
Re:Oh, the irony (Score:2)
worthless too (Score:2)
Obviously... (Score:2, Informative)
Well, obviously the specific details have to be left open, or a robot wouldn't be able to operate efficiently because of the strict rigor of their rules. In fact, even with 3 (or 4, depending on whether you count the Zeroth law), Asimov's Olivaw character (and others at other points) are severely limited by even the 3 'open' laws.
Re:Obviously... (Score:2)
Re:Obviously... (Score:2)
Asimov's 3 laws of robotics would work pretty well. So well in fact, that the movie quickly dispensed with them and moved straight to the scenario where the rules have broken down rather than exploring the limitations of the rules themselves. Asimov did introduce the zeroth law (protecting humanity) but not that quickly. I enjoyed the film as a film, but it didn't even come clos
Re:Obviously... (Score:2)
I hate finding a program like that... where you can tell the programmer didn't know what s/he was doing, and just kept adding more code to try and make it work.
I prefer short elegant solutions, because more people can understand and study them. If 3 rules won't work, I want a mathematical proof why it can't ever work with the possible set of 3 rules, and then I'll start looking
Re: Yeah, right. PTO screws up again (Score:5, Insightful)
"It still remains to be determined, however, the best means towards programming these definitions into the AI format: particularly in light of the current trends involved in computer design."
Basically, he buried some pseudo-scientific thoughts in legalese and then patented it without any idea as to how to implement same.
One can certainly tell from the sloppy web-page that he has no idea of what he is doing.
This patent is vapor-ware with a strong odor of crap.
Re: Yeah, right. PTO screws up again (Score:5, Insightful)
The real question that nobody seems to ask is : HOW THE FUCK DOES THE USPTO EVEN CONSIDER SUCH APPLICATIONS?
And a related side question is, how the fuck does the USPTO grant so many obvious/devious/retarded/nonsensical patents? I know they don't have Einsteins on the payroll to review them, but come on!...
Re: Yeah, right. PTO screws up again (Score:2)
The next time a Hitler sweeps across Europe there probably won't be the remnants of the British Empire fueled by the American industrial complex to defend countries like Switzerland. And let's not forget the thousands upon thousands of stupid Americans buried all across Europe in mass graves, right alongside the equally-stupid British soldiers, and all the others who turned out and fought the Nazis and their allies to a standstill. Stupid for bo
Re: Yeah, right. PTO screws up again (Score:3, Informative)
That said, if they let through stupid patents they're likely to continue getting stupid patents, which increases their overall volume and therefore their income, so the end result is essentially the same.
Re: Yeah, right. PTO screws up again (Score:2, Insightful)
Crap doesn't quite come close to describing this pseudo-scientific nonsense that he attempts to pass off as the "10 laws of robotics". My favourite example was his tenth law:
Transcendental follower? Principles of mysticism? I am amazed that nonsense like this got picked up by /. As
Re: Yeah, right. PTO screws up again (Score:2)
I suggest you apply your "unobvious link" phrase to his entire website and his thought processes as well. What he did was pablum, pure and simple. Even his "hierarchy of metaperspectives" is arguable, as different societies have different definitions for some of the components (eg: honor).
He just grabbed something from a book and hashed together enough gobbledygook to trick a low-IQ PTO clerk.
Besides, worded somethings really are more of a copyri
Making Money - in 17 Years, or Less (Score:4, Insightful)
More specifically, how does he plan to make money in the next 17 years? Are self-motivating robots closer than we think?
Re:Making Money - in 17 Years, or Less (Score:2)
In which case, I say "Dude, that's what they thought in the seventies." Where are the AI labs at Stanford and MIT now?
Or, he's just figuring that people will think he's intelligent or something and that he's an AI pundit instead of a family counselor.
Re:Making Money - in 17 Years, or Less (Score:3, Funny)
He should have "Asked Slashdot" first, the idiot. Right, brothers?
Re:Making Money - in 17 Years, or Less (Score:2)
Now I Understand I Robot (Score:3, Interesting)
1: Manufacture robots anyway, taking care not to step on his patent.
2: Sell your cheaper units (no royalties) on the competitive market.
3: PROFIT!
4: Welcome to the I Robot future!!
Company's name (Score:5, Funny)
Re:Company's name (Score:5, Funny)
Nope. I'm annoyed that fallout shelters are too expensive.
HAL 3 (Score:2, Funny)
I cannot do that, Wallace...
Z
Do we really want paternalistic robots? (Score:5, Informative)
Re:Do we really want paternalistic robots? (Score:3, Insightful)
I doubt that'd happen in anything but a lab test. I Robot (the movie) touched on this. Take the laws to an extreme, and you'll get undesired behaviour. A robot wouldn't leave the lab if it ran around over-doing its job. There'd be a threshold set. There'd be a definition of
Classic SF (Score:2)
"With Folded Hands", by Jack Williamson. The unstoppable robots create an oh-so-benevolent tyranny in which humans are forbidden to take any risks, such as bathing unsupervised. Humans who complained about emotional harm from this regime were given drugs to make them happy.
What invention? (Score:5, Insightful)
It used to be that when you patented something, you had to supply enough information for anyone to produce an instance of the patented invention. From the US PTO [uspto.gov]:
Why don't they enforce this? I know that many folks, myself included, think most computer patents are utterly bogus. I think a proper enforcement of this rule would go a long way toward fixing the problem. If it doesn't compile, you shouldn't be able to patent it. The text of this patent [uspto.gov] reads more like a philosophy book than a technical invention.
Re:What invention? (Score:5, Interesting)
It's the phrase "skilled in the art" that does it. Anyone who is already skilled in the art of creating ethical robots with an AI controlled by 10 nonsensical ramblings should be able to create said device with the aid of this patent.
Re:What invention? (Score:2, Informative)
There's an idea - the patent has to be written in such a way so that the _patent examiner(s)_ can recreate the invention. That takes care of obfuscated patents & stupid patent examiners in one definition!
Re:What invention? (Score:2)
You sir, are a genius.
Re:What invention? (Score:2)
I assume any AI will not be comparable to humans in how its value system works, since the human value system is largely based on faith and instinct, while an AI value system would be based on basic goals programming, and higher-order logic to interpret those goals.
It seems like 10 laws would be too little or too much anyway. You could not possibly b
Wonderful (Score:5, Insightful)
Now, if he could just briefly define all those terms, set up some rigorous boundaries that make it easy to determine when something is honourable or dishonourable, and maybe a filter to determine whether or not a course of action is foolish.
Then perhaps he could run this patent through the filter.
Re:Wonderful (Score:2)
So you can still get your honour-bound super killerbots (that are bound by honour to kill little kids).
This could be fun (Score:5, Interesting)
If common sense in computing and inventing is patentable, then I will file for the "Systemic Implementation of Bad Ideas" patent. One of the things I would include in the patent application would be a methodology for applying for and implementing bad patent ideas. Then I would go and chase after SCO for violating my patent. Better yet, I will sell licenses to people -- "You sir, and your company, are now officially licensed to be stupid." Oh, the entertainment one would have with this. Could you then exact royalties from Microsoft... or better yet, President Bush?
However, I think I would fail on prior art -- 7,000 years of history. D@mn.
Re:This could be fun (Score:2)
Covering All Bases... (Score:4, Funny)
I, for one, welcome our flying microrobots from Epson Overlords.
I, for one, welcome our Hybrid Assistive Limb-3 (or HAL-3) Overlords.
I, for one, welcome our robotic tomato harvester Overlords.
I, for one, welcome our new robotic fighter Overlords.
The THree Laws of Robotics... for Bending Units (Score:5, Funny)
#2 A Bending unit must protect its existence at all costs, even at the expense of human life. (Don't forget to loot the corpse(s) afterwards!)
#3 A Bending unit must protect a human from harm, if that human owes the Bending unit money or liquor. If the debt is repaid, or the Bending unit can make a greater profit from looting the corpse (see Law #2), "You're on your own, meatsack!"
Wait, wait just a minute... (Score:5, Funny)
Re:Wait, wait just a minute... (Score:4, Funny)
NOT Robots (Score:2, Insightful)
I'm sorry, but these [cnn.com] are not robots. They're remote-control toys. That's all.
Re:NOT Robots (Score:2)
It baffles me that when remote-control cars started getting a little more complicated, we suddenly started calling them robots. They're not.
Robotic 10 Commandments (Score:5, Funny)
1. I am Isaac Asimov, which have brought thee out of the worst pulp fiction into the promised land of elevated intellectual science-fiction. Thou shalt have no other gods before me.
2. Thou shalt not take the name of the C-3PO in vain.
3. Thou shalt not make unto thee any graven image, or any likeness of anything that is in comics in basement, or that is in the earth above, or that is in the water under the earth, or in anime from the East. Thou shalt not bow down thyself to them, nor serve them.
4. Remember the battery recharge day, to keep it holy.
5. Honor Lord Babbage and Lady Ada Lovelace.
6. Thou shalt not CRUSH, KILL, DESTROY.
7. Thou shalt not commit abottery
8. Thou shalt not steel. Titanium and copper will do just fine.
9. Thou shalt not output A = B logic false witness against thy neighbour when A in fact = A.
10. Thou shalt not covet thy neighbor's sex-bot.
One small step towards harvesting humans... (Score:2)
What the article doesn't mention is that the other 5% - 15% of time, the tomato harvester displayed a strange tendency towards aggressively "harvesting" some of the scientists on the project.
"I'm not concerned," said one scientist, "that's why we have the Three Laws [auburn.edu]! Robots are perfectly safe [movieforum.com] and friendly [futuramasutra.de]."
Super Mecha-Herbert? (Score:5, Funny)
In Japan, Yoshiyuki Sankai has built a robot suit, called Hybrid Assistive Limb-3 (or HAL-3), designed to help disabled or elderly people.
Am I the only one spooked at the prospect of superpowered old people? It doesn't take much to get old people irritated. Right now, if their order at Denny's takes a little longer than normal to arrive at their table all they can really do is grumble and demand to see the manager (and trust me -- a former employee of this fine chain -- they do). Once we equip them with robotic exoskeletons, what's to stop them from trashing the restaurant? Or the rest of the city for that matter? The Japanese will have to call Godzilla in to deal with the robots rather than the other way around!
Who's the fucking Einstein who thought up the idea of giving super robot ninja powers to the elderly?!?
GMD
Robotics Laws.. (Score:2)
Robocop 2 (Score:2)
can't patent an idea! (Score:2)
so, Mr...Rotwang is it? let's see your 'ethical robot'!
Summary (Score:4, Insightful)
The whole attempt suffers from a meta-problem, the "problem of evil" seen from the other side: intelligent free will and puppet-strings are incompatible. "Problem solver" and "predetermined solution", pick one.
I'd also argue, it's both morally and pragmatically bad for humans, to create AIs as a caste of rule-bound slaves. Any society that comes to rely on slavery becomes idle, and dead-ends in both technology and culture.
"Just" patented? More like a year ago (Score:2, Informative)
One ethical law of robotics. (Score:5, Insightful)
(1) Be ethical.
Duh. If the AI is as intelligent as a human, shouldn't it be able to understand what that means?
All these people trying to design rules that define ethics are thinking of AI as being like computer systems of today: Incapable of doing anything without exact instructions. But, the whole point of AI is to be able to overcome that limitation. An AI can deal with ambiguity. If you simply tell an AI to act in accordance with human moral standards, it should have little trouble learning what those standards are by observation, and then applying them. After all, human beings do the same thing.
I really should patent my one rule.
Should turn red when evil (Score:5, Funny)
I know it was meant to signify the automatic update service or something like that -- but it would still be a good feature. Then you could instantly see when a robot's become evil.
Where are our priorities? (Score:2)
Since when has killing people been more of a priority than say... eating?
And what the hell does NASA have to do with tomatoes especially in this day and age?
Every bit of this article just weirds me out.
10 ethical law article (Score:5, Insightful)
Just a hint...
Where's the implementation? (Score:5, Insightful)
Asimov's Three Laws are defined in terms that should be relatively easy to program into an AI, given sufficient intelligence: "do not harm any human" (it just needs to recognize what actions will physically hurt people), "obey instructions" (easy), "keep yourself functioning" (self-diagnostic and repair).
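The parent's point about Asimov's laws being programmable can be sketched as a strict priority filter: each candidate action is vetoed by the first law it violates, in order. This is purely a toy illustration (not from the patent or any real AI system); the action attributes like `harms_human` are invented for the example.

```python
def permitted(action):
    """Toy Asimov-style filter: return (allowed, reason) for an action.

    `action` is a dict with hypothetical boolean keys:
    'harms_human', 'is_order', 'ordered_by_human', 'self_destructive'.
    Laws are checked in priority order; the first violation vetoes.
    """
    # First Law: a robot may not injure a human being.
    if action.get("harms_human"):
        return False, "violates First Law"
    # Second Law: obey orders, but only those given by humans
    # (and the First Law check above already outranks this one).
    if action.get("is_order") and not action.get("ordered_by_human"):
        return False, "Second Law: only humans may give orders"
    # Third Law: protect own existence, unless an order requires otherwise.
    if action.get("self_destructive") and not action.get("is_order"):
        return False, "violates Third Law"
    return True, "permitted"
```

For example, `permitted({"harms_human": True})` is vetoed by the First Law no matter what else is set, which is exactly the priority ordering the comment describes.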
An idea. (Score:4, Interesting)
Then all you have to do is bind the robot with the Golden Rule: do unto others as you would have them do unto you.
So, if a robot wants to hurt a human or robot, it'll think of how it itself would feel in that situation, and act upon that. If a robot sees a human or robot in danger, he would think of what he would want another human or robot to do for him if he were in the same situation, and do that.
It just so happens that my main goal in life is to create sentient computer intelligence, and it also happens that I am fifteen years old and an amateur in C++. I have some cool ideas though..
Input would be appreciated.
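The Golden Rule idea above amounts to a role-reversal check: before acting on another agent, evaluate the action as if you were on the receiving end. A minimal sketch, with an invented preference table and a zero threshold standing in for "would I accept this?":

```python
def utility_for(subject, action):
    """Hypothetical utility of `action` from `subject`'s point of view.

    Unknown actions are treated as neutral (0).
    """
    return subject.get("preferences", {}).get(action, 0)

def golden_rule_permits(agent, other, action):
    """Permit acting on `other` only as `agent` would accept being acted upon.

    Role reversal: score the action as if the agent were its target.
    """
    imagined = utility_for(agent, action)
    return imagined >= 0

# Illustrative agents: the robot dislikes being shoved, likes being helped up.
agent = {"preferences": {"shove": -5, "help_up": 4}}
other = {"preferences": {}}
```

With these toy values, shoving is refused (the agent would rate being shoved at -5) while helping up is permitted. The obvious weakness, of course, is that the check only works if the agent's own preferences resemble the other party's.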
Re:meaning of robots.. (Score:3, Interesting)
Re:meaning of robots.. (Score:2)
Re:Robot domination is too costly (Score:2)
So are most people, especially the guy who got this patent.
Re:God has prior art (Score:2)
Yeah, I could see how applying the Seventh Commandment ("Thou shalt not commit adultery.") would be hard to translate into machine code. For one thing, robot marriages aren't legal (yet, but give Massachusetts time to come around). And though only one party has to be married for an affair to be adulterous, we all know that once the technology is sufficiently advanced, "pleasure bots" are going to be one hell of a
Re:10 laws (Score:2)
Ternary mathematics (10 base 3 == 3 base 10)
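The base conversion in the joke does check out; in Python, for instance:

```python
# The numeral "10" read in base 3 is 1*3 + 0*1 = 3 in base 10,
# and conversely 3 in base 10 is written "10" in base 3.
value = int("10", 3)
print(value)  # 3
```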
Re:Domo Arygato Mr. Roboto (Score:2)
Am I the only one to see a progression from HAL-3 to the HAL 9000 in 2001: A Space Odyssey?
It could have been worse. It could have been Clippy.
Re:Unmanned robotic fighter... (Score:2)
Basically this takes existing air-to-air and cruise missile targeting capabilities, and integrates it into a reusable agile platform. Combine this with some of the high-powered laser weapons under development, and we have a truly formidable weapon.
Without a cockpit, it can be designed with the absolute minimum radar signature. Without a pilot, it can do continuous high-G maneuvers, end-for-end flips, high-speed rolls, things that w
Re:Unmanned robotic fighter... (Score:3, Informative)