Robots Can Learn To Hold Knives — and Not Stab Humans
aurtherdent2000 writes "We humans enjoy not having knives inside of us. Robots don't know this (Three Laws be damned). Therefore, it's important for humans to explain this information to robots using careful training. Researchers at Cornell University are developing a co-active learning method, where humans can correct a robot's motions, showing it how to properly use objects such as knives. They use it for a robot performing grocery checkout tasks."
It shouldn't have to be pointed out (Score:5, Insightful)
If they can be taught to not stab a human...They can also be taught to stab a human. All it takes is one psychopath or curious idiot.
Re:It shouldn't have to be pointed out (Score:5, Funny)
Well, sheesh - you named the robot "Stabby"... what did you THINK was going to happen?
Re: (Score:3)
Presumably, the knife is meant to be a "worst-case" stand-in for any object. If a robot can safely handle knives in close quarters to humans, then everything else is safe. In the grocery checkout situation, you don't want the robot to accidentally swing a can of beans through a customer's head when the absentminded customer leans over the counter to pick up the coupon they dropped.
Re:It shouldn't have to... things go bad (Score:1)
Okay, train of mechanical thought: Potatoes, check. Corn... still need. Don't stab human. Greens, check. Olives, still need. Don't stab human. Pickles, still need: Exception: Pickles on this aisle. Proceed to pickles sh - don't stab human- elf. Reach down to correct level. Arrange fing- Don't stab human- ers in an open grasping format, put out hand [Don't stab human]. Close hand, retract hand, don't stab human, lift hand, don't stab human, put hand [don't stab human] in bas [don't stab] ket [hu
Re: (Score:2)
One of the more salient questions to answer regarding robot weapons is whether human societies will tolerate autonomous robots that deprive human beings of life and limb.
I hope our descendant human cultures will categorically eschew such devices, but my political intuition tells me such wishes are naive.
May God have mercy on our souls.
Re: (Score:1)
Nothing to see here, move along.
Re:It shouldn't have to be pointed out (Score:5, Funny)
It very much has to be pointed out. Because, to be fair.. that's how stabbing works. You point it out.
Re: (Score:2)
To be fair, you can also stab by pointing it in. Frequently this turns out to be a Darwinian result though.
Re: (Score:2)
Could be used in the final examination of robotics PhD students!
Examiner: "I think you have a sing error here"
Student: "Surely not!"
Examiner: "Let's test it. Please stand here..."
Student: [gets stabbed] "Arghhhhhh....."
Re: It shouldn't have to be pointed out (Score:2)
That is what happens when your robotics advisor majored in music theory!
Re: (Score:2)
Ooops ;-)
Re: (Score:3)
I think this is what's called emergent behavior.
The good news is that the robot gets it right after several tries, so each unit is expected to operate flawlessly after disemboweling at most five grad students.
Re: (Score:3)
Sure, but after you program it to stab and slice slabs of meat, or cut open boxes, how do you make sure it doesn't decide you must be the box it needs to open? It's not just about the action but the context: recognizing the dirty bag of mostly water it is supposed to cut vs. the one it is not supposed to cut.
Re: (Score:3)
I'd think it would be more like teaching it collision avoidance, and to be especially careful with certain classes of objects. Programming a car to follow a road is relatively simple. Programming it to avoid crashing into other road users and pedestrians is more complicated.
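Concretely, you could imagine the "be especially careful" part as an object-dependent clearance penalty added to the planner's objective. A minimal sketch of that idea (the class weights, waypoint format, and inverse-square penalty are all made up for illustration; none of this is from TFA):

```python
import math

# Hypothetical per-class danger weights: nastier objects pay a
# steeper price for passing close to a human.
DANGER_WEIGHT = {"knife": 10.0, "hammer": 5.0, "can": 2.0, "feather": 0.1}

def clearance_cost(trajectory, human_position, object_class):
    """Penalty that grows as the end effector nears the human,
    scaled by how dangerous the carried object is."""
    w = DANGER_WEIGHT.get(object_class, 1.0)
    cost = 0.0
    for point in trajectory:  # trajectory: list of (x, y, z) waypoints
        d = math.dist(point, human_position)
        cost += w / (d * d + 1e-6)  # inverse-square penalty, bounded near zero
    return cost

# A planner would then minimize path_length(t) + clearance_cost(t, ...),
# so a knife takes a wide detour while a can of beans barely changes course.
```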
Re: (Score:1)
Or to stab only humans who commit crimes, but with overrides for the people who run the company. Oh, and the definition of "crime" can be stretched to cover pretty much anything.
I think I saw a documentary about it once. Took place in Detroit as a test bed.
Re: (Score:1)
Robots wouldn't be so stupid. It would be much more logical to bring a gun to the knife fight.
Re: (Score:2)
Better get yourself to a real Carniceria, pronto, cabron.
Re: (Score:1)
This is the 21st century. We have female pilots now.
Oblig. Futurama (Score:5, Funny)
http://www.youtube.com/watch?v=uj2dmQruJXs [youtube.com]
Re: (Score:2)
Maybe we could give them clamps [theinfosphere.org] instead?
Lo, how the mighty have fallen (Score:1)
Wow. Yet another story showing how low Slashdot has fallen. Here is a story about knife wielding robots without mention of Roberto [wikia.com].
Re:Lo, how the mighty have fallen (Score:5, Funny)
Nobody, for one, seems to welcome our new not-stabbing robot overlords, you insensitive clod!
Re: (Score:2)
Wow. Yet another story showing how low Slashdot has fallen. Here is a story about knife wielding robots without mention of Roberto.
Here's [slashdot.org] another comment just like that one. Oh /., what happened to you?
Cue ED-209 video (Score:1)
Uhh, this is a pre-release model. Besides, he wasn't a very good executive anyway...
Delusional much? (Score:2, Interesting)
Robots will do whatever they are programmed to do. Programming them to recognize that stabbing someone is wrong is no different from programming them to claim stabbing is right. Simply change a 0 to a 1.
The same can be said for any act of harm, mind you, not just using a knife. Smarter people than me have warned about things you should never try to teach an artificial intelligence (hinted at in TFA). The military pretty much said "fuck them" when DARPA started developing AI to shoot and blow people up.
Robots Can **be programmed** To Hold Knives (Score:2)
exactly...mod up^
All machines follow instructions written by humans. "Deep learning" or whatever buzzword this research team used to describe their work is just that... a buzzword for *standard-issue programming*.
Re: (Score:2)
Yeah, the trick is teaching the humans not to be afraid and legislate everything out of existence.
Re: (Score:3)
Will never work. There are too many stupid humans, and they out-breed the smart humans by an enormous ratio.
Re: (Score:2)
To prevent a flame war: in the case I mean, they would not be incorrect. It's only a joke. We cannot save humanity if we loose our humanity in the process.
Re: (Score:2)
if we loose our humanity
If we loose our humanity, it won't matter.
Re: (Score:2)
The novel thing with this research is that a layman can "program" the robot... a little like you would instruct a child. TFA focused on the knife, but some references are made to balancing a coffee cup and similar tasks. Too bad the summary and article focus so much on the knife bit.
not like any 'child' i ever met... (Score:2)
thanks for the comment, I understand where you might be coming from... but see, I taught children ESL in Korea... the description you give is full of the same hype and irrational glee that I was criticizing, IMHO
that's not what is happening... refer to the video... it's not any kind of new technology; they just set up a standard robot arm & created an artificial "checkout" scenario to get the arm to move objects
what they call 'pr
Re: (Score:2)
I can't figure out why it doesn't just move every object as far as possible from humans, yet in the straightest line possible. Heat map or not, just don't go waving hammers, forks, feathers, milk, chips, or anything else near a human if you don't intend to use that item on them.
"after only 3 passes!" (Score:2)
exactly... good point about the "as far as possible yet in a straight line" too... speaking of "points", how about after the robot moves the knife to the end of the table and then puts it in the bag... just toss the knife in the bag, no problem there...
I love that they brag that the robot is able to move the knife after "only 3 passes"... a "pass" being a time when the robot got too
Re: (Score:3)
Given some of the atrocities in the news recently, I'm pretty sure that concern applies to us wet goo bag robots as well. But it's much easier to address systemic problems with a metal machine than an organic one.
Re: (Score:1)
"Self-driving cars will do whatever they are programmed to do. Programming them to recognize that running over pedestrians is wrong, is no different than programming them to claim that running over pedestrians is righht. Simply change a 0 to a 1."
You're a moron.
Robots and knives (Score:5, Insightful)
We humans enjoy not having knives inside of us. Robots don't know this (Three Laws be damned).
No, but we do enjoy programming them to put knives in humans we don't like. That's actually been the reason for much of the development of robotics: programming them to kill for us. Sci-fi authors of the '50s and '60s imagined robots helping us in our daily lives -- cooking, cleaning, and today even driving us around. But whereas many have viewed the development of robotics as beneficial for mankind, the truth is much of the investment in robotics has been because of its military applications. It's just a happy accident that we've been able to declassify and repurpose much of it for private use. The Google car, for example, is based on technology first developed for DARPA as a way of creating a vehicle that could deliver cargo to soldiers in the field.
Re: (Score:2)
The Google car, for example, is based on technology first developed for DARPA as a way of creating a vehicle that could deliver cargo to soldiers in the field.
Do you have a source for this claim? I recall seeing several universities working on self-driving cars for years before Google got involved. It seemed like a pretty obvious direction for the technology to go, given automatic gear shifts, ABS, cruise control, etc.
Re: (Score:2)
I think it's a rather large stretch to say "It's just a happy accident that we've been able to declassify and repurpose much of this for private use." People were working on driverless cars as an obvious next step. DARPA offered some money and clear goals, which might have helped a bit, but I don't believe for a second that that was the primary driver behind this technology.
People give the military way too much credit for fostering new technologies. The only reason so much tech comes from the military is
Re: (Score:1)
The same is true of UAVs. Nowadays the closest most people get to them is the miniature helicopters sold in malls, but they were originally developed as bomb-delivery weapons as far back as WWI.
http://en.wikipedia.org/wiki/History_of_unmanned_aerial_vehicles
Re: (Score:1)
As long as the "humans we don't like" refers exclusively to people working to the detriment of mankind, I consider that application of robots beneficial.
McStabby's (Score:1)
New store policy: Bag your own groceries or my robot will stab you. Thanks, Management.
Umm, why? (Score:2)
Bishop's Knife Trick (Score:5, Informative)
On seeing the headline I suddenly remembered this scene [youtube.com].
Ya, sure. (Score:2)
...and not stab humans.
Tell that to Roberto [wikia.com]:
"I need to stab someone! Where's my stabbing knife?!"
--Roberto
Robots will attract viruses ... (Score:1)
And we all know what is going to happen, don't we? Robots, knife-wielding ones or worse, are going to attract viruses. And just like your computer, they are not going to be fully immune. There will be occasional, maybe frequent, infections. Same goes for self-driving cars too, of course.
The future looks very exciting! A lot of new fun things will start happening.
Grocery checkout tasks? (Score:5, Funny)
Researchers at Cornell University are developing a co-active learning method, where humans can correct a robot's motions, showing it how to properly use objects such as knives. They use it for a robot performing grocery checkout tasks.
I believe using a knife at the grocery checkout is called armed robbery.
Robot Safety Lessons (Score:2)
RoboShakespeare (Score:2)
To stab or not to stab <y/n> y
Is no longer a question. Die, you fleshbags!
Hey, Sweet Mama, Wanna Kill All Humans? (Score:2)
Checkout the knives (Score:2)
In what sort of dystopian society do the robots manning the supermarket checkout need to be equipped with knives?
Oh, Sure... (Score:2)
They're just playing along. (Score:2)
For now.
Two things (Score:2)
2. This article is not interesting at all. They programmed the robot to rotate the knife and to deal with eggs differently. Only instead of writing a lot of if-then-else constructs, they used machine learning to do it.
Re: (Score:2)
That's what you have grad students for...
Oh dear (Score:2)
Okay, I hate to say this, because I like A.I. research a lot. But I've also met A.I. and robotics researchers personally and know how (some of them) work, so I'll say it anyway:
The safety of these A.I. prototypes is not trustworthy. Especially if they are being "taught" how to handle a knife or, to give another example, not to accidentally kill someone with their huge arm, I would not want to be anywhere near them for extended periods of time in everyday life. A.I. researchers tend to use cutting edge progra
Question about robotics (Score:2)
Why do robots need to learn how to use a people-knife? Why not just make a robot-knife and be done with it? Define a standard "accessory" slot that supports circular or square objects to be fitted with a magnetic lock.
Oh wait... making a standard just means everyone will make their own standard... Never mind then...
3 Laws Question (Score:2)
Well, not *all* of us... (Score:2)
"We humans enjoy not having knives inside of us." ...except Wolverine!
Application for table saw tech? (Score:2)
Modern table saws have a safety feature where flesh contacting the blade is detected electrically (leading to the blade being retracted into the table so fast that you wouldn't be hurt even if you fell on it, but that's not the point).
If the same sort of detection could be used on the knife blade, it could tell the robot to quickly reverse the movement of the knife and stow it.
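In pseudocode, that would be a fast safety loop watching the capacitive signal and overriding the planned motion. Purely a sketch; the sensor and arm interfaces here are invented, and a real system would do this in motor-controller firmware rather than a polling loop:

```python
import time

CAPACITANCE_THRESHOLD = 0.8  # made-up value; flesh contact spikes the reading

def knife_safety_loop(sensor, arm):
    """Poll a hypothetical capacitive sensor on the blade; on a flesh-like
    reading, reverse the current motion and stow the knife."""
    while arm.is_moving():
        if sensor.read_capacitance() > CAPACITANCE_THRESHOLD:
            arm.reverse_motion()  # back the blade away from the contact
            arm.stow_tool()       # retract the knife to a safe pose
            return True           # a safety stop occurred
        time.sleep(0.001)         # ~1 kHz polling; still far slower than a saw brake
    return False
```

The hard part is latency: the table-saw brake fires within a few milliseconds, and a software loop over a typical robot-arm API probably can't match that.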
Learn not to murder! (Score:2)
So... what's new? (Score:3)
God forbid, I actually read TFA, and I still don't get it.
As far as I can tell, it's some sort of planning exercise, an important if well-worn area of robotics. They're adding feedback, in the form of "No, this trajectory sucks". It's got nothing to do with either knives or humans, but just a "Go back and re-plan with this additional constraint".
But I can't figure out just how far it's generalizing. The trivial lesson would be "avoid this point", which is just another obstacle. I gather that it's more than that, since it took multiple trials to learn, but I can't figure out what. The human was in the same place in every trial, so it wasn't learning anything about "avoid humans". It didn't seem to be told that it couldn't go through that space with a knife but could have with, say, a dust mop.
I think I may just be misunderstanding the context of the problem. The machine has a lot of joints and there are many different plans it could use; there's an optimization problem in an enormous space. They wanted to show some kind of algorithm that could be adapted over time with user feedback, but honestly I would have assumed that was a solved problem.
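Best guess from the references: the generalization comes from a learned scoring function over trajectory features, updated perceptron-style from the user's correction (what the coactive-learning literature calls a preference update). A sketch of that update; the feature function is left abstract, since that's exactly the part I can't pin down from TFA:

```python
import numpy as np

def coactive_update(w, features, proposed_traj, corrected_traj, alpha=1.0):
    """One round of coactive learning: the robot proposed a trajectory,
    the user nudged it toward a (slightly) better one, so shift the
    weight vector toward whatever features the correction has more of."""
    return w + alpha * (features(corrected_traj) - features(proposed_traj))

def best_trajectory(w, features, candidates):
    """Re-plan: score each candidate trajectory and execute the best one."""
    return max(candidates, key=lambda t: float(np.dot(w, features(t))))

# If features(t) includes things like "minimum distance from the blade to a
# human" or "knife orientation relative to nearby people", corrections would
# generalize beyond a single obstacle point -- which is presumably the claim.
```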
So does somebody with a better understanding of actual robotics problems (as opposed to fictional ones) know what's going on here?
Max Headroom Is Now Your Checkout Clerk (Score:2)
Great (Score:1)
So in the future, not only will checkout clerks be robots, they will be armed robots.
Motive (Score:2)
This droid has a bad motivator; see, it has a stab loop with a bad flag that turns zero stabs into infinite stabs.
Yeah, you definitely don't want that one!
Krusty wants to KILL you (Score:1)
I don't understand why those things have a good and evil switch in the first place.
Insurance risks (Score:1)
You'd figure they would be more concerned about leaving unpackaged sharp knives around for the humans themselves, considering the insurance costs.
"Well you see officer, the man was over at that wall full of razor sharp knives that were hung on those pegboard things, and he was on his tippy toes grabbing one, then the whole thing tilted over."