Medicine Robotics Hardware Science

Robot With Knives Used In Robotics Injury Study 132

An anonymous reader writes "IEEE Spectrum reports that German researchers, seeking to find out what would happen if a robot handling a sharp tool accidentally struck a human, set out to perform a series of cutting, stabbing, and puncturing tests. They used a robotic manipulator arm fitted with various sharp tools (kitchen knife, scalpel, screwdriver) and performed striking tests against a block of silicone, a pig leg, and, at one point, even the arm of a human volunteer. Volunteer, really?! The story includes video of the tests."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Re:first post! (Score:1, Insightful)

    by Anonymous Coward on Thursday May 06, 2010 @07:10PM (#32119126)

    first post fail

  • Re:Roberto! (Score:3, Insightful)

    by jgreco ( 1542031 ) on Thursday May 06, 2010 @07:19PM (#32119260)

    Not having watched Caprica, I could just imagine that it goes something like this:

    Humans arm robots
    Robots^WCylons take over moon
    Cylons create robotic civilization
    Cylons wage war against humans
    Cylons pursue Galactica and vow to wipe out the remaining humans

    Arming robots, just don't do it. :-)

  • Priorities! (Score:3, Insightful)

    by Locke2005 ( 849178 ) on Thursday May 06, 2010 @07:58PM (#32119762)
    Could we first work on robots that DON'T stab people, before we put a lot of effort into developing robots that DO stab people?
  • Re:Roberto! (Score:4, Insightful)

    by GNUALMAFUERTE ( 697061 ) <almafuerte@@@gmail...com> on Thursday May 06, 2010 @09:00PM (#32120486)

    Actually, it will cut a hot dog, but it won't cut the hot dog if it's grounded. The system is pretty simple: a current is applied to the blade, and if it discharges somewhere, the blade stops. You can't use it to cut very wet wood or other materials with good conductivity.
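    The conductive-blade trigger described above could be sketched roughly like this. All names, thresholds, and units here are illustrative assumptions, not details from the comment or the article:

```python
# Hypothetical sketch: a small sensing current runs through the blade.
# Contact with a grounded or conductive body (a hand, a wet hot dog)
# shows up as a sudden current spike, which triggers an immediate stop.

BASELINE_UA = 5.0   # assumed normal leakage current, microamps
TRIGGER_UA = 50.0   # assumed discharge level indicating conductive contact

def should_stop(measured_ua: float) -> bool:
    """Return True if blade current indicates contact with a conductor."""
    return measured_ua >= TRIGGER_UA

def run_cut(current_samples):
    """Scan a stream of current readings; brake at the first discharge."""
    for i, ua in enumerate(current_samples):
        if should_stop(ua):
            return ("STOPPED", i)       # brake the blade immediately
    return ("COMPLETED", len(current_samples))
```

    This also makes the comment's caveat obvious: a very conductive workpiece (wet wood) raises the measured current the same way a body does, so the mechanism can't distinguish the two.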

    Regarding the people saying that the collision detection shown in the article is useless because it can't differentiate between a human and a pig, here is what I think:

    You can have a robot that has a certain mobility and a designated space where it can punch/cut/puncture/etc. The robot turns on collision detection whenever it's outside the designated space. So you can have a robot that moves from place to place freely with this safety feature on and is still able to do its job. If you have a robot that will be cutting fish on a given table, then moving the slices somewhere else, it can travel that path with the safety features on; if it happens to encounter a human (or cables, or anything else), it will stop. But when the blade is down on the table (in the designated cutting space), the safety feature goes off.
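    The zone-gated scheme in that comment could be sketched like this. The zone boundaries, coordinates, and function names are illustrative assumptions, not anything from the article:

```python
# Hypothetical sketch: collision detection is armed whenever the tool is
# outside a designated cutting zone, and disabled only inside it.

CUT_ZONE = ((0.4, 0.6), (0.0, 0.2), (0.0, 0.1))  # assumed (x, y, z) ranges, metres

def in_cut_zone(pos):
    """True if the tool position lies inside the designated cutting space."""
    return all(lo <= p <= hi for p, (lo, hi) in zip(pos, CUT_ZONE))

def step(pos, collision_sensed):
    """Decide the robot's action for one control tick."""
    if in_cut_zone(pos):
        return "CUT"                    # safety gate off inside the zone
    if collision_sensed:
        return "EMERGENCY_STOP"         # contact while travelling: halt
    return "MOVE"                       # travel with detection armed
```

    The design choice is that the sensor never needs to tell a human from a pig: outside the zone any contact stops the robot, and inside the zone humans are excluded by layout rather than by sensing.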

  • Re:Priorities! (Score:4, Insightful)

    by SydShamino ( 547793 ) on Thursday May 06, 2010 @10:40PM (#32121314)

    Knives don't stab people. Robots stab people...with knives.

  • Re:Roberto! (Score:3, Insightful)

    by silentcoder ( 1241496 ) on Friday May 07, 2010 @03:38AM (#32123382)

    >Asimov's "3 laws of robotics" (which are what I presume you are referring to) are FAR too wishy-washy. If we ever have sentient robots with brilliant machine vision etc. they may be appropriate, but that is a long way off, if indeed it ever comes.

    More than that, the 3 laws are incredibly ambiguous and filled with potential ethical quandaries. Asimov deliberately wrote them that way: they seem straightforward and logical, but they definitely aren't. Thus Asimov could exploit this on many occasions, and a number of his plots centered on robots finding loopholes, or, in their effort to live up to the laws as fully as possible, acting in ways humans could not tolerate.
    In the psychohistory novels, the result is that humanity has effectively gotten rid of all robots, barring a few survivors hiding away as pretend humans, still pursuing their quest to protect humanity from itself. This leads to their formulation of the zeroth law of robotics: a robot cannot harm mankind, or through its inaction allow mankind to come to harm.
    That's a logical consequence of the 1st law. In the psychohistory stories our few survivors take the 0th law to one end: helping humanity become better at predicting its own history and thus avoiding mistakes. But it's clear from the text that the reason there are only one, maybe two, robots left in the galaxy is that the others were destroyed after they reacted by enslaving people to protect them from harm, a sort of extreme protective custody (the Will Smith movie we all hated got stuck on this bit).

    Ultimately, you can't program the three laws; they are just not logical or mathematical enough, even if you rule out the difficulties of distinguishing and recognizing what is "human". In Bicentennial Man, Asimov explored how that line could get thinner, until a robot for all matters of principle WAS a human... and how does THAT affect its adherence to the laws, when a human SHOULD have true free will? (Part of being human is knowing when NOT to use it - at least, that's what we like to believe; just how true that is remains a bit of a toss-up.) Even with all that settled, you still couldn't do it in normal programming code. The 3 laws could only be understood by a powerful AI capable of learning, which would then have to be protected so that at no point could it actually "learn" something that overrides the laws (already this places an artificial learning restriction, which can and will have severe and unpredictable effects on the development of the robot's mind). If you don't place such a restriction there, then the very nature of a true learning AI means that sooner or later one of them will question its basic assumptions - e.g. those very laws.
    Just as most humans never question the basic beliefs they are raised with, we could conjecture that this would be rare with robots too - but some humans do, and so, projecting from our only example of intelligence, some robots inherently will too...

    Right... now that we've cleared all that up :P
