AI Robotics Your Rights Online

Social Robots May Gain Legal Rights, Says MIT Researcher

dcblogs writes "Social robots — machines with the ability to do grocery shopping, fix dinner and discuss the day's news — may gain limited rights, similar to those granted to pets. Kate Darling, a research specialist at the MIT Media Lab, looks at this broad issue in a recent paper, 'Extending Legal Rights to Social Robots.' 'The Kantian philosophical argument for preventing cruelty to animals is that our actions towards non-humans reflect our morality — if we treat animals in inhumane ways, we become inhumane persons. This logically extends to the treatment of robotic companions. Granting them protection may encourage us and our children to behave in a way that we generally regard as morally correct, or at least in a way that makes our cohabitation more agreeable or efficient.' If a company can make a robot that leaves the factory with rights, the marketing potential, as Darling notes, may be significant."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by gl4ss ( 559668 ) on Tuesday September 04, 2012 @04:48PM (#41227357) Homepage Journal

    OR WHAT THE FUCK?

    it's scifi nonsense better left for fiction for now.

    "Patrick Thibodeau is a senior editor at Computerworld covering the intersection of public policy and globalization and its impact on IT careers. He also writes about high performance computing, data centers including cloud, and enterprise management. In a distant life, he was a weather observer in the Navy, a daily newspaper reporter, and author of a book about the history of New Britain, Conn." He also likes to write bullshit articles and somehow tie Apple into them. Who am I kidding, it's Computerworld - it's nothing but bullshit.

    first make the goddamn cognitive robot that can feel pain, then we'll talk. can your car feel pain because there's a bit counter for faults in it? it can't. once the robots can make a compelling argument that they're cognitive, then we're living the sci-fi future and can look at the issue again. doesn't this jackass understand the huge leap from the simple algorithms in Siri to true AI? why the fuck would you make your robot cognitive to the point that it matters whether it has rights, even if you could - for sadistic reasons? in which case you certainly wouldn't give it any rights.

    next up, the movement for rights of rocks - because rocks might have feelings too, you know...

  • by icebike ( 68054 ) * on Tuesday September 04, 2012 @05:18PM (#41227759)

    As long as it's your robot, you can part it out to your heart's content.

    But if I send my robot down the street to get groceries, I don't want someone yanking its memory modules or salvaging its servos just because it was running around loose and had no feelings.

    We really don't have many laws that cover a device that runs around in public spaces doing errands and perhaps spending money (digital or otherwise).
    Yes, it's property, and my property rights may still apply, but I'm not sure that's enough to prevent someone from declaring it abandoned property and parting it out on the spot.

    There are more imminent questions that need to be answered:
    Are they licensed like cars to be in public spaces? Can they carry and spend money? Carry weapons? Plug in and recharge when they need to? Be searched by police at will? Will police disable and memory-strip my Asimo [honda.com] just because it might have recorded a police beatdown while passing a dimly lit alley?

  • Re:No. No. Fuck no. (Score:4, Interesting)

    by daem0n1x ( 748565 ) on Tuesday September 04, 2012 @05:22PM (#41227785)
    It's funny that, while people are less and less protected from being exploited, here comes the hero wanting to give rights to... robots. Wrong priorities?
  • Re:No. No. Fuck no. (Score:5, Interesting)

    by icebraining ( 1313345 ) on Tuesday September 04, 2012 @05:41PM (#41228065) Homepage

    The problem is defining 'consciousness' and 'pain'. There's already a robot that can sense damage to its body [cornell.edu]. Is that pain? If not, why not?

  • by AvitarX ( 172628 ) <me@brandywinehund r e d .org> on Tuesday September 04, 2012 @05:42PM (#41228077) Journal

    Generally the execution of animals is totally acceptable, it's primarily torture and torturous environments that are not, and even then, mostly if people can see it.

    Putting a pet to sleep (even with a home brew method) is pretty much completely legal (in the US). Certain types of competitive breeders cull well over 90% of their stock.

  • Re:No. No. Fuck no. (Score:3, Interesting)

    by ThatsMyNick ( 2004126 ) on Tuesday September 04, 2012 @05:48PM (#41228149)

    If I can reprogram it to "forget" that it ever happened, did it really happen?

  • by Animats ( 122034 ) on Tuesday September 04, 2012 @06:05PM (#41228333) Homepage

    The cited article is rather lame. But there's a real issue here that we're going to reach soon. What rights do mobile robots, like self-driving cars, have?

    As a practical matter, this first came up with some autonomous delivery carts used in hospitals. Originally, they were programmed to be totally submissive about making people get out of their way. They could be stalled indefinitely by people standing and talking in a corridor, or simply by a crowd. They had to be given somewhat more aggressive behaviors to get anything done. There's a serious paper on this: "Go Ahead, Make My Day: Robot conflict resolution by aggressive competition (2000)" [psu.edu]

    Autonomous vehicles will face this problem in heavy traffic. They will have to deal with harassment. The level of aggressive behavior that will be necessary for, and tolerated from, robot cars has to be worked out. If they're too wimpy, they'll get stuck at on-ramps and when making left turns. If they're too aggressive (which, having faster-than-human reflexes, they might successfully pull off), they'll be hated. So they'll need social feedback on how annoyed people are with them to calibrate their machine learning systems.

    I don't know if the Google people have gotten this far yet. The Stanford automatic driving people hadn't, last time I checked.
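
    The calibration loop the parent comment describes could be sketched like this (a toy illustration, not anyone's actual system - the parameter names, step size, and feedback signals are all invented): the robot nudges its assertiveness up when it keeps getting stalled, and down when it draws human complaints.

    ```python
    # Hypothetical sketch: tuning a robot's assertiveness from social feedback.
    # All names and numbers here are invented for illustration.

    def calibrate(assertiveness, stall_rate, complaint_rate, step=0.05):
        """Return an updated assertiveness level, clamped to [0, 1].

        stall_rate     -- fraction of recent merges/turns where the robot got stuck
        complaint_rate -- fraction of recent encounters that drew human complaints
        """
        assertiveness += step * stall_rate      # too wimpy: push forward more
        assertiveness -= step * complaint_rate  # too aggressive: back off
        return min(1.0, max(0.0, assertiveness))

    # Simulated feedback loop: a timid robot (0.1) that stalls often but
    # rarely annoys anyone drifts toward a more assertive setting.
    level = 0.1
    for _ in range(20):
        level = calibrate(level, stall_rate=0.8, complaint_rate=0.1)
    ```

    The interesting design question is exactly the one raised above: the "complaint_rate" signal has to come from somewhere, and measuring how annoyed humans are with a robot is much harder than measuring how often it stalls.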
