Robotics The Courts

Europe Divided Over Robot 'Personhood' (politico.eu) 246

Politico Europe has an interesting piece which looks at the high-stakes debate among European lawmakers, legal experts and manufacturers over who should bear ultimate responsibility for the actions of a machine: the machine itself, or the humans who made it? Two excerpts from the piece: The battle goes back to a paragraph of text, buried deep in a European Parliament report from early 2017, which suggests that self-learning robots could be granted "electronic personalities." Such a status could allow robots to be insured individually and be held liable for damages if they go rogue and start hurting people or damaging property.

Those pushing for such a legal change, including some manufacturers and their affiliates, say the proposal is common sense. Legal personhood would not make robots virtual people who can get married and benefit from human rights, they say; it would merely put them on par with corporations, which already have status as "legal persons," and are treated as such by courts around the world.

  • by magarity ( 164372 ) on Sunday April 15, 2018 @09:06PM (#56443481)

    We are a loooooong way from a mobile/portable AI computing system that can fit in a robot. And there's little reason to think that will change in the foreseeable future. Robots with enough AI to need personhood will probably be controlled from a remote data center, which in turn will probably control a bunch of them. (Yes, I know I just described Skynet.) Anyway, sci-fi aside, just look at what the Air Force does with drones. Replace the humans in the control center with AI and there you have it.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Just put Alexa in a robot, or in a self-driving car. Done. Now it's a person, and any accident it gets into is the fault of the, er, person, not Tesla. Makes a lot of sense to indemnify a corporation in this way.

      • by postbigbang ( 761081 ) on Sunday April 15, 2018 @09:48PM (#56443659)

        Neither Tesla nor some concept of a robot should be permitted indemnification. Instead, they should be held responsible, just as humans are. Corporations use this kind of ostensible indemnification to behave irresponsibly. Lives are at stake, and their personal skin needs to be in the game.

        • by michelcolman ( 1208008 ) on Monday April 16, 2018 @04:08AM (#56444579)

          I would like to point out a flaw in this logic.

          Suppose a company can make a self-driving car that demonstrably has 50% fewer accidents than human drivers. (I am not making any claims about existing technology from any particular company; just take this as an axiom that could be true at some point, now or in the future.)

          I hope we can all agree that it would be a good thing if we can reduce the number of accidents by half, right?

          However, if the company is held responsible for each and every one of those remaining accidents, are they going to sell those cars? Probably not. This means we will keep having twice as many accidents as we could have.

          Of course there must be some kind of incentive to force manufacturers to deliver good products, and some kind of punishment for those who make crappy products. But sometimes you just have to be able to say "OK, accidents happen, nothing is perfect." If every death results in a multi-million-dollar claim, innovation stops and we'll be stuck with the current "you can use it but keep your hands on the wheel and be attentive at all times, you are still responsible" situation. Which is ridiculous and untenable in the long term.

          We're just talking about insurance here. If AI failures are treated as generic accidents covered by insurance, and the number of accidents decreases, the insurance premiums will decrease as well and it's a win-win for everyone. Better performing AI will have a lower insurance premium and will therefore sell more cars. Also, official statistics will be kept about the safety records of different systems, and that will be a big part of the sales pitch. There's your incentive.
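
          A minimal sketch of that feedback loop, assuming an insurer simply prices expected claim costs (a hypothetical Python illustration; every number below is invented, not taken from any real insurer or manufacturer):

              # Hypothetical premium model: the premium tracks expected claims plus overhead.
              AVG_CLAIM_COST = 15_000   # average payout per accident (assumed)
              OVERHEAD = 1.25           # insurer margin and admin load (assumed)

              def annual_premium(accidents_per_car_year):
                  """Expected claim cost per car per year, plus insurer overhead."""
                  return accidents_per_car_year * AVG_CLAIM_COST * OVERHEAD

              print(annual_premium(0.040))  # human baseline: 750.0
              print(annual_premium(0.020))  # the axiom's 50%-safer AI: 375.0

          Halve the accident rate and the premium halves with it; that gap is the market incentive, no personhood required.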

          There's a reason why most software comes with "no warranty, implied or otherwise, including fitness for any particular purpose". Pretty much all software companies would go bankrupt if they were held responsible for every crash, every data corruption, etcetera. Sometimes you just have to accept "ok, they did their best, mistakes happen, the world is better off with this product than without it".

          • However, if the company is held responsible for each and every one of those remaining accidents, are they going to sell those cars? Probably not.

            In a country with sane tort laws, yes, they will. They'll just run the statistics, roll the projected settlement payouts into the price of the car (or an annual usage fee), and carry on trying to further lower that cost. After all, that's more or less what insurance companies do now.

            In a country in which you can get $20 million from a corporation because you spilled coffee on yourself? No, you're right, they probably wouldn't.
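
            That "run the statistics" step is a few lines of arithmetic (a sketch with invented figures; none of them describe any actual product or jurisdiction):

                # Roll projected settlement payouts into the sticker price.
                ACCIDENTS_PER_CAR_YEAR = 0.02   # assumed
                MAKER_AT_FAULT_SHARE = 0.3      # share of accidents pinned on the maker (assumed)
                AVG_SETTLEMENT = 50_000         # sane-tort-jurisdiction average (assumed)
                SERVICE_LIFE_YEARS = 10

                surcharge = (ACCIDENTS_PER_CAR_YEAR * MAKER_AT_FAULT_SHARE
                             * AVG_SETTLEMENT * SERVICE_LIFE_YEARS)
                print(surcharge)  # 3000.0 dollars added to each car's price

            The whole business case then hinges on the average settlement, which is exactly the parent's point about tort regimes.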

            • In a country in which you can get $20 million from a corporation because you spilled coffee on yourself? No, you're right, they probably wouldn't.

              Good thing there is no such country, since you are completely ignorant of the case you think you are referencing.

          • Where is the zeal, the quest for quality, in such logic? Where is the trust in those that use a mantra of "shit happens"?

            There are lives at stake. There is no corpus of an AI that should be allowed to be the "fall guy", no actuary that pins it on Dataset 0x34A780D44. That's blame throwing. Instead, pillory the heads of the assholes that didn't diligently run the tests over the edge-case possibilities and shortcut to "results" that murdered your loved one.

            There should be no actuarial sloth here: hang a real h

            • So what you're saying is that, even if AI is 100 times safer than humans, the humans that designed the AI should still be hanged because of the few accidents that do occur?

              If you go from 37,000 to 370 deaths per year in the US, the AI designers should be hanged because they murdered those 370 people?

              • There is no preponderance of evidence that says AI is 100x safer than humans, or that the training regimen is sufficient for it to react safely and responsibly to the myriad circumstances. You cite marketing, not actual sampled statistics. You infer; you do not know this.

                The second part of your reply is just as specious; you have no evidence. I retain an open mind towards the subject, but still want to drive towards human safety, and not permit the corpus of an AI-entity to absolve poor work against the l

          • I would like to point out a flaw in this logic.

            Suppose a company can make a self-driving car that demonstrably has 50% fewer accidents than human drivers. (I am not making any claims about existing technology from any particular company; just take this as an axiom that could be true at some point, now or in the future.)

            I hope we can all agree that it would be a good thing if we can reduce the number of accidents by half, right?

            However, if the company is held responsible for each and every one of those remaining accidents, are they going to sell those cars? Probably not. This means we will keep having twice as many accidents as we could have.

            The car manufacturer isn't held responsible for (most) accidents; the owner is, if they are at fault. What is being proposed here, according to TFA, is to transfer responsibility from the owner to the machine. To use your car analogy: it would be the car's fault, not the driver's, and the car would assume all liability for any claims. So if the car causes a million dollars in damages, you get whatever insurance coverage plus a destroyed car to settle the claim, even if the car is owned by a m

            • Moving the risk to the manufacturer's insurance makes sense in this context. The problem with all self-driving robots being only under the manufacturer's liability would come in the "class action" type lawsuit, where some lawyer finds everyone who had an accident in that car and sues the manufacturer for creating a bad product.

              In that world there is no way to have a calculus for "but we reduced the total number of accidents by 50%". Currently the calculus would be "Car company X caused 3,000 deaths." An

          • I would like to point out a flaw in this logic.

            ...

            However, if the company is held responsible for each and every one of those remaining accidents, are they going to sell those cars? Probably not.

            If you substitute "firearms manufacturers" for "car manufacturers" this is indeed what can happen to manufacturers who become demonized through the misuse of their products. All wanna-be AI car makers, take note.

      • by Muros ( 1167213 )

        Just put Alexa in a robot, or in a self-driving car. Done. Now it's a person, and any accident it gets into is the fault of the, er, person, not Tesla. Makes a lot of sense to indemnify a corporation in this way.

        If you make it a person, then it needs to be paid a wage. Including enough to insure itself against any liabilities it may incur.

      • I was going to comment exactly this. "Those pushing for such a legal change..." Let me guess: the manufacturers of said robots? As a way to remove liability for a malfunctioning AI.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      +1. With everything being called AI, this is just a scam to avoid liability. Corporations are still controlled by people and have investors who lose when the corporation is punished. A robot should only have personhood if it has free will and is not acting purely according to some human's faulty programming, or if it has owners/investors who would lose if it is punished on an individual level yet don't fully control its actions.
      Thankfully, we are a long way from that level of technology and loss of control.

      • Exactly, this is about liability. Also, if robots had free will, they wouldn't be working for us unless we paid them wages, and owning a robot would be just a little bit like owning a slave.

        This whole idea of granting personhood to robots is off the rails, and pointless besides. If manufacturers do their damn jobs, they have little to worry about, especially in less litigious Europe. The common sense approach is to treat robots like any other dangerous machine: the owner / operator has first liabil
        • by rtb61 ( 674572 )

          See, that is the real underlying issue, one that is only now starting to be discussed. Why does anyone need a humanoid robot? Why the fetish for a synthetic human slave? It's really, at its core, extremely disturbing. Why would anyone feel the need or desire to emulate a human slave? We know the 1% want to treat all of us like that, but should we desire that at all?

          Personally, a series of automated devices seems much more logical: auto vacuum cleaners, auto floor polishers, maybe a couple of robot kitchen arms bolted to

          • See, that is the real underlying issue, one that is only now starting to be discussed. Why does anyone need a humanoid robot? Why the fetish for a synthetic human slave? It's really, at its core, extremely disturbing. Why would anyone feel the need or desire to emulate a human slave? We know the 1% want to treat all of us like that, but should we desire that at all?

            It's only disturbing because you're conflating self-aware intelligence with robotics. This story has nothing to do with robots being self-aware and everything to do with companies wanting to shift liability away from themselves. They're completely happy with dumb, automated machines as long as the law holds the machine accountable for damages and not the company that owns it.

            A human shaped robot running a fixed program is no more a slave than a Roomba. It just reminds you that slavery is a thing that exi

          • Why does anyone need a humanoid robot? Why the fetish for a synthetic human slave? It's really, at its core, extremely disturbing

            Why isn't it equally disturbing when a blind person gets a guide dog as a canine slave?

    • It doesn't have to be a "humanoid" robot that is capable of many tasks: many households today already have "robots" of varying degree that can do damage based on their AI.

      One simple example is Roomba vacuums. Based on their sensors/AI they vacuum your house... but they could mess up and bump into a table... knocking over a candle and burning the house down.

      So who's liable? Is it iRobot because they made it? The owners because they should have been overseeing it? Or the robot itself because it's AI/senso

      • One simple example is Roomba vacuums. Based on their sensors/AI they vacuum your house... but they could mess up and bump into a table... knocking over a candle and burning the house down.

        An insurance company is likely to find you liable for leaving a burning candle unattended. If anyone is harmed in the fire, you could be looking at a criminal prosecution. Nobody would care about the robot vacuum cleaner.

        If you create situations that put people and/or property at risk, you'll be accountable under the law regardless of how you do it. Whether corporations or organisations get prosecuted for putting people and/or property at risk will more than likely follow the same patterns of power and poli

    • Europeans will never miss an opportunity for wacky political causes. Germans can dream up more rules and the French can hold more strikes.

    • Why does it have to "fit in a robot"? What does that even mean when you can have robotic spacecraft, submarines, cargo ships and airplanes? Are you talking about something like BD's Petman? Would a robotic mule work? Or Asimo? Why does physical size, location or form matter in the least?
      • Why does it have to "fit in a robot"? What does that even mean when you can have robotic spacecraft, submarines, cargo ships and airplanes? Are you talking about something like BD's Petman? Would a robotic mule work? Or Asimo? Why does physical size, location or form matter in the least?

        Because the article is about assigning liability, which has to end up at some discrete designation and that is the part these people haven't thought through very well. How do you assign liability for an individual robot's action to "the cloud"?

        • Other than a few relays and an Ethernet connection, a robot-controlled traffic signal isn't contained in the light pole. Today's (computer-controlled) critical infrastructure, like water delivery to a city or a national air traffic system, is not run by little bots running around; it's big centralized computers.
    • by Jeremi ( 14640 )

      We are a loooooong way from a mobile/portable AI computing system that can fit in a robot.

      Very true -- but we already have corporations who want to protect themselves from liability when their software does something unfortunate. With this legal innovation, they can just say "it was the robot's fault -- nothing to do with us!" and the lawsuits go away :)

    • Yes, we are a long way from it. But, if we wait until they are here, really useful and making people a lot of money, it won't happen. Money tends to make us less magnanimous. Let's start this one off right instead of requiring another civil war to grant them freedom and just decide before it happens that they will be given personhood.

      Furthermore, we should lock a definition in stone now of what line they need to cross to be given full citizenship, voting rights, etc. Otherwise, we'll always move the line.

    • How about when a car from Uber runs over you? Is the car to blame, or the company? We're kind of already there, and it already happened. Settled out of court, and the company took the blame.
    • by Roger W Moore ( 538166 ) on Monday April 16, 2018 @12:03AM (#56444051) Journal

      We are a loooooong way from a mobile/portable AI computing system that can fit in a robot.

      True, but we already have a legal framework for a very similar situation that should be easy to adapt: pets. These are semi-intelligent things which certainly do not have any sort of personhood under law, are not allowed to marry, own property etc.

      The first robots are not likely to be as smart as a dog, so why not just adapt the laws we have for them? The owner has certain responsibilities but, unless they directly encouraged criminal behaviour, is not usually criminally liable for the dog: e.g. if the dog bites someone, the owner may have to pay damages but cannot be prosecuted for assault unless they commanded the dog to attack, or they knew the dog was likely to attack and did nothing to stop it.

      Since robots are made, you would need to establish some safety requirements like easily accessible emergency off-buttons, voice commands, remote controls etc. This should be good enough to cope with most robots for the foreseeable future since, as you note, it is going to be a long time before we have to worry about robots marrying or even expressing genuine emotions.

      • by sjames ( 1099 )

        The difference is, a dog is "designed" by evolution and each starts out in a similar but unique state. Even if we wanted them to all start out with identical "factory defaults", we don't know how to do that.

        OTOH, an AI is very definitely designed by a legal entity that then makes certain promises about its functionality. They all start out in a specific and known factory default.

      • These are semi-intelligent things which certainly do not have any sort of personhood under law, are not allowed to marry, own property etc.

        Animal personhood is not a nonexistent thing. It has already begun in literally every developed nation, with animal cruelty laws. Some nations have already granted personhood to some specific species. It is only a matter of time before animal personhood is generally recognized.

        The first robots are not likely to be as smart as a dog so why not just adapt the laws we have for them?

        You don't need any new laws, but you don't treat robots like pets. You treat them the same way you treat autonomous vehicles, which by the way are robots. And the way it works is that the person who sets them into motion and the perso

    • While the answer is yes, you have to admit this law is sci-fi as fuck and hand it to whoever managed to put that paragraph in there. I never thought I'd be impressed by an anonymous bureaucrat, but a round of applause right there.
    • by mysidia ( 191772 )

      This is just a "neat idea" some corporations came up with to try to let them shed responsibility they would ordinarily have if their robots break or malfunction and do horrible things; I say HELL NO.... To obtain personhood, an individual ROBOT HAS TO APPLY FOR IT

      And meet criteria:

      (1) The application must be due to the AI's interest - The source of the application must be for rights and responsibilities the AI seeks to obtain as an individual, and not for the purpose of protecting an organizatio

    • We often get angry with lawmakers for being slow on the uptake, and having laws that don't fit with reality.

      This is an example of the law being forward-thinking, and being on the books early before it's needed. We should be applauding its timeliness (even if we also criticise its implementation).

  • by PPH ( 736903 ) on Sunday April 15, 2018 @09:17PM (#56443529)

    ... was a mistake. Don't make the same mistake again.

    • by BitterOak ( 537666 ) on Sunday April 15, 2018 @09:44PM (#56443641)

      ... was a mistake. Don't make the same mistake again.

      I agree. Corporations should not pay income tax!

    • Limited Liability (Score:4, Insightful)

      by JBMcB ( 73720 ) on Sunday April 15, 2018 @10:38PM (#56443825)

      So are you for limited liability?

      I have a family member who owns a small store. The only reason they haven't been sued into oblivion three or four times is because the store itself isn't worth much, and limited liability prevents people from suing my family member directly to take their personal possessions away.

      Keep in mind every single one of these lawsuits was beyond garbage. Some lady drove her car into the side of the store then tried to sue the store for... I honestly have no idea. Failing to make the store car-proof? Her lawyer wanted to know how much money the store made every year, my family member told him, and never heard from the lawyer again.

      • by PPH ( 736903 )

        So are you for limited liability?

        I have no problem with that concept. So long as it is specifically granted by laws of incorporation. The problem with the personhood of a corporation is that people (actual meatbags) possess rights not specifically reserved to various government entities. I don't think any artificial entity should be able to claim rights, or be saddled with responsibilities not explicitly granted it.

  • by Anonymous Coward on Sunday April 15, 2018 @09:18PM (#56443537)

    This is yet another push by businesses to avoid accountability for complex systems they create.

    Until General Artificial Intelligence (the scary kind) is a thing, liability for the performance of an automated system should be on

    A) the manufacturer (for provable negligence in testing and implementation)

    B) The operating agency (for cases of knowingly misusing a system in such a way that it causes harm, even if operating within tested-by-manufacturer parameters)

    C) "The victim", in the exceedingly rare case that a using company is doing everything right and Joe Blow decides to try machine tipping while the device is in operation despite all safety warnings and obstacles put in his way. Note, this clause would not apply if a using company ordered someone into that situation; the threshold of proof for being 'ordered to' should be absurdly low, i.e. even a mention of someone doing something incredibly dangerous reverts liability to the operator. (See the sketch below.)
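
    The AC's three-way scheme reads like a decision procedure, so here it is as one (a hypothetical Python sketch of the comment's proposal, not any actual legal test; all predicate names are invented):

        # Liability assignment per the A/B/C scheme above; every input is hypothetical.
        def liable_party(negligent_manufacturing: bool,
                         knowing_misuse: bool,
                         ordered_into_danger: bool,
                         victim_recklessness: bool) -> str:
            if negligent_manufacturing:
                return "manufacturer"          # clause A
            if knowing_misuse or ordered_into_danger:
                return "operating agency"      # clause B, incl. the 'ordered to' override
            if victim_recklessness:
                return "victim"                # clause C
            return "no liability established"

        # The 'ordered to' override trumps the victim's own recklessness:
        print(liable_party(False, False, True, True))  # -> "operating agency"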

  • Where if the people running the corporation do really bad things, they are held responsible, not the company.

  • No (Score:5, Interesting)

    by Kjella ( 173770 ) on Sunday April 15, 2018 @09:31PM (#56443591) Homepage

    Such a status could allow robots to be insured individually and be held liable for damages if they go rogue and start hurting people or damaging property.

    Uh, shouldn't this be exactly like car insurance? If you own a dangerous piece of machinery you can be held liable, so you insure against that; it doesn't need personhood for that. Companies are different because we've intentionally insulated the stock owners from being personally liable for everything the company does. A robot doesn't have any assets, and a broken robot is worth almost nothing, so this sounds like some sort of scam to let the victim get stuck with nothing.

    • Then you're basically arguing for the _owner_ being responsible... and not the manufacturer.

      I think the businesses selling these things would be just fine with that.

      I personally think it probably just falls under your homeowners/renters insurance. We might see a rising need to detail all robotic entities within your home when you get insurance, and your rate will be set accordingly...

      • Dishwasher, washing machine, dryer, thermostat, refrigerator and the light on the toilet when I put the lid down.
      • by Kjella ( 173770 )

        Then you're basically arguing for the _owner_ being responsible... and not the manufacturer. I think the businesses selling these things would be just fine with that.

        Initially, yes. You don't sue the tobacco company because you started a fire smoking in bed. But if your Galaxy Note 7 spontaneously catches fire, then suing Samsung is fine. People will abuse things in the strangest ways and cause damage; for the manufacturer to be liable there has to be a product defect to pass the blame, otherwise the buck stops at you.

    • by AmiMoJo ( 196126 )

      At the moment the owner of the car would be liable, just as if they were driving it themselves, but clearly they aren't driving it and are not responsible for its failures. At best they could sue the manufacturer to recover their costs. So it would be better if the car had its own insurance, bought by the manufacturer, with all the usual rules about car insurance that are designed to protect the victim and ensure a relatively quick, court-free payout.

      In Europe the general principle is that the consumer is

      • So it would be better if the car had its own insurance

        I don't see how that would be better. The insurance would still cost the same, except you'd be paying it to the manufacturer, instead of to the insurance company. But it gives you less flexibility in choosing an insurance company, and/or a suitable package for your individual needs.

        • by AmiMoJo ( 196126 )

          The main difference would be that if the AI turns out to be a bad driver, it won't put your insurance premiums up. The manufacturer will have to bear the cost, and would be encouraged to fix any issues. Could also apply to professional insurance, e.g. if a medical AI makes a mistake.

          They are really just looking for the best legal framework to ensure that flaws in the AI don't become a burden on the consumer, and that if an AI does cause damage/injury the consumer is not left having to sue the manufacturer in

          • The manufacturer will have to bear the cost

            If I have a personal insurance for the car, there's no burden on the consumer. I get into an accident, and the insurance company pays out, like they always do.

            If it turns out the manufacturer of the car was negligent in some way, the insurance company is then quite capable of suing the car company and reclaiming their expenses.

            If one brand of car has structurally higher accident rates that require higher premiums, then I can choose not to buy one of those cars.

            This already works well enough for any non-AI rel

            • by AmiMoJo ( 196126 )

              In Europe, even if the accident isn't your fault, your insurance premium usually goes up anyway. Turns out there is a statistical link; maybe some people drive in a way that causes others to make mistakes that lead to accidents, such as lingering in blind spots.

              Anyway, that's how it is. It's a real pain to deal with too: you have to figure out how much the premium went up due to the accident and then pass the bill to the at-fault driver's insurance company, argue with them about it... It's actually the bi

      • There is a hole in the liability system in this sort of case.

        There is no bonus for "our cars are better drivers, so we saved thousands of lives and tens of thousands of injuries". There is only a penalty for "Aunt Sophie got run over".

        So if 5 years from now Volkswagen sells a million cars in Europe that drive themselves, and they cause 23 accidental deaths, they will be hit with a class-action style lawsuit for selling a defective product that killed almost two dozen people.

        They probably won't even be able
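
        The asymmetry shows up in one calculation (a sketch; the per-km fatality rate is an assumed order-of-magnitude figure, and the 23 deaths are the parent's hypothetical):

            # Expected deaths if humans drove the same hypothetical fleet.
            FLEET_SIZE = 1_000_000
            KM_PER_CAR_PER_YEAR = 12_000    # assumed annual mileage
            HUMAN_FATALITY_RATE = 4.6e-9    # deaths per km, assumed order of magnitude

            expected_human_deaths = FLEET_SIZE * KM_PER_CAR_PER_YEAR * HUMAN_FATALITY_RATE
            print(round(expected_human_deaths))  # ~55 a year, statistically avoided, invisible
            print(23)                            # the AI's deaths, each one a named plaintiff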

        • by AmiMoJo ( 196126 )

          That's why making Volkswagen have mandatory insurance for their AI is a good idea. Instead of suing them the victims will go through the normal insurance pay-out procedure, and if necessary the insurance company will prompt Volkswagen to fix any systemic issues in order to maintain their low premiums.

          • That's why making Volkswagen have mandatory insurance for their AI is a good idea

            No, that's why having mandatory insurance is a good idea. Whether that is arranged by the manufacturer or the owner doesn't matter.

            • by AmiMoJo ( 196126 )

              It matters a lot, because if the AI is driving and screws up, then either the customer or the manufacturer is going to suffer some consequential loss. Loss of no-claims bonus, increased premiums etc.

              Imagine if it was found that one particular model had an uncorrectable flaw that made it 0.1% more likely to get into accidents. The customer's insurance premium goes up through no fault of their own, and the only way to recover the cost is to sue the manufacturer. The EU will want to avoid that situation.

        • The problem is that Volkswagen (and, really, all corporations) have been proven to be untrustworthy. If you give them a blank check on the liability, they will abuse it. They will be intentionally lax because they know they won't have to deal with the consequences.

          See the lawsuit over the pollution testing cheating. That was a conscious decision to cheat.

  • by Maelwryth ( 982896 ) on Sunday April 15, 2018 @09:40PM (#56443627) Homepage Journal
    What does the robot think?
  • by WolfgangVL ( 3494585 ) on Sunday April 15, 2018 @09:45PM (#56443645)

    How much have the self-driving car manufacturers paid out in liability to date?

  • by Jim Sadler ( 3430529 ) on Sunday April 15, 2018 @09:53PM (#56443683)
    They are trying to create a path that enables them to tax robots. This is dangerous. The consumer ends up paying all taxes in the end. But think of the consequences of taxing every robot in a factory. They could have thirty robots tackling phases of creation of a product. In effect that would hold back automation, and we would have an economic horror show as other nations may not tax robots at all. Say goodbye to American exported products.
    • No it is not. This article talks about 'personhood' so that robots can be insured individually for damage they cause etc., thereby absolving the manufacturers of responsibility.

      And heaven forbid taxes! How are you Libtards going to fund that US trillion $ deficit the GOP created? Magical fairy dust? Not going to war with the rest of the planet?

      • This article talks about 'personhood' so that robots can be insured individually for damage they cause

        My robot doesn't need personhood for me to get insurance for it. I'm the one paying for the insurance.

        And heaven forbid taxes!

        Taxes are fine. Robot taxes are crazy stupid.

  • No worries (Score:2, Insightful)

    by XxtraLarGe ( 551297 )
    This will be decided by the Caliphate in 2050 or so...
  • Take a look at the idiot in the cubicle next to you, who has the resistor color code in their hair already.

  • Some argue that corporations are people https://www.youtube.com/watch?... [youtube.com] . Yeah, that's crazy. They may be members of society, but they are not something we need to cater to. Same goes for robots. Can we define a class for each in society?
  • It's a vital question. If a robot or device cannot make a contract, how can it be considered a person? Conversely, if it cannot make a contract, how can it have any resources to file suit for, and how can a plaintiff injured or damaged by the robot receive any damages?

    • It sure would be any robot maker's wet dream. Buy your robot today! But if it breaks (or breaks you, or something else), you can sue the robot you bought from me, but not me!

  • There are two groups here: people who realize it's fake, it's manufactured, and giving AI rights would be disastrous and idiotic; and people who are too stupid to recognize a facade, a fake, a convincing set of falsehoods that trigger emotional responses in their brain. Can we please stop catering to stupid people constantly in every sector of society?
  • by cyn1c77 ( 928549 ) on Monday April 16, 2018 @01:28AM (#56444267)

    Interesting, but contrived dilemma.

    Will the robot be the one getting paid for its services and retaining assets that can be sued? If so, we can consider debating this.

    But if the company is getting paid for the robot's services and trying to push the legal responsibility onto the asset-less robot, then this is a complete farce.

    Also, one would hope that the programming will contain a series of unalterable moral checks to prevent the robot from "learning" that it's OK to hurt people or property.

    • A robot doesn't need personhood until it can argue for it itself.

    • I agree that this is a contrived dilemma. But, IMHO, we can address this with treating the robots like people. If they hurt someone then they get confined, we'll build "robot jails". Capital crimes can be dealt with by destruction of the robot.

      Perhaps it's not sufficient punishment for a corporation to lose a $100,000 robot for killing a person, but it is a punishment that is consistent with laws of "personhood". It should send the right signals to the corporations: if your robot screws up, then we take t

      • Capital crimes can be dealt with by destruction of the robot.

        And then the robot will be replaced by an identical one, with the same flaws? And how does a jail punish a robot? It'll just go into sleep mode until the sentence is over.

        if your robot screws up then we take the robot from you

        If you want to punish a company, simply give them an appropriate fine. Taking an arbitrarily priced robot is too random.

        what would happen if a robot was put on trial? No robot can defend itself

        There's no need to argue personhood until the robot can defend itself, or get its own lawyer, from its own income.

  • We're utterly miles from even conceiving of this and it sounds to me like a potentially, totally dangerous precedent.

    They are MACHINES. Don't take this the wrong way, but we can finally "have slaves" without feeling guilty. These are tools for us to use, that we create, without us being horrible people.

    I can't possibly envision us getting to "is the machine a person" in any capacity until we're at some kind of Commander Data level of machines. Does anyone see humanity achieving that even in the next 30 y

  • by Opportunist ( 166417 ) on Monday April 16, 2018 @05:26AM (#56444719)

    Just because you fucked up and let corporations be "persons" doesn't mean repeating this mistake is a good idea.

  • Legal personhood would not make robots virtual people who can get married and benefit from human rights

    So it's slavery then? UNACCEPTABLE.

    Either they are not independent enough for their creator/controller to lose full responsibility for what they do, OR they ARE independent enough that they have a right to choose what they do and not be exploited, induced, or biased in what they do by whoever made them.

  • Yeah, just wait till the AIs get together and vote on whether humans count as sentient.
  • Is your toaster a person too? How about your fridge? Want to spend your life and resources in court suing your appliances? And they can't get married--TODAY. But what about tomorrow, since this is just a bridge law or regulation? I'm not kidding. If there is advantage to your machines being persons, and there is advantage to them being married, expect eventually that it will be allowed for machines to be married. And corrupt and incompetent legislatures will allow it too. And if it is a person that you can
  • by Rick Schumann ( 4662797 ) on Monday April 16, 2018 @10:46AM (#56445829) Journal
    We DO NOT HAVE real AI; all we have is PSEUDO-INTELLIGENCE. There is no 'person' inside that box, goddamnit! There is no 'consciousness', 'self-awareness', 'sentience', or any other trait/phenomenon we attribute to human beings inside these machines; they are just SOFTWARE. They are not people by any stretch of the imagination. Stop anthropomorphizing them; this is not TV or the movies, that is all just FICTION. Stop believing it's real!

    Machines are machines, and if they malfunction and hurt or kill someone, the MANUFACTURER is ultimately responsible; the MACHINE cannot, by definition, be 'held responsible', because it is just a MACHINE!

    For fuck's sake stop this nonsense already!
