Robotics

How To Change U.S. Laws To Promote Robotics 118

Posted by Soulskill
from the just-worry-about-the-kill-all-humans-part dept.
An anonymous reader writes "A law professor says the U.S. could fall behind in the robotics race if we don't change product liability law. A new op-ed over at Mashable expands upon this: Yet for all its momentum, robotics is at a crossroads. The industry faces a choice — one that you see again and again with transformative technologies. Will this technology essentially be closed, or will it be open? ... What does it mean for robotics to be closed? Resembling any contemporary appliance, closed robots are designed to perform a set task. They run proprietary software and are no more amenable to casual tinkering than a dishwasher. Open robots are just the opposite. By definition, they invite contribution. They have no predetermined function, run third-party or even open-source software, and can be physically altered and extended without compromising performance. Consumer robotics started off closed, which helps to explain why it has moved so slowly."
  • by icebike (68054) on Wednesday January 01, 2014 @04:24PM (#45839291)

    the primary purpose and largest market for robotics will be for weapons.

    That, or manufacturing. Many (most) robotic assembly plants already aren't safe for humans.

    In either case, changing product liability laws is EXACTLY the wrong thing to do.

    A "product" is not the place for hackers and experimenters. You can build anything you want in your basement or maker shed, but if you want to build a product for sale, you better have some strict testing and insurance.

  • by Immerman (2627577) on Wednesday January 01, 2014 @05:35PM (#45839759)

    I don't know - this seems like a fairly complicated question, and it might be worth at least formally clarifying some boundaries.

    Let's say we have industrial robots designed specifically to be user-programmable, as I believe most of them are. If there is a defect in the hardware that causes an accident, then the company making the hardware is at fault. If, however, it was a defect (or intentional nefariousness) in the user programming, then it is clearly the programmer who is at fault, not the hardware manufacturer.

    In the case of autonomous robots, be they car/drone/cyborg/whatever, I think the same logic would reasonably apply - if you use the built-in control systems and they malfunction in a way that damages someone or something, then the manufacturer is at fault, but if the damage can reasonably be traced to the orders the robot was following, then it's the person giving the orders who's at fault. There's a lot of grey area in there, though - what if a flying drone is ordered into an area where the winds are too strong to operate safely, and it gets into a damaging collision? Should the company have been required to actively notice and avoid unsafe wind conditions? What if the wind is gusty and there was insufficient prior warning to have reasonably escaped the "danger zone"?

    Perhaps a special provision for liability transfer should be considered for autonomous systems, seeing as, with sufficiently wide deployment, accidents are inevitable, and the people best suited to make and improve these systems are not necessarily motivated to do so if they have to swallow the costs of those inevitable accidents. We could arrange for some liability transfer: the systems are sold as fit for use under certain restricted conditions where the risks are reduced to acceptable levels, and the operator must accept at least partial liability to operate them in any other setting. An autonomous industrial robot may have a wonderful market in a controlled factory setting, but it may also have great uses operating in public. If the manufacturer is required to accept liability for the second scenario, it will likely take far longer before they're willing to release the robot for the first one. And if we outright ban the second scenario, we'll deprive ourselves of the discovery of all sorts of new uses.

    Perhaps we could do something simple, like requiring users to carry comprehensive liability insurance in order to operate an autonomous system outside of its specified environment - much as we do for the operation of most any other dangerous machinery in public. The use of customized software, open source or otherwise, would no doubt affect the insurance premiums, but companies would be free to stand by their products and offer such insurance themselves, at the price they believe is justified.

    Of course, enthusiast-driven open projects would be hit hard - I imagine the premiums to operate potentially dangerous, uncertified autonomous systems could be prohibitively high - but the only alternative I see would be to allow enthusiasts to endanger the public without consequence.

  • by icebike (68054) on Wednesday January 01, 2014 @05:39PM (#45839789)

    But then again, it's not straightforward how to extend liability laws to Amazon delivery drones or driverless cars.

    You don't have to "expand" the laws, they already apply.

    You just have to prevent boneheads like those in TFA from limiting liability for things like Amazon's scheme.

  • Re:who benefits (Score:3, Insightful)

    by Anonymous Coward on Wednesday January 01, 2014 @06:04PM (#45839929)
    That's right, and yet with all this productivity and spare time, what do we do? We force both heads of the family to work just to barely maintain the same standard of living my single-income family had 30 years ago. So who benefits from the technology?
  • by icebike (68054) on Wednesday January 01, 2014 @07:48PM (#45840861)

    In the case of autonomous robots, be they car/drone/cyborg/whatever, I think the same logic would reasonably apply - if you use the built-in control systems and they malfunction in a way that damages someone/thing then the manufacturer is at fault, but if the damage was reasonably traced to the orders it was following, then it's the person giving the orders that's at fault.

    These situations are already handled under current law.
    If YOU use the built-in control systems, YOU are predominantly responsible. It's going to be up to YOU to prove the product was defective.

    NO changes in the law are needed here - changing it would only serve to exempt OEMs from responsibility. A bazillion car analogies suggest themselves, from stuck accelerators to faulty on-board computers. And if the on-board computers fail in specific circumstances that they were warranted to handle, the vehicle manufacturer can pursue a claim against the computer manufacturers.

    There is no reason to build air-gaps in the law to protect upstream suppliers, because the burden of proof is well established in current law.
