


How Do You Give a Ticket To a Driverless Car? 337

FatLittleMonkey writes "New Scientist asks Bryant Walker Smith, from the Center for Internet and Society at Stanford Law School, whether the law is able to keep up with recent advances in automated vehicles. Even states which have allowed self-driving cars require the vehicles to have a 'driver,' who is nominally in control and who must comply with the same restrictions as any driver such as not being drunk. What's the point of having a robot car if it can't drive you home from the pub while you go to sleep in the back?"

Comments Filter:
  • All in good time (Score:5, Informative)

    by PRMan ( 959735 ) on Monday December 24, 2012 @08:11PM (#42384727)
    We are at the early stages. Look at the laws from the first few years of automobiles. You had to walk in front waving a lantern. And go slow enough that the cop on horseback could give you a ticket. What's the point of a car with laws like that?
  • by Anonymous Coward on Monday December 24, 2012 @09:27PM (#42385049)

    They read the posted speed limit signs. Do you really think a car that can drive itself can't read standardized speed limit signs?

  • Re:Extra safety (Score:5, Informative)

    by icebike ( 68054 ) on Monday December 24, 2012 @09:49PM (#42385107)

    You must never have played a game with AI.

    Here is a hint: we are actually very bad at creating smart machines. A 9 YO would be a more intelligent driver than most supercomputers.

    In relation to the present discussion, I'd have to say that Google's driver-less cars pretty much put the lie to that statement.
    In August 2012, Google announced [wikipedia.org] that they had completed over 300,000 autonomous-driving miles accident-free, and typically have about a dozen cars on the road at any given time. Not explicitly stated in their announcement [blogspot.hu] was how often the driver had to take command.

    Further, the summary above may be wrong, because the Nevada law also acknowledges that the operator will not need to pay attention [wikipedia.org] while the car is operating itself, which implies the State has no reasonable expectation of holding the driver responsible for accidents.

  • Re:Extra safety (Score:3, Informative)

    by Anonymous Coward on Monday December 24, 2012 @11:50PM (#42385495)

    My phone has an incredible processor in it and can handle millions of calculations per second, but it still locks up sometimes, occasionally responding seconds later to all the stored input. Isn't that pretty close to being distracted?

    No. The processor does not "lock up". The software does. We know how to write software that will not do that. In the market for toys (cell phones, DVD players, PC applications), time to market is more important than correctness. When writing software for serious applications (airplane control, the embedded systems in medical devices, mainframe OSes) people take the time to do things correctly. Reliability is not perfect, but it is several orders of magnitude better than your cell phone. A human driver has a mean time to failure (a crash) of less than a decade. Software written by competent people should have no trouble beating that by an order of magnitude.
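    The comparison above can be put in rough numbers. As a back-of-the-envelope sketch (every figure here is an illustrative assumption, not measured data): if a typical driver logs a few hundred hours a year and crashes about once a decade, the "order of magnitude better" target works out like this:

    ```python
    # Back-of-the-envelope MTTF comparison.
    # All figures are illustrative assumptions, not measured data.

    DRIVING_HOURS_PER_YEAR = 300       # assumed average time behind the wheel
    HUMAN_MTTF_YEARS = 10              # "a crash per decade," per the comment above

    human_mttf_hours = HUMAN_MTTF_YEARS * DRIVING_HOURS_PER_YEAR

    # The comment argues competent software should beat the human
    # figure by an order of magnitude.
    software_mttf_hours = human_mttf_hours * 10

    print(f"Human driver MTTF:    {human_mttf_hours:,} driving hours")
    print(f"Software target MTTF: {software_mttf_hours:,} driving hours")
    ```

    Under these assumed numbers, a human fails roughly every 3,000 driving hours, so the software target would be one failure per 30,000 hours; the point is not the exact figures but that the bar being cleared is surprisingly low.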

  • by Shihar ( 153932 ) on Tuesday December 25, 2012 @05:43PM (#42390215)

    This isn't as hard as you make it out to be.

    If your driverless car hits another car, your respective insurance companies pay for it unless it can be shown that you showed negligence. There is no liability for anyone. It goes from a case of assigning blame to treating it like getting cancer. Your medical insurance doesn't assign blame. It just pays out. You pay enough so that the insurance company always makes a buck. End of story. If a car company showed gross negligence, maybe someone could take legal action against them, but if occasionally shit happens and that is life, the simple and easy solution is just to have insurance be no-fault unless someone did something stupid, like modify the software. This is how most insurance works. Car insurance just starts to act like normal insurance.

    In the case of your car killing someone, again, it is simple. Your insurance just acts like normal insurance. Your insurance company just pays out unless it can be shown that the pedestrian did something stupid and is on their own (like diving in front of the car). Again, if the software really screwed up, maybe you could try and hit the car company, but for the most part your insurance simply pays out and that is the end of the story.

    The real change would be in insurance price. Your insurance price will probably swing based upon how good the car is at avoiding accidents. A car with a slow stopping speed and 5 year old software is going to be more expensive to insure than an agile car that can stop quickly and has the latest software. It is a boring numbers game that actuaries will have a field day with. You will probably have lower insurance rates regardless, because the cost to insure will bottom out for insurance companies. You will have fewer accidents and blow less money on trying to determine liability. It will mean that they can score the same profit doing a whole lot less work. It is a win for everyone.
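    The "boring numbers game" reduces to expected-cost pricing. A minimal sketch, with made-up probabilities and claim sizes chosen purely for illustration:

    ```python
    # Toy actuarial premium model. All numbers are hypothetical
    # assumptions for illustration, not real insurance figures.

    def annual_premium(p_accident: float, avg_claim: float,
                       overhead: float = 1.2) -> float:
        """Expected yearly payout, scaled by an overhead/profit multiplier."""
        return p_accident * avg_claim * overhead

    # Slow-stopping car running 5-year-old software vs. an agile car
    # with current software: same claim size, different accident odds.
    old_car = annual_premium(p_accident=0.05, avg_claim=8000)  # -> 480.0
    new_car = annual_premium(p_accident=0.01, avg_claim=8000)  # -> 96.0

    print(f"Old software premium: ${old_car:.2f}/yr")
    print(f"New software premium: ${new_car:.2f}/yr")
    ```

    The insurer's margin is the same in both cases; only the expected loss changes, which is why better software would translate directly into cheaper premiums.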

    People are over thinking this trying to apply a world of liability to a world where there is little to none. If you break the speed limit, the cops might pull you over, but it will be just to check that your software and sensors are not screwed up, and maybe a warning to get your car checked out, not to give you a ticket.

"I don't believe in sweeping social change being manifested by one person, unless he has an atomic weapon." -- Howard Chaykin