
IBM Technology Creates Smart Wingman For Self-Driving Cars (networkworld.com)

coondoggie quotes a report from Network World: IBM said that it has patented a machine learning technology that defines how to shift control of an autonomous vehicle between a human driver and a vehicle control processor in the event of a potential emergency. Basically, the patented IBM system employs onboard sensors and artificial intelligence to determine potential safety concerns and control whether self-driving vehicles are operated autonomously or by surrendering control to a human driver. The idea is that if a self-driving vehicle experiences an operational glitch such as a faulty braking system, a burned-out headlight, poor visibility, or bad road conditions, it could decide whether the on-board self-driving vehicle control processor or a human driver is in a better position to handle that anomaly. "If the comparison determines that the vehicle control processor is better able to handle the anomaly, the vehicle is placed in autonomous mode," IBM stated. "The technology would be a smart wingman for both the human and the self-driving vehicle," said James Kozloski, manager of Computational Neuroscience and Multiscale Brain Modeling at IBM Research and co-inventor on the patent.
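To make the description concrete, here is a minimal sketch of the kind of hand-off decision the summary describes: score how well the on-board processor and the human driver could handle a detected anomaly, then pick whichever mode scores higher. All names, weights, and thresholds below are hypothetical illustrations, not details from the IBM patent.

```python
# Hypothetical sketch of the anomaly hand-off decision described above.
# None of these names or weights come from the IBM patent; they only
# illustrate the "compare and pick a mode" idea.
from dataclasses import dataclass


@dataclass
class Anomaly:
    kind: str        # e.g. "faulty_brakes", "burned_out_headlight", "poor_visibility"
    severity: float  # 0.0 (minor) .. 1.0 (critical)


def processor_score(anomaly: Anomaly, sensor_health: float) -> float:
    """Stand-in for a learned model rating the vehicle control processor."""
    return sensor_health * (1.0 - 0.5 * anomaly.severity)


def human_score(anomaly: Anomaly, driver_alertness: float) -> float:
    """Stand-in for rating the human driver's readiness to take over."""
    return driver_alertness * (1.0 - 0.3 * anomaly.severity)


def choose_mode(anomaly: Anomaly, sensor_health: float, driver_alertness: float) -> str:
    """Place the vehicle in autonomous mode only if the processor scores higher."""
    if processor_score(anomaly, sensor_health) >= human_score(anomaly, driver_alertness):
        return "autonomous"
    return "human"


# Example: poor visibility with healthy sensors but a drowsy driver.
print(choose_mode(Anomaly("poor_visibility", 0.7), sensor_health=0.9, driver_alertness=0.4))
# -> "autonomous"
```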
This discussion has been archived. No new comments can be posted.
  • So in the event of potentially unsafe driving conditions, something with the intelligence of a 2-year-old and the strength of 1000 gorillas randomly grabs the wheel from you. I can't see anything that could possibly go wrong with this.

    • by Calydor ( 739835 )

      Wrong.

In the event of an unavoidable crash, control is RELEASED to you so the AI isn't at fault for the crash.

      • Yup. It's not about control, it's about liability.

      • "In the event of an unavoidable crash the control is RELEASED to you"

        So, your Belchfire 5000 gets itself stuck on railroad tracks with a freight train barreling toward you at 100kph ... and it turns control over to you.

        This will come to be known as the "Hasta la vista, Baby" patent

    • by hey! ( 33014 )

      So in the event of potentially unsafe driving conditions, something with the intelligence of a 2-year-old and the strength of 1000 gorillas randomly grabs the wheel from you. I can't see anything that could possibly go wrong with this.

Well, can you see anything that could possibly go wrong by letting the human driver always have his way when milliseconds count? That's what you have to consider: the relative probability of catastrophe in the two scenarios, continued human control vs. automated intervention.

Most people think of themselves as better than average drivers. Most people *are* better than average drivers... on their good days. But it's not their good days they need to be worried about. It's the days when they're harried, dis…

    • something with the intelligence of a 2-year-old ...

      "Intelligence" has little to do with being a good driver. When was the last time someone with a PhD won a NASCAR race?

Often the most important factor in avoiding an accident is reaction time. For a human, the time between an event occurring and the brake being depressed is 1500 ms or more. At 70 mph, that is about 150 ft before the reaction even starts. For a computer, the reaction time is about 1 ms, which at 70 mph is a few inches. (A quick arithmetic check of these figures appears after this thread.)

      • by Kiuas ( 1084567 )

Often the most important factor in avoiding an accident is reaction time. For a human, the time between an event occurring and the brake being depressed is 1500 ms or more. At 70 mph, that is about 150 ft before the reaction even starts. For a computer, the reaction time is about 1 ms, which at 70 mph is a few inches.

This, so much this. Most accidents occur because people either fail to observe their surroundings or react to what they observe too late.

Modern cars already have numerous systems that do very similar t…
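A quick back-of-the-envelope check of the figures quoted in this thread (the 1500 ms and 1 ms reaction times are the commenters' assumptions, not measured data): at 70 mph a 1.5 s reaction covers roughly 154 ft, while a 1 ms reaction covers about an inch.

```python
# Distance traveled during the reaction time, before braking even starts.
MPH_TO_FT_PER_S = 5280 / 3600  # 1 mph = ~1.467 ft/s


def reaction_distance_ft(speed_mph: float, reaction_time_s: float) -> float:
    return speed_mph * MPH_TO_FT_PER_S * reaction_time_s


print(reaction_distance_ft(70, 1.5))           # human:    ~154 ft
print(reaction_distance_ft(70, 0.001) * 12.0)  # computer: ~1.2 inches
```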

  • >"If the comparison determines that the vehicle control processor is better able to handle the anomaly, the vehicle is placed in autonomous mode," IBM stated

So I could be in self-driving mode and the computer rips control from me because it thinks it can do a better job? No thanks! Maybe if it detected I was somehow impaired, but the idea of my control being removed randomly is not attractive. Sounds like it would be OK if it were an OPTIONAL mode/setting (3 modes: computer drive, human drive, or hybr…

There need to be two things: something like what IBM is developing here, and an 'off' switch that the computer can't override, so the driver can drive normally. I think a sensor in the steering wheel that detects when you've got at least one hand on it would be sufficient for an on-demand 'off' switch, as well as a switch on the dash to put the vehicle in manual drive mode.
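A rough sketch of the override logic proposed in this comment, assuming hypothetical inputs (a hands-on-wheel sensor and a dash switch); any manual signal forces human control and the computer cannot override it:

```python
def control_mode(hand_on_wheel: bool, manual_switch_on: bool, ai_requests_control: bool) -> str:
    """Hypothetical arbitration: any manual signal forces human control."""
    if manual_switch_on or hand_on_wheel:
        return "human"
    return "autonomous" if ai_requests_control else "human"


print(control_mode(hand_on_wheel=True, manual_switch_on=False, ai_requests_control=True))   # "human"
print(control_mode(hand_on_wheel=False, manual_switch_on=False, ai_requests_control=True))  # "autonomous"
```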
  • Will it throw itself onto a Fiat multipla?

    Fact is _I'm_ a reliable wingman. Total darkness is your friend, I'll take that grenade.

    • Damn, I've had some boys worthy of the title of wingman in my day, capable of throwing themselves onto an AMC Pacer [carthrottle.com] but you sir, take the cake!
  • You! You are still dangerous. But you can be my wingman anytime.

  • by bugs2squash ( 1132591 ) on Thursday March 30, 2017 @06:57PM (#54147473)
Imminent death predicted. You have 4 milliseconds to take control. Thank you for choosing IBM. Would you like Watson to choose a casket for you?
  • So while other companies are actually developing self-driving cars, IBM is looking at what they'll need and throwing patent mines into their path.

  • AI: "WAKE UP! Take control now!"
    Me: "AAAAAHHHHH!!"
    Car: *crash*

When driving, one's brain is immersed in the stream of visual data, preempting problems.

    To suddenly be asked to context switch between being a passenger and driver is dangerous.

    • Guess who didn't RTFS :/

      • by mjwx ( 966435 )

        Guess who didn't RTFS :/

That would be you, because TFS didn't address anything the GP talked about. It doesn't matter if the danger is imminent; you're asking someone with little or no situational awareness to suddenly take control of a potentially deadly piece of equipment.

And let's face it, with autonomous car technology, most steering wheel attendants (I can't call them drivers) will be on their phone, if they're even sober enough to drive. So they will be completely unaware of what the vehicle perceives.

As others in…

Don't mix humans and AIs. Give them their own roads. If they want to build an underground system and an overhead network that mesh with existing roads, that would be well planned. Even someone like me is likely to try it. I might even like it!
A human is driving and is given the choice of swerving into the path of an oncoming car or hitting a pedestrian; the AI takes over and hits the pedestrian instead of leaving the human to take the third option: put the car in the ditch and avoid the other choices.

Do not trust a computer with choices of this magnitude; leave it to people.

...and it is, according to nearly every engineer in the autonomous vehicle business, including the head of Google's autonomous vehicle project, unsolvable. It is at the core of the current regulatory conflict between legislators, who want to keep a human in the loop, and most autonomous vehicle makers, who want humans out of the loop because of the unsolvability of the hand-off problem. Google has already stated they will not produce their autonomous vehicles until the government agrees to remove the human from the loop.

    • by ColdSam ( 884768 )
I don't think Musk thinks the problem is solvable, in the sense that the handoff will never fail. He thinks that incrementally improving automated driving will save so many lives that the handful of lives lost when the handoff occurs will be well worth it. So the problem his team is solving is finding the right balance so the driver doesn't get overconfident given the limits of the current technology. In his mind, delaying deployment of the technology simply because it is not perfect would be immoral.
  • But what if the human is deemed to BE the anomaly?
If a working model of an invention were either required or, more likely, resulted in a stronger form of patent, then a lot of these hand-waving patents could be avoided. A patent should only apply to a working model, not an "idea". As anyone who actually does something for a living can tell you, the idea is not the hard part.
Because it's not like EVERY SINGLE AUTONOMOUS CAR currently being tested already does exactly this, forcing the driver to take over when it encounters a situation it can't handle. Or like the driver assistance features already available in cars do this the other way, such as hitting the brakes for you if you're about to run into something.

But it's ok, they added "with machine learning", so that makes it new. I guess that's the new version of "on the internet". Take anything that people are already doing…

I recall learning, oh, something like 30 years ago, that the Space Shuttle had multiple computers, running at least two different operating systems, managing all of its vital systems.

With all the concern about self-driving cars being cracked, or otherwise running into problems, why is no one demanding something similar? The computers themselves are pretty cheap these days - and will be cheaper by the time we start putting this in every car. Just have a minimum of three computers running a mini…
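A toy illustration of the redundancy idea floated in this comment: several independent computers each propose a command and the actuator takes the majority answer. This is only a sketch; real voting schemes such as the Shuttle's are far more involved.

```python
from collections import Counter
from typing import Sequence


def majority_vote(commands: Sequence[str]) -> str:
    """Return the command most units agree on; fail safe if there is no majority."""
    winner, count = Counter(commands).most_common(1)[0]
    if count <= len(commands) // 2:
        raise RuntimeError("no majority -- fail safe (e.g. slow down and hand off)")
    return winner


print(majority_vote(["steer_left", "steer_left", "steer_right"]))  # "steer_left"
```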
