
MIT Creates Car Co-Pilot That Only Interferes If You're About To Crash 238

MrSeb writes "Mechanical engineers and roboticists working at MIT have developed an intelligent automobile co-pilot that sits in the background and only interferes if you're about to have an accident. If you fall asleep, for example, the co-pilot activates and keeps you on the road until you wake up again. Like other autonomous and semi-autonomous solutions, the MIT co-pilot uses an on-board camera and laser rangefinder to identify obstacles. These obstacles are then combined with various data points — such as the driver's performance, and the car's speed, stability, and physical characteristics — to create constraints. The co-pilot stays completely silent unless you come close to breaking one of these constraints — which might be as simple as a car in front braking quickly, or as complex as taking a corner too quickly. When this happens, a ton of robotics under the hood take over, only passing back control to the driver when the car is safe. This intelligent co-pilot is starkly contrasted with Google's self-driving cars, which are completely computer-controlled unless you lean forward, put your hands on the wheel, and take over. Which method is better? A computer backup, or a human backup? I'm not sure."
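The summary describes a constraint-monitoring loop: watch the safety margin on each constraint, take over when one is about to be broken, and hand control back once the car is safe again. A minimal sketch of that loop in Python, where every name and threshold is a hypothetical illustration rather than MIT's actual code:

```python
# Hypothetical sketch of the constraint-and-takeover loop described above.
# Names and thresholds are illustrative assumptions, not MIT's actual code.

def copilot_step(driver_cmd, autopilot_cmd, margins, taken_over,
                 engage_below=0.1, release_above=0.5):
    """margins: per-constraint safety margins in [0, 1], where 0 means a
    constraint (following distance, lane keeping, cornering speed, ...)
    is violated. The co-pilot stays silent until the worst margin nearly
    hits zero, then keeps control until everything is safe (hysteresis)."""
    worst = min(margins)
    if not taken_over and worst < engage_below:
        taken_over = True                    # about to break a constraint
    elif taken_over and worst > release_above:
        taken_over = False                   # safe again: hand control back
    return (autopilot_cmd if taken_over else driver_cmd), taken_over

# A lead car brakes hard, collapsing the following-distance margin:
cmd, over = copilot_step("driver", "autopilot", [0.8, 0.05], taken_over=False)
assert cmd == "autopilot" and over
```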
This discussion has been archived. No new comments can be posted.

  • 2001 (Score:5, Funny)

    by headhot ( 137860 ) on Sunday July 15, 2012 @08:19AM (#40655123) Homepage

    I'm sorry, Dave. I cannot allow you to pass that car.

  • by ranton ( 36917 ) on Sunday July 15, 2012 @08:20AM (#40655127)

    While fully autonomous cars may be the more desirable future, computer backup systems like this are a more likely first step. Once people start getting used to cars making good decisions on the road, they will be more willing to give the computers even more control.

    • I can't wait for fully automated cars; they would relieve us of another monotonous task. But I guarantee this will meet great resistance from those who want the freedom to cut people off and talk on their mobile phones while driving.
    • As much as I don't like the idea of turning control of my car over to a computer, I like Google's method better. I think it's much safer. MIT's system is more likely to get implemented sooner, and I think that's pretty scary. If people start trusting *something else* to get them out of dangerous situations, they immediately respond by relying too much on that trust and putting themselves in more dangerous situations. People with these things in their cars will drive like complete idiots and the computers wo

    • While fully autonomous cars may be the more desirable future, computer backup systems like this are a more likely first step. Once people start getting used to cars making good decisions on the road, they will be more willing to give the computers even more control.

      Yeah, I think it's obvious that in the long term, the way forward is to ban the option of human control on most roads. Human driving should be relegated to a novelty pastime on specially designated roads, scenic routes, and racing tracks. Any

    • by Yvanhoe ( 564877 )
      No. Fuck that. I am tired of baby steps. I won't buy a new car just for this kind of equipment. Give me a fully automated car and you'll have a selling point, but a copilot that only works in case of a crash is not a selling point for me, and I suspect I am not alone.

      The thing is, the technical challenges have all been solved since 2009. And even since the 1990s, we have known how to build a car (or, more interestingly, a truck) that can autonomously follow a human-piloted car/truck. This technology never caught on because t
    • "While fully autonomous cars may be the more desirable future, computer backup systems like this are a more likely first step. "

      And once put into production, they will be recalled and shelved for 10 years due to suspicion (and legal accusations) that they actually CAUSED some serious or fatal accidents.

      Tools like this need a LOT of proving before they will be generally accepted.

  • Fast Lane (Score:5, Interesting)

    by headhot ( 137860 ) on Sunday July 15, 2012 @08:24AM (#40655155) Homepage

    I would be all for this if the computer would take over once it determines you are driving too slow in the fast lane and blocking traffic. Maybe there could be two modes: emergency takeover, and 'nag' mode for when the computer determines you're acting like a selfish asshole.

    • Fully automated cars are the best choice to solve this problem. Emotional decisions will no longer taint driving decisions, and fast lanes will completely lose their significance, since the flow of traffic can be accurately controlled and traffic jams/the slinky effect completely avoided.
    • by Ichijo ( 607641 )

      I hope this computer resurrects the lost art of using the turn signal.

    • Personally, I'd rather the computer be used to not unlock the doors to anyone using the term 'fast lane'.

      If someone's overtaking, they have the right to be there. People treating the passing lane as a fast lane, doing 30mph over the limit and making it hard for people to safely pull out to overtake, cause far more tailbacks than people 'only' doing 5mph faster overtaking. Don't get me started on the twats who instantly tailgate anyone who has the audacity to be in front of them.
  • From TFA:

    There is also the “deskilling” issue, where eventually no one knows how to drive a car (or fly a plane). This isn’t so bad if every car on the road is autonomous, and if steering wheels are removed altogether, but the in between period could be tricky.

    If all cars on the road are autonomous why don't we just have trains, light rail and subways?

    • by Anonymous Coward on Sunday July 15, 2012 @08:36AM (#40655237)

      Because none of those are point-to-point, to your home and place of work especially.

    • I agree with you up to a point. However, make no mistake that there is a significant difference between a car I don't have to drive and the modes of public transit you have cited. To wit, a car (driven by me or a computer) will take me directly from A to B. No walking, no changing lines, etc. Aside from the fact that most people are incredibly lazy (I'm including myself in that number) the difference in time and convenience is significant. Yes there are cities where that difference is quite small (NYC, Lond
    • If all cars on the road are autonomous why don't we just have trains, light rail and subways?

      Rail is more expensive to build than asphalt. In fact, where I live, some roads are still just flattened dirt surfaces, and will likely stay that way for the foreseeable future. Also, light rail is not very fast, and the lighter the vehicle, the greater the chances of derailment. Not to mention that it's impossible to make emergency dodges when you're on rails.

  • Which method is better? A computer backup, or a human backup?

    Both fail because both exist. Accident reports will be full of "I thought the computer was driving" and so forth.

    Also, any time there is nonetheless an accident, "it's the computer's fault."

    • Accident reports will be full of "I thought the computer was driving" and so forth.

      Simple solution: The computer should take control anytime it detects the human is not actively controlling the car. But even when the human is driving, the computer is assisting. When you drive a "normal" car, the car will drive straight ahead unless you turn the wheel. With computer assisted driving the car will stay in its lane if you do nothing, and you need to actively steer it to do otherwise.

      Also any time there is none the less an accident, "its the computer's fault"

      Computer controlled cars will save the data from all sensors, including cameras (external and internal), hu
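A toy sketch of that "stays in its lane if you do nothing" behavior: a proportional lane-centering term that yields as soon as the driver applies deliberate steering input. The gain and threshold here are assumptions for illustration only, not any shipping system's values.

```python
# Toy lane-keeping blend: do nothing and the car steers itself back toward
# the lane center; steer deliberately and your input wins.

def assisted_steering(driver_torque_nm, lateral_offset_m,
                      k_p=0.6, active_threshold_nm=0.5):
    if abs(driver_torque_nm) > active_threshold_nm:
        return driver_torque_nm            # human is actively steering
    return -k_p * lateral_offset_m         # drift right (+) -> steer left

print(assisted_steering(0.0, 0.4))   # hands off, drifting: -0.24 correction
print(assisted_steering(2.0, 0.4))   # deliberate input passes through: 2.0
```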

  • Trolley problems? (Score:5, Interesting)

    by JoshuaZ ( 1134087 ) on Sunday July 15, 2012 @08:35AM (#40655231) Homepage
    There's a whole class of philosophical problems about when to save one life vs. n lives http://en.wikipedia.org/wiki/Trolley_problem [wikipedia.org]. One very awkward thing about this is that advanced emergency driving systems may need to address questions that we are fundamentally uncomfortable answering or discussing. Should a system, for example, protect the life of the people in a car over the life of people in a nearby car that they might crash into? Which gets higher priority? Does the number of people in each car matter? Exactly what the cars do in the few seconds leading up to a crash could alter this. Essentially, this sort of thing may force us to examine difficult ethical problems.
    • by OzPeter ( 195038 ) on Sunday July 15, 2012 @09:02AM (#40655367)

      Should a system, for example, protect the life of the people in a car over the life of people in a nearby car that they might crash into? Which gets higher priority?

      That was part of the angst of Will Smith's character in the I, Robot [wikipedia.org] movie. A robot logically decided to save him rather than attempt (and probably fail) to save a little girl - a choice that deeply conflicted with his (and probably most people's) morals.
       
      While this was a fictional account, I think it does a good job of showing some potential issues with life-and-death decisions that aren't made by humans.

    • All these types of ethical questions are fundamentally flawed. They simplify the world into a binary scenario where there is a choice between two hypotheticals, which are themselves far from certain.

      In a realistic example, there are more than two cars on the road, and the machine is dumb, and while there is a remote possibility that the people in the first car could be saved from the uncertain possibility of some accident whose exact unfolding is unpredictable, and there is a remote possibility that the p

      • Once these driving machines become marginally functional, the questions will matter. If they fail as badly as you envision, then they aren't ready for actual implementation yet. But when they are implemented in real life, this will matter.
        • by martin-boundary ( 547041 ) on Monday July 16, 2012 @05:23AM (#40661379)
          I don't think so. Consider a related problem where a train is equipped with a camera to see if there is an obstruction on the track, and an AI system which can automatically decide to halt the train. Such systems certainly exist, and differ from the smart car example only in the number of dimensions available for movement (the car has two directions available, while the train has only one).

          By your contention, the camera/AI system is ipso facto making an ethical choice about the life and death of a person who happens to be standing on the tracks vs the risk of accident or death of a traveller in one of the wagons who needs to go to hospital immediately (or else we do, by deciding to build it).

          But that is ludicrous. The system merely solves a problem about how strongly to apply the brakes. There are no ethics involved whatsoever, nor any choice about life and death. Merely a very simple control problem. We can certainly ask what can be done about this particular problem in general, e.g. how to prevent people from standing on tracks, but clearly the actual train/AI (and whether we should build them or not) has no ethical role at all in this.

          The fact is that the statement of the problem here (a person standing on the track while a traveller may die from stopping the train) is independent of the train/AI aspect, which is just a detail. Making it *about* the train/AI is inappropriate.
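The "very simple control problem" framing can be made concrete: the controller just compares the braking needed to stop before the obstruction with the train's maximum service braking, and applies the smaller. A toy calculation in Python, with all numbers as illustrative assumptions:

```python
# Required deceleration to stop in distance d from speed v is v^2 / (2*d),
# capped at the train's maximum service braking. No ethics, just kinematics.

def brake_command(speed_mps, distance_to_obstruction_m, max_decel=1.2):
    if distance_to_obstruction_m <= 0:
        return max_decel
    needed = speed_mps ** 2 / (2 * distance_to_obstruction_m)
    return min(needed, max_decel)          # m/s^2 of braking to apply

print(brake_command(30.0, 600.0))   # 0.75: a controlled stop suffices
print(brake_command(30.0, 300.0))   # 1.2: full braking, may not stop in time
```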

    • by DaveGod ( 703167 )

      The problem is not merely that we're uncomfortable discussing such moral quandaries.

      A driver may make his own choice to put himself into a certain-death brick wall in order to avoid probable death of self + the other car's passengers. Nobody and nothing else ever has the right to force it upon him. Irrespective of how logical and regardless of any "choice" previously input into the auto-auto preferences screens.

      • A driver may make his own choice to put himself into a certain-death brick wall in order to avoid probable death of self + the other car's passengers. Nobody and nothing else ever has the right to force it upon him. Irrespective of how logical and regardless of any "choice" previously input into the auto-auto preferences screens.

        The problem is, the same applies to the other cars passengers. And Joe Driver had a choice to ride with whatever automation his car has, while the people in the other car had no ch

  • I've heard truck drivers complaining about systems like this [todaystrucking.com]. Apparently it has more control over the engine speed than the driver.
  • by Manip ( 656104 ) on Sunday July 15, 2012 @08:48AM (#40655299)
    Anyone who has been paying attention to "safety systems" like this on commercial aircraft should know that the development of such systems always has unintended consequences. Even if they work flawlessly, the flawless function could still potentially be dangerous.

    Just as one example: sometimes "crashing" is the least-bad alternative available to a driver. Given the choice between hitting a person standing in the road or a row of water-filled barriers many drivers would correctly choose the barrier over the human. But this safety system will likely subvert that and take the choice away from the driver.
    • Interestingly, both approaches have been tried in aviation.

      A while back, Aviation Week reported on an experimental system that could override fighter pilots when they would otherwise crash. It waited until the absolute last second, when the required maneuver was just within the structural limits of the airframe.

      Using humans as backups has a long and good operational history, but it might not work as well with undertrained personnel like car drivers. Even with highly trained pilots, dropping control onto a h

    • by Skinkie ( 815924 )
      Isn't this exactly the kind of reasoning some system could always be prepared for, while the driver has to make these kinds of decisions in a split second? Every split second spent not braking reduces the number of choices that keep all parties safe. It would be even more interesting to see what would happen if two cars with this system could cooperatively "crash", thereby saving a third party. A more complex choice would be preventing a lethal accident for multiple drivers, while in any other case all
    • You assume the safety system will make a bad decision in a particular circumstance when it could make the right choice, or even avoid the situation completely; such situations often arise because someone failed to recognize and identify an obstacle in advance. It would be pretty easy for a fully automated system to identify humans on the road, distinguishing them from everything else well in advance, and even to watch the side of the road well in advance for such obstacles to manifest. That is something a human driver cannot do.
  • Would it take over if you were attempting to drive 90MPH through a residential zone? What about doing 35MPH through a residential zone?

  • by NEDHead ( 1651195 ) on Sunday July 15, 2012 @09:09AM (#40655415)

    I believe this very question distinguishes Boeing and Airbus and their autopilot philosophies. IIRC, Boeing says the pilot is the senior authority; Airbus prefers the computer's judgement. Note the similarity in the sounds 'airbus' and 'skynet'.

    • by vakuona ( 788200 )

      I quite like Airbus' philosophy. Most plane crashes are caused by pilot error, so having systems in place to reduce the number of decisions pilots have to make in the cockpit can only be a good thing. I want pilots to do what computers cannot do, which is to reason out difficult situations.

      Some pilot aids can be potentially dangerous, but only because at times pilots are not trained well enough to know their equipment. One involved an MD plane where the aircraft was fitted with automatic thrust restoration, w

      • I rather don't like the portion of Airbus's philosophy wherein the autopilot can pass the buck back to the pilots if some of the instruments are not working as expected, though...

        Other than a failure of all of the instruments, I can't imagine any situation where the available instrumentation would be inferior to the pilot's sensory experience in a small compartment with tiny windows at the end of a long tube that pivots about at the other end....

      • Re: (Score:3, Insightful)

        by Cassini2 ( 956052 )

        The Airbus approach is fundamentally flawed. Pilots adapt to how the plane usually works. If the plane usually works in such a way that the pilots can't make mistakes, then the pilots get used to never making mistakes.

        When the automatic system quits, the pilots don't have the ability to instinctively react and fly the plane. The result is Air France Flight 447 [wikipedia.org]. The pilots flew a perfectly good plane into a stall, and never corrected. Had the copilots been used to flying in full manual, then they would hav

  • by Bogtha ( 906264 ) on Sunday July 15, 2012 @09:33AM (#40655543)

    Firstly: How does the system detect imminent crashes? If this makes mistakes, it can wrest control away from the driver when unnecessary and cause a crash.

    Secondly: How does the system react to imminent crashes? If this performs worse than what the driver was already doing, it can cause a crash.

    The main problem with autonomous driving is the legal liability. The problems above still introduce that legal liability, yet without the major benefits of a broader system. I think the industry will simply skip over this straight to broader systems.

    • The main problem with autonomous driving is the legal liability. The problems above still introduce the legal liability, yet without the major benefits from a broader system. I think the industry will simply skip over this straight to broader systems.

      Liability isn't too much of a problem in my opinion. Insurance will cover any issues, and rates will change based on the performance of the autopilot. As long as the autopilot performs as well as drivers across the entire set of cars insured, the insurance rates will be the same, people will pay the same rates, and insurance companies will shell out the same payments. An accident wouldn't cause rates to go up, but good driving records wouldn't bring the rates down, and everyone would pay rat

    • by Kjella ( 173770 )

      Secondly: How does the system react to imminent crashes? If this performs worse than what the driver was already doing, it can cause a crash.

      And the related question: even if the computer did everything as "right" as possible, how would you prove it did? All it takes is for a person to get up on the stand and say "No sir, I did not run over and kill that man. I was going to swerve around him into the ditch, but the car took over control and ran straight over him. My driving may have been reckless, but it was the car that killed him." Whether or not that's true, or even possible, the makers of this system would have to get up there on the

      • Same way as they do in airplanes. Have a little black box recorder in a thick steel box.

        When the computer kicks in, it records everything it knows to the black box. After the crash, you can look at the black box and it will tell you if and why the computer kicked in and what information it had to make the decision it did.

        This absolves the driver of responsibility and gives the engineers a bug report to work on.
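A toy version of that black-box recorder, with all field names made up for illustration: when the computer kicks in, it appends everything it knew to a log that can be read back after the crash.

```python
# Append-only crash log: why the computer kicked in, what it sensed, what
# it did. Field names here are hypothetical.
import json, time

def record_intervention(log_path, sensor_snapshot, constraint, action):
    event = {
        "timestamp": time.time(),
        "violated_constraint": constraint,   # why the computer kicked in
        "sensors": sensor_snapshot,          # what it knew at that moment
        "action": action,                    # what it did about it
    }
    with open(log_path, "a") as log:         # append-only, never rewritten
        log.write(json.dumps(event) + "\n")

record_intervention("blackbox.jsonl",
                    {"speed_mps": 31.0, "gap_m": 8.2},
                    "following_distance", "full_brake")
```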

    • In luxury cars, you can already get a speed-matching cruise control. I'm not sure if that extends to hard braking (although it would make sense to), but such a system would be a perfectly reasonable first step.

      A distance sensor for following could certainly detect the sudden deceleration of the leading car, and even if actually applying the brakes is unreasonable, it could certainly activate the taillights, an alarm/warning light, and disengage the throttle in anticipation of and to buy some time for the
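A sketch of that graded response: the milder the lead car's deceleration, the milder the reaction, with braking reserved for the last stage. The thresholds are illustrative assumptions, not any manufacturer's calibration.

```python
# Map the lead car's measured deceleration (m/s^2) to a staged response.

def cruise_reaction(lead_decel_mps2):
    if lead_decel_mps2 < 1.0:
        return []                                  # normal traffic flow
    if lead_decel_mps2 < 3.0:
        return ["cut_throttle", "warn_driver"]     # buy reaction time
    return ["cut_throttle", "warn_driver",
            "activate_taillights", "apply_brakes"] # genuine emergency

print(cruise_reaction(2.0))   # ['cut_throttle', 'warn_driver']
```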

  • What if there was a switch where the operator of the vehicle could choose between normal driving, computer-assisted driving (MIT), and human-assisted driving (Google)? I think that would be a better option than having to choose between expensive automobiles.
  • Is this some kind of weird expression of the New England work ethic? Make the driver work just as hard as ever, but should he ever falter, a superior system kicks in and saves his ass?

    If I have a computer that can handle emergencies more reliably than I can, surely it can handle the mundane more reliably, too.

    • by Lehk228 ( 705449 )
      Such an emergency system would need significantly simpler sensors and AI; fully automated driving needs total awareness of signs and location and the ability to constantly evaluate what is being sensed. Neither drivers nor police will tolerate a driver AI that decides to drive on a median because it looks a bit like a lane, nor will an AI be tolerated that fails to divert where road construction requires driving in the median.

      On the other hand a safety failover won't really create problems as long as it prop
  • The difference between the two approaches is a difference of perception - in one, the *human* is considered the primary while the *computer* is the backup; in the other the *computer* is the primary and the *human* is the backup.

    Now, obviously, both of those elements can fail. Humans are fallible drivers, as I well know. Computers can crash, or just fail to process events properly. No matter what, you will get accidents under any of these. Hell, we still get train crashes, and they're bound to tracks and su

  • My 2010 Prius System 5 already stops the car if I'm about to crash (PCS), and it helps steer when I have lane keep assist (LKA) on. LKA uses machine vision, so it doesn't always work if the lines on the road are missing, degraded,

    While the MIT system clearly has more points of constraint, and while I think it's good to have a PCS (Pre-Crash System), it doesn't solve numerous problems like those who are getting older (but still need mobility), those who get fatigued, using automated car "tra

  • To me it seems that the MIT approach takes on the hardest part of the problem, reacting correctly in the hard corner cases, while also adding yet another hard problem, which is determining when to take over from the driver, and being less valuable to boot. The only problem the Google approach has to handle that the MIT approach does not is navigation, and that's the easiest part.

    The MIT system is still going to have to have full awareness of all of the surrounding obstacles, traffic, pedestrian and other

  • My wife has narcolepsy, which means even when medicated her 15 minute commute is a risk that she could fall asleep behind the wheel. She probably won't be allowed to drive when she has to go off of the medicine for pregnancy. This emergency autopilot would be a necessity for us if it were available.

    A computer backup should be able to make it to market quite a bit faster than a computer-first human-backup driving system. The Google approach is more luxury than necessity. We should push the computer backup sy

  • No matter how many accidents that the MIT technology prevented, if this technology fails to prevent an accident the makers of this technology will get sued. The lawsuit's reasoning would go like this: Joe's standard of driving, just like everybody else's, is way above average and his super-fast reflexes were handling the traffic situation fine, but MIT's defective technology overrode his highly skilled actions and actually caused the accident. Unless the auto manufacturers and the technology's inventors
  • I miss the time when we could buy cars that put the entire responsibility of keeping the car on the road in the hands of drivers. If I want to perform a maneuver that seems like a better way of avoiding a dangerous situation (flipping the car's tail out while purposefully messing with the throttle to induce a controlled sideways skid on a wet road), traction control already messes with it. I wonder what it will be like with systems like these being applied as mandatory safety features.
  • We are going to fix him like we did Jimmy Hoffa.

  • > If you fall asleep, for example, the co-pilot activates and keeps you on the road until you wake up again.

    Is there a way to keep the driver unaware of that feature?

  • You can already get a number of Android apps that watch the road and alert you if you pull up too close or leave your lane.

    And, of course, some cars have these kinds of assistance systems as well.

  • Was gonna flippantly reply "if human is a healthy, reasonably young member of the species in all five senses and with sufficient experience, computer should stand back, else the old fart should RIDE in the back."

    But then someone mentioned planes. Anyone up to date with the news who read the final BEA report on the Air France crash in the South Atlantic, with 200+ dead, will recall that the primary cause was lack of crew preparedness. Dumb pilots who couldn't fly a plane? Yes, but not by their own choice.

  • I know some people whose insurance companies might even underwrite the cost of such a system just for the ability to avoid deer collisions.

  • What happens when some mud splatters on a sensor? Does it suddenly go into catastrophic avoidance mode? What happens when sensor fails, or when a whole bank of sensors fail? My experience with cars is that when one item goes wonky (like a coilpack on a 2000-2004 VW 1.8 turbo motor) they all eventually go wonky. Since these cars will be making life and death decisions, and interacting with other cars making life and death decisions, who decides if the programming between units is compatible? Does the gov
