
MIT Creates Car Co-Pilot That Only Interferes If You're About To Crash

Posted by samzenpus
from the robot-take-the-wheel dept.
MrSeb writes "Mechanical engineers and roboticists working at MIT have developed an intelligent automobile co-pilot that sits in the background and only interferes if you're about to have an accident. If you fall asleep, for example, the co-pilot activates and keeps you on the road until you wake up again. Like other autonomous and semi-autonomous solutions, the MIT co-pilot uses an on-board camera and laser rangefinder to identify obstacles. These obstacles are then combined with various data points — such as the driver's performance, and the car's speed, stability, and physical characteristics — to create constraints. The co-pilot stays completely silent unless you come close to breaking one of these constraints — which might be as simple as a car in front braking quickly, or as complex as taking a corner too quickly. When this happens, a ton of robotics under the hood take over, only passing back control to the driver when the car is safe. This intelligent co-pilot is starkly contrasted with Google's self-driving cars, which are completely computer-controlled unless you lean forward, put your hands on the wheel, and take over. Which method is better? A computer backup, or a human backup? I'm not sure."
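The summary describes the control scheme only in prose. As a rough illustration of the idea (monitor a set of safety constraints, stay silent while every margin is positive, take over when one is about to be violated), here is a minimal Python sketch. Everything in it (the class names, the two-second following gap, the planner hook) is a hypothetical illustration, not MIT's actual code.

    # Hypothetical sketch of a constraint-based co-pilot loop.
    from dataclasses import dataclass

    @dataclass
    class CarState:
        speed: float        # m/s
        gap_ahead: float    # metres to the vehicle in front

    class FollowingDistance:
        """One example constraint: keep at least a 2-second gap."""
        def margin(self, s: CarState) -> float:
            # Positive = safe; zero or below = about to be violated.
            return s.gap_ahead - 2.0 * s.speed

    def co_pilot_step(state, constraints, driver_cmd, planner):
        """Pass the driver's command through untouched unless some
        constraint margin is gone, then let the planner take over."""
        if all(c.margin(state) > 0 for c in constraints):
            return driver_cmd                  # stay completely silent
        return planner.safe_command(state)     # intervene until safe again

The real system folds in much more state (stability, road geometry, the driver's performance), but the shape of the loop is the point: the driver's input is the default, and autonomy is the exception.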
  • by ranton (36917) on Sunday July 15, 2012 @09:20AM (#40655127)

    While fully autonomous cars may be the more desirable future, computer backup systems like this are a more likely first step. Once people start getting used to cars making good decisions on the road, they will be more willing to give the computers even more control.

  • by headhot (137860) on Sunday July 15, 2012 @09:22AM (#40655143) Homepage

    I'm not certain but I'm pretty sure computers are landing airplanes with the pilots overseeing the process.

    I also find it hard to believe that a computer cannot get better at driving a car than most people. Sure, there are emergency situations that require extreme skill and judgement calls, but how many people are good in those situations? I have seen many drivers react 100% wrong in dangerous situations. They don't understand the dynamics of the car and get confused in a panic. Computers don't have this problem.

  • by purpledinoz (573045) on Sunday July 15, 2012 @09:25AM (#40655159)
    I disagree. Human drivers are always a disaster waiting to happen. Computers don't get drunk. Computers don't get angry. Computers don't get sleepy. Computers aren't trying to impress a woman. (At least not yet...) Sure, computers fail, but humans fail far more often. My concern is with the cases where a malfunction occurs in the system, maybe a broken sensor. How does a computer driver respond to these scenarios, which are guaranteed to happen in the real world?
  • by Joce640k (829181) on Sunday July 15, 2012 @09:32AM (#40655209) Homepage

    I'm not certain but I'm pretty sure computers are landing airplanes with the pilots overseeing the process.

    There aren't many obstacles to avoid up in the air. On the road there are dozens of other cars all around you.

  • by Anonymous Coward on Sunday July 15, 2012 @09:36AM (#40655237)

    Because none of those are point-to-point, especially not to your home and your place of work.

  • My concern is with the cases where a malfunction occurs in the system, maybe a broken sensor. How does a computer driver respond to these scenarios, which are guaranteed to happen in the real world?

    The only thing that the computer can't be designed to cope with is complete hardware system failure. Are the automotive companies really prepared to put dual systems in the vehicle with backup power? And for that matter, are they going to be willing to disable the vehicle if a sensor is out of commission? They will really need to do that because drivers will become used to depending on the system.
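    As a sketch of the kind of self-check the parent is asking about, the assist could simply refuse to engage whenever a required sensor goes stale. The sensor names and the timeout below are hypothetical, not from any shipping system.

        # Hypothetical watchdog: degrade the assist, loudly, on sensor loss.
        import time

        SENSOR_TIMEOUT_S = 0.2   # illustrative; would be tuned per sensor

        class SensorWatchdog:
            def __init__(self, required=("camera", "rangefinder")):
                self.last_seen = {name: 0.0 for name in required}

            def report(self, name):
                # Called each time a sensor delivers a fresh reading.
                self.last_seen[name] = time.monotonic()

            def healthy(self):
                now = time.monotonic()
                return all(now - t < SENSOR_TIMEOUT_S
                           for t in self.last_seen.values())

        def assist_enabled(watchdog, warn):
            if watchdog.healthy():
                return True
            warn("Driver assist degraded: sensor fault, take the wheel")
            return False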

  • by WillDraven (760005) on Sunday July 15, 2012 @09:54AM (#40655333) Homepage

    I would like a combination of both approaches. Full auto for when I want to turn my seat around backwards and play poker with my friends in the back, manual control for when I want to zip through some fun curvy roads, and emergency computer takeover for when I forget that I'm not in a Formula One car and start to do something stupid.

  • by OzPeter (195038) on Sunday July 15, 2012 @10:02AM (#40655367)

    Should a system, for example, protect the life of the people in its own car over the life of the people in a nearby car that it might crash into? Which gets higher priority?

    That was part of the angst of Will Smith's character in the I, Robot [wikipedia.org] movie. A robot logically decided to save him rather than attempt (and probably fail) to save a little girl - a choice that deeply conflicted with his (and probably most people's) morals.
     
    While this was a fictional account, I think it does a good job of showing some of the potential issues with life-and-death decisions that aren't made by humans.

  • by Joce640k (829181) on Sunday July 15, 2012 @10:49AM (#40655645) Homepage

    Not many obstacles, but there's one really big one.

    That one's only dangerous if you approach it off course or at a sharp angle. Computers are pretty good at linear algebra (better than humans), so getting it right isn't a massive problem (how many years have they been doing it now...?).

    Guiding a car safely along an arbitrarily curved road full of unpredictable other users is much trickier than landing an aircraft.

  • by Cassini2 (956052) on Sunday July 15, 2012 @11:56AM (#40656095)

    The Airbus approach is fundamentally flawed. Pilots adapt to how the plane usually works. If the plane usually works in a manner that the pilots can't make mistakes, then the pilots get used to never making mistakes.

    When the automatic system quits, the pilots don't have the ability to instinctively react and fly the plane. The result is Air France Flight 447 [wikipedia.org]. The pilots flew a perfectly good plane into a stall and never corrected. Had the copilots been used to flying in full manual, they would have had the experience and instincts to react to the stall.

    People make mistakes. You have to let people make mistakes and learn from them. Safety systems that quietly absorb repeated mistakes are dangerous, because sooner or later a person will make a mistake in a corner case that the automated system does not catch. When that happens, tragedy often follows.

  • by martin-boundary (547041) on Monday July 16, 2012 @06:23AM (#40661379)
    I don't think so. Consider a related problem where a train is equipped with a camera to see if there is an obstruction on the track, and an AI system which can automatically decide to halt the train. Such systems certainly exist, and differ from the smart car example only in the number of dimensions available for movement (the car has two directions available, while the train has only one).

    By your contention, the camera/AI system is ipso facto making an ethical choice about the life and death of a person who happens to be standing on the tracks vs the risk of accident or death of a traveller in one of the wagons who needs to go to hospital immediately (or else we do, by deciding to build it).

    But that is ludicrous. The system merely solves a problem about how strongly to apply the brakes. There are no ethics involved whatsoever, nor any choice about life and death; it is merely a very simple control problem. We can certainly ask what can be done about this particular problem in general, e.g. how to prevent people from standing on tracks, but clearly the actual train/AI (and whether we should build them or not) has no ethical role at all in this.

    The fact is that the statement of the problem here (a person standing on the track while a traveller may die from stopping the train) is independent of the train/AI aspect, which is just a detail. Making it *about* the train/AI is inappropriate.
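    To make "how strongly to apply the brakes" concrete: stopping from speed v within distance d needs a constant deceleration of at least v^2 / (2d). A hypothetical sketch (the brake limit is illustrative, not a real train's spec):

        # Required deceleration to stop in distance d from speed v:
        # v^2 = 2*a*d  =>  a = v^2 / (2*d). A control law, not an ethics module.

        MAX_SERVICE_BRAKE = 1.2   # m/s^2, illustrative limit for a train

        def brake_command(speed_mps: float, distance_m: float) -> float:
            """Fraction of full service braking needed to stop in time."""
            if distance_m <= 0:
                return 1.0                     # at or past it: full brake
            needed = speed_mps ** 2 / (2.0 * distance_m)
            return min(needed / MAX_SERVICE_BRAKE, 1.0)

        # A train at 25 m/s (90 km/h) with an obstruction 400 m ahead needs
        # 25**2 / 800 = 0.78 m/s^2, i.e. about 65% of full service braking.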
