Stanley and the Conquest of the DARPA Challenge

geekboy_x writes "Wired has a great in-depth piece on the Stanford team that won the $2 million DARPA prize. If you remember last year's disaster - with most vehicles falling off the road in the first kilometer or so - this victory becomes all the more amazing. The fact that the Stanford team used a 'tailgating' strategy is the best surprise in the article."
  • Team Leaders (Score:5, Informative)

    by Kuxman ( 876286 ) <the_kux@yahoo.com> on Wednesday December 28, 2005 @12:38PM (#14352763) Homepage
    Also interesting to note: the major leaders of the Stanford team came from the Carnegie Mellon AI department two or three years ago.
    • Re:Team Leaders (Score:5, Insightful)

      by maggard ( 5579 ) <michael@michaelmaggard.com> on Wednesday December 28, 2005 @01:17PM (#14353002) Homepage Journal
      This could as easily imply that, in order to succeed, these folks had to get out of Carnegie Mellon AI and go to Stanford.

      I've no inside knowledge, but from the article it appears CMU was locked into the same-just-more/bigger/faster strategy, and the team that decamped to Stanford came up with some innovative real-time confidence-based sensor interpretation systems. It may well be that at CMU they wouldn't have been supported in this, whereas at Stanford, without the established regime at CMU, they were free to do so...

      • Re:Team Leaders (Score:2, Insightful)

        by Anonymous Coward
        The issue is much more complicated than an AI strategy. All teams involved had massive hurdles to overcome logistically, financially and technologically. Simplifying the analysis of who won or lost down to an AI strategy does a great disservice to all participants including the Stanford team.
  • by Radres ( 776901 ) on Wednesday December 28, 2005 @12:46PM (#14352808)
    FTA: "He liked to point out that planes had been flying themselves since the 1970s. The public was clearly willing to accept being flown by autopilot, but nobody had tried the same on the ground."

    Just give us our flying cars then already, damnit!
    • There's a LOT less to worry about when a plane is in the air. I don't know a pilot alive who would autopilot through anything more than mild turbulence. Autopilot also doesn't take off and land for you. Its closest equivalent in the automotive world is cruise control. Cruise control would be just as good as autopilot if the vehicle didn't have to worry about other vehicles on a regular basis and had a lane to work with that was as straight as typical airplane headings.
      • by Kuxman ( 876286 ) <the_kux@yahoo.com> on Wednesday December 28, 2005 @01:02PM (#14352919) Homepage
        Actually, the Boeing 777 does land/take off automatically. I think this also holds true for the Airbus A300s [askcaptainlim.com] (correct me if I'm wrong).

        From "Ask Captain Lin":

        "On the Boeing 777, the autopilot can be selected on at 200 feet above ground level after take off. Most of the time, the pilot would make use of the autopilot on the climb because it eases the workload of the crew especially during an emergency. Sometimes, a pilot may elect to fly manually during the climb just to get his hands on the control column or to maintain his proficiency because during a flight test, one of the exercise calls for flying without the aid of autopilot. Otherwise, the autopilot is engaged throughout most of the flight. It is smoother, more economical and safer with the autopilot on. In fact, in really bad weather with very limited visibility, the autopilot even lands the aircraft by itself. The pilot only resumes control of the aircraft after it has safely landed on the Runway."

        • by Zocalo ( 252965 ) on Wednesday December 28, 2005 @01:25PM (#14353046) Homepage
          My cousin is a qualified pilot on several of the bigger passenger jets, and yes, it is entirely possible for a crew to do nothing but board the plane, taxi to the runway, and then let the autopilot handle the entire flight, including the takeoff and landing. The normal mode of operation, however, is to clear the airport on manual, activate the autopilot until on the approach at the destination, and then make a judgement call about letting the autopilot land the plane based on the conditions at hand.

          There are also exceptions if one or more of the autopilots malfunctions (there are apparently three on the bigger jets; I'm not sure about the smaller ones). Technically one functional autopilot is enough to handle the entire flight, but the regulations of my cousin's employer prohibit non-manual landings if even one autopilot is faulty, and with two faulty units all flight operations must be fully manual.

          They do, however, have to complete a mandatory amount of manual take-offs, landings and flight hours each year to remain qualified, in addition to the numerous medical, physical and flight examinations you would expect. Other airlines do vary their individual guidelines and procedures of course, but not by too much.
          • They do however have to complete a mandatory amount of manual take-offs, landings and flight hours each year to remain qualified

            That's interesting. One of the assumptions behind future ATC systems is that the aircraft will fly under automatic control all the time so that higher traffic densities can be achieved safely. The definition of pilot qualification may have to be rethought if this happens.

        • Automatic landings are available on almost all modern airliners ... autopilot landings are meant for calm conditions with poor to no visibility; they are not intended to deal with crosswinds, tailwinds or other adverse weather.
        • Also, consider that the Lunar Module and Space Shuttle both have autopilots, yet neither has been allowed to operate all the way to touchdown. The main reasoning behind this is that pilots want to be "in the loop" during those last critical seconds before touchdown; if something fails, the pilot wants to be fully engaged at the time of failure, not to have to switch from monitoring-the-autopilot mode to flying-the-vehicle mode.
      • by arkanes ( 521690 ) <arkanes.gmail@com> on Wednesday December 28, 2005 @01:05PM (#14352931) Homepage
        I don't know about turbulence, but planes have been (capable of) landing themselves on autopilot since the 70s. Taking off is harder but I believe autopilots can do that now as well. Autopilots today can also change course and altitude to avoid weather conditions - it's quite a bit more sophisticated than simply following a course. Driving on the ground is a much harder problem, but don't underestimate what autopilots are capable of.
        • Takeoff is actually easier than landing. If everything goes according to procedure, it's one of the simplest maneuvers. The problem is that it's the riskiest part: many things may go wrong, the plane is most failure-prone, and there are lots and lots of accidents waiting to happen. An autopilot would have zero problems taking off, but you need a human at the controls in case something goes wrong, and if it does, better if you don't have to waste time on switching the autopilot off. Besides, since it's easy, not
          • More accidents happen on the ground than in the air.
          • you need a human at the controls in case something goes wrong,

            Best to imagine that modern commercial aircraft are just big expensive computers with a fancy mobile case. If something goes wrong you would want it to "do something sensible", not stop the OS and expect the operator to take over.

            better if you don't have to waste time on switching the autopilot off.

            A heavy push on the control column will do that on most aircraft.

            • If something goes wrong you would want it to "do something sensible", not stop the OS and expect the operator to take over.
              If something goes wrong, most known operating systems will usually continue to do what they were doing, oblivious to the danger, because it happens (Murphy's law) just beyond their sensor range. What you want it to do has nothing in common with what it will do, despite the best intentions of the engineers.

              A heavy push on the control column will do that on most aircraft.
              Something you want
    • Just give us our flying cars then already, damnit!

      Oh boy! I can't wait to file flight plans to and from work, and then request permission to go to the supermarket when I realize I'm out of cat food. I'm also looking forward to requesting permission to leave the driveway, structural inspections for my personal vehicle every six months, government-mandated engine overhauls, and you-must-be-a-terrorist shoe removal to get into my own damn car.
      Oh but to have my very own flying car!

      • I'm 100% with you regarding the absurdity of having to file flight plans (it's nobody else's business), but I'm not so quick to dismiss ATC and mechanical/structural inspections.

        There are a heck of a lot of GA craft out there, and I'd really prefer that none of them crash into my house because (1) a lack of ATC, perhaps exacerbated by someone flying VFR when he shouldn't, led to a collision, or (2) some private pilot decided he didn't need an inspection and his damn wing fell off...

    • I was thinking the other way around: subways would be the first place drivers could be eliminated. The only decisions to be made are how fast to go and when to brake. Surface trains could be next; they have more of a collision-avoidance problem, since they aren't underground and share the right of way with others, especially at railroad crossings. There is no steering in either case.
  • by hal2814 ( 725639 ) on Wednesday December 28, 2005 @12:50PM (#14352831)
    From TFA on 7 ways cars are already robots:

    "4. Lane-Departure Prevention
    Nissan has a prototype that uses cameras and software to detect white lines and reflective markers. If the system determines the vehicle is drifting, it will steer the car back into the proper lane."

    I've driven enough roads under construction that I would be seriously afraid that my car would steer me into oncoming traffic because road workers haven't bothered to paint over lines that were previously there.

    Personally, I'd be interested in how these vehicles do:
    1. On regular highways.
    2. At speeds other than the 5 to 25 MPH tested.

    I realize they're not built for that. I would just like to see how they do applying what they "learned" in the desert to real traffic situations.
    • Yeah, really. Half of Michigan would be dead within six months from car wrecks. Not to mention that most roads there don't have reflectors, and often not even lane lines.
    • by Anonymous Coward
      From what I've read, the Nissan system only warns the driver that they are drifting from their lane and doesn't actually steer the car. When the driver drifts from their lane without engaging the turn signals the car emits a warning chime. I think we're still far from an actual automated steering system that is reliable enough (i.e. 99.9% safe) for public use.
      • I doubt that 99.9% safe would be deemed safe enough. 1 accident in every 1000 hours of driving would be a horrendous driving record.

        (That is, if I am interpreting the 99.9% the way you intended.)

    • So some dude hanging out on an internet message board, who knows very little about the technology in question, overgeneralizes and oversimplifies the problem, and assumes the builders of the technology, which is still in prototype mode, will overlook basic problems, is worried.

      Sorry if your argument doesn't have me trembling with fear.


      Three cheers for run-on sentences and posting while in a bad mood.
    • Assuming the car could be subtle about it, it would be nice to have a slightly sticky road steer, maybe 2 or 3 degrees to keep the car in the lane under most driving conditions. If the driver is actually steering, don't use the system. If they would have to steer a lot, don't use the system. If it can't find the line markers, don't use the system. But with those caveats, the system sounds like it could work.

      And while we're at it, can we get a photosensor on the bottom of the cars to auto-correct for ali
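
      (A minimal sketch of the "sticky steer" caveats described above, with invented thresholds and function names; nothing like a production lane-keeping system:)

      # Sketch only: the proposed caveats for a gentle lane-keeping nudge.
      def assist_angle(driver_torque, needed_correction_deg, markers_visible):
          if abs(driver_torque) > 0.5:          # driver is actually steering
              return 0.0
          if abs(needed_correction_deg) > 3.0:  # would have to steer a lot
              return 0.0
          if not markers_visible:               # can't find the line markers
              return 0.0
          return needed_correction_deg          # gentle 2-3 degree "sticky" nudge

      print(assist_angle(0.0, 2.0, True))   # 2.0 -- small corrective nudge
      print(assist_angle(1.2, 2.0, True))   # 0.0 -- driver has the wheel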
    • If the car only steered you slightly in response to markings, it wouldn't be any worse than when they leave the bumpy reflectors in the old lane markings. The subjective experience would be like if there were slight speed bumps along the edges of lanes, so that you'd need to push against them slightly. It would just be a bit more force feedback.

      Stanley would probably do fine at avoiding obstacles, but it wouldn't have any clue how other drivers may be expected to behave. Also, they'd need to extend its visu
    • No Hands Across America was a 1995 project in which a car was driven across the US with the steering handled by the computer for most of the trip.

      Journal of the trip: NHAA journal [cmu.edu] and information on the software, RALPH [cmu.edu]

      NHAA showed that it's possible to do at highway speeds (60+ mph), using 1995 technology. The construction issues are a challenge. From the journal, it sounds like RALPH handled construction reasonably well, but there certainly are construction sites that even many humans can't succes

  • by Anonymous Coward
    The teams did well this year, but what disappoints me is that many of the teams relied entirely on laser range finders and GPS to navigate the course.

    There was one entry, a motorcycle, which still ran completely on a vision system (cameras instead of sensors). Unfortunately, it did not do too well.

    While the military can still use technology developed by the teams that completed the DARPA Grand Challenge, I think they could benefit even more from a vision system capable of doing the same thing
    • What use is a robot that can navigate a desert if it can't actually see anything?

      It can drive in total darkness, for starters... I'd think that's a big advantage in a warzone. Not sure how rangefinders would cope with sandstorms, mist, etc., but then you could maybe switch to another set of non-visual sensors (acoustic? Only when it's calm out there, probably...). Why limit a vehicle's sensors to the visible light spectrum? Our eyes and brains are so good at the task because it's all we have to or
  • by NeutronCowboy ( 896098 ) on Wednesday December 28, 2005 @12:52PM (#14352849)
    ... is that the CMU team relied heavily on extensive pre-analysis of the environment, and failed (at least in the sense that it didn't come in first). Stanford instead relied on a probability analysis of the incoming data, along with multiple technologies for different goals (lasers for short range data, video for long range data).

    It seems that the DARPA Grand Challenge not only showed off the first realistically autonomous vehicles, but also laid to rest the idea that expert systems were the way forward. The way forward instead is self-teaching computers. Hooray for self-teaching AI overlords!
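
    (Aside: a minimal sketch of what confidence-weighted fusion of two sensors can look like. The inverse-variance weighting, the function, and all the numbers here are illustrative assumptions, not Stanley's actual code.)

    # Sketch only: fuse two noisy estimates of the same quantity -- say,
    # "how drivable is the terrain ahead?" -- weighting each sensor by its
    # confidence (inverse variance).
    def fuse(estimates):
        """estimates: list of (value, variance) pairs, one per sensor."""
        weights = [1.0 / var for _, var in estimates]
        total = sum(weights)
        value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
        return value, 1.0 / total  # fused estimate and its reduced variance

    # Short-range lasers are trusted (low variance); long-range video is noisier.
    print(fuse([(0.9, 0.01), (0.6, 0.25)]))  # -> approx (0.888, 0.0096)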
    • by RossumsChild ( 941873 ) on Wednesday December 28, 2005 @02:22PM (#14353368)

      The CMU bashing here (and subtly embedded in the Wired article--everybody loves an underdog) is not really valid.

      According to The Grand Challenge Tracking Site [darpa.mil]:

      Stanley's official time was 6:53 and CMU's was 7:04.

      I don't think that ridiculing CMU as having a "poor strategy" for taking an additional 11 minutes to do something that was impossible for the entire robotics industry just a year ago is very... wise.

      Personally, I'm overjoyed that Stanley won it. I think he's an excellent system and that Stanford deserves the praise. (Besides, those b*stards at CMU didn't let me in for my undergrad)--but making fun of their 2004 'strategy' (when they went further than any other team) and their 2005 results (when they were a scant 11 minutes behind the leader, and were 2 of only 5 teams to have a 'bot cross the finish line) seems silly to me.

      And for the people wondering: Stanley is rumoured to have run linux, though last I heard the team hadn't confirmed it. In fact, most of the qualifiers for the race were running at least one linux machine [robots.net].

  • I still think it will be a long time before we trust a computer to drive us around. Interesting that it used a 'tailgating' strategy... what happens if all the cars around it are also doing the same!
    • I got "The Wisdom of Crowds" for Christmas. It recounts a story of an entymologist studying fire ants. Fire ants generally move by following each other when there are other fire ants ahead of them. But with a certain group of ants, the leaders ended up running into the tail of the group, forming a huge circle.

      The ants marched in the circle for three days before the entire colony starved to death.

      I don't want to starve to death in my car, thank you very much.
  • by Animats ( 122034 ) on Wednesday December 28, 2005 @12:55PM (#14352870) Homepage
    As one of the team leaders of another Grand Challenge team [overbot.com], I'm enormously impressed with the Stanford work. The basic idea is that the LIDARs profile the road ahead out to 20m or so, and the vision system decides whether the road further out is "like" the near road. That vision system was a huge breakthrough. It was obvious that such a system would be a big win, but making it work reliably was impressive. I didn't think that was possible at the current state of the art. I look forward to seeing a more detailed paper on how it was done. A good hint is in this paper on texture comparison. [uni-saarland.de]

    I was never that impressed with the CMU approach. All that manual preplanning was an obvious dead end. And the giant mechanically stabilized gimbal was just too clunky. It didn't help them in 2004, when they hit an obstacle placed by DARPA, and it didn't help them in 2005, when DARPA moved the racecourse from California to Nevada to prevent preplanning. The Air Force colonel in charge for 2005 said preplanning wouldn't work, and he meant it.

    Computer vision of the natural world is finally about to take off, after three decades of frustration. It's probably possible to do much of the early vision processing in a current-generation GPU, which may make it affordable. Look for new apps that connect to cameras and pick out items of interest. Read that paper linked above.
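
    (To make the near-to-far idea concrete, a toy sketch under heavy assumptions -- the real Stanford pipeline was far more sophisticated than a per-channel color test: pixels the LIDAR has already confirmed as road become training samples for a color model, and distant pixels are scored against it.)

    # Toy near-to-far classifier; assumptions throughout, not Stanley's code.
    import numpy as np

    def fit_road_model(near_road_pixels):
        """near_road_pixels: (N, 3) RGB samples the LIDAR says are drivable."""
        return near_road_pixels.mean(axis=0), near_road_pixels.std(axis=0) + 1e-6

    def looks_like_road(pixel, mean, std, threshold=3.0):
        # Accept a distant pixel if every channel lies within `threshold`
        # standard deviations of the near-road color model.
        return bool(np.all(np.abs(pixel - mean) / std < threshold))

    near = np.random.normal([120, 110, 100], 10.0, size=(500, 3))  # fake road pixels
    mean, std = fit_road_model(near)
    print(looks_like_road(np.array([125, 108, 103]), mean, std))  # True: road-like
    print(looks_like_road(np.array([30, 200, 40]), mean, std))    # False: not road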

  • Liability (Score:5, Insightful)

    by Billosaur ( 927319 ) * <wgrother AT optonline DOT net> on Wednesday December 28, 2005 @12:59PM (#14352903) Journal

    From Wired: The resulting liability issues are a major hurdle. If a robotically driven car gets in an accident, who is to blame? If a software bug causes a car to swerve off the road, should the programmer be sued, or the manufacturer? Or is the accident victim at fault for accepting the driving decisions of the onboard computer? Would Ford or GM be to blame for selling a "faulty" product, even if, in the larger view, that product reduced traffic deaths by tens of thousands?

    It figures. A technological advance that would cut the number of traffic deaths by about 95% by taking drunks and maniacs out from behind the wheel, and preventing 93-year-old men with dementia from killing people [local6.com], will be bogged down by liability issues should the robot kill someone. C'mon people! Even the best system will not prevent a fluke accident or, yes, even a bit of bad code from killing someone, but weigh that against the number of road-rage-infested idiots on the road now, driving at 100+ mph, swerving in and out of traffic, and I think liability needs to be the furthest thing from anyone's mind.

    Just don't let Microsoft write the software.

    • I believe this complaint is a little early. Based on the early successes shown in the desert, without people stepping off curbs in front of cars and other urban hazards, it is premature to say robot drivers will reduce automobile deaths by 95%. That prediction may be true someday, but we're not going to see it next year. Or even in the next decade.
    • "It figures. A technological advance that would cut the number of traffic deaths by about 95%...."

      You mean COULD. At the present time most people cause no traffic deaths at all. Most people don't cause accidents. Human drivers are a proven, if faulty, method.

      An autopilot system has to be better than an excellent driver. It has to be nearly perfect. Why? Well, humans are assumed to be imperfect..... More to the point, if you have never caused any accidents, why exactly would you want to switch to an imperfec
  • by sikandril ( 924466 ) on Wednesday December 28, 2005 @01:02PM (#14352920)
    The most interesting part was when Thrun explained how the vehicle was taught to drive by following a human driver and adapting its algorithms according to his behavior, gaining much better results than "force feeding" massive amounts of data artificially.

    This has immediate implications not only for robotic cars - what if we strapped some positional sensors, voice recording, etc. to a human and made a humanoid robot follow him throughout the day?

    I mean how varied are our lives after all? Given the right processing power and sensors, the results could be interesting...

    Again, a great achievement for a 'bottom up' approach to artificial intelligence
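
    (The simplest version of "learning by following a human" is behavior cloning: log (sensor snapshot, human control) pairs, then fit a model mapping one to the other. The toy regressor below is purely illustrative -- Thrun's team adapted probabilistic terrain models this way, not a linear fit -- and every feature name and number is invented.)

    # Behavior-cloning toy: recover a human driver's speed policy by least squares.
    import numpy as np

    rng = np.random.default_rng(0)
    features = rng.normal(size=(1000, 4))  # e.g. roughness, curvature, slope, ...
    human_speed = features @ np.array([3.0, -2.0, 0.5, 1.0]) + 20.0  # driver's log

    # Fit weights that reproduce the human's choices (20 mph baseline subtracted
    # by hand so plain least squares needs no intercept column).
    weights, *_ = np.linalg.lstsq(features, human_speed - 20.0, rcond=None)
    predicted = features @ weights + 20.0
    print(np.allclose(predicted, human_speed))  # True: the model mimics the human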
    • Is this really a surprise?

      I look at learning systems and see that the best, most successful ones seem more and more like human infants - learn by mimicry, with reinforcement by reward/punishment.

      Is it phylogenic that whatever we create will develop the same way we ended up doing so, or is it a form-follows-function result?
  • Spoiler alert! (Score:2, Insightful)

    by kmcrober ( 194430 )
    The fact that the Stanford team used a 'tailgating' strategy is the best surprise in the article.

    Not anymore.
    • Re:Spoiler alert! (Score:3, Informative)

      by SpinyNorman ( 33776 )
      Not ever, for that matter.

      The article doesn't say they had a tailgating strategy, it just mentions the raw fact that during the race they'd been tailgating another entry until choosing to pass them. There's no suggestion (let alone assertion) that they could have passed earlier but chose not to, or deliberately delayed attempting to pass until late in the course.

      Tailgating would appear to be a pretty poor strategy anyway - it assumes that the one you're tailgating is sensing the road and safe speed better t
    • by Animats ( 122034 ) on Wednesday December 28, 2005 @01:23PM (#14353032) Homepage
      That's actually not true. There was no "tailgating". During the Grand Challenge, no vehicle was allowed to approach another while both vehicles were active. DARPA had the ability to remotely pause any vehicle. When vehicles got anywhere near each other, the trailing vehicle was paused to maintain separation. If the trailing vehicle was clearly faster, a pass was scheduled. All passing took place with one vehicle stationary and at a wide place in the road. Wired has this wrong.
      • Wired has more than that wrong. I was stunned at the statement:
        After posting perfect scores on his final undergraduate exams, he went on to graduate school at the University of Bonn, where he wrote a paper showing for the first time how a robotic cart, in motion, could balance a pole.
        This is simply an inverted pendulum experiment which has been in classical control theory for years. There is no way he did that for the first time in the early 1990s.
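
        (For reference, the textbook cart-pole model being alluded to, with the pole treated as a point mass m at distance l on a cart of mass M driven by force u, linearized about the upright position:

        \begin{aligned}
        (M + m)\,\ddot{x} + m l\,\ddot{\theta} &= u \\
        m l\,\ddot{x} + m l^{2}\,\ddot{\theta} - m g l\,\theta &= 0
        \end{aligned}

        A state-feedback law \( u = -K\,[x, \dot{x}, \theta, \dot{\theta}]^{\mathsf{T}} \) stabilizes the upright equilibrium; this was a standard classroom exercise long before the early 1990s.)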
    • Re:Spoiler alert! (Score:3, Informative)

      by scgops ( 598104 )
      In the Grand Challenge, cars didn't race against one another to try to be the first across the line. They raced to try to complete the course in the shortest elapsed time.

      According to the DARPA [darpa.mil] web site, Stanford won the race by finishing with an elapsed time of 6 hours and 53 minutes. They could still have won if they crossed the finish line after the CMU vehicle, as long as their elapsed time was shorter.

      CMU's Sandstorm finished in 7 hours and 4 minutes.
      CMU's H1ghlander finished in 7 hours and 14 minutes.
  • by Dolphinzilla ( 199489 ) on Wednesday December 28, 2005 @01:16PM (#14353000) Journal
    Basically, due to whatever circumstances (width of the road, start order, etc.), someone has to be in front and someone has to be behind. The fact that the Stanford vehicle was following another entry had nothing to do with why it was successful; in fact, one could argue it put the vehicle in some danger had the lead vehicle messed up, rolled, or crashed. It later passed said vehicle and went on to the win. The article makes no mention of a "tailgating strategy"; it does say Stanley was tailgating another vehicle for a bit before it passed it. Not sure how this is any more strategic than my drive to work in the morning. How about this winning strategy: "Don't hit the car in front of you." Don't know why this bugged me so much; it's actually a good read. I just don't know why this non-existent "fact" was so prominent in the lead-in. Sorry... not enough coffee today...
  • Finally! (Score:5, Funny)

    by Spy der Mann ( 805235 ) <spydermann.slash ... com minus distro> on Wednesday December 28, 2005 @01:20PM (#14353013) Homepage Journal
    Now all we need is a superstrong protective layer, a pursuit mode, and cool red lights on the front!
    • Now all we need is a superstrong protective layer, a pursuit mode, and cool red lights on the front!

      ...and Turbo Boost. Gotta have that Turbo Boost.

      "I'm sorry, Michael, we've already used Turbo Boost today and you know we're only allowed to use it once per episode."

  • Static problem (Score:5, Interesting)

    by kurtkilgor ( 99389 ) on Wednesday December 28, 2005 @01:24PM (#14353038)
    As a participant on another DARPA team (Cornell -- our site is down), I am skeptical as to whether the winners of the challenge would be able to drive in a real-world environment. In many ways the Grand Challenge was a toy problem, but this is not usually emphasized because they want to make it seem more dramatic.

    First of all, no other moving objects on the course. When a vehicle was about to pass another, the one in front was paused so that the passing vehicle could overtake it. At no time did the vehicles have to deal with changing conditions.

    Secondly, to my knowledge, there were no obstacles (which were promised) on the course. If someone knows differently, I'd like to hear about it. So we don't know to what extent obstacle avoidance is effective on those vehicles.

    Thirdly, daylight and clear weather is one thing, but nighttime, rain, snow, etc. would significantly degrade the data.

    Essentially the problem that the current vehicles solved was this:
    Given a set of waypoints and a "corridor" outside which you will never have to go (so far the problem can be solved only by 10cm-accuracy DGPS), use your other sensors to avoid obstacles by moving left or right within the corridor.

    Not very much like real world driving at all. And I'm not saying Stanford, CMU and the others didn't accomplish something big -- I'm just saying it's not what the Wired piece makes it out to be.
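
    (A toy illustration of that exact problem statement, with all names and numbers invented: follow the waypoints, and at each step pick the lateral offset inside the corridor that keeps the largest clearance from sensed obstacles.)

    # Toy corridor planner matching the framing above; not any team's code.
    def best_offset(obstacle_offsets, corridor_half_width, step=0.5):
        """obstacle_offsets: lateral positions (m) of sensed obstacles at the
        current lookahead distance. Returns the lateral offset to steer toward."""
        candidates = []
        offset = -corridor_half_width
        while offset <= corridor_half_width:
            clearance = min((abs(offset - o) for o in obstacle_offsets),
                            default=float("inf"))
            candidates.append((clearance, -abs(offset), offset))  # ties: prefer center
            offset += step
        return max(candidates)[2]  # maximize clearance, then centeredness

    print(best_offset([-1.0, 0.5], corridor_half_width=3.0))  # 3.0: hug the right edge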
    • Incorrect. According to the website (http://www.grandchallenge.org/ [grandchallenge.org]), the course was designed to include obstacles that had to be avoided. If I remember correctly, the obstacles included tank traps, beams and poles, and a couple of vehicles actually got hung up on them. There was a corridor, but it was not possible to finish the course by simply relying on GPS and keeping to the middle of the road. Finally, the tunnel prevented the use of GPS.

      In short, the Grand Challenge was indeed a grand challenge
      • Go and look at a map of the course. I did not see any snaking, curvy roads or U-turns.

        This was a great show and achievement, but none of the cars that ran would be allowed on a highway; there is a long road to go. Just as between the Wright Brothers' plane and the 747, there is a lot of development ahead.
      • Re:Static problem (Score:3, Interesting)

        by kurtkilgor ( 99389 )
        Well, we were all sure that there would be obstacles, including tank traps, but I am pretty sure they were not actually used on the course. If you look at the course map on the DARPA site, there are no obstacles mentioned, although there are a few tunnels and cattle guards (metal grates lying flat on the ground). We all concluded from the lack of obstacles that the DARPA people simply wanted to end the competition as soon as possible, so they made the course easier than anyone expected, thus guaranteeing a
        • Alright, for the life of me, I can't find the videos where they showed cars navigating the last section of the course - what the grandchallenge site referred to as the "bot-killer" section. I do remember things like some cars having trouble with hay bales and various poles and fences, but nothing definite. Specifically, I don't remember whether there was just a narrowing of the course, or whether there were sections where vehicles had to actually navigate around something.

          I would agree though that the terrain
    • In many ways the Grand Challenge was a toy problem, but this is not usually emphasized because they want to make it seem more dramatic.

      This year's GC course certainly seemed much easier than the previous course -- as you note, there was a lack of obstacles, except for cattle gates lying on the road and some relatively large obstacles like telephone poles and tunnels. Contrary to what some posters claim, there were a large number of sharp turns (and note that the Grand Challenge site doesn't show every singl
    • First of all, no other moving objects on the course.

      Seeing as this is a military application intended for desert use, any moving objects would generally move themselves out of the way.

      If not, chances are whoever stays put intends to stop and destroy the unmanned vehicle, which serves the purpose of saving human lives on the part of the US military.

      Not to mention that the vehicle would auto-report this back to HQ as hostile action, and a nearby Predator UAV might drop air support to encourage th
    • First: You have to start somewhere.

      Second: If all of the vehicles in your immediate vicinity are traveling at the same speed in the same direction, their velocity relative to each other is 0. You don't have to swerve to avoid a chair across the desk from you, do you? The same will apply to groups of vehicles traveling the highways under computer control.

      Third: Last time I drove down the freeway the only obstacles were other cars.

      If all the cars are computer controlled there will be little to avoid. Lanes
      • Third: Last time I drove down the freeway the only obstacles were other cars.

        Muwahahahah, you don't drive in Dallas, I see. Jettisoned concrete barriers, furniture, livestock, automobile parts, alligators (semi tires that separated from the rim). On a bad day you may see all of these on 635. Hey, there's nothing like the truck in front of you losing a pallet of bricks.

        You need a computer system that recognises these objects and then tells all the cars behind you about them.

      • If all of the vehicles in your immediate vicinity are traveling at the same speed in the same direction their velocity relative to eachother is 0. You dont have to swerve to avoid a chair across the desk from you, do you? The same will apply to groups of vehicles traveling the highways under computer control.

        True as postulated, but nowhere near real world conditions. For starters cars need to accelerate/decelerate along the axis of the road to merge, exit, and find openings to change lanes. Plus they have t
  • ... and they really did an amazing job. However, this is sponsored by the military.

    So what is it going to be used for? Suicide bomber cars?
    I wish more competitions (like F1 racing, for example) were government-sponsored, but aimed at discovering new advances that are directly applicable in the public sector.

    Sort of like community service, offering prizes to those who prove their technology and donate it as "public patent" for everyone to use.
    • So what is it going to be used for? Suicide bomber cars?

      Unlikely, as they would be too easy to intercept and destroy. What they really want to use them for is logistics. So much of the military's manpower is concentrated on logistics, that's where the real potential for saving money and saving lives is. What they really want is a convoy of trucks that can be programmed to go from Supply Base A to Tactical Operations Center B, then proceed to Staging Area C, without having to put human drivers in the veh

    • by Anonymous Coward
      Yeah, because things sponsored by the Department of Defense never have any value outside of wars. Like that ARPANET thing.
  • by necro81 ( 917438 ) on Wednesday December 28, 2005 @01:33PM (#14353083) Journal
    One result from the second Grand Challenge that lots of people harped on was the fact that Stanley, the winning (and hence fastest) entry, completed the course with an average speed of only about 19 mph. "19 mph?" quoted [slashdot.org] some of the naysayers. "We're supposed to get excited about that?"

    One thing that TFA points out, which wasn't mentioned many other places, is that the course rules stated a maximum vehicle velocity of 25 mph. Ideally, then, the fastest possible average speed for any entrant would likewise have been 25 mph. Stanley, at times, wanted to and could have gone faster than that, but held back due to the rule-imposed speed limit. In that context, 19 mph is actually quite good, considering the terrain would have forced it to slow down over bumps and turns.
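
    (Back-of-the-envelope, assuming the commonly cited course length of roughly 132 miles:

    \bar{v} \approx \frac{132\ \text{mi}}{6\ \text{h}\ 53\ \text{min}} = \frac{132\ \text{mi}}{6.88\ \text{h}} \approx 19.2\ \text{mph}

    which matches the ~19 mph average quoted above.)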
  • by try_anything ( 880404 ) on Wednesday December 28, 2005 @01:54PM (#14353212)
    The article presents the history of Stanley as a series of intellectual breakthroughs that I can understand, just like a lot of the pop science I read when I was a kid (and continue to read). I'm pretty sure each of these breakthroughs (such as learning from humans and assessing sensor data critically) is an idea that has been around in AI for a long time. The true story of Stanley is no doubt just as dramatic, but much harder for a layperson to appreciate.

    I think the first practical non-military application of autonomous cars will involve a ton of infrastructure. It won't be achieved solely by making the cars as advanced as possible, but by providing a lot of supplemental data from an array of stationary sensors (and processors) installed by a city or theme park that wants to be the first to have autonomous cars.

    Eventually human drivers will be banned, and the cars will communicate and cooperate with each other (much better than human drivers!). Traffic engineers will maneuver cars manually in rare instances, and computer-controlled cars will give them a wide berth. Safety will be improved, but so will traffic efficiency. Cars will become less personal, hence smaller and more efficient; crashes will become rarer and safer, hence cars will be smaller; computers will be better drivers, so cars will run faster and closer together. We can look forward to a period of ten to thirty years in which freeways don't get any wider.

    Continuing my utopian fantasy, if cars become autonomous and have less personal significance, many city dwellers will choose to use taxi services instead of owning their own cars. That means that most of the cars on the road at a given time can have a sensible capacity, rather than the maximum capacity the owner imagines that he or she might need. Per-capita energy use for personal transportation in the U.S. will drop to a fraction of the current level.

    It will happen someday, but maybe not in the next hundred years, depending on how stubborn we are. It would certainly be easier and more rewarding to start with helpful, high-infrastructure environments, but the military has such a massive capacity for funding research that we will probably solve the harder problem of hostile environments first. I.e., we'll have autonomous robot sharks with frickin' laser beams on their heads long before we have Johnny Cab.
  • by digitaldc ( 879047 ) * on Wednesday December 28, 2005 @02:07PM (#14353281)
    Its lasers are constantly teaching its video cameras how to identify drivable terrain, and it knows that it could accelerate more.

    Maybe one day it can use its lasers to eliminate obstacles, creating drivable terrain and enabling it to accelerate more.
  • Try the Scientific American article on the DARPA challenge: Innovations from a Robot Rally [sciam.com]
    It covers all the teams a bit and talks about some of the innovations used by the competing teams. It is a little light, but worth a minute of your time.
  • Every deer, cow, buffalo, etc... has a GPS unit strapped to its back.
  • As it says on p. 5:

    The SUV's hard drives boot up, its censors come to life, and it's ready to roll. Here's how Stanley works. - J.D.

    Not sure I wanna be censored by my car... Hopefully it won't have "auto-swearword-beepover" & DRM in the on-board audio system, too!

    To some 17-year-old who loses 10 cents on every typo he makes (somewhere in an obscure German town), though, this could be a wakeup call for coding more AI into spell-checking. ;-)
