
DARPA Grand Challenge 2005

fishdan wrote to mention that the DARPA Grand Challenge is getting underway again. The qualifying rounds started yesterday. National media has picked up on the story, with pieces at the Washington Post and Seattle Times. From the Post: "The autonomous robotic vehicles began competing Wednesday in the first of a series of qualifying rounds at the California Speedway. Half will advance to the Oct. 8 starting line of the so-called Grand Challenge. The grueling, weeklong semifinals are designed to test the vehicles' ability to cover a roughly 2-mile stretch of the track without a human driver or remote control. Participants ranging from souped-up SUVs to military behemoths will be graded on how well they can self-drive on rough road, make sharp turns and avoid obstacles - hay bales, trash cans, wrecked cars - while relying on GPS navigation and sensors, radar, lasers and cameras that feed information to computers."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Finally... (Score:5, Insightful)

    by evil agent ( 918566 ) on Thursday September 29, 2005 @11:59AM (#13676776)
    ...we're putting the "auto" into automobile.

    I for one am very happy to see this technology advancing. It's not gonna take much intelligence to make an autonomous driver better than most human drivers.

  • This is very cool (Score:5, Insightful)

    by zappepcs ( 820751 ) on Thursday September 29, 2005 @12:06PM (#13676840) Journal
    The software and use of sensors, as well as the sensors themselves are being driven to places that they probably wouldn't have gone if not for this contest. Sure, the 2 million dollars is a big-ish prize, but bragging rights are bigger.

    I've seen some hobby roboticists building smaller robots for a scaled-down version of this that are just amazing. Even on smaller scales, this is pushing technology. The good part? Much of the hobby stuff is pretty much shared in an OSS kind of way. That means that the technology behind all this will not belong entirely to the military, and will soon find its way into our vehicles and homes.... THAT is very cool!
  • by millahtime ( 710421 ) on Thursday September 29, 2005 @12:18PM (#13676968) Homepage Journal
    The challenge takes place in off-road conditions. Existing vehicles like SUVs can handle the conditions where Legos most likely can't. They didn't pick SUVs just to pick SUVs. They picked them because they are vehicles that can handle the terrain.
  • by MOBE2001 ( 263700 ) on Thursday September 29, 2005 @12:23PM (#13677018) Homepage Journal
    Did AI research implode for lack of funding, or is it really that hard?

    None of the competitors are doing true AI. They are not using learning systems as far as I know. This is just good old-fashioned programming where the designers/programmers try to think of all the possibilities in advance. I don't see how this contest is advancing our understanding of intelligence. I think that the qualifying rules should have been more stringent and should have prohibited non-learning systems. Otherwise it's the same old traditional stuff.
  • by Jeremi ( 14640 ) on Thursday September 29, 2005 @12:23PM (#13677020) Homepage
    a problem that even severely IQ handicapped humans handle routinely while balancing a McMeal on their knees and keeping up a cell phone conversation


    Driving across 150 miles of roadless, obstacle-ridden desert is not something most humans do, or even attempt. Don't be so sure that "even severely IQ handicapped humans" could handle it routinely.


    Will we need Cray-like computing power to handle the sensory input quickly enough to work a steering wheel, brake and gas pedal?


    Yes, because being able to take two-dimensional sensory input and use it to construct an accurate three-dimensional representation of the local surroundings, and then plan a viable route through those surroundings, is not a trivial task. People do it pretty well (at least when on foot), but then they've had billions of years of development time put into their massively parallel computational hardware. Computers can do it too, and eventually that "Cray-like computing power" will be squeezed down into smaller boxes, but it isn't an easy problem.
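The 2-D-to-3-D reconstruction step mentioned here rests on stereo triangulation; a minimal sketch of the core relation, with all camera numbers invented for illustration:

```python
# A rectified stereo pair gives a per-pixel disparity d (horizontal
# shift between the left and right views). Depth follows from similar
# triangles: Z = f * B / d, where f is the focal length in pixels and
# B is the baseline between the two cameras in meters.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in meters for one pixel; disparity must be positive."""
    if disparity_px <= 0:
        raise ValueError("point at infinity or bad stereo match")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 0.5 m baseline.
# A 10 px disparity puts the obstacle 35 m ahead:
print(depth_from_disparity(10, 700, 0.5))  # 35.0
```

The hard part, of course, is computing reliable disparities in the first place; doing that per pixel, per frame, in real time is where the "Cray-like" budget goes.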

  • by Druox ( 911165 ) on Thursday September 29, 2005 @12:37PM (#13677160)
    Have any of the contestants overcome the obstacle of negative space (i.e. a cliff, a sudden drop, a crater)?
    It's easier to detect something that is there, like a bale of hay, by radar, but what about something that isn't there (isn't an object sticking out of the ground, in the +y axis)? If not, I can see a lot of Wile E. Coyote incidents with these cars flying off cliffs.
    (**poof**)
  • by GuyMannDude ( 574364 ) on Thursday September 29, 2005 @01:04PM (#13677439) Journal

    I for one am very happy to see this technology advancing. It's not gonna take much intelligence to make an autonomous driver better than most human drivers.

    The benefits of having cars that drive themselves will be enormous. First, these cars can be programmed to drive in a manner that conserves gasoline (e.g., no jack-rabbit starts, limit speeds to 55 mph, time their accelerations between stoplights so they don't have to come to a complete stop at every one). Second, cars that drive themselves in a rational manner -- instead of the emotional, irrational manner that people drive them -- can significantly reduce traffic jams. There is an insightful analysis of traffic jams at this page [amasci.com] which explains that jams are largely the result of people not letting other people merge into their lane, coupled with the relatively slow reaction time of humans. Cars that can synchronize their motion with nearby traffic could make traffic jams a thing of the past.

    Not to mention that if the car drives itself, I can read slashdot on the commute home (or watch Natalie Portman movies).

    GMD
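The reaction-time mechanism behind those jams can be sketched with a toy follow-the-leader model; all speeds, delays, and car counts here are invented for illustration:

```python
# Each driver copies the speed of the car ahead, but only after a
# reaction delay. A brief tap of the brakes by car 0 propagates
# backwards through the pack and is still being felt many cars (and
# many seconds) later. Speeds in m/s; one step = one second.

N_CARS, STEPS, DELAY = 10, 30, 2    # 2-second human reaction time

speeds = [[30.0] * N_CARS]           # history: speeds[t][i]
for t in range(1, STEPS):
    row = speeds[-1][:]
    # lead car brakes briefly at t = 1..3, then resumes cruising
    row[0] = 10.0 if 1 <= t <= 3 else 30.0
    for i in range(1, N_CARS):
        # follower reacts to its leader's speed DELAY seconds ago
        row[i] = speeds[max(0, t - DELAY)][i - 1]
    speeds.append(row)

# Nineteen seconds after car 0 has already recovered, the slowdown
# wave is still moving through the back of the pack:
print(min(speeds[20]))  # 10.0
```

A computer-driven car with near-zero reaction delay would make `DELAY` tiny, and the wave would die out instead of propagating; that is the synchronization argument in miniature.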

  • by eclectus ( 209883 ) on Thursday September 29, 2005 @01:19PM (#13677596) Homepage
    I hate getting sucked in by a troll like this, but... Please, can we quit having the argument of what is the one true AI? 30 years ago, making computers understand a man-made language of written words was True AI (TM). Now it's called compiler design. Later on, True AI was making expert systems that mimicked the behaviour of experts. Now it's called rules-based systems. Let's face it, many people want to define AI to be "that which we humans can do that computers can't", which is an ever-moving definition used by critics to denounce the AI community's discoveries as insufficient, and used by AI researchers to come up with new research projects.

    Arguing about the definition of AI is useless except as an exercise for philosophers. The definition of AI isn't nearly as interesting as the GOAL of AI: namely, to make artifacts that are useful, that perform functions that, if done by a human, would be considered intelligent. The pragmatic goal of this research is interesting, but the definition of the word 'Intelligence' and whether it applies to a man-made object is not.

    So let's look at this practically. We can drive a car. We can't get a computer to drive a car very well. Learning how to make a computer drive a car could be insanely great (apologies to Steve Jobs). And right now, making a vehicle that can pilot itself over a known (but non-trivial) course is pretty difficult. Thus the DARPA challenge. Once this challenge has been met, and we understand that problem space, then we can move along. Until then, this challenge is not the 'same old traditional stuff'.
  • by acaspis ( 799831 ) on Thursday September 29, 2005 @01:23PM (#13677654)
    the qualifying rules (...) should have prohibited non-learning systems.

    On Judgement Day, you'll feel sorry you wrote that.

    Joking aside, what's the difference between a learning system and a non-learning system? Aren't the DARPA entries already immensely more "intelligent" than factory-floor robots operating in a predictable environment?
    Is a Bayesian algorithm a learning system? Is it AI?
    Does AI have to be some kind of automagic algorithm that we can't analyze with the concepts of computer science?
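As a minimal take on the "is a Bayesian algorithm a learning system?" question: a Beta-Bernoulli update does change its parameters with experience, which is learning in at least the weak sense. The terrain scenario and numbers below are hypothetical:

```python
# Belief about "is this patch of terrain passable?", maintained as a
# Beta(alpha, beta) distribution and updated after each observation.
# The posterior mean alpha / (alpha + beta) is the current estimate.

def update(alpha, beta, passable):
    """Conjugate Beta-Bernoulli update after one observation."""
    return (alpha + 1, beta) if passable else (alpha, beta + 1)

alpha, beta = 1, 1                  # flat prior: no idea either way
for obs in [True, True, False, True]:
    alpha, beta = update(alpha, beta, obs)

print(alpha / (alpha + beta))       # posterior mean 4/6, about 0.667
```

Whether three lines of counting constitute "true AI" is exactly the definitional argument the thread is having.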

  • by Spy Hunter ( 317220 ) * on Thursday September 29, 2005 @01:28PM (#13677696) Journal
    The goal of the Grand Challenge is to produce useful robots, not "true AI". The designers of the contest realize that's a badly-defined goal that is unlikely to be reached in the near future (after all, people have been failing for decades). Instead they require results and don't specify the methods. If "true AI" is the best way to achieve results, then the people who use it will win. If it is not, then requiring it would be counterproductive.
  • by AnObfuscator ( 812343 ) <onering@phys.uf[ ]du ['l.e' in gap]> on Thursday September 29, 2005 @02:03PM (#13677995) Homepage
    If there's anything worse than a soccer mom driving her only child in an SUV, it's an SUV driving no one.

    *eyeroll* Oh, dear goodness, that is one of the most ridiculous +4 insightful posts I've ever read.

    Right, because using an SUV chassis for a project that advances our knowledge and technological capabilities in the Computer Science fields of robotics and AI is such a major problem in the US. Scientific research... bah! It's a perfect example of conspicuous consumerism! After all, using an SUV for its original design specification -- offroad travel -- to advance the knowledge of the human race is definitely the cause of our dependence on fossil fuels.

    After all, our oil usage has NOTHING to do with aircraft, ships, pleasure craft, air conditioning our houses, heating our pools, running our 1000w gaming rigs, or the creation of the countless disposable plastic objects you use each day. No, simply getting rid of SUVs, especially SUVs used in scientific research, will unilaterally free us from fossil fuel dependence!

    (end sarcastic rant)

    Seriously, DARPA is stimulating AI & robotics research on a pragmatic problem. I can't even begin to fathom your rejection of this, MERELY because they used the most pragmatic tool -- an offroad vehicle -- for the problem -- offroad travel.

  • by Zathrus ( 232140 ) on Thursday September 29, 2005 @02:25PM (#13678240) Homepage
    By not requiring learning systems, DARPA is not encouraging progress in AI

    Since visual perception and interpretation is often considered an AI related field of research, I'd say you're wrong.

    But, more importantly, you still don't get it. The GC's goal isn't to encourage progress in AI -- it's to develop an autonomous supply vehicle. Do you have any idea how much of the military is involved purely in transport/resupply?

    The US defence department would sell its soul for a truly intelligent system and that's what we should be after.

    Funny. That contradicts a rather large number of public statements from the DoD. And privately I suspect the more sane individuals don't want it either -- we've seen more than enough SF flicks that go into the potential issues with such a thing.

    include big-city driving in the challenge

    Yes, and we should make all toddlers learn to run before walking or crawling.

    It's called incremental progress -- right now the DoD could benefit immensely from a fully autonomous transport vehicle that simply goes between depots in low traffic but highly rugged environments. After that you could look at highway driving (which is already being worked on by all the major automobile companies) and then maybe high-traffic conditions. But that last one is of relatively little use to the DoD, and DARPA is only mandated for Defense related projects.

    As it stands, all we're gonna get is clever engineering which we already know we're good at, but not good enough.

    When it comes down to it, it's all just "clever engineering" -- especially in retrospect. Most progress is made in small steps, not giant leaps.
  • by Radar Guy ( 827922 ) on Thursday September 29, 2005 @02:49PM (#13678512)
    Ok, I wasn't going to post, but this got modded up as "Insightful" and I couldn't resist - this has to be one of the dumbest posts in this thread...

    What do you think that bale of hay is sitting on? Radars receive ground bounces all the time - even in airborne applications. Usually radar people call that "clutter", since if you're looking for airborne targets it's information you don't want - here it's information you *do* want. Depending on how the radar is mounted, it could create a ground map and trigger alarms when the ground return is either really close or really far away. It really comes down to a sensor fusion problem - by using the combination of radar, lidar, laser range finder (like another poster replied), vision, etc. one could determine that there's a large obstruction in the way - either a "positive" one (like your bale of hay) or a "negative" one (like your cliff).
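The ground-return idea above can be caricatured as comparing measured ranges against a flat-ground prediction; a sketch with invented numbers:

```python
# A forward-looking ranging sensor sweeping down the road expects
# returns that grow smoothly with look-ahead distance on flat ground.
# A return much SHORTER than expected means something sticking up
# (positive obstacle); much LONGER, or missing entirely, means the
# beam flew past where the ground should be (negative obstacle).

def classify(expected_m, measured_m, tol_m=2.0):
    """Label one beam against the flat-ground range prediction."""
    if measured_m is None or measured_m > expected_m + tol_m:
        return "negative"   # drop-off: no ground where we expected it
    if measured_m < expected_m - tol_m:
        return "positive"   # something tall between us and the ground
    return "ground"

expected = [5.0, 10.0, 15.0, 20.0]    # flat-ground prediction
measured = [5.1, 9.8, 11.0, None]     # clear, clear, hay bale, cliff

print([classify(e, m) for e, m in zip(expected, measured)])
# ['ground', 'ground', 'positive', 'negative']
```

A real system would fuse many beams and sensors and model terrain slope rather than assume flat ground, but the shorter/longer-than-expected asymmetry is the core of it.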

    The problem isn't in detecting the drop-off - it's in figuring out what to do when you see it. A vehicle that comes to the edge of the Grand Canyon is going to have to go a long way to drive around it. This isn't a problem with the sensor, it's a route-finding problem. Heck, your sudden-drop example is an easier problem - it's probably more difficult to realize you're descending gently into a canyon that you can't get out of on the other side (again, this is a route-finding problem - your route-finding software has to be smart enough to avoid this obstacle in the first place, given a map of the area).
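That route-finding point can be illustrated on a toy grid; plain breadth-first search stands in here for whatever planner a real entry would use:

```python
# Find a path from S to G on a map where '#' cells are detected
# drop-offs. The avoidance happens entirely in the planner: the
# sensor's only job was to mark the '#' cells.
from collections import deque

grid = ["S..#.",
        "..##.",
        "...#.",
        "....G"]

def bfs(grid):
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if grid[r][c] == "S")
    seen, q = {start}, deque([(start, 0)])
    while q:
        (r, c), d = q.popleft()
        if grid[r][c] == "G":
            return d                  # steps taken, cliffs avoided
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                q.append(((nr, nc), d + 1))
    return None                       # the canyon blocks every route

print(bfs(grid))  # 7
```

The gentle-descent trap the parent describes is the `return None` case writ large: only a planner with a map of the whole area can see that a locally drivable route is globally a dead end.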
  • Mod Parent Down (Score:4, Insightful)

    by Illserve ( 56215 ) on Thursday September 29, 2005 @03:48PM (#13679077)
    He doesn't know the meaning of the phrase AI.

    AI doesn't mean "Learning", it means Artificial Intelligence. Said poster is probably at a stage in his life where his visual system is relatively stable from day to day. Whether it got there by being hard-wired by his designer or through learning is irrelevant. His intelligent behavior (barring perhaps said post) on a moment-to-moment basis is the result of his pre-wired system, not some kind of fabulously amazing learning algorithm.

    Some of the engineers attacking this problem are using machine learning, others are using pre-fab algorithms, and most are using a combination of both. They're all true AI by any stretch of the definition.

  • by RoverDaddy ( 869116 ) on Thursday September 29, 2005 @05:20PM (#13679934) Homepage
    The safety expectations for a self-driving car will be exponentially greater than what we demand of our own stupid selves. Even if self-driving cars kill people in only 5% of the situations where a human driver would, it will be too much liability for the market to bear. I'm not saying it makes sense. We accept (out of necessity) that human drivers are fallible, and expect profound remorse (as well as prison time) if they make a mistake that takes a life. If a machine kills, it can't be remorseful and we can't punish it. Human nature will push us to -find- somebody to punish, and out of fear and frustration, the punishment will be extreme.
