DARPA Grand Challenge 2005 164
fishdan wrote to mention that the Darpa Grand Challenge is getting underway again. The qualifying rounds started yesterday. National media has picked up on the story, with pieces at the Washington Post and Seattle Times. From the Post: "The autonomous robotic vehicles began competing Wednesday in the first of a series of qualifying rounds at the California Speedway. Half will advance to the Oct. 8 starting line of the so-called Grand Challenge. The grueling, weeklong semifinals are designed to test the vehicles' ability to cover a roughly 2-mile stretch of the track without a human driver or remote control. Participants ranging from souped-up SUVs to military behemoths will be graded on how well they can self-drive on rough road, make sharp turns and avoid obstacles -- hay bales, trash cans, wrecked cars -- while relying on GPS navigation and sensors, radar, lasers and cameras that feed information to computers."
Finally... (Score:5, Insightful)
I for one am very happy to see this technology advancing. It's not gonna take much intelligence to make an autonomous driver better than most human drivers.
This is very cool (Score:5, Insightful)
I've seen some hobby roboticists building smaller robots for a scaled down version of this that are just amazing. Even on smaller scales, this is pushing technology. The good part? Much of the hobby stuff is pretty much shared in an OSS kind of way. That means that the technology behind all this will not belong entirely to the military, and will soon find its way into our vehicles and homes.... THAT is very cool!
Re:Only in America could it say *from* SUV :-) (Score:4, Insightful)
This is not true AI (Score:5, Insightful)
None of the competitors are doing true AI. They are not using learning systems as far as I know. This is just good old fashioned programming where the designers/programmers try to think of all possibilities in advance. I don't see how this contest is advancing our understanding of intelligence. I think that the qualifying rules should have been more stringent and should have prohibited non-learning systems. Otherwise it's the same old traditional stuff.
Re:The amazing failures of AI? (Score:4, Insightful)
Driving across 150 miles of roadless, obstacle-ridden desert is not something most humans do, or even attempt. Don't be so sure that "even severely IQ handicapped humans" could handle it routinely.
Will we need Cray-like computing power to handle the sensory input quickly enough to work a steering wheel, brake and gas pedal?
Yes, because being able to take two-dimensional sensory input and use it to construct an accurate three-dimensional representation of the local surroundings, and then plan a viable route through those surroundings, is not a trivial task. People do it pretty well (at least when on foot), but then they've had billions of years of development time put into their massively parallel computational hardware. Computers can do it too, and eventually that "Cray-like computing power" will be squeezed down into smaller boxes, but it isn't an easy problem.
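To make that concrete: one standard way to turn two-dimensional input into depth is stereo triangulation. A minimal sketch, with made-up camera parameters -- the expensive part in practice is matching millions of pixel pairs per frame, which is where the "Cray-like" power goes, not this formula:

```python
# The core of stereo depth recovery: triangulation, Z = f * B / d.
# The focal length and baseline are invented camera parameters for
# illustration only.
def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.3):
    """Depth of a matched feature: focal length (px) times camera
    baseline (m), divided by the pixel disparity between the two views."""
    return focal_px * baseline_m / disparity_px

far = depth_from_disparity(10.0)    # small disparity -> far away (21 m)
near = depth_from_disparity(100.0)  # large disparity -> close (2.1 m)
```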
What about negative space? (Score:4, Insightful)
It's easier to detect something that is there, like a bale of hay, by radar, but what about something that isn't there (not an object sticking out of the ground in the y+ axis, but a hole or drop-off)? If they can't handle that, I can see a lot of Wile E. Coyote incidents with these cars flying off cliffs.
(**poof**)
Autonomous cars and traffic jams (Score:5, Insightful)
I for one am very happy to see this technology advancing. It's not gonna take much intelligence to make an autonomous driver better than most human drivers.
The benefits of having cars that drive themselves will be enormous. First, these cars can be programmed to drive in a manner that conserves gasoline (e.g., no jack-rabbit starts, limit speeds to 55 mph, time their accelerations between stoplights so they don't have to come to a complete stop at every one). Second, cars that drive themselves in a rational manner -- instead of the emotional, irrational manner that people drive them -- can significantly reduce traffic jams. There is an insightful analysis of traffic jams at this page [amasci.com] which explains that jams are largely the result of people not letting other people merge into their lane coupled with the relatively slow reaction time of humans. Cars that can synchronize their motion in relation to nearby traffic could make traffic jams a thing of the past.
Not to mention that if the car drives itself, I can read slashdot on the commute home (or watch Natalie Portman movies).
GMD
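The reaction-time point is easy to demonstrate with a toy car-following simulation (every number and the follower rule below are invented for illustration, not a real traffic model): followers acting on stale gap information pile into a braking leader, while instant-reacting followers keep a safe gap.

```python
# Toy single-lane column: each follower picks a speed from the gap to
# the car ahead, but observes that gap with a reaction delay. With zero
# delay the column keeps a positive gap through a braking event; with a
# 3-step delay the followers overrun the leader (negative gap).
VMAX, SAFE, STEPS = 5.0, 2.0, 40

def simulate(delay):
    """Leader brakes for steps 10-19; return the minimum gap seen."""
    pos = [0.0, -10.0, -20.0, -30.0, -40.0]  # leader first
    gap_hist = []
    min_gap = float("inf")
    for t in range(STEPS):
        gaps = [pos[i - 1] - pos[i] for i in range(1, len(pos))]
        gap_hist.append(gaps)
        min_gap = min(min_gap, min(gaps))
        observed = gap_hist[max(0, t - delay)]  # stale gap information
        leader_v = 0.0 if 10 <= t < 20 else VMAX
        new_pos = [pos[0] + leader_v]
        for i in range(1, len(pos)):
            v = max(0.0, min(VMAX, observed[i - 1] - SAFE))
            new_pos.append(pos[i] + v)
        pos = new_pos
    return min_gap

fast = simulate(0)  # instant reactions: gaps stay positive
slow = simulate(3)  # stale information: the column piles up
```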
Re:This is not true AI (Score:2, Insightful)
Arguing about the definition of AI is useless except as an exercise for philosophers. The definition of AI isn't nearly as interesting as the GOAL of AI: namely, to make artifacts that are useful, that perform functions that, if done by a human, would be considered intelligent. The pragmatic goal of this research is interesting, but the definition of the word 'Intelligence' and whether it applies to a man-made object is not.
So let's look at this practically. We can drive a car. We can't get a computer to drive a car very well. Learning how to make a computer drive a car could be insanely great (apologies to Steve Jobs). And right now, making a vehicle that can pilot itself over a known (but non-trivial) course is pretty difficult. Thus the DARPA challenge. Once this challenge has been met, and we understand that problem space, then we can move along. Until then, this challenge is not the 'same old traditional stuff'
Re:This is not true AI (Score:2, Insightful)
On Judgement Day, you'll feel sorry you wrote that.
Joke aside, what's the difference between a learning system and a non-learning system? Aren't the DARPA entries already immensely more "intelligent" than factory-floor robots operating in a predictable environment?
Is a Bayesian algorithm a learning system? Is it AI?
Does AI have to be some kind of automagic algorithm that we can't analyze with the concepts of computer science?
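For what it's worth, a Bayesian update really is just analyzable arithmetic. Here is the textbook rule applied to repeated sensor detections of the same map cell -- the prior and likelihood numbers are made up for illustration:

```python
# Bayes' rule on repeated detections of one grid cell. A 0.1 prior,
# a 0.9 hit rate on a real obstacle, and a 0.2 false-alarm rate on
# clear ground are invented numbers; the point is that this "learning"
# is ordinary, fully inspectable arithmetic, nothing automagic.
def posterior(prior, p_det_obstacle, p_det_clear):
    """P(obstacle | one detection), by Bayes' rule."""
    num = p_det_obstacle * prior
    return num / (num + p_det_clear * (1 - prior))

belief = 0.1
for _ in range(3):  # three consecutive detections on the same cell
    belief = posterior(belief, 0.9, 0.2)
# belief climbs from 0.1 to about 0.91
```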
Re:This is not true AI (Score:4, Insightful)
Re:If there's one thing worse (Score:2, Insightful)
*eyeroll* Oh, dear goodness, that is one of the most ridiculous +4 insightful posts I've ever read.
Right, because using an SUV chassis for a project that advances our knowledge and technological capabilities in the Computer Science fields of robotics and AI is such a major problem in the US. Scientific research... bah! It's a perfect example of conspicuous consumerism! After all, using an SUV for its original design specification -- offroad travel -- to advance the knowledge of the human race is definitely the cause of our dependence on fossil fuels.
After all, our oil usage has NOTHING to do with aircraft, ships, pleasure craft, air conditioning our houses, heating our pools, running our 1000w gaming rigs, or the creation of the countless disposable plastic objects you use each day. No, simply getting rid of SUVs, especially SUVs used in scientific research, will unilaterally free us from fossil fuel dependence!
(end sarcastic rant)
Seriously, DARPA is stimulating AI & robotics research on a pragmatic problem. I can't even begin to fathom your rejection of this, MERELY because they used the most pragmatic tool -- an offroad vehicle -- for the problem -- offroad travel.
Re:This is not true AI (Score:5, Insightful)
Since visual perception and interpretation is often considered an AI related field of research, I'd say you're wrong.
But, more importantly, you still don't get it. The GC's goal isn't to encourage progress in AI -- it's to develop an autonomous supply vehicle. Do you have any idea how much of the military is involved purely in transport/resupply?
The US defence department would sell its soul for a truly intelligent system and that's what we should be after.
Funny. That contradicts a rather large number of public statements from the DoD. And privately I suspect the more sane individuals don't want it either -- we've seen more than enough SF flicks that go into the potential issues with such a thing.
include big-city driving in the challenge
Yes, and we should make all toddlers learn to run before walking or crawling.
It's called incremental progress -- right now the DoD could benefit immensely from a fully autonomous transport vehicle that simply goes between depots in low traffic but highly rugged environments. After that you could look at highway driving (which is already being worked on by all the major automobile companies) and then maybe high-traffic conditions. But that last one is of relatively little use to the DoD, and DARPA is only mandated for Defense related projects.
As it stands, all we're gonna get is clever engineering which we already know we're good at, but not good enough.
When it comes down to it, it's all just "clever engineering" -- especially in retrospect. Most progress is made in small steps, not giant leaps.
Re:What about negative space? (Score:3, Insightful)
What do you think that bale of hay is sitting on? Radars receive ground bounces all the time - even in airborne applications. Usually radar people call that "clutter", since if you're looking for airborne targets it's information you don't want - here it's information you *do* want. Depending on how the radar is mounted, it could create a ground map and trigger alarms when the ground return is either really close or really far away. It really comes down to a sensor fusion problem - by using the combination of radar, lidar, laser range finder (like another poster replied), vision, etc. one could determine that there's a large obstruction in the way - either a "positive" one (like your bale of hay) or a "negative" one (like your cliff).
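The ground-return idea fits in a few lines. This sketch compares a single look-down range reading against the return you'd expect from flat ground; the mounting geometry and tolerance are invented numbers, not any team's configuration:

```python
import math

def classify_return(measured_range_m, mount_height_m=1.5,
                    depression_deg=15.0, tol_m=0.5):
    """Compare a look-down range reading with the flat-ground
    expectation. Shorter than expected -> positive obstacle (hay bale);
    much longer -> negative obstacle (drop-off); near expected -> clear
    ground. Sensor height, look-down angle, and tolerance are
    illustrative assumptions."""
    expected = mount_height_m / math.sin(math.radians(depression_deg))
    if measured_range_m > expected + tol_m:
        return "negative"
    if measured_range_m < expected - tol_m:
        return "positive"
    return "ground"
```

With these numbers the flat-ground return is about 5.8 m, so a 3 m return flags a positive obstacle and a 20 m return flags a drop-off.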
The problem isn't in detecting the drop off - it's in figuring out what to do when you see it. A vehicle that comes to the edge of the Grand Canyon is going to have to go a long way to drive around it. This isn't a problem with the sensor, it's a route finding problem. Heck, your sudden drop example is an easier problem - it's probably more difficult to realize you're descending gently into a canyon that you can't get out of on the other side (again, this is a route finding problem - your route finding software has to be smart enough to avoid this obstacle in the first place, given a map of the area).
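Route finding around a known obstacle, as opposed to detecting it, can be sketched as a search over a map grid. Here is a breadth-first search that detours around a "canyon wall" of blocked cells -- the grid, the wall, and the start/goal cells are all illustrative:

```python
# Shortest-path route finding on an occupancy grid (True = blocked),
# 4-connected breadth-first search. BFS returns a shortest cell path,
# so the planner detours around the wall rather than crossing it.
from collections import deque

def plan(grid, start, goal):
    """Return a shortest path of free cells, or None if unreachable."""
    size = len(grid)
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if (0 <= nr < size and 0 <= nc < size
                    and not grid[nr][nc] and step not in came_from):
                came_from[step] = cur
                frontier.append(step)
    return None

grid = [[False] * 10 for _ in range(10)]
for r in range(3):      # a "canyon wall" blocking column 1, rows 0-2
    grid[r][1] = True
path = plan(grid, (0, 0), (0, 2))  # must detour down around the wall
```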
Mod Parent Down (Score:4, Insightful)
AI doesn't mean "Learning", it means Artificial Intelligence. Said poster is probably at a stage in his life where his visual system is relatively stable from day to day. Whether it got there by being hard wired by his designer or through learning is irrelevant. His intelligent behavior (barring perhaps said post) on a moment to moment basis is the result of his pre-wired system, not some kind of fabulously amazing learning algorithm.
Some of the engineers attacking this problem are using machine learning, others are using pre-fab algorithms, most are using a combination of both. They're all true AI by any reasonable definition.
The AC below explained it well. (Score:2, Insightful)