Robotics Transportation

Stanford's "Autonomous" Helicopters Learn 90

An anonymous reader writes "Stanford computer scientists have developed an artificial intelligence system that enables robotic helicopters to teach themselves to fly difficult stunts by 'watching' other helicopters perform the same maneuvers. The result is an autonomous helicopter that can perform a complete airshow of complex tricks on its own. The stunts are 'by far the most difficult aerobatic maneuvers flown by any computer controlled helicopter,' said Andrew Ng, the professor directing the research of graduate students Pieter Abbeel, Adam Coates, Timothy Hunter and Morgan Quigley. The dazzling airshow is an important demonstration of 'apprenticeship learning,' in which robots learn by observing an expert, rather than by having software engineers peck away at their keyboards in an attempt to write instructions from scratch." The title of the linked article uses the term "autonomous," but that's somewhat misleading. The copters can't fly on their own, but rather can duplicate complex maneuvers learned from a human pilot.
  • by Haoie ( 1277294 ) on Tuesday September 02, 2008 @09:08PM (#24853867)

    Remember those robot gunships in the Terminator movies? Yes, they shot lasers and whatnot. Those things had humble beginnings.

  • by exley ( 221867 )

    Oh forget it.

  • Autonomous (Score:5, Insightful)

    by Anonymous Coward on Tuesday September 02, 2008 @09:18PM (#24853955)

    They still are autonomous. Would you call people non-autonomous because they learn from other people?

    • The key thing is how robust they are against changes in environment when performing the learned tricks. If they continue to learn from different environmental conditions when they perform the tricks, then I would say they are learning - but if they are just replaying the same run as the expert, with some unchanging, fixed level of robustness, then they are not really learning.
      • Re:Autonomous (Score:5, Informative)

        by pete-classic ( 75983 ) <hutnick@gmail.com> on Tuesday September 02, 2008 @09:46PM (#24854209) Homepage Journal

        Your blithe refusal to even acknowledge the article is an inspiration, sir.

        It might seem that an autonomous helicopter could fly stunts by simply replaying the exact finger movements of an expert pilot using the joy sticks on the helicopter's remote controller. That approach, however, is doomed to failure because of uncontrollable variables such as gusting winds.

        -Peter

        • by Hal_Porter ( 817932 ) on Tuesday September 02, 2008 @10:34PM (#24854591)

          Yeah, they need to be able to learn an abstract concept and apply it to the conditions around them rather than learning a fixed sequence of actions and repeating it blindly. In Slashdot terms: rather than posting "I for one welcome our robot overlords" to every article and getting modded "-1, Die in Fire", they learn to make a slightly original joke tailored to the article and get modded up.

          Oh God, what have I become.

          • They aren't just learning a fixed sequence. I don't know the details of the project, but I own an RC heli and they are twitchy and unstable things, so there is going to be some dynamic process going on that stops it from plowing into the dirt while trying to "play back" something. Just how integrated the systems are, I'm not sure.

        • Did *you* read the article? My question was not answered in it. It says that the exact approach is doomed to failure, but then what approach was it? A fixed-robustness approach or an adaptive approach? The difference between the two is the difference between human learning and standard AI techniques (i.e. nothing new).
          • Did *you* even read your original post?

            I can do this all day.

            But seriously, it seems like the part I quoted goes to your point. Maybe I don't understand the article, or your point, or both.

            Anyway, the article seems to imply that there is some underlying "flying" software, and that the computer just "learns" the stunts from the expert. Said another way, it seems like the computer was programmed to fly the helicopter "the old fashioned way", and the new thing is that the computer is inferring what the stunt

  • by Anonymous Coward

    I, for one, welcome our new supersonic nuclear cyborg pit-bull terrier guard dogs.

  • Better hope that one of the silly things doesn't catch sight of a couple of dogs having at it when it's in "learning mode". That could lead to some very interesting flying stunts indeed.

    • They don't use eyeballs to watch and learn, they apparently receive the r/c commands being sent to the 'teaching' helicopter and the position data from its onboard sensors.

  • The dazzling airshow is an important demonstration of "apprenticeship learning," in which robots learn by observing an expert, rather than by having software engineers peck away at their keyboards in an attempt to write instructions from scratch.

    This sounds like any automated testing tool I've ever used. I can either take the time to "peck away" at my keyboard and script the task by hand. Or I can put it into "record mode" and have it record my mouse clicks / keystrokes. This sounds like the sa
    • by Czyl ( 696277 ) on Tuesday September 02, 2008 @09:49PM (#24854243)
      It's actually considerably more difficult. Unlike your computer, the helicopter encounters different environmental conditions each time it flies, so that just blindly recording the controller inputs and replaying them will cause the helicopter to crash. The trick in apprenticeship learning is to learn the flying model used by the pilot, not just to record and replay a macro.
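      A toy sketch of the idea (my own illustration, not the Stanford team's actual pipeline; every name below is made up): estimate the trajectory the pilot *meant* to fly from several noisy demonstrations, then track that estimate with a feedback controller instead of replaying stick inputs.

```python
import numpy as np

def estimate_intended_trajectory(demos):
    """Estimate the trajectory the pilot was trying to fly.

    demos: list of arrays, each of shape (T, 3) -- x/y/z positions from one
    demonstration, already resampled to a common length T.  A trivial
    estimator: average the demonstrations point by point.  (The real work is
    aligning demos of different lengths and learning a dynamics model; this
    sketch skips all of that.)
    """
    return np.mean(np.stack(demos), axis=0)

def track(target, state, gain=0.8):
    """Toy feedback step: command a correction toward the target point, so a
    gust that pushes the heli off course gets corrected -- something blind
    playback of recorded stick inputs can never do."""
    return gain * (target - state)

# Hypothetical usage with three noisy demos of the same 100-step maneuver:
demos = [np.cumsum(np.random.randn(100, 3) * 0.05, axis=0) for _ in range(3)]
ideal = estimate_intended_trajectory(demos)
command = track(ideal[10], state=np.array([0.1, -0.2, 1.0]))
```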
      • by villy ( 199943 ) on Tuesday September 02, 2008 @11:16PM (#24854887)

        Exactly - that's where the AI comes in. It looks at numerous attempts at various tricks, notes the differences in the environmental variables, and adjusts the controls with respect to the current conditions for optimal execution. If anything it's more like: record 100 attempts, analyze and define a "good" routine, but also define good "exception handlers". I must say, I was pretty confused about what, if any, "watching" was involved - i.e. reflected-light-based vision. Pretty bad explanation, bordering on fail.

        • by svnt ( 697929 )

          The article made no mention of visual sensors. If they used vision in this project you would have heard about it. The "watching" is watching the inertial readings in relation to the controller positioning. That is, the computer looks at the position, velocity, and acceleration of the helicopter vs. the expert input, and determines what the expert was trying to achieve.

          It isn't just replaying the user inputs, but it is just replaying a filtered version of the "moves" as determined by the inertial sensors.

      • On the contrary, I learned to ride a bike by copying others exactly: "Hmmm, 433.72 milliseconds into the ride, that means a 2 degree right turn on the handlebars, and left foot reaches top dead center..."
    • Perhaps if we were talking about big planes way up in the atmosphere, but not when it comes to small helis. You can't just run a macro of the last pilot, because you'd end up in the dirt. A small heli like that has to be constantly aware of its position and continually making adjustments. You'd be better off comparing it to the Eurofighter or the 4WD system of a modern rally car.

  • UCAR (Score:5, Informative)

    by DustyShadow ( 691635 ) on Tuesday September 02, 2008 @09:32PM (#24854085) Homepage
    DARPA had a project going on for a while called UCAR [globalsecurity.org], which was an unmanned autonomous combat helicopter. Unfortunately the war took all the money and DARPA had to cancel the competitions between Lockheed and Northrop.

    Northrop currently has an unmanned helicopter called Firescout that has autonomously landed on a Navy ship while the ship was moving [navy.mil].

    My point is that this type of work is nothing new.
    • Re:UCAR (Score:5, Insightful)

      by lordofwhee ( 1187719 ) on Tuesday September 02, 2008 @09:59PM (#24854323)
      There's a HUGE difference between a robot that can manage to land on a fixed point on an object, and a robot that can actually LEARN how to land on that object. Remember, these robots know nothing. Not how to fly, not how to land, nothing, except how to learn.
      • It is certainly tweaked based on domain knowledge about helicopters. For example, some tricks were used to get the helicopter to fly upside down that were not simply 'learned' by the machine.
  • Autonomous kdawson (Score:1, Insightful)

    by Anonymous Coward

    You probably didn't learn a language on your own either but we think you might be autonomous.

    Just because the robots have learned by watching an expert doesn't make them not autonomous. People learn by watching experts as well ... ok so maybe only some of us do

    If the robots are capable ( and according to the article it seems they are ) of independent flight then they are autonomous.

  • Who on slashdot wouldn't be familiar with the word autonomous?

    • Re: (Score:2, Funny)

      by Dpaladin ( 890625 )

      Who on slashdot wouldn't be familiar with the word autonomous?

      Not everyone on Slashdot posts as an Autonomous Coward.

  • by Somegeek ( 624100 ) on Tuesday September 02, 2008 @10:04PM (#24854355)

    From the 2nd paragraph of the article:

    "The result is an autonomous helicopter than can perform a complete airshow of complex tricks on its own."

    From kdawson's summary:

    "The title of the linked article uses the term "autonomous," but that's somewhat misleading. The copters can't fly on their own, but rather can duplicate complex maneuvers learned from a human pilot."

    How in any way do you come to that conclusion based upon the data in the story?

    They CAN and DO fly by themselves. Out of the lab. In varying weather conditions. Constantly making adjustments for wind gusts, etc., none of which is being controlled by a human.

    And then the wisecrack about their AI? It uses an algorithm to study commands sent to another helicopter, studies the results, figures out what the goal of the commands was, and is able to implement those goals, on its own, more accurately than the original human pilot. That's not a strong AI?

    Can we please get some editors that understand what they are reading?

    *From The Not What You Would Call Brilliant Editing Department

    • well, clearly it misled kdawson somewhat!
    • Re: (Score:2, Interesting)

      by jpate ( 1356395 )
      I wish they said what sort of algorithm was used. Is there an explicit model of wind &c. whose parameters are tuned from data, or is it some sort of straight machine learning approach like neural nets? If the latter, I wonder how many example runs they had to make to give it enough data to be robust to environmental variation...
      • It uses a Markov decision process. There are many possible states, each with a "reward" for being in that state and other possible states that can be reached. The goal is to learn the policy (where to go from each state) that maximizes the overall reward over a long period of time. The wind and other environmental conditions help determine the policy, which is learned by observing the "policy" of the human expert.
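        For anyone who hasn't seen a Markov decision process before, a minimal sketch (a made-up three-state toy with purely illustrative numbers; the helicopter's case has a continuous state space and the reward itself is inferred from the expert):

```python
import numpy as np

# Toy MDP: 3 states, 2 actions.  P[a][s, s'] is the probability of moving
# from state s to s' under action a; R[s] is the reward for being in s.
P = [np.array([[0.9, 0.1, 0.0],
               [0.1, 0.8, 0.1],
               [0.0, 0.1, 0.9]]),
     np.array([[0.5, 0.5, 0.0],
               [0.0, 0.5, 0.5],
               [0.0, 0.0, 1.0]])]
R = np.array([0.0, 1.0, 5.0])
gamma = 0.9  # discount: how much future reward counts vs. immediate reward

V = np.zeros(3)
for _ in range(100):  # value iteration
    # Expected value of each action in each state, given the current V.
    Q = np.array([P[a] @ (R + gamma * V) for a in range(2)])  # shape (2, 3)
    V = Q.max(axis=0)
policy = Q.argmax(axis=0)  # best action to take in each state
```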

    • by BitZtream ( 692029 ) on Tuesday September 02, 2008 @10:31PM (#24854569)

      And then the wisecrack about their AI? It uses an algorithm to study commands sent to another helicopter, studies the results, figures out what the goal of the commands was, and is able to implement those goals, on its own, more accurately than the original human pilot. That's not a strong AI?

      Heh, the summary is just an example of ignorance. A human being does the EXACT same thing. The main difference being that when I put my aircraft into the air, I can redefine the programming very quickly without outside intervention. Currently the AI that has been developed works against a known, set collection of goals.

      One could argue that, in the military, that may be more useful than an actual person in many cases. The person may have a conviction that prevents them from carrying out the mission; for instance, a child on the battlefield may prevent them from firing a weapon, even though the result of not firing means that thousands of people die elsewhere.

      Personally, having not served in the military, I can not comprehend how a pilot ( and I've always wanted to be a fighter pilot mind you ) or any other military personnel can actually take someone else's life. I know plenty of men and women do it often enough to know that you can overcome that and justify the action to yourself, at least long enough to do it, but let's face it, no one that I would want in our military should be able to take a life and NOT feel something is wrong.

      To be clear, I do not fault those people who serve at all, I hold them in the highest respect because they do a job that I do not believe I could do, nor do I really WANT to do, I just want to play with the aircraft :)

      So ... if this AI is a little 'dumb', it may make things a whole lot better in the end. Of course, the other side of the argument, and the side that I'm on, says that the human decisions made during combat help prevent us from being walking evil. It is that ability for us to keep our morals in check and consider more than just our assigned goals that keeps us different from machines and makes us, to our knowledge, unique, for good or bad. It is ALSO that ability to adjust our goals based on the situation that makes the difference in a war.

      If you turn all wars into battles between machines, it's simply a contest of production and resources, in which case it's likely you can just decide the outcome in advance and save the resources. So do you do the Warcraft Rush, or do you camp and stockpile?

      Fortunately, I can't imagine that it will ever end up that we have computers doing all the work for us, as those in power tend to not want to give it up, even to their computerized slaves.

      • by Viol8 ( 599362 )

        "Personally, having not served in the military, I can not comprehend how a pilot ( and I've always wanted to be a fighter pilot mind you ) or any other military personal can actually take someone elses life"

        If someone is about to kill you if you don't kill them first, I suspect it's rather easy, frankly. Having a clear conscience won't do you much good if you've just been turned into a ball of greasy smoke.

      • by svnt ( 697929 )

        I'm going to try to extract your point from this collection of fantasy.

        The person may have a conviction that prevents them from carrying out the mission; for instance, a child on the battlefield may prevent them from firing a weapon, even though the result of not firing means that thousands of people die elsewhere.

        So to continue the dramatic hyperbole: what you're saying here is you prefer machine logic deciding to blow up toddlers. Got it.

        If you turn all wars into battles between machines, it's simply a cont

        • You sir, missed the point entirely.

          I did not say I am for having machines blow toddlers up. I am, however, saying that what a robotic soldier lacks is a conscience, and that it can be both a benefit and a problem to have one.

          My point was exactly the same as yours. Once the resources run out, or production is unable to keep up with demand, humans will be back on the battlefield.

          • by svnt ( 697929 )
            Ah, yep, I see it now. I stopped reading and got lured in by the straw man. My bad.
    • by PPH ( 736903 ) on Tuesday September 02, 2008 @11:11PM (#24854833)

      It appears that two things are going on here, both interesting.

      First, the system is learning by observing a human operator as well as the 'chopper's response. This is a bit more than simple macro programming. The system observes a system state, an operator input and a subsequent state and appears to be deriving an initial set of control laws in addition to figuring out the operator's goals.

      Next, when it is in autonomous flight mode, it fine tunes the control laws to optimize the chopper's performance.

      The latter is actually an easier task for simple systems. Adaptive PID control systems are already in use, but a helicopter presents a problem involving multiple inputs and degrees of freedom. The former is a more interesting problem, particularly for unstable systems. Deriving parameters for a control system by watching another in action, when the observed inputs are noisy, is tricky.
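      For a single control axis, an adaptive PID loop is only a few lines; here is a rough sketch (my own illustration with invented gains and a deliberately crude adaptation rule, not anything from the Stanford system). The hard part alluded to above is that a helicopter couples several such axes together.

```python
class AdaptivePID:
    """Minimal PID controller with a crude gain-adaptation heuristic.
    One axis only; a real helicopter couples pitch, roll, yaw and collective,
    which is what makes the identification problem hard."""

    def __init__(self, kp=1.0, ki=0.1, kd=0.05, adapt_rate=0.001):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.adapt_rate = adapt_rate
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        # Crude adaptation: grow the proportional gain while large tracking
        # error persists (a stand-in for a proper MIT-rule or self-tuning
        # regulator, which would also need stability safeguards).
        self.kp += self.adapt_rate * error * error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical usage: hold altitude at 1.0 m with 50 ms updates.
pid = AdaptivePID()
command = pid.update(setpoint=1.0, measured=0.8, dt=0.05)
```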

      • Next, when it is in autonomous flight mode, it fine tunes the control laws to optimize the chopper's performance.

        I don't get the sense that there is really an autonomous flight mode. It sounded to me as if the helicopter only carries the instrumentation that needed to be on board ("accelerometers, gyroscopes and magnetometers"). All smarts seemed to be on the ground.

        Unfortunately the news release was written by a layman (a PR person, yet) for laymen, so you can't trust it.

        There is another "fact" that can't hardly be correct:

        The pièce de résistance may have been the "tic toc," in which the helicopter, while pointed straight up, hovers with a side-to-side motion as if it were the pendulum of an upside down clock.

        I don't know much about model helicopters, but I find it hard to believe that they can hover wi

        • by PPH ( 736903 )

          I'd give them a break on the claim of autonomous. The weight and power requirements of the controllers are probably beyond the capabilities of a model helicopter. So they 'stretched' the I/O links out over radio up/down links. It's still sort of autonomous if there is no human in the loop.

          I doubt they'd have any trouble putting the same systems on board a full sized helicopter. A black one, at that.

    • The helicopters can NOT fly by themselves "out of the lab." They are remote control helicopters with a few extra gadgets. In particular, they would include potentiometers to measure control surface deflection, digital servos, an AHRS unit, a data acquisition system, and a small autopilot computer based on a PowerPC, XScale, ARM9, or similar low-power CPU. The autopilot computer contains the definition of the control system, which is a set of nonlinear equations defining the dynamics and flight control of

    • They CAN and DO fly by themselves. Out of the lab. In varying weather conditions. Constantly making adjustments for wind gusts, etc., none of which is being controlled by a human.

      They fly with the help of multiple cameras on the *ground*, which makes it more of a 3-D printer -- not an autonomous flying machine. Notice the emphasis on "air-ground cooperation" [stanford.edu] on the original web site for their project. That's because directing flying machines from the ground is a known tractable problem. And directing

    • How in any way do you come to that conclusion based upon the data in the story?
      ...
      Can we please get some editors that understand what they are reading?

      From the article:

      During a flight, some of the necessary instrumentation is mounted on the helicopter, some on the ground. Together, they continuously monitor the position, direction, orientation, velocity, acceleration and spin of the helicopter in several dimensions. A ground-based computer crunches the data, makes quick calculations and beams new flight directions to the helicopter via radio 20 times per second.

      If the helicopter is being directed by a ground-based computer, it's not autonomous. Kdawson was correct this time. Yes, the article contradicts itself, but I think the most detailed information is correct. Whoever wrote that news release didn't seem to understand what they were writing.
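      As described there, the architecture is a ground-station loop: fuse the sensor data on the ground, compute the next command, and radio it up 20 times per second. A hypothetical sketch of that loop (every name below is invented; nothing is taken from the Stanford code):

```python
import time

def ground_station_loop(read_state, compute_command, send_over_radio, hz=20):
    """Hypothetical ground-side control loop matching the article's
    description: state is estimated on the ground and new flight directions
    are beamed to the helicopter 20 times per second."""
    period = 1.0 / hz
    while True:
        start = time.monotonic()
        state = read_state()              # position, velocity, attitude, ...
        command = compute_command(state)  # where the learned policy would live
        send_over_radio(command)
        # Sleep off whatever remains of this 50 ms control period.
        time.sleep(max(0.0, period - (time.monotonic() - start)))
```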

  • and you'll see. Throw in a little wind here and there and the robot doesn't stand a chance.

    http://ca.youtube.com/watch?v=gi7G-VzU2r4 [youtube.com]

  • by heroine ( 1220 ) on Tuesday September 02, 2008 @10:20PM (#24854481) Homepage

    This story got a lot more attention than the other zillions of autonomous helicopters out there. The disappointment with the Stanford one is that it is reinforcement learning. It's recording and playing back the commands of a human pilot instead of simulating a dynamic model and deducing commands based on a genetic algorithm. The real value in a ground-based autopilot is having enough computing power to use biological algorithms.

    • Re: (Score:2, Informative)

      by Kryos ( 45369 )

      It's recording and playing the commands of a human pilot instead of simulating a dynamic model and deducing commands based on a genetic algorithm.

      Or, you could RTFA.

      As Oku repeated a maneuver several times, the trajectory of the helicopter inevitably varied slightly with each flight. But the learning algorithms created by Ng's team were able to discern the ideal trajectory the pilot was seeking. Thus the autonomous helicopter learned to fly the routine better - and more consistently - than Oku himself.

      • You throw away the parent's point too quickly. His point was not that the helicopter wasn't learning (it is), it's the model used for learning. There are several learning models that could be used for this sort of thing, among them reinforcement and genetic learning. Reinforcement learning implies a "teacher" telling the algorithm how well it did, giving hints, etc. The parent would prefer (and I agree here) a focus on a purely self-play algorithm. The helicopter "learns" by crashing. The longer it stays in t
        • by cgenman ( 325138 )

          The interesting point isn't that there are autonomous helicopters out there (there have been a few spectacular ones in the past) but the use of learning via apprenticeship. Purely self-taught algorithms have been used in the past to good effect, but apprenticeship has many practical applications. A genetic helicopter may not "discover" all of the known maneuvers out there. If you make minor updates to hardware, it might be possible for the older copters to apprentice the younger ones. At a higher level,

    • Sometimes it's easier to start by copying. RC helis are quite expensive.

    • No, it's not just recording and playing the commands of a human. If it did that, it would crash.

      The algorithm is learning an optimal policy to execute in an environment (defined by wind, altitude, etc.) to reach some goal. It learns how to react to the environment based on the actions of a human.

      A genetic algorithm also has an objective function that it's trying to maximize, but would have to get to the optimum from a random starting location. It would be like putting joe blow in a cockpit and saying "d

  • I have a few RC helicopters, and there's no way I can fly a funnel, inverted or otherwise. Or some of those other tricks. Therefore, let me officially bow down to our rotary wing robotic overlords.
  • Mirror neurons (Score:3, Interesting)

    by zobier ( 585066 ) <zobier@zobieLAPLACEr.net minus math_god> on Tuesday September 02, 2008 @10:38PM (#24854641)

    I'm put in mind of mirror neurons [wikipedia.org] which fire sympathetically and seem to account for the ability of animals to mimic and thus learn novel behaviour.

    • (not that this has anything to do with the article :P)

      I remember I lost your post a while back about language translation, understanding the world metaphorically. I caved in and got a subscription (lol)... and dug out the post... if you got an email/etc. and you're still interested, let me know; the original post was here, in case you've forgotten.

      http://hardware.slashdot.org/article.pl?sid=08/01/16/2333204 [slashdot.org]

      • by zobier ( 585066 )

        It's funny because I almost did the same thing, i.e. subscribe so I could dig out one of my old posts, I'll probably do it.

        I didn't receive an email, you've got my address ^

        Cheers.

  • Sounds like a win for GOFAI. Most of the time the stupid brute force approach beats the kind of thing they seem to be talking about. It's nice to see them taking an idealistic approach and actually getting results.
  • What happens when they are taught the wrong sequence of maneuvers? Like mowing down ducklings crossing the street.....
  • by Anonymous Coward

    Why don't they just start working on the Robotrons now? Maybe then they could get them finished before 2084 and just get the whole "destruction of man by his own creation" thing over with.

  • And here is a video of these helicopters flying... http://www.livescience.com/common/media/video.php?videoRef=LS_080902_copter [livescience.com]
  • OK helicopter, watch this and learn: http://www.youtube.com/watch?v=p8t41avFuCc [youtube.com]
    • by serbanp ( 139486 )

      Yeah, that's pretty amazing, but when I watch something like this, the following saying pops up: "To some a 3D heli is amazing, to me it's a demented insect after being sprayed"

      Anyway, I doubt that SU's helis can do the tricks good pilots can (e.g. the Szabo brothers), never mind better.

  • by jabithew ( 1340853 ) on Wednesday September 03, 2008 @03:20AM (#24856055)

    I read this on the main page. It was laid out with

    "Stanford computer scientists have developed an artificial intelligence system that enables robotic helicopters to teach themselves to

    and then a new line, which I was certain was going to start with the word "kill".

  • Just hope it don't 'watch' a helo crash, then try to mimic that behavior. Hmmm, that's a neat trick.
  • However, do they learn at a geometric rate?

  • They aren't called Hell-icopters for nothing. Believe me, when the time comes for real combat, any really intelligent enemy can outmaneuver any number of these, simply by being smarter. If what they are claiming is completely true, then, essentially, they are saying they have discovered artificial intelligence, which is not true. What they have discovered is a mimic that can learn (to mimic) maneuvers through observation. Big hooha!
  • Just for comparison, check out this guy doing stunts with a model helicopter [youtube.com]. Not only incredible moves, but great choreography as well.
