AI | Robotics | Hardware

Neuromorphic Algorithms Allow MAVs To Avoid Obstacles With Single Camera

First time accepted submitter aurtherdent2000 writes "IEEE Spectrum magazine says that Cornell University has developed neuromorphic algorithms that enable MAVs to avoid obstacles using just a single camera. This is especially relevant for small, cheap robots, because all you need is a single camera, minimal processing power, and even less battery power. Now, will we see more drones and aerial vehicles flying all around us?"
  • MAV? (Score:4, Informative)

    by Anonymous Coward on Tuesday November 06, 2012 @02:04PM (#41897095)

    I'm not sure what a MAV is....
    Googling...
    http://en.wikipedia.org/wiki/Micro_air_vehicle [wikipedia.org]

    Would it have killed the editors to define that?

I went to school with a girl who had no depth perception whatsoever. She had three accidents in two years before anyone realized that she couldn't tell how far away things were. I don't think I want an autonomous drone flying above my head like that.

    I would be interested to know if this robot suffers the same problem as birds do when they fly into windows. I might just pay good money to see a pack of drones crash into a glass building.
Except there are many ways to fake depth perception with only one stationary eye.

      • Faking it is one thing. Getting it to work well is another.

        Especially when there's a weapon strapped to it.

Except there are many ways to fake depth perception with only one stationary eye.

        Except these eyes are not stationary. These are aerial vehicles. They are moving.
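
Not from the article, just a minimal sketch of what a moving eye buys you: if the camera translates sideways by a known baseline between two frames, the geometry is identical to a two-eyed stereo rig, so one camera plus motion recovers depth. All the numbers below are illustrative.

```python
import numpy as np

def depth_from_parallax(disparity_px, baseline_m, focal_px):
    """Depth of a static point from its pixel shift between two frames.

    disparity_px -- horizontal pixel shift of the feature between frames
    baseline_m   -- how far the camera translated between frames (meters)
    focal_px     -- camera focal length expressed in pixels
    """
    # Same triangulation as stereo vision: the "second eye" is the
    # same camera a moment later.
    return focal_px * baseline_m / np.asarray(disparity_px, dtype=float)

# A feature that shifts 8 px while the MAV drifts 5 cm sideways,
# seen through a 600 px focal length, is about 3.75 m away.
print(depth_from_parallax(8.0, 0.05, 600.0))
```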

    • Re:Do Not Trust (Score:4, Informative)

      by Abreu ( 173023 ) on Tuesday November 06, 2012 @02:53PM (#41898041)

      My mother had a car accident in her twenties and lost sight in one eye. She spent years relearning how to perceive distance, but eventually she went back to her normal life. She could drive perfectly, both in the city (and driving in Mexico City is not for amateurs) and on the road.

    • by TheLink ( 130905 )
      I see people managing to fly drones (or drive) fine using 2D computer screens. And I also see people managing to crash often despite having depth perception.

      So I think her problem lies elsewhere.
How does the robot know a certain location is not traversable? I know it's possible to use one camera and a large database of known objects to get a rough 3D guess of the environment without even moving. With one camera and motion, you suddenly have all the data to work with. The problem is, no one has developed software where you walk around a building with a video camera and it becomes a Quake level. So unless they did that, I'd be interested in how they find out what is not traversable.
    • by ceoyoyo ( 59147 )

      "The problem is, no one has developed software that you walk around a building with a video camera, and it becomes a quake level."

Yes, actually, creating 3D models from pictures taken from multiple perspectives (generally acquired with video) is fairly standard. I remember seeing a DIY project, possibly here on Slashdot, that used a webcam and a record turntable to build a 3D object scanner. You could make one that would make you Quake levels if you wanted to.

      No, that doesn't seem to be what they've done here, pro
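
For the curious, here is a rough sketch of the standard two-view pipeline using OpenCV; the two grayscale frames and the 3x3 intrinsics matrix K are assumed inputs, and real systems (like PTAM, linked in the sibling comment) do this far more robustly.

```python
import cv2
import numpy as np

def two_view_points(img1, img2, K):
    """Triangulate a sparse 3D point cloud from two grayscale frames."""
    # Detect and describe features in both frames.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    # Match descriptors and collect the corresponding pixel coordinates.
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Recover the relative camera motion via the essential matrix,
    # with RANSAC to reject bad matches.
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)

    # Triangulate the inlier matches into 3D (up to an unknown global scale).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = mask.ravel() > 0
    pts4 = cv2.triangulatePoints(P1, P2, p1[inliers].T, p2[inliers].T)
    return (pts4[:3] / pts4[3]).T  # N x 3 point cloud
```

Walk around the building, run this on successive frame pairs, and stitch the clouds together, and you have the raw geometry for your Quake level.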

The problem is, no one has developed software where you walk around a building with a video camera and it becomes a Quake level. So unless they did that, I'd be interested in how they find out what is not traversable.

      http://www.robots.ox.ac.uk/~gk/PTAM/ [ox.ac.uk]
      http://www.youtube.com/watch?v=CZiSK7OMANw [youtube.com]
      http://www.youtube.com/watch?v=mimAWVm-0qA [youtube.com]

'Cause they will need to engineer anti-drone drones now that everyone can afford drones.
    • by Luckyo ( 1726890 )

We've had that since before World War II. It's called AAA: anti-aircraft artillery. Modern automatic AAA swats small drones out of the sky faster than you can launch them.

      Or you can just jam their control signals, fake your own and have them land on your airfield.

Or, if you're talking about neighborly relations, I'm pretty sure the shotguns used to hunt birds will make a wonderful counter if someone is dumb enough to use a drone to watch you fap in the shower.

Wernstrom...! I'll make you eat those words when you see my $50 billion supersonic ramjet-powered flyswatter!
  • I guess depth perception is overrated.
I believe you can have depth perception with one camera, so long as it is moving: you remember the previous position and the current position, and you have two images. Depending on how you define depth perception, a person can close one eye, walk into a hallway, and envision the situation in 3D well enough to navigate the hallway and avoid the obstacles. I would even go so far as to say that when coding a system to render vision into 3D, it might be easier on the programmer to start with just one camera instead of
    • by JanneM ( 7445 )

      It _is_ overrated by quite a lot of people, in the sense that they believe stereo vision is the be-all end-all of depth perception.

Reality is more complicated. We use stereo vision as only one depth cue among several, and mostly in close-up situations. Apart from a few kinds of cases, such as rapid, precise object manipulation, it's not a particularly important one.

      Consider that most animals do not have stereo vision (their monocular fields of view do not overlap) and can navigate a complex, cluttered

      • by aXis100 ( 690904 )

A good example is first-person shooter computer games.

No one has trouble navigating or dodging obstacles (though maybe reflexes let them down), even though we are viewing a video with no stereo cues. Object size, motion parallax, and perspective are enough.
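
One of those monocular cues, familiar object size, even works with a single stationary eye and a single frame: if you know (or assume) how big something really is, its apparent size pins down its distance. A toy pinhole-camera sketch, with made-up numbers:

```python
def distance_from_size(real_height_m, pixel_height, focal_px):
    """Pinhole-camera distance estimate from apparent size."""
    # An object of height H at distance Z projects to H * f / Z pixels,
    # so Z = f * H / pixels.
    return focal_px * real_height_m / pixel_height

# A 1.8 m person who appears 120 px tall through a 600 px focal
# length is about 9 m away.
print(distance_from_size(1.8, 120, 600))
```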

  • by slew ( 2918 ) on Tuesday November 06, 2012 @02:55PM (#41898089)

I don't think that phrase means what most of the folks on /. think it means. The "neuromorphic" algorithms they allude to are the kind that run on highly specialized hardware (e.g., this beast [modha.org]). This type of hardware really does work much like synapses (an integrate-and-fire architecture). Of course you could simulate the algorithm on a more conventional processor, but it would probably lose much of its low-power advantage.
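
For anyone unfamiliar with integrate-and-fire, here is a plain-Python simulation of a single leaky integrate-and-fire neuron. Real neuromorphic chips do this in event-driven silicon; the constants below are illustrative, not from any paper.

```python
import numpy as np

def lif_spikes(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane charges toward the input,
    leaks back toward rest, and emits a spike on crossing threshold."""
    v, spikes = 0.0, []
    for i in input_current:
        v += (dt / tau) * (i - v)   # leaky integration step
        if v >= v_thresh:           # threshold crossing -> spike
            spikes.append(1)
            v = v_reset             # reset the membrane after firing
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant input of 1.5 drives the neuron to fire at a steady rate.
print(lif_spikes(np.full(100, 1.5)).sum(), "spikes in 100 ms")
```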

FWIW, the algorithm they propose attempts to identify objects that project up from the ground. To do this, they label parts of the image as obstacle (or not), taking a raw initial guess and filtering it with a pre-trained neural net (using some sort of adjacent-region belief propagation technique).
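
This is not Cornell's actual network, just the shape of the idea: score each image patch as obstacle-or-not, then repeatedly pull each patch's label toward its neighbors' labels, a crude stand-in for the belief-propagation smoothing step.

```python
import numpy as np

def smooth_labels(scores, iters=10, w=0.25):
    """scores: HxW array of raw per-patch obstacle scores in [0, 1]."""
    s = scores.astype(float).copy()
    for _ in range(iters):
        # Average of the 4-connected neighbors (edges clamped).
        up    = np.vstack([s[:1], s[:-1]])
        down  = np.vstack([s[1:], s[-1:]])
        left  = np.hstack([s[:, :1], s[:, :-1]])
        right = np.hstack([s[:, 1:], s[:, -1:]])
        s = (1 - w) * s + w * (up + down + left + right) / 4
    return s > 0.5  # final obstacle / free-space mask

# A lone noisy "obstacle" patch gets voted down by its neighbors.
raw = np.zeros((5, 5))
raw[2, 2] = 0.9
print(smooth_labels(raw)[2, 2])  # -> False
```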

I think they may have "cheated" a bit: in some papers they describe decomposing the image with oriented Gabor filters (edge-orientation detectors), but they admit that this decomposition doesn't currently work well on their ultra-low-power computing platform.
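
Those oriented Gabor filters are a one-liner in OpenCV, for what it's worth; here is a sketch of the edge-orientation decomposition with illustrative parameters (not the paper's):

```python
import cv2
import numpy as np

def gabor_responses(gray, n_orientations=4, ksize=21):
    """Filter a grayscale image with Gabor kernels at several orientations."""
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations  # kernel orientation
        kern = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kern))
    return np.stack(responses)  # one response map per orientation
```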

    FYI: MAV=micro aerial vehicle

    • by ceoyoyo ( 59147 )

I doubt very much they used specialized hardware on their MAV. Neural net algorithms work just fine on conventional processors. If they did build specialized hardware, they could make it REALLY low power, but 1 watt sounds like a regular processor.

      The visual centres in your brain use something very much like Gabor filters, and they're not hard to implement in hardware, so if they did "cheat" by precalculating the filters it's not a big deal.

      • If the neural net were to run on a swarm of MAVs you'd have plenty of processing power, so long as you didn't move too many of them at once, or only moved them together. But then, while they're together you can use stereo vision...

        • If the neural net were to run on a swarm of MAVs you'd have plenty of processing power, so long as you didn't move too many of them at once, or only moved them together. But then, while they're together you can use stereo vision...

Neural nets are not CPU intensive; it's the learning that is hard.

This is the logic behind a flinch reflex. It's just enough approaching-obstacle detection to avoid hitting stuff. It's good to have in a UAV that has to operate near obstacles. It's not full SLAM, but it doesn't need to be.

    Nice. Now get it into the toy helicopter market.
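
A sketch of the "flinch" math (my framing, not necessarily the paper's): an approaching obstacle looms, i.e. its image size grows, and time to contact is just current size divided by growth rate; no depth map required.

```python
def time_to_contact(size_now_px, size_prev_px, dt):
    """Seconds until impact at the current closing speed, from looming."""
    growth = (size_now_px - size_prev_px) / dt  # px per second
    if growth <= 0:
        return float("inf")  # not approaching -- nothing to flinch from
    return size_now_px / growth

# An obstacle growing from 40 px to 44 px in 0.1 s is ~1.1 s away.
print(time_to_contact(44, 40, 0.1))
```

Two frame sizes and a division is cheap enough computation that it really could fit in a toy helicopter.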

  • I trust the group presenting this, but I could not verify their conclusions.
