Microsoft Research Brings Kinect-Style Depth Perception to Ordinary Cameras

mrspoonsi (2955715) writes "Microsoft has been working on ways to make any regular 2D camera capture depth, meaning it could do some of the same things a Kinect does. As you can see in the video below, the team managed to pull this off, and we might see this tech all around in the near future. What's really impressive is that it works with many types of cameras. The research team used a smartphone as well as a regular webcam, and both achieved some impressive results. The cameras have to be slightly modified, but only to permit more IR light to hit the sensor." The video is impressive, but note that so are several of the other projects Microsoft has created for this year's SIGGRAPH, in particular one that makes first-person sports-cam footage more watchable.

  • by Saint Gerbil ( 1155665 ) on Tuesday August 12, 2014 @09:02AM (#47654137)

    Leap Motion uses two monochromatic IR cameras and three infrared LEDs.

    This claims to use one 2D camera.

    Apples and pears.

  • by Anonymous Coward on Tuesday August 12, 2014 @09:35AM (#47654311)

    I thought that the Kinect, while nicer than the average cheapie camera in terms of optics and sensor, also used fairly normal cameras (one higher-resolution visible-band camera for the image and one IR camera for depth), and that the real secret sauce was the IR laser device that projected the dot pattern onto the environment for the camera to pick up and interpret. Am I remembering incorrectly?

    Yes and no.

    It's correct for Kinect 1. It uses a "structured light" approach (developed by PrimeSense), which projects a magic pattern and has a (regular) IR cam observing the distortion in the pattern.

    Kinect 2, on the other hand, uses real time-of-flight imaging (or rather, it measures the phase difference between the modulated IR signal and the reflected IR light), very similar to laser distance meters, just 2D instead of a single point. (Versus Kinect 1, it provides higher resolution and less noise.)
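For a concrete sense of the time-of-flight arithmetic described in the comment above, here is a minimal Python sketch: depth follows from the measured phase shift and the modulation frequency. The 80 MHz figure is only an assumed, illustrative value, not a Kinect 2 specification.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Depth from the phase shift between emitted and reflected modulated IR.

    The round-trip delay is t = phase / (2*pi*f); the one-way distance is
    c*t/2, giving d = c * phase / (4*pi*f).
    """
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz: float) -> float:
    """Phase wraps every 2*pi, so depths repeat beyond c / (2*f)."""
    return C / (2.0 * mod_freq_hz)

# Example with an assumed 80 MHz modulation frequency:
print(f"{tof_depth(math.pi / 2, 80e6):.2f} m")   # quarter-cycle shift -> ~0.47 m
print(f"{unambiguous_range(80e6):.2f} m")        # wrap-around distance ~1.87 m
```

Real sensors typically combine measurements at several modulation frequencies to resolve that wrap-around ambiguity.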

  • by OzPeter ( 195038 ) on Tuesday August 12, 2014 @09:35AM (#47654319)

    This system is very different. The Kinect has a long working depth range, but all the demos show this working at very short range. I haven't yet read the paper, but I'm wondering if that's the point of the IR.

    From watching the video, my understanding is that they illuminate the subject with a fixed IR source, map the fall-off of the reflected IR across the 2D image, and then interpret that fall-off as a depth map of the object they are looking at. That looks surprisingly accurate for the sort of use cases they demonstrate. They also point out that this technique is not a general-purpose 3D system.
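If that reading is right, the core mapping would look something like the Python sketch below: treat reflected IR brightness as falling off roughly with the square of distance and invert it per pixel. This only illustrates the idea in the comment above, not the actual method from the paper, which handles albedo and lighting far more carefully; the calibration constant k is a made-up placeholder.

```python
import numpy as np

def depth_from_ir_intensity(ir_image: np.ndarray, k: float = 1.0) -> np.ndarray:
    """Rough depth map from reflected-IR brightness.

    Assumes a single IR source near the camera and a roughly uniform-albedo
    subject, so returned intensity falls off ~1/d**2 and d ~ k / sqrt(I).
    k is a placeholder that would need calibrating against a known distance.
    """
    eps = 1e-6  # floor dark pixels to avoid division by zero
    intensity = np.clip(ir_image.astype(np.float64), eps, None)
    return k / np.sqrt(intensity)

# Example: brighter (closer) pixels map to smaller depth values.
frame = np.array([[0.90, 0.40],
                  [0.10, 0.025]])
print(depth_from_ir_intensity(frame, k=0.5))
```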
