
Microsoft Research Brings Kinect-Style Depth Perception to Ordinary Cameras 31

mrspoonsi (2955715) writes "Microsoft has been working on ways to make any regular 2D camera capture depth, meaning it could do some of the same things a Kinect does. As you can see in the video below, the team managed to pull this off, and we might see this tech all around in the near future. What's really impressive is that it works with many types of cameras: the research team used a smartphone as well as a regular webcam, and both achieved some impressive results. The cameras do have to be slightly modified, but only to permit more IR light to reach the sensor." The video is impressive, but note that so are several of the other projects Microsoft has created for this year's SIGGRAPH, in particular one that makes first-person sports-cam footage more watchable.
This discussion has been archived. No new comments can be posted.


  • by Tx ( 96709 ) on Tuesday August 12, 2014 @09:12AM (#47654189) Journal

    It is apples and pears on one hand, but the fact that the camera needs a modification, however small, means that you will still be buying a special bit of hardware to make your gesture control work, so in that sense it is in the same boat as the Leap. Except, of course, that the piece of hardware in question should be a lot cheaper, and could easily be included in laptops/tablets/monitors at minimal extra cost, if it really works that well and the idea takes off.

  • by OzPeter ( 195038 ) on Tuesday August 12, 2014 @09:19AM (#47654221)

    At the very end of the video it describes how the system is tuned to skin albedo. The only problem with this is that people around the world have different skin albedos - which has a real-world effect in photography when trying to expose correctly for skin. In the video they mentioned training the system on the user, but all the users shown in the video were white, so I can't say how well it would work for non-whites. In general, though, I am impressed with what they have done.

    Back in 2009, Better Off Ted [wikipedia.org] episode 4, "Racial Sensitivity," featured a security system that had issues with skin albedo and failed to detect (from memory) dark-skinned people - which resulted in all sorts of hijinks for the African American employees.
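The albedo sensitivity the parent describes has a simple physical intuition. A minimal sketch of that intuition (my assumption about why calibration matters here, not the researchers' actual method or code): under a single IR illuminant, observed brightness falls off roughly as albedo / Z², so inverting intensity back to depth requires knowing the surface's albedo - and an uncalibrated albedo shifts the depth estimate.

```python
import math

def depth_from_intensity(intensity, albedo, k=1.0):
    """Rough inverse-square depth estimate, assuming I ~ k * albedo / Z^2.

    `k` lumps together illuminant power and sensor gain; in practice it
    would come from per-user calibration like the training shown in the video.
    """
    if intensity <= 0:
        raise ValueError("intensity must be positive")
    return math.sqrt(k * albedo / intensity)

# At the same observed brightness, a surface with twice the albedo reads
# ~1.41x (sqrt(2)) deeper - which is why skin tone matters without calibration.
ratio = depth_from_intensity(0.25, 0.5) / depth_from_intensity(0.25, 0.25)
print(round(ratio, 2))  # → 1.41
```

Under this toy model, per-user training amounts to fitting `albedo` (and `k`) so the inversion lands at the right scale for that person's skin.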

  • by plover ( 150551 ) on Tuesday August 12, 2014 @09:27AM (#47654273) Homepage Journal

    You are correct. The IR laser and IR camera are used to measure depth, while the visible-light camera only picks up the image.

    The cool thing about the Kinect's IR pair is that it senses depth the same way a pair of eyes does: the delta between the left and right views provides the depth info. But instead of using two eyes, it projects a dot grid from the location where one eye would be, and a camera at the other location measures the delta between where each dot is expected and where it is detected. The grid is slightly randomized so that straight edges can be detected. If you've ever stared into one of those Magic Eye random-dot stereogram posters, you're doing pretty much the same thing the Kinect does.
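The expected-minus-detected delta described above is just stereo disparity, and converting it to depth is standard triangulation. A minimal sketch (the focal length and baseline below are my illustrative, roughly Kinect-like guesses, not official specs):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth in meters from the pixel shift (disparity) of a projected dot.

    Standard triangulation for a projector/camera pair separated by a
    baseline: Z = f * B / d. Larger shifts mean closer surfaces.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed numbers for illustration: ~580 px focal length, 7.5 cm baseline.
# A dot shifted by 20 px would then sit a bit over 2 m away.
print(round(depth_from_disparity(580, 0.075, 20), 3))  # → 2.175
```

The 1/d relationship also explains why depth resolution degrades with distance: far surfaces produce tiny disparities, so a one-pixel measurement error swings the depth estimate much more than it does up close.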

    This system is very different. The Kinect has a deep field of view, but all the demos show this one working at very short range. I haven't read the paper yet, but I'm wondering if that's the point of the IR.
