
Light Field Photography Is the New Path To 3-D

waderoush writes "In November, Lytro, the maker of the first light field camera for consumers, upgraded its viewer software to enable a feature called 'Perspective Shift.' In addition to refocusing pictures after they've been taken, Lytro audiences can now pivot between different virtual points of view, within a narrow baseline. This 3-D capability was baked into Lytro's technology from the start: 'The light field itself is inherently multidimensional [and] the 2-D refocusable picture that we launched with was just one way to represent that,' says Eric Cheng, Lytro's director of photography. But while Perspective Shift is currently little more than a novelty, the possibilities for future 3-D imaging are startling, especially as Lytro develops future devices with larger sensors — and therefore larger baselines, allowing more dramatic 3-D effects. Cheng says the company is already exploring future versions of its viewer software that would work on 3-D televisions. 'We are moving the power of photography from optics to computation,' he says. 'So when the public really demands 3-D content, we will be ready for it.'"
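
The summary's "optics to computation" point is concrete enough to sketch. Below is a minimal illustration, assuming the light field has already been decoded into a grid of sub-aperture views (a U x V array of small H x W images); the function and parameter names are illustrative, not Lytro's actual software. Synthetic refocus is then a shift-and-add over those views, and 'Perspective Shift' in its simplest form is just picking one of them.

    import numpy as np
    from scipy.ndimage import shift

    def refocus(lf, alpha):
        """Synthetic refocus by shift-and-add: translate each sub-aperture
        view in proportion to its offset from the aperture centre, then
        average. 'alpha' selects the virtual focal plane."""
        U, V, H, W, C = lf.shape  # lf holds a U x V grid of H x W x C views
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W, C))
        for u in range(U):
            for v in range(V):
                dy, dx = alpha * (u - cu), alpha * (v - cv)
                out += shift(lf[u, v], (dy, dx, 0), order=1, mode='nearest')
        return out / (U * V)

    def perspective_shift(lf, u, v):
        """Simplest possible perspective shift: return one sub-aperture view.
        The spread of available (u, v) positions is bounded by the physical
        aperture, which is why the effect is narrow on a small sensor."""
        return lf[u, v]

A larger sensor and lens widen the spread of available (u, v) positions, which is exactly the "larger baseline, more dramatic 3-D" point Cheng makes above.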
  • Bought a Lytro (Score:3, Interesting)

    by Anonymous Coward on Friday February 01, 2013 @08:33PM (#42767495)

    Returned it.

    It was awful, and the resolution wasn't hot.

  • My first thought (Score:5, Interesting)

    by Anonymous Coward on Friday February 01, 2013 @08:36PM (#42767537)

    Enhance 224 to 176. Enhance, stop. Move in, stop. Pull out, track right, stop. Center in, pull back. Stop. Track 45 right. Stop. Center and stop. Enhance 34 to 36. Pan right and pull back. Stop. Enhance 34 to 46. Pull back. Wait a minute, go right, stop. Enhance 57 to 19. Track 45 left. Stop. Enhance 15 to 23. Give me a hard copy right there.

  • voxels (Score:2, Interesting)

    by Anonymous Coward on Friday February 01, 2013 @08:53PM (#42767677)

    Combine the data from 12 or so of these in a matrix and you have a really powerful, accurate, self-optimizing point cloud capture device for voxel 3D content. (A rough sketch of the idea follows below.)
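
    A rough sketch of what that combination could look like, assuming each camera can already produce a per-pixel depth map and that its intrinsics K and pose (R, t) are known from calibration; none of these names come from Lytro's software. Back-projecting every depth map into world space and stacking the results gives a merged point cloud that could then be voxelised.

      import numpy as np

      def backproject(depth, K, R, t):
          """Lift a depth map (H x W, metres) into world-space 3-D points,
          given intrinsics K and a pose with x_cam = R @ x_world + t."""
          H, W = depth.shape
          xs, ys = np.meshgrid(np.arange(W), np.arange(H))
          pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
          rays = np.linalg.inv(K) @ pix            # camera-space ray directions
          cam_pts = rays * depth.reshape(1, -1)    # scale each ray by its depth
          world_pts = R.T @ (cam_pts - t.reshape(3, 1))
          return world_pts.T                       # N x 3 array of points

      def merge_point_cloud(depths, Ks, Rs, ts):
          """Combine depth maps from several calibrated cameras into one cloud."""
          return np.vstack([backproject(d, K, R, t)
                            for d, K, R, t in zip(depths, Ks, Rs, ts)])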

  • by dgatwood ( 11270 ) on Friday February 01, 2013 @09:01PM (#42767743) Homepage Journal

    No, but they might be able to avoid the lens entirely and do microlensing on a flat surface. For example, I could totally see the entire back of a cell phone being a light field camera, automatically throwing out data where your fingers overlap the edges while holding it, perhaps by using capacitive sensors in some way. I mean, we're probably talking twenty or thirty years out here, but that's the direction I see things heading eventually. And that would give you a believable stereo spread, not to mention much more usable resolution.

  • by mr_exit ( 216086 ) on Friday February 01, 2013 @09:07PM (#42767781) Homepage

    As long as you've got enough parallax to work out the depth information from your scene, you can push the effect to recreate viewpoints that are wider than you have real data for.

    You will end up with tiny slivers of image that you don't have pixel data for wherever a foreground element diverges more than it did before, but those are easy to recreate. All post-converted 3-D films have this problem to an even greater extent; there are algorithms out there to clone the surrounding pixels, or even use pixels from other frames if the object is moving through the scene. (A rough sketch of the approach follows this comment.)

    There are light field cameras out there that, instead of using a single chip, use an array of small cameras (think cell phone cameras). The Adobe one is 500 megapixels.

    See the research by Todor Georgiev at http://tgeorgiev.net/ [tgeorgiev.net]. The Lytro camera is a nice cheap toy, but there are some stunning results from researchers.
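
    A rough sketch of the extrapolation and hole filling described above, assuming a per-pixel disparity map is already available; the names and the occlusion-ignoring forward warp are simplifications, not anyone's production pipeline. Pixels are shifted by disparity times a baseline_scale greater than 1 to synthesise a wider viewpoint, and the disoccluded slivers are filled by cloning the nearest valid pixel on the same row.

      import numpy as np

      def extrapolate_view(img, disparity, baseline_scale=1.5):
          """Forward-warp 'img' (H x W x C) by 'disparity' (H x W, in pixels),
          exaggerated by baseline_scale, then fill the resulting holes."""
          H, W, C = img.shape
          out = np.zeros_like(img)
          filled = np.zeros((H, W), dtype=bool)
          for y in range(H):
              for x in range(W):
                  nx = int(round(x + baseline_scale * disparity[y, x]))
                  if 0 <= nx < W:          # occlusion ordering ignored for brevity
                      out[y, nx] = img[y, x]
                      filled[y, nx] = True
          # naive sliver filling: clone the nearest warped pixel on the same row
          for y in range(H):
              for x in range(W):
                  if not filled[y, x]:
                      left, right = x - 1, x + 1
                      while left >= 0 and not filled[y, left]:
                          left -= 1
                      while right < W and not filled[y, right]:
                          right += 1
                      if left >= 0:
                          out[y, x] = out[y, left]
                      elif right < W:
                          out[y, x] = out[y, right]
          return out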

  • by Taco Cowboy ( 5327 ) on Friday February 01, 2013 @09:15PM (#42767827) Journal

    I have a Lytro as well. I know that currently its limitations are so severe that they render the Lytro camera little more than a novelty.

    Its limitations right now are computational power (it takes a whole lot more of it to be useful) and the HORRENDOUS AMOUNT OF DATA needed to make it useful at all.

    But I still have hope in this 3D imaging thing. I do not see it as a mere toy; I see a future link between 3D imaging and 3D printing, and beyond.

    Currently, to gather data on 3D imagery we use technologies such as MRI, which is not really portable.

    The concept behind the Lytro 3D camera may offer us a possible alternative.

  • by harperska ( 1376103 ) on Friday February 01, 2013 @09:22PM (#42767883)

    I wonder if it would be possible to make a 'light field' display where, rather than each pixel emitting light in all directions as in current 2D and faux-3D displays, each pixel would emit light at the frequency and along the vector that the camera detected. This would be true autostereoscopic 3D, as the emitted light would have the same properties as the original light, allowing the eye to focus on it naturally. I wonder if this would be possible by perfecting lenticular display technology, or if it would require something like an array of micro lasers in each pixel, pointing in all directions. (A sketch of the bookkeeping such a display would need follows below.)
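
    A minimal sketch of that bookkeeping, under the simplifying assumption that the captured light field is a U x V grid of sub-aperture views and that the panel devotes a U x V block of directional sub-pixels (say, behind one lenslet) to each spatial pixel; no real display API is implied. The hard part, of course, is building an emitter that actually sends each sub-pixel's light along its own direction.

      import numpy as np

      def lightfield_to_panel(lf):
          """Interleave angular samples so that panel pixel (y*U + u, x*V + v)
          emits the ray that sub-aperture view (u, v) recorded at spatial
          position (y, x) of the light field 'lf' (U x V x H x W x C)."""
          U, V, H, W, C = lf.shape
          panel = np.zeros((H * U, W * V, C), dtype=lf.dtype)
          for u in range(U):
              for v in range(V):
                  panel[u::U, v::V] = lf[u, v]
          return panel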

  • by gmueckl ( 950314 ) on Friday February 01, 2013 @09:47PM (#42768069)

    A lenticular lens array in front of an LCD screen is a nice do-it-yourself solution that almost does the trick. It makes an autostereoscopic display that can show more than 2 images in different directions, making it possible to move around in front of the screen and see a stereo image without glasses. However, there are a couple of limitations. The LCD resolution suffers tremendously and the number of zones you can create still isn't very high (rough numbers after this comment). Maybe it gets better with retina displays, but I'm not sure. Even paper printouts of 20 to 30 images at 600dpi are barely good enough.

    Another interesting idea is this proposal: http://gl.ict.usc.edu/Research/LFD/ [usc.edu] - replace each pixel on a *huge* screen with a microprojector acting as a directional light source. It is insane in its own special way, but this research group has successfully thrown massive amounts of hardware at problems in the past.
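
    Back-of-the-envelope numbers for the resolution trade-off mentioned above: each viewing zone only gets the panel's horizontal resolution divided by the number of zones. The panel figures below are generic examples, not specific products.

      def per_view_width(panel_width_px, num_views):
          # horizontal pixels left for each viewing zone
          return panel_width_px // num_views

      print(per_view_width(1920, 8))   # 1080p-wide panel, 8 zones -> 240 px per view
      print(per_view_width(3840, 8))   # 4K-wide panel, 8 zones    -> 480 px per view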

"In the fight between you and the world, back the world." --Frank Zappa

Working...