
Light Field Photography Is the New Path To 3-D

waderoush writes "In November, Lytro, the maker of the first light field camera for consumers, upgraded its viewer software to enable a feature called 'Perspective Shift.' In addition to refocusing pictures after they've been taken, Lytro audiences can now pivot between different virtual points of view, within a narrow baseline. This 3-D capability was baked into Lytro's technology from the start: 'The light field itself is inherently multidimensional [and] the 2-D refocusable picture that we launched with was just one way to represent that,' says Eric Cheng, Lytro's director of photography. But while Perspective Shift is currently little more than a novelty, the possibilities for future 3-D imaging are startling, especially as Lytro develops future devices with larger sensors — and therefore larger baselines, allowing more dramatic 3-D effects. Cheng says the company is already exploring future versions of its viewer software that would work on 3-D televisions. 'We are moving the power of photography from optics to computation,' he says. 'So when the public really demands 3-D content, we will be ready for it.'"

  • Bought a Lytro (Score:3, Interesting)

    by Anonymous Coward on Friday February 01, 2013 @08:33PM (#42767495)

    Returned it.

    It was awful, and the resolution wasn't hot.

    • While I hope Lytro manages to stay in business (it's nice to have somebody doing something different), they're a long way from any sort of realistic 3D. You're asking for a lot of computational and sensor power to create anything beyond postage-stamp images (the major issue with the current Lytro products).

      Imagine YouTube videos in 3D - stupid cat pictures without any editing in three dimensions! Yeah!

      • by Lumpy ( 12016 )

        "Imagine You Tube videos in 3D - stupid cat pictures without any editing in three dimensions! Yeah!"

        Already there. Fuji had a point-and-shoot camera that did 3D video. There are a lot of stupid 3D cat videos on YouTube.

      • I have a Lytro as well. I know that its current limitations are so severe that they have rendered the Lytro cameras nothing but a novelty.

        Its limitations right now are in the computational power (it takes a whole lot more computational power to make it useful) and the HORRENDOUS AMOUNT OF DATA needed to make it useful at all.

        But I still have hope for this 3D imaging thing. I do not see it as a mere toy; I see a future link between 3D imaging and 3D printing, and beyond.

        Currently, to gather data on 3D imagery we use technologies such as MRI, which in itself is not really portable.

        • by ceoyoyo ( 59147 ) on Saturday February 02, 2013 @12:09AM (#42768883)

          "Currently, to gather data on 3D imagery we use technologies such as MRI, which in itself not really portable."

          We use things like MRI to gather tomographic data. It's for seeing inside. Lytro doesn't do that, and never will.

          Currently, if you want to do the kind of 3D we're talking about, you can buy a Lytro and get low resolution with a lot of data and processing, or you can buy one of the commonly available compact cameras that include two lenses (like this one [dpreview.com]) and get instant, high-res results.

          • by cnettel ( 836611 )

            Currently, if you want to do the kind of 3D we're talking about, you can buy a Lytro and get low resolution with a lot of data and processing, or you can buy one of the commonly available compact cameras that include two lenses (like this one [dpreview.com]) and get instant, high-res results.

            To be fair, that gives you stereography, not 3D. From stereography you can compute, tada, something resembling 3D along a short baseline, i.e. not only showing the image through one of the lenses, but a theoretical image from anywhere close to where they were located. The Lytro is a much more solid way of achieving that, though, if they can create sensors that are both wide enough and carry enough resolution. Currently, they are a long way off. I would even think that you could put two Lytros in kind of a

            • by ceoyoyo ( 59147 )

              The Lytro is also effectively doing stereography. Which is fine, because that's how we perceive 3D anyway.

            • by Genda ( 560240 )

              This is just the beginning of a disruptive technology. I don't think the Lytro will necessarily be the answer for consumer products. If you look up "Super Resolution" and the "Pelican slimphone camera array" you'll see that there are emerging technologies that will be at least as effective at collecting both spatial and 3-dimensional data. In fact, it's easy to imagine 6-10 camera arrays being placed on the back of an iPad producing images with higher resolution than a top professional camera, as well as t

            • Bingo, stereo. Not equal to 3D.

              Consider a stadium of light field cameras + location + time stamps. Crowd-source to the cloud and view all the pictures you wish you had taken.

      • by Anonymous Coward

        They're not really the only player in this space:

        LinX Imaging [ http://image-sensors-world.blogspot.com/2012/07/linx-imagings-multi-aperture-camera.html [blogspot.com] ]
        Pelican [ http://image-sensors-world.blogspot.com/2012/11/pelican-imaging-capabilities-presented.html [blogspot.com] ]

        The same approach is starting to be applied to high-precision 3D measurement:
        Raytrix [ http://image-sensors-world.blogspot.com/2012/12/raytrix-presentation-from-vision-2012.html [blogspot.com] ]
        Ascentia Imaging [ http://www.ascentiaimaging.com/ai_web_2812_003.htm [ascentiaimaging.com] ]

        Imagine YouTube videos in 3D - stupid cat pictures without any editing in three dimensions! Yeah!

        Actually, that sounds kind of awesome.

      • While it is technically stereoscopic 3D, my HTC EVO 3D takes and displays 3D photos and videos at decent quality and resolution as long as the lighting isn't too dim. I'm not sure what you mean by "realistic 3D". Do you mean the ability to rotate the photo and view it from multiple vantage points? Because AFAICT that would require snapshots from more than one perspective (i.e. the EVO 3D, and that only supplies two shots; for a good example, consider bullet time from "The Matrix"). If you want the ab
    • Returned it.

      It was awful, and the resolution wasn't hot.

      Sure, you should buy a Nikon D600 or a digital back for a large-format camera.

      This light field thing is new and novel. New and novel to the degree that it is easy to see that sensor and post-processing technology have room to improve.

      It will take a while, but this may prove to be the trick that solves the most common cell phone camera image errors. The technology is flat-out amazing.

      Welcome to the future of imaging.

  • My first thought (Score:5, Interesting)

    by Anonymous Coward on Friday February 01, 2013 @08:36PM (#42767537)

    Enhance 224 to 176. Enhance, stop. Move in, stop. Pull out, track right, stop. Center in, pull back. Stop. Track 45 right. Stop. Center and stop. Enhance 34 to 36. Pan right and pull back. Stop. Enhance 34 to 46. Pull back. Wait a minute, go right, stop. Enhance 57 to 19. Track 45 left. Stop. Enhance 15 to 23. Give me a hard copy right there.

    • I could never figure that scene out. Deckard seemed to be looking around a foreground object.

      • by Anonymous Coward

        There was a paper some months ago about revealing an object obstructed by another through the scattered rays it bounced. Maybe if we could merge the two technologies, that amazing camera could be constructed.

      • by gl4ss ( 559668 )

        The tech depended on the photograph itself being of insane, insane resolution (possibly higher than physically possible on a hard medium) and on the machine at the police HQ (it was a remote terminal he was using) having immense computational power and fancy algorithms to calculate, from reflections in the picture, what was "outside" the picture as viewed from a different angle.

  • by levork ( 160540 ) on Friday February 01, 2013 @08:47PM (#42767623) Homepage

    Wow, TFA is really glossing over an inherent limitation:

    the "shiftability" of a Lytro image is a function of the width of the image sensor

    If the goal of this is to produce useful stereo content that replicates the parallax seen by humans, then the image sensor needs to be at least as big as the average distance between two human pupils. That's roughly six centimeters. The Lytro's sensor is around six millimeters. Somehow I doubt they're going to increase their form factor by ten times in each dimension, and since the point of a Lytro is to avoid fancy lenses they can't bend the light path to compensate.
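
    A quick back-of-the-envelope check of how much that baseline difference matters (plain Python; the focal length, subject distance, and pixel pitch below are illustrative assumptions, not Lytro's actual specs):

      # Pinhole model: on-sensor disparity d = f * B / Z for baseline B,
      # focal length f, and subject distance Z (all in mm).

      def disparity_mm(baseline_mm, focal_mm, depth_mm):
          return focal_mm * baseline_mm / depth_mm

      FOCAL = 6.45          # assumed focal length, mm
      SUBJECT = 2000.0      # subject 2 m away, mm
      PIXEL_PITCH = 0.0014  # assumed 1.4-micron pixels, mm

      for label, baseline in [("~6 mm baseline (Lytro-scale)", 6.0),
                              ("~63 mm baseline (human IPD)", 63.0)]:
          d = disparity_mm(baseline, FOCAL, SUBJECT)
          print(f"{label}: about {d / PIXEL_PITCH:.0f} px of disparity")

    Under these assumptions the small baseline delivers roughly a tenth of the parallax of an eye-width one, which is the complaint above reduced to a single number.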

    • by dgatwood ( 11270 ) on Friday February 01, 2013 @09:01PM (#42767743) Homepage Journal

      No, but they might be able to avoid the lens entirely and do microlensing on a flat surface. For example, I could totally see the entire back of a cell phone being a light field camera, automatically throwing out data from where your fingers overlap the edges, by using capacitive sensors in some way. I mean, we're probably talking twenty or thirty years out here, but that's the direction I see things heading eventually. And that would give you a believable stereo spread, not to mention much more usable resolution.

    • by mr_exit ( 216086 ) on Friday February 01, 2013 @09:07PM (#42767781) Homepage

      As long as you've got enough parallax to work out the depth information in your scene, you can push the effect to recreate viewpoints that are wider than you have real data for.

      You will end up with tiny slivers of image that you don't have pixel data for when a foreground element diverges more than it did before, but that's easy to recreate, as sketched below. All post-converted 3D films have this problem to an even greater extent; there are algorithms out there to clone the surrounding pixels, or even use pixels from other frames if the object is moving through the scene.

      There are light field cameras out there that, instead of using a single chip, use an array of small cameras (think cell phone cameras). The Adobe one is 500 megapixels.

      See the research by Todor Georgiev http://tgeorgiev.net/ [tgeorgiev.net]. The Lytro camera is a nice cheap toy, but there are some stunning results from researchers.
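
      A minimal sketch of the "clone the surrounding pixels" idea (numpy; the image and hole mask are hypothetical inputs, and real post-conversion pipelines use far smarter inpainting):

        import numpy as np

        def fill_disocclusions(img, hole):
            """Fill missing slivers by cloning the nearest valid pixel on
            the same scanline. img: (H, W, 3) array; hole: (H, W) bool mask
            of pixels the shifted viewpoint has no data for."""
            out = img.copy()
            for y in range(hole.shape[0]):
                valid = np.flatnonzero(~hole[y])
                if valid.size == 0:
                    continue  # whole row missing; nothing to clone from
                missing = np.flatnonzero(hole[y])
                idx = np.clip(np.searchsorted(valid, missing), 0, valid.size - 1)
                left = valid[np.maximum(idx - 1, 0)]
                right = valid[idx]
                # pick whichever valid neighbour is closer
                nearest = np.where(missing - left <= right - missing, left, right)
                out[y, missing] = img[y, nearest]
            return out

      Scanline cloning like this is enough for thin slivers; the frame-borrowing trick mentioned above is what you reach for when the holes get wide.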

    • by Guspaz ( 556486 ) on Saturday February 02, 2013 @12:48AM (#42769055)

      Typical IPD is 54-68mm. IMAX film stock is 70mm wide. This is not an insurmountable problem, at least for professional use.

      Imagine a 70mm light field motion picture camera. Other than the fact that the data throughput would be positively insane, the requirements for physical size would be substantially less than for a current IMAX camera.

      I suspect that you can actually get away with less than the typical IPD and still produce a convincing effect. In which case, you can buy the required sensor today; you can get 48mm-wide medium format digital sensors, and there's nothing special about the sensor in the Lytro. It's the array of microlenses and software that make it special. So it would be possible today to build a Lytro motion picture camera with a 48mm digital sensor, and I suspect that 48mm is close enough to the typical IPD to produce a convincing effect. Such sensors also have the resolution to make light field work for a motion picture (50 MP models turned up on the first page of results on B&H), and the cameras themselves are smaller than most motion picture cameras (or even ENG cameras)...

      I suspect that the primary problem would be, again, the data throughput. Uncompressed 24fps 50-megapixel 36-bit images would pump out 41 gigabits per second... Compression would pretty much be required. If we use redcode as a benchmark (because motion picture productions are apparently happy enough with the quality of the compression to use it), where the minimum camera-supported compression ratio (on the RED ONE) is 8:1 and the highest is 12:1, this gives us about 5.1 Gbps and 3.4 Gbps... Heck, that's easy to handle. Existing communications tech can cope; you could have a single 10 Gbps Ethernet cable running out the back of your camera to an on-site storage box, and storing that sort of data rate isn't hard. Even a 4TB on-camera SSD module could store 156 minutes of footage, and handle those kinds of write speeds.
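
      The arithmetic above is easy to sanity-check (plain Python; the binary-gigabit convention, 2^30 bits, is an assumption that roughly reproduces the quoted figures):

        FPS, PIXELS, BITS_PER_PX = 24, 50_000_000, 36  # 12 bits x 3 channels
        GBIT = 2 ** 30                                 # binary gigabit

        raw = FPS * PIXELS * BITS_PER_PX               # uncompressed bits/s
        print(f"uncompressed: {raw / GBIT:.0f} Gbps")  # ~40 Gbps

        for ratio in (8, 12):                          # redcode-style ratios
            print(f"{ratio}:1 -> {raw / ratio / GBIT:.1f} Gbps")

        # minutes a 4 TB on-camera SSD would last at 12:1
        print(f"{4e12 * 8 / (raw / 12) / 60:.0f} min")  # ~148 min

      The results land within a few percent of the figures above; the exact numbers depend on rounding and on decimal versus binary gigabits.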

    • Comment removed based on user account deletion
    • I thought it was a limitation of the lens diameter, not the sensor chip. It doesn't really matter what size the other lenses and sensor are behind that, if you are capturing each ray that came in the front.
  • voxels (Score:2, Interesting)

    by Anonymous Coward

    Combine the data from 12 or so of these in a matrix and you have a really powerful, accurate, self-optimizing point cloud capture device for voxel 3D content.

    • My first thought was that this can do for "true" 3D recording what the Kinect couldn't because of its interference with other Kinects. We could put a bunch of these around someone and reconstruct a very complete 3D scene, including normal information (the camera knows what direction light is coming from), which is useful for motion capture, videobloggers who want a neat gimmick, and -porn-... and the latter has driven all sorts of innovations.

      And the geek in me is giddy at the thought of the data you could

      • I forgot to mention why recording a 3D scene is useful for anything but 3D scanning. I hope most will already know, but for those who don't: you could do all sorts of things with it. The first thing that comes to mind is that stereoscopic 3D viewing is easy to achieve with realistic results. You could also look at the subject from different angles, and post-processing could use the 3D data to make extremely accurate green-screen-type cutouts even without a green screen or anything like it. If you're clever,

    • by dfghjk ( 711126 )

      Sure are a lot of buzzwords applied in ignorance here.

  • by Sarusa ( 104047 ) on Friday February 01, 2013 @09:00PM (#42767739)

    'when the public really demands 3-D content'

    When it doesn't require glasses and doesn't give you headaches.

    • Re: (Score:2, Interesting)

      by harperska ( 1376103 )

      I wonder if it would be possible to make a 'light field' display which, rather than having each pixel emit light in all directions like current 2D and faux-3D displays, would be able to emit light with both the frequency and the vector that was detected by the camera. This would be true autostereoscopic 3D, as the emitted light would have the same properties as the original light, allowing the eye to naturally focus on it. I wonder if this would be possible by perfecting lenticular display technology, or if it

      • by gmueckl ( 950314 ) on Friday February 01, 2013 @09:47PM (#42768069)

        A lenticular lens array in front of an LCD screen is a nice do-it-yourself solution that almost does the trick. It makes an autostereoscopic display that can show more than 2 images in different directions, making it possible to move around in front of the screen and see a stereo image without glasses. However, there are a couple of limitations. The LCD resolution suffers tremendously, and the number of zones that you can create still isn't very high. Maybe it gets better with retina displays, but I'm not sure. Even paper printouts of 20 to 30 images at 600dpi are barely good enough.
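
        The resolution hit is easy to put numbers on (a sketch with an assumed panel size, not measurements of any real display): each viewing zone only gets every Nth pixel column, so:

          PANEL_W, PANEL_H = 3840, 2160  # assumed 4K LCD behind the lens array
          for n_zones in (2, 8, 16):
              per_view = PANEL_W // n_zones
              print(f"{n_zones:2d} zones -> {per_view} x {PANEL_H} per view")

        Even a 4K panel is down to sub-VGA horizontal resolution per view by the time there are enough zones for comfortable head movement, which is why denser panels would help.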

        Another interesting idea is this proposal: http://gl.ict.usc.edu/Research/LFD/ [usc.edu] - replace each pixel on a *huge* screen with a microprojector acting as a directional light source. It is insane in its own special way, but this research group has successfully thrown massive amounts of hardware at problems in the past.

      • by CityZen ( 464761 )

        If we could make displays at the same resolutions that we make image sensors, then it would be quite easy to make the display, since it would operate just like a Lytro, except in reverse (the Lytro uses a microlens array in front of a regular high-resolution image sensor).

        Now, performing the computations to know what to display, that's another story. Of course, you could just display the data from the Lytro camera directly.

      • by Dwedit ( 232252 ) on Friday February 01, 2013 @11:38PM (#42768699) Homepage

        There was an article earlier about Tensor Displays (slashdot link) [slashdot.org], (MIT link) [mit.edu], which used a sandwich of three high-refresh-rate LCD screens to simulate a light field by using the screens to selectively block light in multiple directions.

  • by Guppy ( 12314 ) on Friday February 01, 2013 @09:52PM (#42768113)

    In November, Lytro, the maker of the first light field camera for consumers, upgraded its viewer software to enable a feature called 'Perspective Shift.' In addition to refocusing pictures after they've been taken, Lytro audiences can now pivot between different virtual points of view, within a narrow baseline.

    It sounds like the techniques Lytro uses could make for a really good Borescope/Endoscope. Imagine being able to virtually shift your view to get another perspective (even if only a few millimeters), without moving your scope. If you could process the shifting fast enough, you might use it as a way to compensate for the motion of a beating heart or moving probe. Or upon reviewing a recording, re-focusing on a newly-found item of interest, even after you've pulled your scope out of the patient.

    It might also be used to build a compact yet superior type of fundus camera (current cameras are often rather bulky things). The Lytro has a single aperture, yet might be capable of imaging the retina in 3-D (it is a multi-layered structure). The light field info might even allow you to compensate for some kinds of cornea or lens aberration.

  • Here [flickr.com]'s a virtual focus photo I did a few years ago, placing the focal plane on a skew.

    If you take photos from a large enough set of positions with a normal camera and some time, you can get the same thing Lytro does, but only with still subjects.
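
    A minimal sketch of that trick, sometimes called synthetic aperture refocusing (numpy; the frame stack and per-shot camera offsets are hypothetical inputs, and np.roll is a crude stand-in for proper resampling). Shift each view in proportion to its offset and average; scene points at the chosen depth line up and stay sharp, while everything else blurs out:

      import numpy as np

      def refocus(frames, offsets_px, alpha):
          """frames: (N, H, W, 3) stills shot from laterally shifted spots;
          offsets_px: per-shot (dy, dx) offsets in pixels;
          alpha: shift scale; each value selects a different focal plane."""
          acc = np.zeros(frames[0].shape, dtype=np.float64)
          for frame, (dy, dx) in zip(frames, offsets_px):
              # np.roll wraps at the borders; a real version would crop
              acc += np.roll(frame, (int(round(alpha * dy)), int(round(alpha * dx))),
                             axis=(0, 1))
          return (acc / len(frames)).astype(frames.dtype)

      # sweep alpha to "refocus" after the fact:
      # near = refocus(frames, offsets, alpha=1.0)
      # far  = refocus(frames, offsets, alpha=0.3)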

  • Ha! Now all those nitpickers who complained that Deckard's inspection of Leon's photo in his Esper machine shows an impossible "perspective shift" will have to eat their words!

    I guess with a good-sized light field, you really can photograph around corners!

    • by gl4ss ( 559668 )

      As far as I could tell, it worked by calculating stuff from reflections in the picture.

      It was a 2D picture, after all, that he had; just one with crazy, crazy DPI.

      • by dzfoo ( 772245 )

        How can you and others comment on how it "worked," when it's a visual effects sequence in a fantasy film? It wasn't intended to prove any theoretical process, and I doubt that the sequence was conceived by physicists; it was intended to look futuristic and cool.

                  dZ.

      • Even a reflection will only give you a single POV from a traditional camera, no matter how high the resolution. But there's definitely a point in the sequence where Deckard tells it to track left, and part of the door edge moves across the background, revealing more of the woman and nightstand behind it. That's the part nitpickers complain about - looking "around" an object in the picture.

    • The nitpickers do not understand the scene, or follow Deckard's mindset as he examines the photo.

      The view was taken in one room, looking into a second room (hallway?) through a doorway. On the opposing wall (in the other room, looking through the doorway) is a convex silver mirror. In the right portion of the curved mirror is reflected the image of a partially open door that impinges into the room where the picture was taken. A full-length mirror set in the center of the door creates the correct incident a

      • Sorry for replying to my own post... correcting a slight error in my description.
        Erm, can't keep left and right straight... the cot/bed is on the right... and also, it is not unusual in remodeled Victorians converted to rooming houses to have a room that has an entrance at either end of the central hallway.

      • Watch this section, from 1:57 to about 2:08:

        http://www.youtube.com/watch?v=qHepKd38pr0#t=1m57s [youtube.com]

        You will see the movement of the door edge that reveals more detail as Deckard tracks and zooms. There's no possible way for a flat, static photo taken from a single POV to do this. It has to be 3D, or layered, or the Esper has to be doing some kind of interpolation from the distorted reflection.

        • That the Esper is applying interpolative corrections makes perfect sense for the scene. "Enhance" would seem to mean, "figure out what is going on in this picture and correct for possible distortion(s)."

          The tools we have today can do this. They can't do it automagically. It generally requires telling the software, "In this region of the picture, treat *this* curved edge as if it were straight." Doing so would result in an acceptable image.

  • by PPH ( 736903 )

    So when the public really demands 3-D content, we will be ready for it.

    I thought the public had already weighed in on 3D and their opinion is basically, "Meh".

  • Mostly they show up in creative mode, because you can have areas of the scene that can't be pulled into focus, and when shooting dirty glass. I've been really happy with the picture quality in general, though, and am sure that improvements in the software/algorithms will help a lot.
    https://pictures.lytro.com/tophertuttle/pictures/544030 [lytro.com]

    https://pictures.lytro.com/tophertuttle/pictures/544050 [lytro.com]

    https://pictures.lytro.com/tophertuttle/pictures/531986 [lytro.com]

    Also, this article makes for an interesting read.
    http://eclect [eclecti.cc]
