MIT Develops Holographic, Glasses-Free 3D TV 98

Posted by samzenpus
from the realer-than-real dept.
MrSeb writes "Engineers at the Massachusetts Institute of Technology (MIT) are busy working on a type of 3D display capable of presenting a 3D image without eye gear. What you've been presented with at your local cinema (with 3D glasses) or on your Nintendo 3DS console (with your naked eye) pales in comparison to what these guys and gals are trying to develop: a truly immersive 3D experience, not unlike a hologram, that changes perspective as you move around. The project is called High Rank 3D (HR3D). To begin with, HR3D involved a sandwich of two LCD displays and advanced algorithms for generating top and bottom images that change with varying perspectives. With literally hundreds of perspectives needed to accommodate a moving viewer, maintaining a realistic 3D illusion would require a display with a 1,000Hz refresh rate. To get around this issue, the MIT team introduced a third LCD screen to the mix. This third layer brings the refresh rate requirement down to a much more manageable 360Hz — almost within range of commercially produced LCD panels."
  • by mysidia (191772) on Thursday July 12, 2012 @07:13PM (#40633995)

    Do they need to add more LCD panels? :)

  • by ma++i+ude (580592) on Thursday July 12, 2012 @07:17PM (#40634027) Homepage
    "MIT Develops Holographic, Glasses-Free 3D TV"? Only if by "holographic" you mean "not holographic"
    • by zalas (682627) on Thursday July 12, 2012 @09:31PM (#40634987) Homepage

      Oh interesting, so they finally gave it a name. I remember coming across the 2-layer version of the display some time ago. Looks like they also have an interesting theoretical foundation to go with it; the abstract of the first paper from Gordon Wetzstein's page [mit.edu] gives a nice overview.

      What essentially is going on is that you can model light (at least when talking about things much larger than the wavelength of light) as a four-dimensional function (i.e. the intensity of light along all the possible rays that fill space), which is referred to in this research area as a "light field." Putting a mask somewhere in space will mask out a 2D extrusion of the mask shape in 4D space. Putting multiple masks at different planes will mask out the product of these 2D extrusions (and the extrusion angle varies as a function of depth). Hence, what they are doing is attempting to reconstruct the original 4D function by combining the unmasked portions at each time frame.

      For a more simplified view, you can think of this as trying to create a 2D picture through a sequence of special single-color 2D pictures created by placing stripe patterns oriented at a fixed set of angles on top of a light panel.

      If you've taken linear algebra, it is somewhat like decomposing a matrix into a sum of rank-one matrices, except here each component needs to be positive (masks cannot create "negative" light).

      • by jsh1972 (1095519) on Friday July 13, 2012 @03:04AM (#40636589)
        I'm not saying it's timecube, but it's timecube.
      • I truly should patent "negative light".

      • by mcgrew (92797) *

        What I didn't understand was "Instead of the complex hardware required to produce holograms." Holograms are pretty easy to do with film, all you need is a dark room, a laser, a lens, and a beam splitter. To see the image you simply shine a laser at the film after it's developed. We did this in an undergrad physics class I took way back in the seventies.

        It seems to me that if you had a high enough resolution display, you could view holograms on it by backlighting with lasers instead of LEDs, although making the actual movies would likely be difficult.

        • by Anonymous Coward

          What I didn't understand was "Instead of the complex hardware required to produce holograms." Holograms are pretty easy to do with film, all you need is a dark room, a laser, a lens, and a beam splitter. To see the image you simply shine a laser at the film after it's developed. We did this in an undergrad physics class I took way back in the seventies.

          It seems to me that if you had a high enough resolution display, you could view holograms on it by backlighting with lasers instead of LEDs, although making the actual movies would likely be difficult.

          Holographic photography is not the same thing as converting data into a hologram. Basically no one has figured out how to make a digital hologram.

          • by Carnildo (712617)

            Holographic photography is not the same thing as converting data into a hologram. Basically no one has figured out how to make a digital hologram.

            Digital holographic displays do exist, but they're strictly for research purposes right now: the resolution is horrible (last time I checked, a top-of-the-line display was 30 lines of 250 pixels), the computational requirements are huge (that 30-line display was driven by a multi-core, multi-GPU workstation generating a few frames per second), and the bandwidth

  • Yay! (Score:2, Funny)

    by sneakyimp (1161443)
    I can't wait to see another shitty Tupac. Wait, that's redundant.
  • by Anonymous Coward
    So useless for TV.
    • by acid_andy (534219)

      So useless for TV.

      Just hang four tellies on the wall. Or twelve if you're really popular. It then just needs something to identify which viewer is supposed to be watching which TV, as it could get annoying if it adjusted for another viewer glancing over from the other side of the room; but I imagine that could be even more of an issue with the single TV. Face tracking biased to whoever has been looking at the current show the longest?

      • by acid_andy (534219)

        So useless for TV.

        Just hang four tellies on the wall. Or twelve if you're really popular. It then just needs something to identify which viewer is supposed to be watching which TV, as it could get annoying if it adjusted for another viewer glancing over from the other side of the room; but I imagine that could be even more of an issue with the single TV. Face tracking biased to whoever has been looking at the current show the longest?

        Ah, looks like it doesn't do face tracking anyway and could work for multiple viewers. My bad.

    • Re:Just one viewer? (Score:5, Informative)

      by White Flame (1074973) on Friday July 13, 2012 @04:04AM (#40636823)

      No, this effectively broadcasts many views of the image through the entire range. Any viewer at any valid angle within the field of view should see a properly tracked perspective.

  • by Anonymous Coward on Thursday July 12, 2012 @07:52PM (#40634299)

    "Please state the nature of your medical emergency."

    • by pgpalmer (2015142)
      Which will lead to the Emergency Secretarial Hologram, Emergency Technician Hologram, Emergency Receptionist Hologram, Emergency Lawyer Holograms, and Emergency Company Representative Hologram.
  • by CyberVenom (697959) on Thursday July 12, 2012 @08:34PM (#40634589)

    The article at the first link gives a somewhat better explanation than the second.

    This is not quite a hologram, but it is a true multi-viewer solution without the need for head tracking or other dynamic tricks. It is a precomputed video stream displayed on precisely spaced, slightly higher-than-your-living-room-TV-refresh-rate, but otherwise normal LCD panels.

    Basically, the MIT guys have come up with algorithms to compute a set of three overlay transparencies, which selectively occlude or reveal certain pixels when viewed from certain angles due to parallax, such that one of many possible perspective images of a scene is produced depending on the angle from which this stack of overlays is viewed.

    The part they seem most proud of is that because these different perspective views are all of the same scene, many of the pixels are the same color from one perspective to another, so they only need to concentrate their parallax trick on making a select few pixels vary by angle, thus reducing the complexity of the problem to the point where it can actually be realized with consumer resolution LCD panels and attainable data rates.
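
    The parallax geometry behind those overlays can be sketched in a few lines. This hypothetical example (layer spacing, pixel pitch, and layer count are made-up numbers, not the prototype's specs) computes which pixel a ray crosses on each layer; the intensity seen along a ray is the product of those pixels' transmittances, which is why the image changes with viewing angle:

```python
import math

def ray_pixels(x_front, angle_deg, layer_gap_mm=5.0, pitch_mm=0.1, n_layers=3):
    """For a ray hitting the front layer at position x_front (mm) and
    travelling at angle_deg from the screen normal, return the pixel index
    it crosses on each layer. The displayed intensity along this ray is the
    product of the transmittances of these pixels, so changing the viewing
    angle changes which combination of pixels you see.
    All dimensions are illustrative, not the MIT prototype's."""
    t = math.tan(math.radians(angle_deg))
    return [round((x_front + k * layer_gap_mm * t) / pitch_mm)
            for k in range(n_layers)]

print(ray_pixels(10.0, 0))    # [100, 100, 100] -- head-on: same column on every layer
print(ray_pixels(10.0, 10))   # [100, 109, 118] -- oblique: parallax shifts deeper layers
```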

    • by Ozan (176854)

      Thank you for your explanation. Is it correct to say that the principle is similar to shutter glasses? But instead of occluding one eye at a time, one picture for exactly one perspective is shown at a given time, while the other perspectives (for different viewing directions) are not displayed. This is why such high refresh rates are needed.
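
      That reading matches the summary's numbers, which can be sanity-checked with back-of-envelope arithmetic: if each video frame is shown as N time-multiplexed mask patterns, the panels must refresh at N times the video frame rate. (The 24fps base rate and subframe counts below are illustrative assumptions, not figures from the paper.)

```python
def panel_refresh_hz(video_fps, subframes_per_frame):
    """Required LCD refresh rate when each video frame is displayed as a
    sequence of time-multiplexed mask patterns (subframes)."""
    return video_fps * subframes_per_frame

# Hypothetical numbers: at 24 fps, 15 subframes per frame already demands
# a 360 Hz panel; squeezing in ~42 subframes would need ~1000 Hz.
print(panel_refresh_hz(24, 15))  # 360
print(panel_refresh_hz(24, 42))  # 1008
```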

  • Think of this like an integral display: https://en.wikipedia.org/wiki/Integral_imaging#Description [wikipedia.org]

    Except that instead of using microlenses to bend the rays, they are using the layered screens to produce virtually bent rays. The high FPS is because they can only produce one set of virtually bent rays for any one frame, so they need as many frames as they want points of view. IOW what integral displays need in extra pixels this display needs in extra frames.

    To put it another way, this is to integral what par

    • by Exrio (2646817)
      Also, they are exploiting the property of most real 3D scenes that not everything changes from one POV to another (i.e. the middle of a diffusely lit diffuse surface doesn't) to try to cram more POVs into fewer frames.
    • by zalas (682627)

      The analogy kind of works and kind of doesn't. A parallax barrier has an image layer and a fixed mask layer. What these guys did was to allow for multiple layers with time-varying patterns and optimize the pattern on each layer so as to create a better image. So it's more like "this is to integral what parallax on crack is to lenticular."

      • by Exrio (2646817)

        I didn't think anything of the time-varying, but maybe I'm just spoiled because in my field we convert from PCM to PDM and back, every day for breakfast, and once again for dinner, and the mindset of resolution--time equivalence sort of sticks with you.

        But yes, your version is more accurate.

  • So I'm assuming that this requires the source to be supplying the additional content at 1000Hz (or whatever refresh rate) to cover the full range of viewing angles? So now all we need are video media 1000 times bigger, and graphics chips 1000 times faster to supply the frames.
    • by Exrio (2646817)

      No. The content itself is at a normal video frame rate, the extra frames are computed out of a map of the deltas between POVs at the displaying site.

      Of course you still need to store that in the video somehow, but it's only the inevitable overhead of holographic vs. 2D, which isn't going to be anywhere near 1000 times bigger and is only going to get smaller as compression methods tailored to it are developed.

    • by Beardydog (716221)
      Do you currently run video at 1Hz?
      • by hughJ (1343331)
        Sorry, I wasn't using "1000" to mean a specific amount, it was just a convenient way of asking if the source data would dramatically balloon as there would have to be huge amounts of additional data to support all these unique POVs rather than just a pair of stereo POVs.

        I guess basically where I'm coming from is asking myself the question: what is needed in order for a display using this 3D technology to replace the present 3D HDTV implementation while keeping at least a 1080p @ 60Hz field per eyeball? Wh
  • THEY'VE DONE IT (Score:5, Informative)

    by wisebabo (638845) on Thursday July 12, 2012 @10:26PM (#40635325) Journal

    This is really a significant breakthrough. I mean good looking, glasses free 3D (please look at the video) which means MULTIPLE SIMULTANEOUS VIEWERS using CHEAP components. The only difficulty is the compute power requirement is a little high but that's nothing that won't be solved quickly thanks to Dr. Moore. (I think they are also able to use GPUs so massive cheap parallelism can overwhelm the problem).

    A previous poster brought up the good point that it wasn't clear if the scene was pre-rendered. If/when it can be done on the fly (just a matter of CPU power), think of the applications. CAD, GAMES!

    In 10 years (or less hopefully) we should have really large (80") true 3D displays that a bunch of people can stand around and touch (like what those guys in Perceptive Pixel, recently bought by Microsoft*, do). Talk about science fiction.

    I actually submitted this story a day or two ago but I didn't understand how it worked (and still really don't get it, the math is beyond me). Anyway I'm glad it's getting the attention it deserves.

    *Let's hope that Microsoft doesn't kill it, or use the patents it acquired to block progress.

    • by khipu (2511498)

      This isn't "multiple simultaneous viewers".

      • Yes, it is: anyone looking at the screen from any angle within the supported field of view will see the (approximately) appropriate perspective of the displayed objects. No head tracking, fancy glasses, etc. required. Very much like an animated hologram in appearance, though the technique used is different.

      • by gl4ss (559668)

        it is.

        but the compute requirement is a bit more than just "a little high" for games.

        • by Anonymous Coward

          Maybe something will finally use those 8-core processors and 1536-core [geforce.com] graphics cards.

  • ... because I saw it on Star Trek.

  • Anyone know if it's feasible to construct a camera that records footage that this screen would output? Would they just interpolate between multiple cameras?
    This is the first bit of 3D display tech I'm genuinely interested in, the current stereoscopic implementations have too many compromises.
    • by Exrio (2646817) on Friday July 13, 2012 @02:26AM (#40636455)

      The camera that films video for this display is a light-field camera: https://en.wikipedia.org/wiki/Light-field_camera [wikipedia.org]

      Surprisingly they're already being sold to mere mortals, but those are early models that are not mature enough to be used for video production (the Lytro is for consumers but can only take pictures, the Raytrix can take video but is for industrial applications).

      In the meantime while these cameras mature, any way you can turn imagery into 3D models is fair game, maybe a wide-angle high resolution Kinect, or interpolation from two normal cameras (it's a bit more complex than interpolation but you get the idea), or mere image recognition a la gimmicky 2D-to-3D conversion, etc.
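
      The two-camera idea can be sketched in toy form: given a per-pixel disparity map, forward-warp one view partway toward the other. This naive 1-D example (made-up data, no occlusion handling) shows both the synthesized intermediate view and the "hole" artifacts that real view-synthesis systems must fill in:

```python
import numpy as np

def intermediate_view(left, disparity, alpha):
    """Naive forward-warp of a 1-D scanline: each left-image pixel x is
    shifted by alpha * disparity[x] toward the right camera's viewpoint.
    alpha=0 reproduces the left view, alpha=1 approximates the right one.
    Real view synthesis must also handle occlusions and holes; this toy
    version just overwrites on collisions and leaves gaps as zero."""
    out = np.zeros_like(left)
    for x in range(len(left)):
        tx = int(x + alpha * disparity[x] + 0.5)  # round to nearest pixel
        if 0 <= tx < len(out):
            out[tx] = left[x]
    return out

scanline = np.array([9, 9, 5, 5, 9, 9])   # a "near" object (value 5) on background 9
disp     = np.array([0, 0, 2, 2, 0, 0])   # near pixels have larger disparity
print(intermediate_view(scanline, disp, 0.5))  # [9 9 0 5 9 9] -- note the hole at index 2
```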

  • Is this going to be yet another device that could screw up a young child's vision and/or give people headaches?

  • The migraine test is the only true test for all 3D viewing technology.
  • Inspired by the tapestry on pg 560 of Skylark III [gutenberg.org] I wanted to build an art piece that contained thousands of suspended beads -- as you walked around it, the beads would align into images that could only be seen from that one spot; it would be a random (although attractive) array of colors otherwise.

    This work here seems similar, although infinitely more practical and realizable. Very nice work.

  • I wonder what the precise computational cost is of rendering complex 3D objects and scenes on this kind of technology.

    I can't wait to have one of these babies to run Taodyne's 3D presentations software [taodyne.com] on it! One of the things which is tricky with current multiscopic displays is how to convert existing 2D movies. Ideally, you want to do that in real time, something that Tao Presentations can do for Alioscopy or Tridelity displays, but which is more computationally expensive for Philips/Dimenco displays.
