MIT Fiber Points To Woven Glasses-Free 3D Displays 52

MrSeb writes "Electrical engineers and material scientists at MIT have created a fiber-borne laser that could be woven to form a flexible display that could project different 3D images in any number of directions, to any number of viewers. MIT's fiber is similar to standard telecoms fiber, but it has a tiny droplet of fluid embedded in the core. When laser light hits the fluid, it scatters, effectively creating a 360-degree laser beam. The core is then surrounded by layers of liquid crystal, which can be controlled like 'pixels,' allowing the laser light to escape from specific points anywhere along the length of the fiber. This means that you could have a display that shows one picture on the 'front' and another on the 'back' — or different, glasses-free 3D images for everyone sitting in front and behind. In the short term, the laser fiber is more likely to have a significant application in photodynamic therapy, an area of medicine where drugs are activated using light. Photodynamic therapy is one of the only ways to treat cancer in a relatively non-invasive and non-toxic manner. MIT's laser could be threaded into almost any part of the body, where the ability to produce pixels of laser light at any point along its length would make it a highly accurate device."
Comments Filter:
  • Re:No headache? (Score:5, Informative)

    by JustinOpinion ( 1246824 ) on Monday March 12, 2012 @10:21AM (#39325781)

    Is there a word for where both eyes' 'beams' are pointing to?

    That's usually called convergence. It's one of at least 5 ways that humans infer distances and reconstruct the third dimension from what they see:
    1. Focal depth: based on how much the eye's lens has to focus
    2. Convergence: based on the slight differences in pointing of the two eyes
    3. Stereoscopy: based on the slight differences between the left and right image
    4. Parallax: the different displacements/motions of objects at different distances (e.g. when you move your head)
    5. Visual inference: reconstructing using cues like occlusion, lighting, shadows, etc.
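
    A rough numeric illustration of cue #2 (convergence): the angle between the two eyes' lines of sight shrinks rapidly with distance, which is why the cue is strong up close and nearly useless far away. The 63 mm interpupillary distance below is an assumed typical value, not something from the thread:

    ```python
    import math

    def vergence_angle_deg(distance_m, ipd_m=0.063):
        """Angle between the two eyes' lines of sight when fixating a
        point at the given distance (assumes ~63 mm eye separation)."""
        return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

    # The cue weakens quickly: nearby objects produce a much larger
    # vergence angle than distant ones.
    for d in (0.3, 1.0, 3.0, 10.0):
        print(f"{d:5.1f} m -> {vergence_angle_deg(d):.2f} deg")
    ```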

    Unless all 5 of those agree, the image won't look 'truly 3D': it will seem wrong and in many cases can cause headaches or nausea (your brain is getting conflicting information for which there is no physically-correct solution). The reason that current 3D systems fail is that they don't match all 5. A regular 2D movie (or a photograph, etc.) gives you #5 and that's it, and this actually works remarkably well. Glasses-based 3D systems try to trick you by giving each eye a slightly different image, which adds #3, but since 1, 2, and 4 are still wrong, the overall effect feels weird: your eyes still have to point at, and focus on, the movie screen. (It's even worse for 3D TV, since you are focusing on something relatively close to you.)

    The reason this happens is precisely because a movie/TV screen has spatial resolution (each pixel is different) but no angular resolution (the image on the screen is the same regardless of where your head/eyes are positioned). If you could add back in the angular information (with enough resolution), then you could create an arbitrary light field that is indistinguishable from a physically-realistic light field. If done right in terms of angular resolution and computing a physically-correct light field, then this would give you 1, 2, 3, and 4. (And 5 also, if what's being projected is a realistic scene with proper shadowing and so forth.) If the light field is properly created, each eye will get a slightly different image (since each eye is at a slightly different angle with respect to the screen); these images will change as you move your head around; and your eyes will in fact NOT focus or converge on the location of the screen: they will focus and converge on the virtual image being created by the light field emanating from the screen. (This is similar to a hologram, which can be a two-dimensional sheet and yet reconstruct the light field that would come from a three-dimensional object, and can create virtual images that are not in the plane of the sheet.)
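
    The spatial-plus-angular idea can be sketched as a toy "light field" array, where each pixel emits independently into a few discrete directions. All sizes and values here are made up for illustration:

    ```python
    import numpy as np

    # Toy light field for a 1D strip of 8 "pixels", each able to emit a
    # different intensity into 4 discrete directions (hypothetical numbers).
    n_pixels, n_angles = 8, 4
    light_field = np.zeros((n_pixels, n_angles))

    # Program the same physical strip to show a ramp toward the left viewer
    # (angle bin 0) and the reversed ramp toward the right viewer (bin 3).
    light_field[:, 0] = np.linspace(0.0, 1.0, n_pixels)
    light_field[:, 3] = np.linspace(1.0, 0.0, n_pixels)

    def image_seen_from(angle_bin):
        """Each viewing direction samples its own slice of the light field."""
        return light_field[:, angle_bin]

    # Two viewers at different angles see entirely different pictures,
    # which is exactly what per-pixel angular control makes possible.
    left, right = image_seen_from(0), image_seen_from(3)
    ```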

    The prototype being demonstrated in this article is not good enough to do that, mind you: they don't have enough angular resolution to trick your eyes. However, that's where this technology is headed, and if it's done at high enough resolution, we will finally get proper 3D: where we're not just tricking your eyes, but actually projecting the correct light field towards the viewer.

  • Re:No headache? (Score:5, Informative)

    by JustinOpinion ( 1246824 ) on Monday March 12, 2012 @10:40AM (#39325925)
    For those with access, here's the actual scientific article:
    Alexander M. Stolyarov, Lei Wei, Ofer Shapira, Fabien Sorin, Song L. Chua, John D. Joannopoulos & Yoel Fink, "Microfluidic directional emission control of an azimuthally polarized radial fibre laser," Nature Photonics (2012), doi:10.1038/nphoton.2012.24

    Here is the abstract:

    Lasers with cylindrically symmetric polarization states are predominantly based on whispering-gallery modes, characterized by high angular momentum and dominated by azimuthal emission. Here, a zero-angular-momentum laser with purely radial emission is demonstrated. An axially invariant, cylindrical photonic-bandgap fibre cavity filled with a microfluidic gain medium plug is axially pumped, resulting in a unique radiating field pattern characterized by cylindrical symmetry and a fixed polarization pointed in the azimuthal direction. Encircling the fibre core is an array of electrically contacted and independently addressable liquid-crystal microchannels embedded in the fibre cladding. These channels modulate the polarized wavefront emanating from the fibre core, leading to a laser with a dynamically controlled intensity distribution spanning the full azimuthal angular range. This new capability, implemented monolithically within a single fibre, presents opportunities ranging from flexible multidirectional displays to minimally invasive directed light delivery systems for medical applications.

    In answer to your question, no, this isn't a hologram, although in some sense it achieves a similar goal. Regular screens control the emission of light as a function of position. Holograms control not just the intensity of the emanating light but also the phase; this phase information carries all the extra information about the light field passing through a given plane. This new device controls the intensity and angular spread of the light coming from each pixel, thereby also controlling the full shape of the light field being emitted from the plane of the screen.

    With both a hologram and this directional-emission concept, you're controlling the angular spread of the light coming from each point, thus fully specifying the light field and creating 'proper 3D' that is physically realistic and fully convincing. (Assuming you have enough angular resolution in your output to create the small differences the eye is looking for, of course.)
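
    The phase-controls-angle point can be illustrated numerically: in the Fraunhofer (far-field) approximation, the radiated pattern is roughly the Fourier transform of the aperture field, so a linear phase ramp across the plane steers the beam. This is a generic diffraction sketch, not the paper's actual model:

    ```python
    import numpy as np

    # Hypothetical 1D aperture, 256 samples across. A flat (uniform) phase
    # radiates straight ahead; a linear phase ramp tilts the wavefront.
    n = 256
    x = np.arange(n)
    flat = np.ones(n, dtype=complex)              # uniform phase
    ramp = np.exp(1j * 2 * np.pi * 8 * x / n)     # tilted wavefront

    def far_field_peak(aperture):
        """Index of the brightest direction in the far-field pattern
        (Fraunhofer approximation: far field ~ Fourier transform)."""
        spectrum = np.abs(np.fft.fft(aperture)) ** 2
        return int(np.argmax(spectrum))

    # The flat wavefront peaks in direction bin 0; the phase ramp moves
    # the beam to bin 8, without touching the intensity profile.
    print(far_field_peak(flat), far_field_peak(ramp))
    ```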

    As for why they are using a laser as the source light, it's mostly because they want detailed polarization control. (Coupling lasers into fiber-optics is well-established technology for telecommunications.) By controlling the exact mode of the laser-light propagation through the fiber, they can control the polarization of the light that shines out of the fiber, and thereby use conventional tricks to modulate that light. In particular, in an LCD screen, small fields are used to re-orient liquid-crystal molecules, which then either extinguish or transmit the light (based on whether the orientation of the LC molecule is aligned with the polarization of the light).
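
    The extinguish-or-transmit behavior described above is just Malus's law. A minimal sketch, assuming an ideal lossless liquid-crystal cell acting as a rotatable analyzer:

    ```python
    import math

    def lc_transmission(polarizer_angle_deg):
        """Malus's law: fraction of polarized light transmitted through an
        analyzer rotated by the given angle (ideal, lossless assumption)."""
        theta = math.radians(polarizer_angle_deg)
        return math.cos(theta) ** 2

    # Rotating the liquid-crystal orientation sweeps a pixel from fully
    # transmitting (aligned, 0 deg) to fully extinguishing (crossed, 90 deg).
    for angle in (0, 45, 90):
        print(angle, round(lc_transmission(angle), 3))
    ```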

    Overall it's an ingenious trick: have a light fiber emit light with controlled polarization. Then have a series of LC pixels on the outside of the fiber, whose orientation can now not just modulate the intensity of emission as a function of position along the fiber, but also as a function of angle for each position along the fiber. The end result is that you control the light field emanating from the device, and so can (in principle) reconstruct whatever full-3D image you want.

    Of course, the prototype in the article only has four LC channels along the fiber: enough to create a different image on the front vs. the back of the screen, but not nearly enough to create realistic 3D. Also, they are only controlling the angle in one direction (around the fiber axis).

  • Re:No headache? (Score:5, Informative)

    by rgbatduke ( 1231380 ) on Monday March 12, 2012 @11:02AM (#39326157)
    If I understand this, the real possibility it enables is a true holographic display, not split images. The point is that one can deliver light with both amplitude and phase information to a fine-grained pixel grid. In principle, then, one can create outgoing waveforms of arbitrary shape, leading one to Casimir's prophetic statement (that I'm trying to recall, not look up): "If you see a lion in a cage, you cannot be certain that there really is a lion in the cage, as there could instead be a peculiar charge-current density that gives rise to the appearance of a lion". This is actually more formally stated as the Casimir Paradox -- because the solution to the EM field equations can be written as an integral equation with a surface (inhomogeneous) term, one can always reproduce the solution exterior to a closed subvolume produced by sources within that subvolume with a surface charge-current distribution that produces the same exterior solution.
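
    A compact statement of that surface-equivalence result (Love's equivalence principle; the notation here is standard textbook notation, not from the comment):

    ```latex
    % Love's equivalence principle: the field exterior to a closed
    % surface S can be reproduced by equivalent surface currents on S,
    \mathbf{J}_s = \hat{\mathbf{n}} \times \mathbf{H}\big|_S, \qquad
    \mathbf{M}_s = \mathbf{E}\big|_S \times \hat{\mathbf{n}},
    % where \hat{\mathbf{n}} is the outward normal on S. These currents
    % radiate the original field outside S and the null field inside,
    % which is exactly the "lion in the cage" ambiguity.
    ```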

    The lasers themselves would then be the charge-current distributions. The interesting question would be how to modulate them with the requisite holographic encoding including phase information, at sufficient resolution to produce a clean 3D image.
