Researchers Develop Genuine 3D Camera

cylonlover writes "Cameras that can shoot 3D images are nothing new, but they don't really capture three-dimensional moments at all — they actually record images in stereoscopic format, using two 2D images to create the illusion of depth. These photos and videos certainly offer a departure from their conventional two-dimensional counterparts, but if you shift your viewpoint, the picture remains the same. Researchers from Ecole Polytechnique Federale de Lausanne (EPFL) hope to change all that with the development of a strange-looking camera that snaps 360 degrees of simultaneous images and then reconstructs the images in 3D."
  • Does anyone know if the Microsoft Kinect is classified as a "true" 3d camera under these criteria?
    • Negative. The Kinect is a regular camera plus an IR range camera. The IR range camera can figure out depth based on IR returns, but it can't see anything from any additional angles, making it just as fixed as stereoscopy.

      • Re:Quick question (Score:5, Insightful)

        by marcansoft ( 727665 ) <hector@marcansoft.UMLAUTcom minus punct> on Saturday December 11, 2010 @02:20AM (#34521820) Homepage

        Nor can the camera in the article. They keep talking about "being able to see the scene from any point", but that's a load of bullshit. All they've done is combine a 360 camera array (what Street View does) with stereoscopic vision (what regular 2-camera 3D does) to get a 360 view with depth information. So now you can look around in a scene in 3D, but you can't change your position. The camera still sees the scene only from one viewpoint; it's just that it has a full hemispherical field of view plus depth/3D info. Cool? Yes, but hardly a breakthrough, and definitely nothing like what they claim it does.

        If the camera can't see something because it is obscured by another object, then it can't see it, period. The camera has a bit more info due to the use of multiple lenses somewhat offset from each other, but that's just like regular stereoscopic vision, and your viewpoint is still severely limited. You can do a better job of 3D scene reconstruction with three or four Kinects in different places than with this, since then you can actually obtain several perspectives simultaneously and merge them into a more complete 3D model of the scene.
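        For what it's worth, the depth information being discussed comes from the standard rectified-stereo relation; here is a minimal sketch with made-up numbers (the textbook formula, not code from the article):

        ```python
        # Textbook depth-from-disparity for an ideal rectified stereo pair
        # (standard relation, not from TFA; all numbers are made up).
        def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
            """Z = f * B / d."""
            return focal_px * baseline_m / disparity_px

        print(stereo_depth(1000.0, 0.06, 20.0))  # 20 px disparity -> 3.0 m
        print(stereo_depth(1000.0, 0.06, 2.0))   # 2 px disparity  -> 30.0 m
        ```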

        • Re:Quick question (Score:5, Insightful)

          by marcansoft ( 727665 ) <hector@marcansoft.UMLAUTcom minus punct> on Saturday December 11, 2010 @02:25AM (#34521830) Homepage

          Seriously, Slashdot can't handle the degree sign either? That's ISO-8859-1 for fuck's sake, not even one of the weirder Unicode characters.

          • by mangu ( 126918 )

            Doesn't work with HTML codes [ascii.cl] either; I tried writing it as &deg; and as &#176;, and neither worked.

          • Slashdot, rather than using one of the existing whitelists of Unicode characters, rolls its own. It contains all the characters that more than 100 posts have complained about not being supported. If you're bored and want to read something entertaining, find some of the posts explaining the rationale for the current state of Slashdot's Unicode support. It's really scary to think that people who consider those reasons to be valid arguments are writing code that we're using...
        • "Nor can the camera in the article."

          Nor does the article discuss how the "3D" images are to be viewed, beyond a very vague "...which is no longer limiting thanks to the 3D reconstruction."

          Holograms? Those stupid-fucking glasses? A planetarium (that would actually make the most sense)?

        • I'm glad to see people calling "bullshit" on this. I'm big into developments like PhotoSynth/Bundler/PMVS and other interesting 3D photogrammetry, so this is close to my heart. I'd just like to clear up one confusion, though: "You can do a better job of 3D scene reconstruction with three or four Kinects in different places[...]" Unfortunately you can't, as the Kinects use structured light reconstruction, and the IR light patterns from multiple Kinects would confuse each other.
          • by psy0rz ( 666238 )
            What if you somehow alternated the IR beams, so they're not on at the same time?
          • by Yvanhoe ( 564877 )
            They confuse each other a bit, but you can still do some things:
            http://www.youtube.com/watch?v=5-w7UXCAUJE [youtube.com]

            Also, the Kinect as it is today does not easily allow combinations, but it is not hard to imagine different IR frequencies being used to prevent interference, or even blinking patterns with a phase difference.
          • Ah, but you see, the Kinects *know* what pattern to expect (they correlate with a known pattern) and ignore extraneous data, so in practice the interference between two or three kinects is minimal and only results in a few scattered small "holes" (that you fill in with data from the other device). I didn't think it would work at first, but, in fact, it does.
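            A rough sketch of what "fill in with data from the other device" could look like, assuming two depth maps already registered into the same view; the arrays and the zero-means-no-return convention are hypothetical, not the Kinect API:

            ```python
            import numpy as np

            # Fill dropout "holes" in one device's depth map from a second,
            # already-registered map. Hypothetical arrays; real multi-Kinect
            # fusion also needs extrinsic calibration between the devices.
            def fill_holes(primary: np.ndarray, secondary: np.ndarray) -> np.ndarray:
                merged = primary.copy()
                holes = merged == 0  # pixels with no structured-light return
                merged[holes] = secondary[holes]
                return merged
            ```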

        • Comment removed based on user account deletion
        • by TheLink ( 130905 )

          The camera has a bit more info due to the use of multiple lenses somewhat offset from each other, but that's just like regular stereoscopic vision, and your viewpoint is still severely limited.

          It doesn't have to be like regular stereoscopic vision. The clever bit wouldn't be so much in the camera positions.

          It would be in the image/scene processing: http://news.cnet.com/8301-13580_3-9793272-39.html [cnet.com]
          See the video too: http://www.youtube.com/watch?v=xu31XWUxSkA [youtube.com]

          Based on both videos, Adobe's tech looks more impressive to me. And they did that years before.

        • If you placed several of these in a stadium with high-end digital cameras, you could emulate a full 3D experience. Sure, you could hide spots if you wished, but for entertainment and mapping it would be pretty awesome. Of course, the software would have to be able to handle the input from multiple locations and correlate the data between them, but still, that would be amazing.

      • Where do you get the idea that having a bunch of cameras looking outward from a single point will be any more effective at doing 3D than two cameras set up to do stereoscopy? (Let's assume no other differences here.)

        This thing can't magically look around corners just because it's looking outward at different angles.

        • (and yes, I know that this thing is using software to compute the distance of objects, but that isn't anything new. We've been able to do the same thing with two images for a long time)

        • by Anonymous Coward

          Have you seen the video of two Kinects rendering a 3D object? As long as one sees it, you have some of the 3D info.

      • Kinect is not exactly a ranging camera; it is based on structured light processing. There are three classes of 3D camera:
        1. Stereoscopic - the most prevalent kind, meant for human vision
        2. Time of flight - these are true IR ranging cameras that get depth information from the time the light takes to reflect off the object. Mostly used for machine vision, since they provide true depth numerically for each pixel (a quick numeric sketch follows after this list).
        3. Structured light - They shine a patterned light out and analyze the way the light di
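        As promised in item 2, the arithmetic behind time-of-flight ranging; a trivial sketch, not tied to any particular device:

        ```python
        # depth = speed_of_light * round_trip_time / 2; numbers are made up.
        C = 299_792_458.0  # speed of light, m/s

        def tof_depth(round_trip_s: float) -> float:
            return C * round_trip_s / 2.0

        print(tof_depth(20e-9))  # a 20 ns round trip is roughly 3 m
        ```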

      • Re:Quick question (Score:4, Informative)

        by Rhaban ( 987410 ) on Saturday December 11, 2010 @05:52AM (#34522254)

        What about the 2-Kinects video where the scene was shown from the viewpoint of a non-existent camera located somewhere between the two Kinects?

          What about the 2-Kinects video where the scene was shown from the viewpoint of a non-existent camera located somewhere between the two Kinects?

          Synthesizing depth information from the differences between simultaneous stereographic images, sufficient to produce images from any point between the cameras that took them, was done in software (and available in lots of consumer packaged software, though I don't think any of it was particularly popular) long before Kinect (I first saw
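          For the curious, a deliberately naive sketch of that kind of in-between view synthesis (my own illustration, not any particular product's method):

          ```python
          import numpy as np

          # Shift each left-image pixel by a fraction t of its disparity to
          # fake a camera part-way toward the right view. Forward warping
          # only, no occlusion handling; real packages did far more.
          def interpolate_view(left: np.ndarray, disparity: np.ndarray, t: float) -> np.ndarray:
              h, w = disparity.shape
              out = np.zeros_like(left)
              ys, xs = np.indices((h, w))
              new_x = np.clip(np.rint(xs - t * disparity).astype(int), 0, w - 1)
              out[ys, new_x] = left[ys, xs]
              return out
          ```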

    • Not by default. But it can be arranged [youtube.com].

  • BS... (Score:2, Insightful)

    by markass530 ( 870112 )
    So I RTFA and WTFV, and the asshole at the computer put on some fucking glasses. I call shenanigans.
    • by Logopop ( 234246 )

      One of the major problems with today's 3D technology is that the brain of the viewer is used to a specific distance between the eyes. If the distance between the camera lenses of a 3D camera is not the same as the distance between the eyes, your brain will generate a distorted depth image (too shallow, too deep) and you might end up with a major headache (as some experience). Not to mention that tele/wide lenses cause additional problems of the same nature. To represent a movie that was correct and natural
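      A toy first-order model of the distortion being described; the 0.065 m eye separation is a typical figure, and viewing distance and screen geometry are ignored, so treat this as a sketch:

      ```python
      # With matched focal lengths, a point at depth Z shot with camera
      # baseline b_cam but viewed with eye separation b_eye appears at
      # roughly Z * b_eye / b_cam (simplified model).
      def perceived_depth(z_m: float, b_cam_m: float, b_eye_m: float = 0.065) -> float:
          return z_m * b_eye_m / b_cam_m

      print(perceived_depth(10.0, 0.13))  # doubled baseline -> looks ~5 m deep
      ```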

    • I don't think the video he was looking at was even rendering the red/blue stereoscopic effect when he put them on...
  • of Stereoscopic....it's polyscopic? I dunno...this still seems like more of the same.

  • by Anonymous Coward

    Unless it's doing a lot of moving around, it's just stereoscopic on steroids. If it stays in a fixed position, even though it has more than two cameras, all the objects are at fixed points. Until it can accurately judge the height, width and depth of an object without faking it in reconstruction, or making an educated guess - it's just more of the same. Humans suffer from the same limitations, but they fix this by moving the viewpoint around until a coherent 3D image is constructed.

    Unless you have cameras

  • I never knew I was using such worthless vision capabilities until now.

    I do hope to upgrade to the far superior bug eyes which will allow me to truly see in three dimensions.

      I do hope to upgrade to the far superior bug eyes which will allow me to truly see in three dimensions.

      Wouldn't that really require a phased array?

  • that's not 3d (Score:2, Redundant)

    by catbutt ( 469582 )
    It may be 360-degree, but it's not 3D. It doesn't process depth any more than a traditional 2D camera; it just takes a wider angle of view.
      Yeah, I can't see any difference between this and a fish-eye lens with a lat/long transform applied to it.

      Two Canon 5Ds with 8mm lenses a foot apart would be considerably more effective and cover the same FOV.
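      For reference, the kind of lat/long transform being alluded to; a sketch for an idealized equidistant fisheye with a 180-degree field of view (my illustration; a real lens needs a measured distortion model):

      ```python
      import numpy as np

      # Map a lat/long direction to normalized fisheye image coordinates,
      # assuming an ideal equidistant fisheye looking down +z.
      def latlong_to_fisheye_uv(lon: float, lat: float) -> tuple:
          x = np.cos(lat) * np.sin(lon)
          y = np.sin(lat)
          z = np.cos(lat) * np.cos(lon)
          theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle off the optical axis
          r = theta / (np.pi / 2.0)                 # equidistant: r proportional to theta
          phi = np.arctan2(y, x)
          # r > 1 means the direction is behind the lens; discard those.
          return 0.5 + 0.5 * r * np.cos(phi), 0.5 + 0.5 * r * np.sin(phi)
      ```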

  • The summary and article make it seem like this is some revolutionary *3D* device. It isn't. What it does to create 3D imagery has been around for a long long time (done in software, perhaps on a dedicated chip). The only newsworthy thing about this is that it can do very large panoramas.

  • ...you still have your work cut out for you, blade runner.

  • by martin-boundary ( 547041 ) on Saturday December 11, 2010 @03:01AM (#34521898)
    Whoever tagged the story "france" got it wrong. The *real* Ecole polytechnique is of course in France, but this one is in Switzerland.
  • No. (Score:5, Insightful)

    by The Master Control P ( 655590 ) <ejkeever@nerdNETBSDshack.com minus bsd> on Saturday December 11, 2010 @03:11AM (#34521910)
    There's only one kind of "genuine" 3D camera, and it requires very special film and either absolute stillness or high-power pulsed lasers. We call the process "holography," and if it doesn't do that, it's not a real 3D "camera."

    Words mean things.
    • That's not even 3D. A true 3D "camera" would capture a sample at every point in the volume being captured. That means it would show what's inside objects too. Put another way, if I take a 3D picture of a house, it should look the same regardless of where I happen to be standing with the camera.
      • Re: (Score:3, Interesting)

        by CityZen ( 464761 )

        We should use the appropriate terminology: light fields. A traditional camera captures a point sample of a light field: you see a correct image from one point of view. A hologram captures a 2D planar sample of a light field: you see a correct image from any perspective that looks through the film (as if it were a window). To capture a volume sample of a light field is not really possible (at least, not at the same instant in time), since that requires having a sensor placed everywhere in the volume, and of

    • Does anybody know what the problem with true holographic cameras is? High power pulse lasers are available. What would the requirements be for a sensor to record the interference pattern?

  • It sounds like this is a combination of the Kinect and the Google Street View or yellowBird [yellowbird...eality.com] camera. I had to read the article and watch the video twice, because initially it sounded like they were promising more than this could do. Turning a town or campus into a 3D model for a game sounds quite doable; you just need to move the camera around a ton as it records. As for getting a different perspective at a concert, he said you need several cameras. If you have a lot, then yeah, I can see a smooth perspecti

  • Comment removed based on user account deletion
  • TFA gets it wrong, too... Sure, it may be great for immersive experiences, but it doesn't even address the question of 3D. For that, we are still stuck with holograms.

  • From the article & video, all I can see is a higher-resolution version of an Omnidirectional camera, which is very common in mobile robots. Such as this list of about 50 different types! "http://www.cis.upenn.edu/~kostas/omni.html"

    They keep referring to the notion of depth being used, but unless there is some big technology that they completely forgot to mention in the article & video, it just does the equivalent of pointing a camera into a bowl shaped mirror, allowing you to see in all 360 degre
  • The amazing thing is... My realtor must have been a genius because when we sold our house 4 years ago he had that very same camera take a picture of our living room...
  • After all, we're talking 3-D and not 2-D!
    • by udippel ( 562132 )

      I'd mod you up if I had mod points. So I can only dump my comments, mired with frustration, here.
      Of course, what the shit is 360 degrees here? On a plane you have 360 degrees. You can draw them in a simple exercise book from your school days. Those so-called scientists, being engineers, ought to know the basics of undergrad engineering: a sphere covers 4 times pi steradians. And their camera doesn't. Look at the photo in the original article: it is a hemisphere. No way to see the nadir of the 'dark' half. In princi
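      The solid-angle arithmetic behind that complaint, for reference (standard geometry, not from the article):

      ```latex
      % A full sphere subtends 4*pi steradians; the camera's hemisphere is half that.
      \Omega_{\mathrm{sphere}} = \int_{0}^{2\pi}\!\int_{0}^{\pi} \sin\theta \, d\theta \, d\varphi
        = 4\pi~\mathrm{sr}, \qquad \Omega_{\mathrm{hemisphere}} = 2\pi~\mathrm{sr}.
      ```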

  • by zmooc ( 33175 ) <zmooc@zmooc.DEGASnet minus painter> on Saturday December 11, 2010 @08:33AM (#34522728) Homepage

    Bullshit. This is not genuine 3D. This is just stereovision using a lot of cameras, demonstrated by a guy with an 'orrible French accent who talks a lot about what could be done, but in fact they do not even come close to what this other guy built in a fraction of the time using an MS Kinect: http://www.youtube.com/watch?v=5-w7UXCAUJE [youtube.com]

    That video also makes it very clear why the fantasies of the French guy will never come true. At least not with that camera.

  • This isn't what I want in a 3D camera. I want to be able to spin the scene after moving from the original point. I want to see what's behind something. The cameras need to encapsulate the scene.
  • this is bad advertising, and timothy ought not to have posted it.
    As someone who has worked in stereoscopic research, there is nothing new in this 'development'. Except, of course, maybe the brute-force real-time stitching of the images. The idea of arranging a multitude of cameras on a half-sphere is decades old.
    Worse, there is not much of a difference between a traditional '3D' view (which isn't, actually, 3D) and this arrangement. A quarter century ago some chaps had a somewhat functio

    • by Animats ( 122034 )

      this is bad advertising, and timothy ought not to have posted it.

      Right. It's 3D from stereoscopy, not a depth camera. The baseline between imagers is small, so it won't have much depth resolution for distant objects. Note that while the video shows outdoor scenes, it doesn't show depth information for them.

      Now the Advanced Scientific Concepts flash LIDAR [advancedsc...ncepts.com] is a real time of flight depth camera, a 128 x 128 pixel LIDAR. With a real time of flight device, depth resolution stays constant with distance, as
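      A back-of-the-envelope sketch of why a small baseline hurts at distance, using the standard stereo error propagation (my numbers, not the parent's):

      ```python
      # For a rectified stereo pair, depth error grows roughly as
      # dZ = Z^2 * dd / (f * B), so error blows up quadratically with
      # range; time-of-flight error is, to first order, independent of Z.
      def stereo_depth_error(z_m: float, focal_px: float = 1000.0,
                             baseline_m: float = 0.05, disp_err_px: float = 0.5) -> float:
          return z_m ** 2 * disp_err_px / (focal_px * baseline_m)

      for z in (1.0, 10.0, 50.0):
          print(f"{z:>5.1f} m -> +/- {stereo_depth_error(z):.2f} m")  # 0.01, 1.00, 25.00
      ```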

    • What if you had a room with a bunch of objects and this camera from the article somewhere in the middle, and then you surrounded the room with a hemisphere of honeycombed mirrors, so that your 360-degree camera in the center could look at the images on the mirrors to see the scene from every angle, and somehow use software to reconstruct all the information?
  • In engineering we use laser scanners that use a laser rangefinder to measure how far from the camera each pixel is. You then shoot from different perspectives to build a 3D scene that you can move around in. http://www.faro.com/3dimager/videos/ [faro.com]
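    A minimal sketch of how such range samples become a navigable point cloud; the pan/tilt angle conventions here are assumptions for illustration, not FARO's actual data format:

    ```python
    import numpy as np

    # Convert one scanner sample (range plus the pan/tilt angles it was
    # taken at) to a Cartesian point; a cloud is just many of these.
    def sample_to_xyz(range_m: float, pan_rad: float, tilt_rad: float) -> np.ndarray:
        return range_m * np.array([
            np.cos(tilt_rad) * np.cos(pan_rad),
            np.cos(tilt_rad) * np.sin(pan_rad),
            np.sin(tilt_rad),
        ])

    print(sample_to_xyz(5.0, 0.0, 0.0))  # 5 m dead ahead -> [5. 0. 0.]
    ```
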
  • "... they actually record images in stereoscopic format, using two 2D images to create the illusion of depth" My eyes work the same way.... dammit... if I want to see around something I have to actually move my damn head!
  • This is inverse 3D... instead of what we want to see from all the possible points of view, we have what is around the camera, in a plain view.

    I prefer the approach of the cameras surrounding the action in the first Trinity scene in The Matrix.

  • Someone is still thinking in 2D.
  • It seems to me that 3 flying cameras surrounding the scene should be able to capture practically any scene in 3D (4D, really, including time - though time might be a fractional dimension since it seems to move only forward).

  • 3D Porn! Need I say more? Anyway now that I got your attention I remember sketching a camera like this 15 years ago and learning *then* that it was nothing new... I heard that Disney's panoramic movie at Disneyland LA used something like this, just in analog and without the processing. True 3d processing is like what you see at http://2d3.com/ [2d3.com] ...
  • This news is 2 weeks old. I saw this very same video on YouTube while I was searching for 3D cameras some two weeks ago. I was just curious as to what format the 3D cameras, if there are any, were storing their images in...

    I have been waiting for 3D cameras to arrive for a long time now. I was imagining something that would shoot some kind of lasers, maybe, into the scene, capture the depth of the objects that I point it at, and reconstruct the scene in 3D. I don't think the camera in this video is porta

  • This was done years ago with two 360 degree panorama cams. The two spherical panoramic images gave reasonably convincing 3D imaging for a field of view of at least 160 degrees. This means you could dart your eyes anywhere in this field of view with acceptable results. To render subjects outside of this field of view, you had to reformat the subject at a particular area to match the two images well enough to allow viewing. This allowed viewing out to a FOV of nearly 180 degrees. Two pics. A bit of simple sof

  • Because all the camera focal points are approximately at the same location, the images can be stitched together in software to create a full hemispherical view. This is essentially the same type of snapshot that is used in Google street view, and allows you to look in different directions.

    Changing position, however, requires depth information. Because the focal points are not EXACTLY at the same location, it is theoretically possible to estimate depth, although the practical reality is that the

"Trust me. I know what I'm doing." -- Sledge Hammer

Working...