Researchers Develop Genuine 3D Camera
cylonlover writes "Cameras that can shoot 3D images are nothing new, but they don't really capture three-dimensional moments at all — they actually record images in stereoscopic format, using two 2D images to create the illusion of depth. These photos and videos certainly offer a departure from their conventional two-dimensional counterparts, but if you shift your viewpoint, the picture remains the same. Researchers from Ecole Polytechnique Federale de Lausanne (EPFL) hope to change all that with the development of a strange-looking camera that snaps 360 degrees of simultaneous images and then reconstructs the images in 3D."
Quick question (Score:2)
Re: (Score:2)
Negative. The Kinect is a regular camera plus an IR range camera. The IR range camera can figure out the depth based on IR returns, but it can't see anything from any additional angles, making it just as fixed as stereoscopy.
Re:Quick question (Score:5, Insightful)
Nor can the camera in the article. They keep talking about "being able to see the scene from any point", but that's a load of bullshit. All they've done is combined a 360 camera array (what Street View does) with stereoscopic vision (what regular 2-camera 3D does) to get a 360 view with depth information. So now you can look around in a scene in 3D, but you can't change your position. The camera still sees the scene only from one viewpoint, it's just that it has a full hemispherical field of view plus depth/3D info. Cool? Yes, but hardly a breakthrough, and definitely nothing like what they claim it does.
If the camera can't see something because it is obscured by another object, then it can't see it, period. The camera has a bit more info due to the use of multiple lenses somewhat offset from each other, but that's just like regular stereoscopic vision, and your viewpoint is still severely limited. You can do a better job of 3D scene reconstruction with three or four Kinects in different places than with this, since then you can actually obtain several perspectives simultaneously and merge them into a more complete 3D model of the scene.
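For what it's worth, the depth everyone is arguing about comes from plain stereo triangulation; a minimal Python sketch, with made-up focal length and baseline numbers (nothing to do with the actual EPFL rig):

import numpy as np

# Illustrative rig parameters only -- not the EPFL camera's actual specs
focal_px = 800.0      # focal length expressed in pixels
baseline_m = 0.10     # separation between the two lenses, in metres

def depth_from_disparity(disparity_px):
    # Classic pinhole stereo relation: Z = f * B / d
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, focal_px * baseline_m / d, np.inf)

print(depth_from_disparity([40.0, 8.0, 1.0]))  # -> [ 2. 10. 80.] metres

And as the parent says, no amount of disparity math recovers surfaces none of the lenses ever saw; occluded geometry simply isn't in the data.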
Re:Quick question (Score:5, Insightful)
Seriously, Slashdot can't handle the degree sign either? That's ISO-8859-1 for fuck's sake, not even one of the weirder Unicode characters.
Re: (Score:2)
Doesn't work with HTML codes [ascii.cl] either; I tried writing it as &deg; and as &#176; and it didn't work with either one
Re: (Score:2)
Re: (Score:3)
"Nor can the camera in the article."
Nor does the article discuss how the "3D" images are to be viewed, beyond a very vague "...which is no longer limiting thanks to the 3D reconstruction."
Holograms? Those stupid-fucking glasses? A planetarium (that would actually make the most sense)?
Kinects (Score:2)
Re: (Score:1)
Re: (Score:2)
http://www.youtube.com/watch?v=5-w7UXCAUJE [youtube.com]
Also, the Kinect as it is today does not easily allow combinations, but it is not hard to imagine different IR frequencies being used to prevent interference, or even blinking patterns with a difference of phase.
Re: (Score:2)
Ah, but you see, the Kinects *know* what pattern to expect (they correlate with a known pattern) and ignore extraneous data, so in practice the interference between two or three kinects is minimal and only results in a few scattered small "holes" (that you fill in with data from the other device). I didn't think it would work at first, but, in fact, it does.
Re: (Score:2)
Re: (Score:3)
the camera has a bit more info due to the use of multiple lenses somewhat offset from each other, but that's just like regular stereoscopic vision, and your viewpoint is still severely limited.
It doesn't have to be like regular stereoscopic vision. The clever bit wouldn't be so much in the camera positions.
It would be in the image/scene processing: http://news.cnet.com/8301-13580_3-9793272-39.html [cnet.com]
See the video too: http://www.youtube.com/watch?v=xu31XWUxSkA [youtube.com]
Based on both videos, Adobe's tech looks more impressive to me. And they did that years before.
Re: (Score:2)
If you placed several of these in a stadium with high-end digital cameras you could emulate a full 3D experience. Sure, you could hide spots if you wished, but for entertainment and mapping it would be pretty awesome. Of course the software would have to be able to handle the input from multiple locations and correlate the data between them, but still that would be amazing.
Re: (Score:2)
Where do you get the idea that having a bunch of cameras looking outward from a single point will be any more effective at doing 3D than 2 cameras set up to do stereoscopy? (let's assume no other differences here)
This thing can't magically look around corners just because it's looking outward at different angles.
Re: (Score:2)
(and yes, I know that this thing is using software to compute the distance of objects, but that isn't anything new. We've been able to do the same thing with two images for a long time)
Re: (Score:1)
Have you seen the video of two Kinects rendering a 3D object? As long as one sees it, then you have some of the 3D info.
Re: (Score:1)
Kinect is not exactly a ranging camera; it is based on structured-light processing. There are three classes of 3D cameras:
1. Stereoscopic - the most prevalent kind, meant for human vision
2. Time of flight - these are true IR ranging cameras that get depth information from the time the light takes to reflect off the object. Mostly used for machine vision since it provides true depth numerically for each pixel (a quick numeric sketch follows this list).
3. Structured light - they shine a patterned light out and analyze the way the light deforms on the scene to recover depth.
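To put a number on the time-of-flight idea, a toy calculation (my own illustration, not tied to any particular sensor):

C = 299_792_458.0  # speed of light in m/s

def tof_depth_m(round_trip_seconds):
    # Depth is half the round-trip distance the light pulse covers
    return C * round_trip_seconds / 2.0

print(tof_depth_m(6.67e-9))  # a ~6.7 ns return corresponds to roughly 1 metre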
Re:Quick question (Score:4, Informative)
What about the 2-Kinects video where the scene was shown from the viewpoint of a non-existent camera located somewhere between the two Kinects?
Re: (Score:2)
Synthesizing depth information from the differences in simultaneous stereographic images sufficient to produce images from any point between the cameras that took the stereographic images is something that was done in software (and available in lots of consumer packaged software, though I don't think any of it was particularly popular) long before Kinect (I first saw
Re: (Score:2)
Not by default. But it can be arranged [youtube.com].
BS... (Score:2, Insightful)
Re: (Score:1)
One of the major problems with today's 3D technology is that the brain of the viewer is used to a specific distance between the eyes. If the distance between the camera lenses of a 3D camera is not the same as the distance between the eyes, your brain will generate a distorted depth image (too shallow, too deep) and you might end up with a major headache (as some experience). Not to mention that tele/wide lenses cause additional problems of the same nature. To represent a movie that was correct and natural
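A crude way to see why the baseline matters: if you assume the viewing geometry matches the capture geometry except for the lens separation, the depth your brain reconstructs scales by roughly (eye separation / camera baseline). The numbers below are purely illustrative:

# Disparity goes as f * B / Z, and the viewer 'inverts' it assuming their own
# interocular distance E, so perceived depth ~ Z * E / B (very simplified model)
EYE_SEPARATION_M = 0.065  # typical human interocular distance

def perceived_depth_m(true_depth_m, camera_baseline_m):
    return true_depth_m * EYE_SEPARATION_M / camera_baseline_m

print(perceived_depth_m(5.0, 0.065))  # matched baseline -> 5.0 m, looks natural
print(perceived_depth_m(5.0, 0.20))   # wide baseline    -> ~1.6 m, scene looks miniaturised
print(perceived_depth_m(5.0, 0.03))   # narrow baseline  -> ~10.8 m, scene looks pushed back and flat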
Re: (Score:1)
So instead... (Score:2)
of Stereoscopic....it's polyscopic? I dunno...this still seems like more of the same.
More of the same, just stereoscopic on steroids (Score:2, Insightful)
Unless it's doing a lot of moving around, it's just stereoscopic on steroids. If it stays in a fixed position, even though it has more than two cameras, all the objects are at fixed points. Until it can accurately judge the height, width and depth of an object without faking it in reconstruction, or making an educated guess - it's just more of the same. Humans suffer from the same limitations, but they fix this by moving the viewpoint around until a coherent 3D image is constructed.
Unless you have cameras
Worthless stereoscopic eyeballs (Score:2)
I never knew I was using such worthless vision capabilities until now.
I do hope to upgrade to the far superior bug eyes which will allow me to truly see in three dimensions.
Re: (Score:2)
Wouldn't that really require a phased array?
that's not 3d (Score:2, Redundant)
Re: (Score:2)
It's nothing new, though. We've been doing this for ages. The only thing remotely cool about this is the fact that it has a whole bunch of cameras that can put together large panoramas.
old hat (Score:2)
You're right, it's nothing new. It's not even real 3D, it's "stereoscopy" to a higher degree. We were doing true 3D analysis 15 years ago; video analysis of gait and other motions on a treadmill. We used multiple cameras mounted orthogonally, and a digital mixer to combine and record onto SVHS tape w/a SMPTE time code. Post-recording analysis was done using Peak Performance (brand) software. This was way before the cinematographers and game ma
Re: (Score:1)
Yeah I can't see any difference between this and a fish-eye lens with a lat/long transform applied to it.
Two Canon 5Ds with 8mm lenses a foot apart would be considerably more effective and cover the same FOV.
What's new about this? (Score:1)
The summary and article make it seem like this is some revolutionary *3D* device. It isn't. What it does to create 3D imagery has been around for a long long time (done in software, perhaps on a dedicated chip). The only newsworthy thing about this is that it can do very large panoramas.
I think... (Score:1)
...you still have your work cut out for you, blade runner.
Lausanne is in Switzerland (Score:4, Insightful)
No. (Score:5, Insightful)
Words mean things.
Re: (Score:2)
Re: (Score:3, Interesting)
We should use the appropriate terminology: light fields. A traditional camera captures a point sample of a light field: you see a correct image from 1 point of view. A hologram captures a 2D planar sample of a light field: you see a correct image from any perspective that looks through the film (as if it's a window). To capture a volume sample of a lightfield is not really possible (at least, not at the same instant in time), since that requires having a sensor be placed everywhere in the volume, and of
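To make the terminology concrete: the full plenoptic function is five-dimensional (a 3D position plus a 2D direction, ignoring time and wavelength), and a conventional camera samples it at one fixed position. A toy sketch with a made-up radiance function:

import numpy as np

def plenoptic(x, y, z, theta, phi):
    # Toy stand-in for the light field L(position, direction)
    return np.cos(theta) ** 2 * np.exp(-0.1 * (x**2 + y**2 + z**2)) * (1 + 0.5 * np.sin(3 * phi))

# Conventional camera: one fixed viewpoint, a 2D grid of directions
thetas = np.linspace(0, np.pi / 4, 120)
phis = np.linspace(0, 2 * np.pi, 160)
T, P = np.meshgrid(thetas, phis, indexing="ij")
image = plenoptic(0.0, 0.0, 0.0, T, P)  # a point sample of the light field
print(image.shape)  # (120, 160): a single 2D image

# A hologram-style capture repeats this over a 2D grid of (x, y) positions,
# i.e. a 4D sample; a volume sample would need sensors everywhere in the volume.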
Re: (Score:2)
Does anybody know what the problem with true holographic cameras is? High power pulse lasers are available. What would the requirements be for a sensor to record the interference pattern?
Not Over Promising (Score:2)
It sounds like this is a combination of the Kinect and the Google Street View or yellowBird [yellowbird...eality.com] camera. I had to read the article and watch the video twice, because initially it sounded like they were promising more than this could do. Turning a town or campus into a 3D model for a game sounds quite doable; you just need to move the camera around a ton as it records. As for getting a different perspective at a concert, he said you need several cameras. If you have a lot, then yeah, I can see a smooth perspecti
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
He/she missed a few geometry classes too, because he/she thinks you can capture 3D with only one camera. And some other geometry classes, because he/she thinks a 2D array of 2D cameras makes a 4D camera.
Simple explanation for the apparent stupidity: we just fed a troll.
Re: (Score:1)
Also, for which use are they flawed? If you want to render a scene from any direction, then yes, you most likely need more than a 1D array of 2D cameras, however if the scene is intended to be view
Re: (Score:1)
Stereoscopic images are not the same as 3D.
The difference is 2 pictures versus a line of pictures.
A 2D array of 2D cameras gives 4D.
Here is a picture of it:
http://images.gizmag.com/hero/3d360camera.jpg [gizmag.com]
The flaw is an inefficiency of about 100 due.
Re: (Score:1)
Are you implying that if I altered the depth of some of the cameras (thus having them in a 3D grid) we would have 5D or 6D (depending on if you are using addition or multiplication
Re: (Score:1)
No. A video wall has the LCDs in the same plane as the LCD planes, so that is just re-use of dimensions.
The only Quantum Physics in LCDs are their polarizing filters and the transistors. (And LEDs in the newer ones.)
Re: (Score:2)
Personally I'm just glad they finally have a 3D camera out there. I keep cutting myself on my 2D one, and once I set it down on a flat surface I can't pick it up again.
Re: (Score:2)
Just keep your ifs and your ofs straight, and you should be fine.
Re: (Score:1)
You cannot simply add the dimensions, it depends on how you integrate the image data together. Us people who don't know very much call this integration 3D reconstruction. "The image is 3D" - do you mean the real world is 3D ? The image, as you put it, is a projection from 3D onto a 2D plane and is most definitely 2D.
Humans possess a stereoscopic vision system, each eye is capturing a 2D i
Re: (Score:1)
| You cannot simply add the dimensions, it
| depends on how you integrate the image data
| together. Us people who don't know very much
| call this integration "3D reconstruction".
You misunderstood. There are 4 dimensions to the
data captured by the camera:
http://images.gizmag.com/hero/3d360camera.jpg [gizmag.com]
1. The X axis on the light sensors.
2. The Y axis on the light sensors.
3. The radius of the cameras from the top of the dome.
4. The angle of the cameras from the top of the dome.
| "The image is 3D" - do you mean th
Re: (Score:2)
You misunderstood.
There are 4 dimensions to the data captured by the camera:
http://images.gizmag.com/hero/3d360camera.jpg [gizmag.com]
1. The X axis on the light sensors.
2. The Y axis on the light sensors.
3. The radius of the cameras from the top of the dome.
4. The angle of the cameras from the top of the dome.
You are misunderstanding. There are only 3 dimensions to information being collected. There may be 4 dimensions needed to describe a particular array of cameras, but that does not magically create a 4D amount of information. (actually, you left out the 3 dimensions describing the direction of the focal plane and the focal length of the lens, the dimensions describing the imaging surface, the pixel arrangement, and other dimensions required to describe the camera array completely.)
Where more than two cam
Re: (Score:1)
Finally, someone who is not an arrogant ignoramus but instead knows something.
And I agree about the /. users.
They seem to have become more of the self-righteous idiot kind. They are not useful.
Kim0+
Re: (Score:1)
You are confusing dimensions with parameters.
Your argument is therefore invalid.
Kim0+
That's not 3D, its panorama (Score:2)
TFA gets it wrong, too... Sure, it may be great for immersive experiences, but it doesn't even address the question of 3D. For that, we are still stuck with holograms.
Its just a hi-res Omnidirectional camera (Score:1)
They keep referring to the notion of depth being used, but unless there is some big technology that they completely forgot to mention in the article & video, it just does the equivalent of pointing a camera into a bowl-shaped mirror, allowing you to see in all 360 degre
The amazing thing is... (Score:2)
Shouldn't that be 4 \pi sr ... (Score:2)
Re: (Score:2)
I'd mod you up if I had mod points. So I can only dump my comments mired with frustration here.
Of course, what the shit is 360 degrees here? On a plane you have 360 degrees. You can draw them in a simple exercise book from your school days. Those so-called scientists, being engineers, ought to know the basics of undergrad engineering: a full sphere subtends 4 pi steradians. And their camera doesn't cover that. Look at the photo in the original article: it is a hemisphere. No way to see the nadir of the 'dark' half. In princi
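For the record, the solid-angle numbers being argued about:

import math

full_sphere_sr = 4 * math.pi   # ~12.57 steradians: every direction around a point
hemisphere_sr = 2 * math.pi    # ~6.28 steradians: what a dome of outward-facing cameras covers
print(full_sphere_sr, hemisphere_sr)
# "360 degrees" only describes the horizontal sweep; it says nothing about the missing half-sphere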
Bullshit (Score:3)
Bullshit. This is not genuine 3D. This is just stereovision using a lot of cameras demonstrated by a guy with a 'orrible french accent that talks a lot about what could be done but in fact they do not even come close to what this other guy built in a fraction of the time using a MS Kinect: http://www.youtube.com/watch?v=5-w7UXCAUJE [youtube.com]
That video also makes it very clear why the fantasies of the french guy will never come true. At least not with that camera.
Bleh (Score:2)
Though EPFL is usually good, (Score:2)
this is bad advertisement. And timothy ought not have posted it.
As someone who has worked in stereoscopic research, I can say there is nothing new in this 'development'. Except, of course, maybe the brute-force real-time stitching of the images. The idea of arranging a multitude of cameras on a half-sphere has grown a beard over decades.
Worse, there is not much of a difference between a traditional '3D'-view (which isn't, actually, 3D), and this arrangement. A quarter century ago some chaps had a somewhat functio
Re: (Score:2)
this is bad advertisement. And timothy ought not have posted it.
Right. It's 3D from stereoscopy, not a depth camera. The baseline between imagers is small, so it won't have much depth resolution for distant objects. Note that while the video shows outdoor scenes, it doesn't show depth information for them.
Now the Advanced Scientific Concepts flash LIDAR [advancedsc...ncepts.com] is a real time of flight depth camera, a 128 x 128 pixel LIDAR. With a real time of flight device, depth resolution stays constant with distance, as
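The falloff is easy to quantify: differentiating Z = f*B/d shows stereo depth error growing with the square of the distance, while a time-of-flight sensor's error is set by timing precision and stays roughly constant. Illustrative figures only, not the specs of either device:

# Stereo depth uncertainty: dZ ~= Z^2 * disparity_error / (f * B)
focal_px = 800.0
baseline_m = 0.10
disparity_error_px = 0.5

for z in (1.0, 5.0, 20.0, 100.0):
    err = z**2 * disparity_error_px / (focal_px * baseline_m)
    print(f"stereo at {z:5.0f} m: +/- {err:.2f} m")
# ~0.6 cm at 1 m, but tens of metres of uncertainty at 100 m: distant scenery is effectively flat

# Time-of-flight uncertainty depends on timing jitter, not on range
C = 299_792_458.0
timing_jitter_s = 100e-12  # 100 ps, illustrative
print(f"ToF: +/- {C * timing_jitter_s / 2:.3f} m at any distance")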
Re: (Score:1)
I have a real one at work. (Score:2)
Re: (Score:2)
Those roly things on the front of my head... (Score:1)
Matrix (Score:2)
This is inverse 3D... instead of what we want to see from all the possible points of view, we have what is around the camera, in a plain view.
I prefer the approach of the cameras surrounding the action in the first Trinity scene in The Matrix.
360 Degrees? (Score:1)
3 Flying Cameras (Score:2)
It seems to me that 3 flying cameras surrounding the scene should be able to capture practically any scene in 3D (4D, really, including time - though time might be a fractional dimension since it seems to move only forward).
3d porn (Score:1)
old news (Score:1)
This news is 2 weeks old. I saw this very same video on YouTube while I was searching for 3D cameras some two weeks ago. Was just curious as to what format the 3D cameras, if there are any, were storing their images in...
I have been waiting for 3D cameras to arrive for a long time now. Was imagining something that will probably shoot some kind of lasers, maybe, into the scene and capture the depth of the objects that I point to and reconstruct the scene in 3D. I don't think the camera in this video is porta
This has already been done more elegantly. (Score:1)
This was done years ago with two 360 degree panorama cams. The two spherical panoramic images gave reasonably convincing 3D imaging for a field of view of at least 160 degrees. This means you could dart your eyes anywhere in this field of view with acceptable results. To render subjects outside of this field of view, you had to reformat the subject at a particular area to match the two images well enough to allow viewing. This allowed viewing out to a FOV of nearly 180 degrees. Two pics. A bit of simple sof
360 yes, 3D no (Score:2)
Because all the camera focal points are approximately at the same location, the images can be stitched together in software to create a full hemispherical view. This is essentially the same type of snapshot that is used in Google street view, and allows you to look in different directions.
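The stitching works because only ray directions matter once all the lenses share (approximately) one centre of projection; a bare-bones sketch mapping a pixel to panorama coordinates, with hypothetical intrinsics K and rotation R:

import numpy as np

def pixel_to_panorama(u, v, K, R):
    # Back-project pixel (u, v) to a ray, rotate it into the shared frame,
    # and express it as (azimuth, elevation) on the panorama sphere
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R @ ray_cam
    ray_world /= np.linalg.norm(ray_world)
    azimuth = np.arctan2(ray_world[1], ray_world[0])
    elevation = np.arcsin(ray_world[2])
    return azimuth, elevation

# Hypothetical camera: 500 px focal length, principal point (320, 240), looking along +Z
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
print(pixel_to_panorama(320.0, 240.0, K, R))  # centre pixel -> (0, pi/2), the zenith if +Z is 'up'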
In order to change position, however, requires depth information. Because the focal points are not EXACTLY at the same location, it is theoretically possible to estimate depth, although the practical reality is that the