
HDR Video a Reality

akaru writes "Using common DSLR cameras, some creative individuals have created an example of true HDR video. Instead of pseudo-HDR, they actually used multiple cameras and a beam splitter to record simultaneous video streams, and composited them together in post. Looks very intriguing."
  • by Above ( 100351 ) on Thursday September 09, 2010 @07:54PM (#33529130)

    HDR

    Focus Stacking

    Panoramic Stitching

    All in the camera, all 1-button easy to use, and all at once.

  • HDR? (Score:4, Interesting)

    by afaik_ianal ( 918433 ) * on Thursday September 09, 2010 @08:02PM (#33529200)

Can anyone give a brief rundown on what HDR is? I know it stands for "high dynamic range", but as someone who knows nothing about photography, it means nothing to me. What does it have to do with overexposure/underexposure (to which the video refers)? And why is it harder to do with video than with still images?

  • Very impressive! (Score:3, Interesting)

    by WilliamGeorge ( 816305 ) on Thursday September 09, 2010 @08:08PM (#33529228)

I've been a long-time fan of HDR photography, and was just thinking about ways that HDR could be implemented in video camcorders as well. Personally I'd like to see a correctly-exposed stream mixed in with the other two, as is common in photography, but even without that the effect is pretty darn cool.

By the way, in case any camcorder manufacturers are watching, consider this idea: make a video camera with three (or more) times the number of photosites required for the resolution you want to record at. Set up the device's logic to use three distinct sets of sensor elements to capture three image streams at differing exposure settings. Then have them saved separately so they can be combined later for various editing effects - or have a mode where they are fused on-the-fly for easier use by non-professionals. I imagine such a complex sensor and camera would be expensive to make, but it might be easier to manage than the multiple cameras the folks in the article used.
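
    A minimal sketch of what that fused on-the-fly mode could look like, in Python with OpenCV's Mertens exposure fusion (which needs no response-curve calibration); the three capture devices standing in for the three sensor sets are pure assumption, since no such camera exists:

        import cv2

        # Hypothetical sketch of the "fused on-the-fly" mode: three capture
        # devices stand in for the three differently-exposed sensor sets.
        # The device indices are assumptions; no such camera actually exists.
        caps = [cv2.VideoCapture(i) for i in range(3)]
        fuse = cv2.createMergeMertens()  # exposure fusion, no response curve needed

        while True:
            frames = []
            for cap in caps:
                ok, frame = cap.read()
                if ok:
                    frames.append(frame)
            if len(frames) < 3:
                break
            # Mertens fusion weights pixels by contrast, saturation and
            # well-exposedness, yielding one displayable frame per time step.
            fused = fuse.process(frames)
            cv2.imshow("fused", fused)  # float32 in [0, 1]
            if cv2.waitKey(1) == 27:  # Esc quits
                break

        for cap in caps:
            cap.release()
        cv2.destroyAllWindows()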

  • by frank_carmody ( 1551463 ) <(pedrogent) (at) (gmail.com)> on Thursday September 09, 2010 @08:12PM (#33529264)
I had my first foray into HDR still photography recently and I have to say I'm very, very impressed with the results. Certain night-time scenes look absolutely stunning using 4-5 exposures. Here are some shots by a friend of a friend: http://roache7.deviantart.com/gallery/ [deviantart.com].
  • Re:HDR? (Score:3, Interesting)

    by EnsilZah ( 575600 ) <.moc.liamG. .ta. .haZlisnE.> on Thursday September 09, 2010 @08:17PM (#33529314)

I'm no expert on the subject, but the basics as I understand them: you take several photos at different exposures. That way you get the detail in the dark areas from the overexposed photo and the detail in the bright areas from the underexposed photo (which would otherwise be blown out). You can even use an HDR image to light a 3D scene, I guess by analyzing the nonlinear way lighting changes between exposures (I'm less clear on that part).

It's difficult to do for video because for a still image you just take several photos without moving the camera: the shots need to share the same point of view, but they can be taken at different times, given a static scene. With video they need to share both the point of view and the time, so it requires, as they did here, splitting the same image in two and having two cameras record at different exposures.

What I'm not sure about is why you can't just use a single exposure and read out intermediate states along its duration; it probably has something to do with sensor response times and/or the readout from the sensor being destructive.
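
    To make the still-photo case concrete, here's a minimal Python/OpenCV sketch that merges a bracketed set into an HDR radiance map and tone-maps it back down; the filenames and exposure times are made up for illustration:

        import cv2
        import numpy as np

        # Merge a bracketed exposure set into one HDR radiance map.
        # Filenames and exposure times are made-up placeholders.
        files = ["under.jpg", "normal.jpg", "over.jpg"]
        times = np.array([1/500.0, 1/60.0, 1/8.0], dtype=np.float32)
        imgs = [cv2.imread(f) for f in files]

        # Recover the camera's response curve from the bracket, then
        # combine the shots into linear, unbounded radiance values.
        response = cv2.createCalibrateDebevec().process(imgs, times)
        hdr = cv2.createMergeDebevec().process(imgs, times, response)

        # Tone-map back down so an ordinary 8-bit display can show it.
        ldr = cv2.createTonemapReinhard(gamma=2.2).process(hdr)
        cv2.imwrite("result.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))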

  • Re:HDR? (Score:5, Interesting)

    by plover ( 150551 ) * on Thursday September 09, 2010 @09:33PM (#33529756) Homepage Journal

One problem I realized after watching the scene with the guy is that the video compression artifacts can differ between the two cameras. Even if the sensors were perfectly aligned with each other and the optics, the MPEG compression could differ because the values at each pixel will still be slightly different due to the differences in exposure levels. Different pixel values can cause different compression choices to be made in each block, which will result in weird combinations of aliasing. I think this may have been partly responsible for the shimmer on his denim jacket.

  • by forkazoo ( 138186 ) <wrosecrans@@@gmail...com> on Thursday September 09, 2010 @09:47PM (#33529820) Homepage

Spheron had an awesome single-sensor HDR video camera demo at SIGGRAPH this year. It records 20 stops of latitude, and after some processing for debayering and whatnot, you get an EXR sequence. I got to see it live, in person, and stand a few feet away from the camera. The guy running the demo even let me play with some footage in Nuke on the demo laptop. I'm confused about why a hacked-up beamsplitter-based system would be so noteworthy, when the single-sensor method suffers less light loss thanks to its simpler optical path.

    I'm sure the guys who did this project are proud of what they pulled off, and it's probably a neat hack, but I have to assume they are sort of operating in a vacuum if they think they have really invented something newsworthy.

  • Re:HDR? (Score:3, Interesting)

    by mtmra70 ( 964928 ) on Thursday September 09, 2010 @09:57PM (#33529878)
  • by Prune ( 557140 ) on Thursday September 09, 2010 @10:09PM (#33529960)
You forgot about full lightfield capture. This can be done with a single camera using an ultra-high-resolution sensor and a microlens array (or, alternatively, an array of a very large number of tiny cameras). Think single-camera, single-shot capture of depth (3D) and all focus planes. Then you can reproduce the full 3D and multiple focus depths (as in, the eye would have to focus at different depths) on a flat display covered with a microlens array (again, you need ultra-high resolution, since focal depths and parallax viewpoints are discretized to the number of pixels covered by each microlens).
  • by Prune ( 557140 ) on Thursday September 09, 2010 @10:13PM (#33529986)
You don't need an aperture at all if you use a microlens array to do integral photography. On top of that, you get full depth (3D) and capture all focal planes, including the focal depth information - all in a single shot. You just need an ultra-high-resolution sensor - or, instead, an array of many small cameras (which works just as well, and there's no need for perfect alignment, as that can be finessed in software). You capture a full 4D lightfield (light can be parameterized by the two pairs of coordinates where a ray crosses two infinite planes), i.e. you miss no optical information whatsoever other than your diffraction and wavelength limits.
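
    As a toy illustration of that two-plane parameterization, here's a Python/NumPy sketch of shift-and-add synthetic refocusing from a 4D lightfield; the array layout and the alpha parameter are assumptions for illustration:

        import numpy as np

        # Two-plane lightfield sketch: lf[u, v, s, t] stores the ray through
        # point (u, v) on the lens plane and (s, t) on the sensor plane.
        # 'lf' is a made-up array of shape (U, V, S, T, 3); alpha picks the
        # synthetic focal depth.
        def refocus(lf, alpha):
            U, V = lf.shape[:2]
            cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
            out = np.zeros(lf.shape[2:], dtype=np.float64)
            for u in range(U):
                for v in range(V):
                    # Shift each sub-aperture view in proportion to its
                    # offset from the lens center, then average them all.
                    ds = int(round(alpha * (u - cu)))
                    dt = int(round(alpha * (v - cv)))
                    out += np.roll(lf[u, v], (ds, dt), axis=(0, 1))
            return out / (U * V)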
  • Franken/3D cameras (Score:5, Interesting)

    by gmuslera ( 3436 ) on Thursday September 09, 2010 @11:07PM (#33530290) Homepage Journal
With the frankencamera [stanford.edu] you could do HDR and a lot more in an "intelligent" camera driven by software. In fact, the first implementation in a mass-consumption device was on the N900: it takes several photos, adjusting exposure and other parameters, to build the shot in a more parameterizable way than the iPhone can. But I'm not sure that would be enough for HDR video, if it requires the inputs to differ at the hardware level in real time. In that case maybe something like this 3D camera [pcr-online.biz] would be needed. And it could give some meaning to such devices... shooting not only in 3D, but in HDR video.
  • by thrawn_aj ( 1073100 ) on Thursday September 09, 2010 @11:51PM (#33530486)
    So, HDR video would help make movies look like ... video games??? Is it just me or does that video (that parent linked to) look amazingly like a (post-HalfLife2) game? I guess this should be a fantastic clue for game programmers who usually try to go the other way ;). Lack of HDR = more "realistic" video? (where realistic is defined by what people are used to). Find an algorithm to intelligently degrade the dynamic range in a rendering and CGI becomes more photorealistic.
  • Re:This is not HDR (Score:3, Interesting)

    by black3d ( 1648913 ) on Friday September 10, 2010 @12:29AM (#33530674)

    That's pretty much down to our mental training that a photograph is a realistic representation of lighting in a scene.

This is similar to the mental effect that makes high-frame-rate (60-90fps) video look "fake" and less true-to-life to us, who have been watching 25fps movies for decades, despite the opposite being true.

In truth, printed photographs are terrible representations of light and instead rely on our knowledge of their elements to trick our brains into viewing lit scenes in the context of previous experiences. Digital photographs, capable of being artificially lit, are much better, but still not as good as real life.

However, the best true-to-life digital photographs are SDR tone-mapped HDR images. Look at the lights around you - your eyes DO see those blooms around lights, etc. Years of looking at standard photographs has trained us to believe they're a great representation of real life - when they're not. They're simply the best we've generally been able to do.

Besides, eventually HDR will be the norm, and this entire line of conversation will be moot. By then, it will just be "a normal photograph". In fact, HDR techniques have been practiced for a long time now - heavily since the 80s. Many of the "great" published photos of our time were taken with multiple-exposure techniques - we just might not realize it because we only see the final result.

  • by grcumb ( 781340 ) on Friday September 10, 2010 @01:27AM (#33530914) Homepage Journal

    and it also has to give BJs

    ... Actually, forget the HDR, focus stacking, panoramic stitching and the rest. I say we put all the R&D money into BJs!

  • by yoyhed ( 651244 ) on Friday September 10, 2010 @03:22AM (#33531376)
    It's the other way around.

    Even though we call it high dynamic range in videos and photographs, it's actually just compressing all the extra information from multiple exposures into a LOWER dynamic range, so we can manipulate/display it on our 8-bit screens.

Games, however - such as the Source engine after it got the HDR update with Half-Life 2: Lost Coast and Day of Defeat: Source - actually do increase the dynamic range of a scene beyond what your monitor can display. They underexpose and overexpose parts of the scene when transitions between light and dark places occur, just as your eyes would before they adjusted to the new light, or as a video camera would depending on what exposure the videographer chose. This makes it look more realistic - just take a look at a bright outdoor scene in Half-Life 2: Episode Two and check out how shiny objects in the sunlight have blown-out highlights that gleam brilliantly, then look at the same scene in the original Half-Life 2, where those objects look flatly lit and fake. The "non-HDR" version looks more fake because the dynamic range is compressed so you can see all the detail everywhere, which also gives it that flat "game" look.

    Of course, that last part is just my opinion - but I believe that in order to look more realistic, CGI needs to simulate the behavior of traditional cameras with a lower dynamic range (or that of your eyes before they've adjusted properly to bright/dim light). The everything-is-exposed-properly, compressed-dynamic-range look just appears fake to me, even though my eyes could probably perceive that range at the actual scene. I'm not sure why.
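
    For the curious, that "compressing into a lower dynamic range" step can be as simple as a global Reinhard curve; here's a toy Python/NumPy version (the channel order and the key value are assumptions):

        import numpy as np

        # Toy global Reinhard operator: compress unbounded linear radiance
        # into [0, 1] for an 8-bit display. 'hdr' is assumed to be a float32
        # HxWx3 radiance map in BGR order (OpenCV convention); 'key' sets
        # the overall brightness of the result.
        def tonemap_reinhard(hdr, key=0.18, eps=1e-6):
            # Luminance with Rec. 709 weights, scaled by its log-average.
            lum = 0.0722*hdr[..., 0] + 0.7152*hdr[..., 1] + 0.2126*hdr[..., 2]
            log_avg = np.exp(np.mean(np.log(lum + eps)))
            scaled = key / log_avg * lum
            mapped = scaled / (1.0 + scaled)   # asymptotically approaches 1
            ratio = mapped / (lum + eps)       # preserve color ratios
            return np.clip(hdr * ratio[..., None], 0.0, 1.0)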
  • Re:HDR? (Score:2, Interesting)

    by Fri13 ( 963421 ) on Friday September 10, 2010 @04:27AM (#33531630)

But they still have the syncing problem when using DSLRs. Even though the 5D Mark II can shoot HD video, it can't be used for real 3D work either: the cameras have no perfect frame sync, and the frame rate is choppy. Very short videos can be shot, but longer ones bring lots of problems in post, as the frame rate can drift while recording from 29.97 to 29.98 and so on. That isn't so bad for HDR, but it still exists; for 3D it would cause lots of problems, as both eyes notice the out-of-sync frames, and it's just terrible to watch.

  • by Thagg ( 9904 ) <thadbeier@gmail.com> on Friday September 10, 2010 @10:23AM (#33533552) Journal

    For the third of the Fast and Furious movies, we had to film at night in the spectacular Shibuya Square in Tokyo, with its many animated billboards and video screens. I really wanted to get an HDR film of the billboards.

    For the driving green-screen sequences of the film, we had built a plate to mount three cameras, at 0, 45, and 90 degrees, to shoot panoramas driving down the street. To get the nodal points closer together, we had the cameras facing toward each other, with the lenses almost touching. It worked wonderfully.

    By taking the center camera out, and replacing it with a beam-splitter, we had a down-and-dirty HDR rig using the other two cameras. Now, this was HDR on film, not video -- but film already has a very high dynamic range -- so two cameras with very different effective exposures gave us a tremendous dynamic range. In the 'normal' exposure all of the brighter signs were blown out, but on the beam-splitter camera you could see all the details of the structure of the lighted billboards. Quite cool.
