
HDR Video a Reality 287

akaru writes "Using common DSLR cameras, some creative individuals have created an example of true HDR video. Instead of pseudo-HDR, they actually used multiple cameras and a beam splitter to record simultaneous video streams, and composited them together in post. Looks very intriguing."
  • by kaptink ( 699820 ) on Thursday September 09, 2010 @08:02PM (#33529196) Homepage

    C&P from the linked page (assuming a /.'ing imminent)

    HDR demo @ http://vimeo.com/14821961 [vimeo.com]

    Press Release:

    HDR Video A Reality

    Soviet Montage Productions releases information on the first true High Dynamic Range (HDR) video using DSLRs

    San Francisco, CA, September 9, 2010: Soviet Montage Productions demonstrated today the first true HDR video sourced from multiple exposures. Unlike HDR timelapse videos that only capture a few frames per minute, true HDR video can capture 24 or more frames per second of multiple exposure footage. Using common DSLRs, the team was able to composite multiple HD video streams into a single video with an exposure gamut much greater than any on the market today. They are currently using this technology to produce an upcoming film.

    Benefits of Motion HDR
    HDR imaging is an effect achieved by taking multiple disparate exposures of a subject and combining them to create images of a higher exposure range. It is an increasingly popular technique for still photography, so much so that it has recently been deployed as a native application on Apple’s iPhone. Until now, however, the technique was too intensive and complex for motion. Soviet Montage Productions believes they have solved the issue with a method that produces stunning–and affordable–true HDR for film and video.

    The merits of true HDR video are various. The most obvious benefit is having an exposure variation in a scene that more closely matches the human eye–think of filming your friend with a sunset at his or her back, your friend’s face being as perfectly captured as the landscape behind them. HDR video also has the advantage of reduced lighting needs. Finally, the creative control of multiple exposures, including multiple focus points and color control, is unparalleled with true HDR video.

    “I believe HDR will give filmmakers greater flexibility not only in the effects they can create but also in the environments they can shoot in,” said Alaric Cole, one of the members of the production team. “Undoubtedly, it will become a commonplace technique in the near future.”

    Contact:
    Michael Safai
    Soviet Montage
    201 Spear Street #1100
    San Francisco, CA 94105
    1 415 489 0437
    mike@sovietmontage.com

  • Re:HDR? (Score:5, Informative)

    by mtmra70 ( 964928 ) on Thursday September 09, 2010 @08:10PM (#33529246)

    Wiki explains it well: high dynamic range imaging "is a set of techniques that allow a greater dynamic range of luminances between the lightest and darkest areas of an image than standard digital imaging techniques or photographic methods."

    And their picture is a great example. If you expose the building well, the clouds are washed out. If you expose the clouds well, the building is dark. If you take a picture exposed for each and then merge the photos, you have a properly exposed building along with a properly exposed sky, thus giving you more dynamic range. Think of it like this: instead of going to the lunch buffet and cramming everything onto one plate, you go up to the buffet three times with three plates: one for salad, one for the main course and one for dessert. With a little processing (the extra trips) you end up with more range (food variety).
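
    A minimal sketch of that merge step in code, assuming three bracketed 8-bit shots on disk (the filenames and shutter speeds here are made up) and using OpenCV's Debevec merge as a stand-in for whatever tool you'd actually use:

    # Sketch: merge three bracketed exposures into one floating-point HDR
    # radiance map. Filenames and exposure times are placeholders.
    import cv2
    import numpy as np

    files = ["dark.jpg", "mid.jpg", "bright.jpg"]
    times = np.array([1/500, 1/125, 1/30], dtype=np.float32)  # shutter speeds in seconds

    images = [cv2.imread(f) for f in files]  # ordinary 8-bit frames

    # Recover the camera response curve, then merge into a radiance map whose
    # values exceed what any single 0-255 frame could hold.
    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)  # 32-bit float

    cv2.imwrite("merged.hdr", hdr)  # Radiance .hdr keeps the extra range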

  • Re:HDR? (Score:4, Informative)

    by treeves ( 963993 ) on Thursday September 09, 2010 @08:11PM (#33529252) Homepage Journal

    It requires post-processing. You combine images shot at bracketed (above and below the "optimum") exposures, in order to get the details in both the brightest and darkest parts of the image which are sometimes lost in high contrast situations. You end up compressing (to use an audio analogy) the brightness range into a smaller range so it can be reproduced on a monitor or paper.
    The post-processing of a LOT of frames requires a lot of processing power and time.
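
    That combine-and-compress step can also be done in one go with exposure fusion, which never builds an intermediate radiance map. A minimal sketch, assuming the same kind of bracketed 8-bit shots (placeholder filenames) and OpenCV's Mertens fusion:

    # Sketch: exposure fusion (Mertens) blends bracketed shots directly into a
    # displayable 8-bit image - i.e. the range "compression" described above.
    # Filenames are placeholders.
    import cv2
    import numpy as np

    images = [cv2.imread(f) for f in ("dark.jpg", "mid.jpg", "bright.jpg")]

    fused = cv2.createMergeMertens().process(images)  # float result, roughly 0..1
    out = np.clip(fused * 255, 0, 255).astype(np.uint8)
    cv2.imwrite("fused.jpg", out)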

  • by scdeimos ( 632778 ) on Thursday September 09, 2010 @08:11PM (#33529260)
    Wasn't the first HDR video camera built back in 1993? Granted, they called it Adaptive Sensitivity [technion.ac.il] back then.
  • by EnsilZah ( 575600 ) <.moc.liamG. .ta. .haZlisnE.> on Thursday September 09, 2010 @08:30PM (#33529394)

    I haven't noticed it, and now it's been slashdotted so I can't confirm, but I imagine that if they used two different exposure times on the cameras, then on the longer exposure a fast-moving object would be blurred: darker at its core, because it is always blocking the light, and lighter at the edges, because it only blocks the light part of the time.
    So I guess that would create edge artifacts because of the mismatch between the short exposure, which has less motion blur and is mostly at the same level of brightness, and the long exposure, which has the edge blurring.
    I would think you could solve that with a neutral density filter rather than using different exposure lengths.
    This is all one big assumption, though.
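
    Rough numbers for that ND-filter idea (just a back-of-the-envelope sketch, nothing from the video): to keep both cameras at the same shutter speed, so motion blur matches, while still separating them by N stops, the darker camera needs a filter that passes 1/2^N of the light.

    # Sketch: ND filter strength needed so both cameras share one shutter speed
    # (matching motion blur) while still differing by `stops` of exposure.
    import math

    def nd_for_stops(stops):
        transmission = 1.0 / (2 ** stops)      # e.g. 2 stops -> 1/4 of the light
        density = stops * math.log10(2)        # optical density, ~0.3 per stop
        return transmission, round(density, 2), "ND%d" % (2 ** int(stops))

    print(nd_for_stops(2))   # (0.25, 0.6, 'ND4')  - a 2-stop separation
    print(nd_for_stops(4))   # (0.0625, 1.2, 'ND16')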

  • Unimpressed (Score:5, Informative)

    by Ozan ( 176854 ) on Thursday September 09, 2010 @08:37PM (#33529442) Homepage

    The technique is promising, but the provided example video does not demonstrate a true advantage over conventional cinematography. They filmed with two cameras, one overexposing and one underexposing, but they don't have one with the right exposure to compare against the composited HDR images. The city scenes are filmed in daylight, without any areas of high contrast that would make a high dynamic range necessary. The same goes for the people example; they even overdid the effect to make it vibrant, which makes it more of an artistic tool than a way to capture shadows and light naturally.

    They should make a short film with nighttime city and desert scenes; that should be impressive. They should also contact director Michael Mann, he would jump at the opportunity to film HDR.

  • Re:HDR? (Score:5, Informative)

    by mtmra70 ( 964928 ) on Thursday September 09, 2010 @08:43PM (#33529480)

    HDR looks so unreal even if at times aesthetically pleasing. Their "more real" filter didn't do the scene much justice either.
    Was the guy supposed to look that way?

    The video was not very good at all, so I'm not sure why it is a big deal. The video of the guy was more HDR than any other part, though it was very strange.

    Take a look at some of the HDR photos on Flickr http://www.flickr.com/groups/hdr/pool/ [flickr.com]. They give much better and more representative examples of HDR.

  • Re:HDR? (Score:3, Informative)

    by ADRA ( 37398 ) on Thursday September 09, 2010 @08:46PM (#33529494)

    I'm not an expert, but from my limited knowledge:

    HDR is taking frames at varying exposure levels and merging them into a single picture that combines the color levels from each. It helps correct contrast-washout areas of the image that aren't at the target exposure, without needing touch-ups. Taking HDR pictures at multiple exposure levels allows for a richer range of captured detail. When I overexpose in sunlight, I get an effect that takes all the detail away from a darker part of the scene. This may be intentional if I'm looking to oversaturate other areas of the image using optical capture techniques. The same effect could be simulated in post-processing by adjusting levels in specific parts of the image, but that's more time-consuming and may not lead to the best results. Having one image underexposed and another overexposed means that the richness of each color range can be captured as it was when shooting. That gives a director a lot of power to change the composition of a shot without needing to re-shoot or do more laborious processing.

    It is hard to do, period, because any optical capture device has a set exposure that it is capturing at. The other issue is that the images have to be basically identical. Any variation (such as a time delay between image captures) can cause ugly or unwanted side effects that would require cleanup later on. Applying this principle to video capture, you -could- have a single camera with one lens/sensor taking images at twice the frame rate, but that means reducing the possible exposure times by at least half, which ultimately limits the lighting conditions one could shoot HDR in (hard/impossible in the dark?). One could shoot with two cameras simultaneously, but then the images don't line up exactly, which leads to ugly artifacts. For close-ups this is all but infeasible, because the artifacts become larger and more apparent. Think of this as the anti-3D concept: you want two pictures taken at the same time, but instead of having them offset across the capture view plane, the photographer wants them as close to identical in angle/offset as possible. For 3D HDR movies, you'd need at least 4 simultaneous frames being captured at all times (two left, two right).

    These guys' solution seems to be taking one lens and applying a beam splitter (http://en.wikipedia.org/wiki/Beam_splitter) (which ultimately reduces the amount of incoming light) to split the frame identically between two channels, which are fed into two Canon cameras (capturing video) set to different exposure timings. They've chosen to use +/-2 stops, and I don't really know if that's ideal for HDR capture or if it's just the maximum automatic exposure variation they can choose in the 'pseudo-auto' exposure modes built into the cameras.
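
    For what it's worth, here is a sketch of what that frame-by-frame compositing might look like. This is not their actual pipeline; the filenames, the assumption that the two streams are already aligned and synchronized, and the exact exposure spacing are all guesses.

    # Sketch (not Soviet Montage's actual pipeline): merge two aligned, synced
    # video streams, one ~2 stops under and one ~2 stops over the nominal
    # exposure (so 4 stops apart), frame by frame. Names/settings are assumptions.
    import cv2
    import numpy as np

    under = cv2.VideoCapture("under_2stops.mov")
    over = cv2.VideoCapture("over_2stops.mov")
    times = np.array([1/400, 1/25], dtype=np.float32)  # 16x apart = 4 stops

    merge = cv2.createMergeDebevec()
    tonemap = cv2.createTonemapReinhard()  # needed to write an ordinary 8-bit video
    writer = None

    while True:
        ok1, f_under = under.read()
        ok2, f_over = over.read()
        if not (ok1 and ok2):
            break
        hdr = merge.process([f_under, f_over], times)          # float radiance frame
        ldr = np.clip(tonemap.process(hdr) * 255, 0, 255).astype(np.uint8)
        if writer is None:
            h, w = ldr.shape[:2]
            writer = cv2.VideoWriter("hdr_out.avi",
                                     cv2.VideoWriter_fourcc(*"MJPG"), 24, (w, h))
        writer.write(ldr)

    under.release(); over.release()
    if writer is not None:
        writer.release()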

  • Re:HDR? (Score:5, Informative)

    by ColdWetDog ( 752185 ) on Thursday September 09, 2010 @08:48PM (#33529506) Homepage
    That's one of the problems with HDR photography. The light to dark transitions just don't look quite right and so the scene has an 'unreal' appearance. Either washed out or cartoonish.

    You see that all of the time in still HDR photography, and I think it has to do with the limitations of the final media - movie screens, paper, computer screens - which do not reproduce the eye's ability to deal with contrast well. In prints, you can work with this and minimize, but not completely remove, the effect. I imagine that they could tweak their algorithms a little better, but Internet video isn't a particularly high-quality visual experience in the first place, so there will be some limitations in how well they can do it.
  • Re:HDR? (Score:1, Informative)

    by Anonymous Coward on Thursday September 09, 2010 @08:55PM (#33529554)

    Tone mapping and HDR are often used synonymously. Unfortunately the article also confuses HDR with tone mapping. They're two parts of the process which in combination often create an "unreal" look. The HDR part is about capturing the higher dynamic range. The tone mapping part is about reducing the dynamic range without losing detail or color in the shadows or highlights (blue sky instead of white, texture instead of flat shadows). The tone mapping is what makes these pictures look unreal when it is overdone or performed carelessly. Algorithms for automatic, realistic-looking tone mapping are still a research topic. It doesn't have to look unreal, though. Tone mapping can be used to create realistic impressions. For example, in this panoramic image [fotoausflug.de], the result of tone mapping is that you can see the tables in the shadow and the blue sky with the faint clouds at the same time. Without tone mapping, you'd see a white sky or black shadows. (That picture is not an HDR picture, but it is strongly tone mapped. This [fotoausflug.de] is an HDR and tone-mapped picture.)
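
    For the curious, the tone-mapping half in its simplest global form is just Reinhard's L/(1+L) curve. A sketch, assuming a Radiance .hdr file on disk (placeholder name, e.g. the one written by the earlier merge sketch) and OpenCV/NumPy:

    # Sketch: global Reinhard tone mapping, L -> L/(1+L). Being a purely global
    # curve it compresses highlights smoothly and cannot produce halos, because
    # it ignores each pixel's neighbours. The input filename is a placeholder.
    import cv2
    import numpy as np

    hdr = cv2.imread("merged.hdr", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR).astype(np.float32)

    # Per-pixel luminance (Rec. 709 weights; OpenCV loads channels as BGR).
    lum = 0.2126 * hdr[..., 2] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 0]
    lum_out = lum / (1.0 + lum)

    # Scale the colour channels by the luminance ratio, then quantise to 8 bits.
    ratio = lum_out / np.maximum(lum, 1e-6)
    ldr = np.clip(hdr * ratio[..., None] * 255.0, 0, 255).astype(np.uint8)
    cv2.imwrite("tonemapped.jpg", ldr)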

  • by arcsimm ( 1084173 ) on Thursday September 09, 2010 @09:51PM (#33529846)
    The bright spots are indeed an artifact of the HDR process -- particularly the tone-mapping algorithms. On its own, HDR is basically a method of capturing intensity values that would otherwise fall above or beneath the threshold of a camera's sensitivity. The problem is, when you do that you end up with image data that can't be completely represented within the gamut of a printer or a screen. You could simply display a "slice" out of the data, which results in a regular image at whatever exposure setting you've chosen, or try to "compress" the tone values into your available gamut, which results in a washed-out appearance. This is where tone mapping comes in. What tone mapping does is try to compute the correct exposure level on a per-pixel basis, by comparing each pixel's intensity to that of nearby pixels. Ideally, this results in shadows being brightened to the point where you can see detail in them, and blown-out highlights being toned down (analogous to "dodging" and "burning" in old-school darkroom film processing -- the dynamic range of film is much higher than that of photo paper).

    In practice, though, you end up with weird highlights around dark areas, like the ones you saw around the man's arms, because the tone-mapping algorithm is trying to maximize the local contrast in the image. It has brightened up the coat, and so it also brightens nearby pixels to compensate for the reduction in contrast. Some people try to adjust the algorithms to minimize this effect, while others try to maximize it for dramatic effect, or even an oversaturated, impressionistic look -- it's largely an artistic choice, though when done badly it can also be a sign of amateurism. Still others will manually composite multiple exposures to get the benefits of HDR imaging while avoiding its side effects entirely.

    The Wikipedia article on tone-mapping [wikipedia.org] goes into great detail on the different approaches to HDR photography, if you're interested.
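
    A sketch of that local approach as a base/detail split: blur the log-luminance to estimate local adaptation, compress only that base layer, and keep the detail. Using a plain Gaussian blur for the base (done here only for brevity; edge-aware filters like the bilateral are the usual fix) is exactly what lets brightness bleed across dark/light boundaries and produce the halos mentioned above. The input filename is a placeholder.

    # Sketch: local tone mapping by splitting log-luminance into a blurred
    # "base" (large-scale brightness) plus "detail", then compressing only the
    # base. A plain Gaussian base bleeds across edges, which is where halos
    # come from; an edge-aware blur would reduce them.
    import cv2
    import numpy as np

    hdr = cv2.imread("merged.hdr", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR).astype(np.float32)
    lum = 0.2126 * hdr[..., 2] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 0]

    log_lum = np.log10(np.maximum(lum, 1e-6))
    base = cv2.GaussianBlur(log_lum, (0, 0), sigmaX=30)  # local adaptation estimate
    detail = log_lum - base                              # edges and texture

    log_out = base * 0.4 + detail                        # squeeze only the big range
    lum_out = 10.0 ** (log_out - log_out.max())          # brightest point maps to 1.0

    ratio = lum_out / np.maximum(lum, 1e-6)
    ldr = np.clip(hdr * ratio[..., None] * 255.0, 0, 255).astype(np.uint8)
    cv2.imwrite("local_tonemap.jpg", ldr)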
  • Re:HDR? (Score:5, Informative)

    by icegreentea ( 974342 ) on Thursday September 09, 2010 @09:54PM (#33529862)
    You can get HDR to look 'fine' or whatever adjective you want to use. It's just hard. The tone-mapping software/settings that many people use will just go and create doll skin and haloes everywhere. But if you do everything well (hard work!) you can get some really cool looking stuff. For example...

    http://www.flickr.com/photos/swakt1/2322363690/
    http://www.flickr.com/photos/swakt1/2322366898/in/photostream/
    http://www.flickr.com/photos/ten851/4972637653/in/pool-hdr

    Somewhat like many other art techniques, when best used, you barely notice it at all. And that is the most important thing to remember. HDR + tone mapping isn't just a technology, it is an art. Being able to capture video in 3 different stops at once is great, but it'll still look like crap unless you treat it with respect and give it the effort and time needed.

    Remember, HDR + tone mapping is just trying to create a low dynamic range image on a low dynamic range display that LOOKS something like what your mind perceives in a high dynamic range environment. Obviously, that's kinda hard, especially since the human eye can change its sensitivity as it focuses on different parts of a scene in real life, but not really when looking at a computer screen or print.
  • by icegreentea ( 974342 ) on Thursday September 09, 2010 @10:01PM (#33529920)
    You're seeing a moving halo effect. Most tone-mapping processes have trouble with dark on light transitions. Basically, in an attempt to 'smooth' out the transition between lightening/darkening, you get the lightening effect bleeding from the dark regions to the lighter regions creating a halo. If you watch the starting sequence with the buildings, if you look at the right side with one building in the foreground, and the dark side of another building in the background, you can once again see the halo effect. Just go google around HDR images, and you'll see it everywhere. It's very hard to get rid of, and simply put, if you run any tone-mapping process on default, you'll end up with them.

    It's basically the result of the software not being able to tell with confidence where the boundary between higher and lower exposure is, so instead it assigns an approximation that "plays it safe" in one direction, and then smears out the boundary. Basically Photoshop's magic wand selection + feathering.
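
    A tiny one-dimensional illustration of that smearing (just NumPy, no real image needed): tone-map a hard dark-to-bright step against a blurred local average and the pixels just inside each side of the edge get pushed the wrong way - the dark rim / bright rim pair you see as a halo.

    # Sketch: halo formation in 1-D. A hard step from dark to bright is
    # tone-mapped against a *blurred* local average; pixels near the edge
    # over/undershoot, which is the halo described above.
    import numpy as np

    signal = np.concatenate([np.full(50, 1.0), np.full(50, 100.0)])  # dark | bright

    kernel = np.ones(21) / 21.0                       # crude local adaptation map
    local_avg = np.convolve(signal, kernel, mode="same")

    tone_mapped = signal / (signal + local_avg)       # flat areas settle at 0.5

    # Near the edge the blurred average "sees" across the boundary: dark pixels
    # get an inflated average (pushed below 0.5, a dark rim) and bright pixels
    # a deflated one (pushed above 0.5, a bright rim).
    print(np.round(tone_mapped[40:60], 3))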
  • Re:HDR? (Score:3, Informative)

    by Prune ( 557140 ) on Thursday September 09, 2010 @10:20PM (#33530022)
    HDR would look real if displayed as HDR--on an HDR display (Brightside Technologies demoed the hardware at several SIGGRAPHs). Instead, they display the output of a tone-mapping algorithm that transforms the HDR to LDR for display on a normal monitor that only has a low dynamic range. The only thing they're doing differently is that they're using an algorithm to reduce the dynamic range, instead of the camera's sensor, because the sensor does it in a 'dumb' way--by being over- or underexposed--whereas a tone-mapping algorithm can preserve detail by compressing the dynamic range nonlinearly and usually location-adaptively.
  • Re:This is not HDR (Score:5, Informative)

    by black3d ( 1648913 ) on Thursday September 09, 2010 @10:48PM (#33530196)

    Incorrect, it's true HDR recording. The process of viewing it on LDR/SDR monitors is tone mapping, which over the years has been tuned to represent the best known science of what the eyes actually see at once - our retinas can only take in a certain range of light at a time.

    In other words, more information is being recorded than your eye can see at once, and you're complaining because when you view it, all that information isn't there? That's a pedantic, unsolvable contradiction.

    A true HDR *display* (extremely difficult to build well - I won't begin to go into the problems of all the display's light coming from one location while other light is also hitting the eye from the real world outside the display, which throws off visual processing of the HDR display) would offer no advantage over a tone-mapped image, as your eye still can't see more than a certain range at any given time.

    Tone-mapped SDR images actually present more visible detail *at once* than the eye can distinguish *at once*. Sure, the eye can do things the still image can't, like focus somewhere else, shield out certain bright or dark parts, and readjust automatically to what you're now viewing - I'm not claiming tone mapping will ever produce as much variance as the eye is capable of - but it DOES bring to light more detail in HDR-recorded scenes than the eye could otherwise see at once looking at the same scene.

  • by HizookRobotics ( 1722346 ) on Friday September 10, 2010 @01:03AM (#33530834) Homepage
    This goes a long way toward the "computational camera [hizook.com]" where you get flexible depth of field (focus at many depths), trading off pixel resolution for HDR / multispectral imaging, and other cool techniques (like stereo). Exciting stuff!
  • Re:HDR? (Score:3, Informative)

    by nomel ( 244635 ) <`turd' `at' `inorbit.com'> on Friday September 10, 2010 @01:12AM (#33530870) Homepage Journal

    There's some motivation to get the "pixels" to respond like the human eye, or the retinal response model, giving the most realism... although this would probably be tweaked to give some effect, since super-real isn't necessarily the goal *cough* 24fps video *cough*.

    Here's a cool paper:
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.109.2728&rep=rep1&type=pdf [psu.edu]

    They must not have access to the "raw" data stream for video, because these sensors have a pretty huge dynamic range, around +/-2 stops of latitude. This is the reason pros shoot in the "raw" format: it saves the brightness data of each pixel in the Bayer pattern as 14-bit values, so you can adjust the exposure afterwards. This is what makes single-image HDR possible. I imagine that the camera manufacturers will eventually do something like what is shown in that paper. Or maybe they'll get the Super CCD (by Fujifilm) style sensor to work better.
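
    A sketch of that single-image trick, under the assumption that you scale the linear raw data up and down by two stops to fake a bracket and then fuse it; rawpy is just one possible decoder and the filename is a placeholder, neither is from the thread:

    # Sketch: "single-image HDR" from one raw frame - the linear 14-bit data is
    # pushed/pulled +/-2 stops to fake a bracket, then fused. rawpy and the
    # filename are assumptions.
    import cv2
    import numpy as np
    import rawpy

    with rawpy.imread("frame.dng") as raw:
        linear = raw.postprocess(gamma=(1, 1), no_auto_bright=True,
                                 output_bps=16).astype(np.float32) / 65535.0

    def expose(img, stops):
        # Push or pull a linear image by `stops`, then gamma-encode to 8-bit.
        pushed = np.clip(img * (2.0 ** stops), 0.0, 1.0)
        return (255.0 * pushed ** (1 / 2.2)).astype(np.uint8)

    bracket = [cv2.cvtColor(expose(linear, s), cv2.COLOR_RGB2BGR) for s in (-2, 0, 2)]

    fused = cv2.createMergeMertens().process(bracket)
    cv2.imwrite("single_raw_hdr.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))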

  • Re:This is not HDR (Score:3, Informative)

    by Animaether ( 411575 ) on Friday September 10, 2010 @01:17AM (#33530878) Journal

    A true HDR *display* [...] would offer no advantage to a tone-mapped image, as your eye still can't see more than a certain range at any given time.

    I don't think you would have said that if you'd seen the BrightSide display at SIGGRAPH 2005:
    http://en.wikipedia.org/wiki/BrightSide_Technologies [wikipedia.org] ...though I'll agree that ideally you'd have as little ambient light as possible; it was fine on the show floor with tons of different flashing lights around.

    I think I've noted it in a previous discussion on 3D displays... once they're done poking around at that, I'd love it if display manufacturers would go back to figuring out a way to make HDR displays cheaply along with industry-wide standards on how to address such displays.

  • by Anonymous Coward on Friday September 10, 2010 @02:18AM (#33531142)

    They have switched the overexposed and underexposed labels on the city scene.

  • by korean.ian ( 1264578 ) on Friday September 10, 2010 @02:40AM (#33531212)

    You can get them en masse for cheap in Asia.

  • Re:HDR? (Score:5, Informative)

    by davolfman ( 1245316 ) on Friday September 10, 2010 @03:24AM (#33531386)
    They used more of a dragan-ish style of HDR here. They set it up to preserve local contrast at the expense of actually mapping brightnesses linearly. That's why it looks so freakish: some tones are brighter than other tones that should have a physically higher brightness.
  • by Anonymous Coward on Friday September 10, 2010 @05:08AM (#33531798)

    I don't know if you could say it's mathematically better, but the camcorder gives you a more "true to life" look at the environment, which we usually perceive as dull or "boring", while the movie camera adds drama with its more saturated colors, larger dynamic range, shallower depth of field and slower movements.

    This is why all the gushing over 1080p60 is actually useless for "cinematic" movies and more suited to sports or anything requiring real-time playback.
