HDR Video a Reality

akaru writes "Using common DSLR cameras, some creative individuals have created an example of true HDR video. Instead of pseudo-HDR, they actually used multiple cameras and a beam splitter to record simultaneous video streams, and composited them together in post. Looks very intriguing."
  • by Above ( 100351 ) on Thursday September 09, 2010 @07:54PM (#33529130)

    HDR

    Focus Stacking

    Panoramic Stitching

    All in the camera, all 1-button easy to use, and all at once.

    • by Lehk228 ( 705449 ) on Thursday September 09, 2010 @08:49PM (#33529514) Journal
      and it also has to give BJs
    • by Prune ( 557140 ) on Thursday September 09, 2010 @10:09PM (#33529960)
      You forgot about full lightfield capture. This can be done with a single camera using ultra high resolution and a microlens array (or alternatively, an array of a very large number of tiny cameras). Think single camera, single shot capture of depth (3D) and all focus planes. Then you can reproduce the full 3D and multiple focus depths (as in, the eye would have to focus at different depths) on a flat display with microlens array covering it (again, need ultra-high resolution since focal depths and parallax viewpoints are discretized to the pixel number covered by each micro lens).
    • by Itninja ( 937614 )
      I think we have that already (except for the button). It's called the human eye.
  • by DWMorse ( 1816016 ) on Thursday September 09, 2010 @08:01PM (#33529194) Homepage

    The trumping technology to follow: 3D-HDR Video!!

  • by kaptink ( 699820 ) on Thursday September 09, 2010 @08:02PM (#33529196) Homepage

    C&P from the linked page (assuming a /.'ing is imminent)

    HDR demo @ http://vimeo.com/14821961 [vimeo.com]

    Press Release:

    HDR Video A Reality

    Soviet Montage Productions releases information on the first true High Dynamic Range (HDR) video using DSLRs

    San Francisco, CA, September 9, 2010: Soviet Montage Productions demonstrated today the first true HDR video sourced from multiple exposures. Unlike HDR timelapse videos that only capture a few frames per minute, true HDR video can capture 24 or more frames per second of multiple exposure footage. Using common DSLRs, the team was able to composite multiple HD video streams into a single video with an exposure gamut much greater than any on the market today. They are currently using this technology to produce an upcoming film.

    Benefits of Motion HDR
    HDR imaging is an effect achieved by taking multiple disparate exposures of a subject and combining them to create images of a higher exposure range. It is an increasingly popular technique for still photography, so much so that it has recently been deployed as a native application on Apple’s iPhone. Until now, however, the technique was too intensive and complex for motion. Soviet Montage Productions believes they have solved the issue with a method that produces stunning–and affordable–true HDR for film and video.

    The merits of true HDR video are various. The most obvious benefit is having an exposure variation in a scene that more closely matches the human eye–think of filming your friend with a sunset at his or her back, your friend’s face being as perfectly captured as the landscape behind them. HDR video also has the advantage of reduced lighting needs. Finally, the creative control of multiple exposures, including multiple focus points and color control, is unparalleled with true HDR video.

    “I believe HDR will give filmmakers greater flexibility not only in the effects they can create but also in the environments they can shoot in,” said Alaric Cole, one of the members of the production team. “Undoubtedly, it will become a commonplace technique in the near future.”

    Contact:
    Michael Safai
    Soviet Montage
    201 Spear Street #1100
    San Francisco, CA 94105
    1 415 489 0437
    mike@sovietmontage.com

    • by lgw ( 121541 ) on Thursday September 09, 2010 @08:27PM (#33529374) Journal

      TL;DR: in Soviet Montage, camera manages multiple exposure for you.

    • "Filmmakers"? I think he means "videographers". I don't see any film involved here.
    • Re: (Score:3, Interesting)

      by thrawn_aj ( 1073100 )
      So, HDR video would help make movies look like ... video games??? Is it just me or does that video (that parent linked to) look amazingly like a (post-HalfLife2) game? I guess this should be a fantastic clue for game programmers who usually try to go the other way ;). Lack of HDR = more "realistic" video? (where realistic is defined by what people are used to). Find an algorithm to intelligently degrade the dynamic range in a rendering and CGI becomes more photorealistic.
      • by yoyhed ( 651244 ) on Friday September 10, 2010 @03:22AM (#33531376)
        It's the other way around.

        Even though we call it high dynamic range in videos and photographs, it's actually just compressing all the extra information from multiple exposures into a LOWER dynamic range, so we can manipulate/display it on our 8-bit screens.

        Games, however (such as the Source engine after it got the HDR update with Half-Life 2: Lost Coast and Day of Defeat: Source) actually do increase the dynamic range of a scene beyond what your monitor can display. They underexpose and overexpose parts of the scene when transitions between light and dark places occur, just as your eyes would before they adjusted to the new light, or as a video camera would depending on what exposure the videographer chose. This makes it look more realistic - just take a look at a bright outdoor scene in Half-Life 2: Episode Two and check out how shiny objects in the sunlight have blown-out highlights that gleam brilliantly, and then look at the same scene in the original Half-Life 2, where that object would look flatly-lit and fake. The "non-HDR" looks more fake because the dynamic range is compressed so you can see all the detail everywhere, which also gives it that flat "game" look.

        Of course, that last part is just my opinion - but I believe that in order to look more realistic, CGI needs to simulate the behavior of traditional cameras with a lower dynamic range (or that of your eyes before they've adjusted properly to bright/dim light). The everything-is-exposed-properly, compressed-dynamic-range look just appears fake to me, even though my eyes could probably perceive that range at the actual scene. I'm not sure why.
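
        A tiny sketch of the adaptation behaviour described above, roughly how game engines fake eye adjustment: each frame, nudge the exposure toward whatever would map the scene's average luminance to mid-grey, so a sudden change of scene briefly over- or under-exposes. The constants and function name here are illustrative assumptions, not any particular engine's code.

        def adapt_exposure(exposure, scene_avg_luminance, target_grey=0.18, speed=0.05):
            """Nudge the current exposure toward the value that would map the
            scene's average luminance to mid-grey. Walking from a dark room
            into sunlight therefore blows out briefly before the 'eye' adapts."""
            desired = target_grey / max(scene_avg_luminance, 1e-6)
            return exposure + speed * (desired - exposure)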
  • HDR? (Score:4, Interesting)

    by afaik_ianal ( 918433 ) * on Thursday September 09, 2010 @08:02PM (#33529200)

    Can anyone give a brief rundown on what HDR is? I know it stands for "high dynamic range", but as someone who knows nothing about photography, it means nothing to me. What does it have to do with overexposure/underexposure (to which the video refers)? Why is it harder to do with video than with still images?

    • Re:HDR? (Score:5, Informative)

      by mtmra70 ( 964928 ) on Thursday September 09, 2010 @08:10PM (#33529246)

      Wiki explains it well:

      High dynamic range imaging is a set of techniques that allow a greater dynamic range of luminances between the lightest and darkest areas of an image than standard digital imaging techniques or photographic methods.

      And their picture is a great example. If you expose the building well, the clouds are washed out. If you expose the clouds well, the building is dark. If you take a picture exposed for each and then merge the photos, you now have a properly exposed building along with a properly exposed sky, thus giving you more dynamic range. Think of it like instead of going to the lunch buffet and cramming everything into one plate, you go up to the buffet three times with three plates: one for salad, one for the main course and one for dessert. With a little processing (trips) you end up with more range (food variety).
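
      A minimal sketch of the merge step described above, in Python with numpy. It assumes the bracketed frames are already aligned and converted to linear light; the hat-shaped weighting is a common illustrative choice, not necessarily what Soviet Montage used.

      import numpy as np

      def merge_exposures(frames, shutter_times):
          """Combine bracketed exposures into one radiance estimate.

          frames: list of float arrays in [0, 1], linear light, aligned.
          shutter_times: relative exposure times, e.g. [1/400, 1/100, 1/25].
          """
          radiance = np.zeros_like(frames[0], dtype=np.float64)
          weights = np.zeros_like(radiance)
          for img, t in zip(frames, shutter_times):
              # Trust mid-tones most; pixels near black or white carry
              # little information in this particular exposure.
              w = 1.0 - 2.0 * np.abs(img - 0.5)
              radiance += w * (img / t)   # undo the exposure to estimate radiance
              weights += w
          return radiance / np.maximum(weights, 1e-6)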

      • Re: (Score:3, Insightful)

        by jack2000 ( 1178961 )
        HDR looks so unreal, even if at times aesthetically pleasing. Their "more real" filter didn't do the scene much justice either.
        Was the guy supposed to look that way?
        • Re:HDR? (Score:5, Informative)

          by mtmra70 ( 964928 ) on Thursday September 09, 2010 @08:43PM (#33529480)

          HDR looks so unreal, even if at times aesthetically pleasing. Their "more real" filter didn't do the scene much justice either.
          Was the guy supposed to look that way?

          The video was not very good at all, so I'm not sure why it is a big deal. The video of the guy was more HDR than any other part, though it was very strange.

          Take a look at some of the HDR photos on Flickr http://www.flickr.com/groups/hdr/pool/ [flickr.com]. They give a much better and more representative example of HDR.

        • Re:HDR? (Score:5, Informative)

          by ColdWetDog ( 752185 ) on Thursday September 09, 2010 @08:48PM (#33529506) Homepage
          That's one of the problems with HDR photography. The light to dark transitions just don't look quite right and so the scene has an 'unreal' appearance. Either washed out or cartoonish.

          You see that all of the time in still HDR photography and I think it has to do with the limitations of the final media - movie screens, paper, computer screens - that do not reproduce the eye's ability to deal with contrast well. In prints, you can work with this and minimize, but not completely remove, the effect. I imagine that they could tweak their algorithms a little better, but Internet video isn't a particularly high quality visual experience in the first place, so there will be some limitations in how well they can do it.
          • Re: (Score:3, Insightful)

            by petermgreen ( 876956 )

            and I think it has to do with the limitations of the final media
            Indeed, a normal monitor has a limited dynamic range. On many modern LCDs each channel is only 6 bits!

            So if you want to make both the shadow and highlight detail in a high dynamic range image visible on a normal monitor, you will have to compress the dynamic range down.

          • Re:HDR? (Score:5, Informative)

            by icegreentea ( 974342 ) on Thursday September 09, 2010 @09:54PM (#33529862)
            You can get HDR to look 'fine' or whatever adjective you want to use. It's just hard. The tone-mapping software/settings that many people use will just go and create doll skin and haloes everywhere. But if you do everything well (hard work!) you can get some really cool looking stuff. For example...

            http://www.flickr.com/photos/swakt1/2322363690/
            http://www.flickr.com/photos/swakt1/2322366898/in/photostream/
            http://www.flickr.com/photos/ten851/4972637653/in/pool-hdr

            Somewhat like many other art techniques, when best used, you barely notice it at all. And that is the most important thing to remember. HDR + tone mapping isn't just a technology, it is an art. Being able to capture video in 3 different stops at once is great, but it'll still look like crap unless you treat it with respect and give it the effort and time needed.

            Remember, HDR + tone mapping is just trying to create a low dynamic range image on a low dynamic range display that LOOKS something like what your mind perceives in a high dynamic range environment. Obviously, that's kinda hard, especially since the human eye can change its sensitivity as it focuses on different parts of a scene in real life, but not really when looking at a computer screen or print.
            • Re: (Score:3, Informative)

              by nomel ( 244635 )

              There's some motivation to get the "pixels" to respond like the human eye, or the retinal response model, giving the most realism... although this probably would be tweaked to give some effect, since super real isn't necessarily the goal *cough* 24fps video *cough*.

              Here's a cool paper:
              http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.109.2728&rep=rep1&type=pdf [psu.edu]

              They must not have access to the "raw" data stream for video, because these sensors have a pretty huge dynamic range, around +/- 2 stops

          • by sjwt ( 161428 )

            Just like any effect, you are applying art to art, and thus you get a) what you pay for and b) subjective output.

            HDR is not a magic 'make it more real', nor is it a 'fix for bad photos';
            some examples will look like a normal photo, others will look like a fluoro watercolour. It's what the 'artist' chose to make.

            http://www.toxel.com/inspiration/2008/11/15/beautiful-examples-of-hdr-photography/ [toxel.com]
            http://www.stuckincustoms.com/hdr-tutorial/ [stuckincustoms.com]

        • Re:HDR? (Score:5, Interesting)

          by plover ( 150551 ) * on Thursday September 09, 2010 @09:33PM (#33529756) Homepage Journal

          One problem I realized after watching the scene with the guy is that the video compression artifacts can be different between the two cameras. Even if the sensors were perfectly aligned with each other and the optics, the MPEG compression could be different because the values at each pixel will still be slightly different due to the differences in exposure levels. Different pixel values can cause different compression schemes to be invoked in each block, which will result in weird combinations of aliasing. I think this may have been partly responsible for the shimmer on his denim jacket.

          • by theJML ( 911853 )

            Sounds like we'll just need to dump that video from the cameras in RAW, do the post processing and then compress it. Which is the way it should happen anyway if it wasn't for speed limitations in getting RAW 24fps 1080p video off of the camera.

        • Re:HDR? (Score:5, Insightful)

          by im_thatoneguy ( 819432 ) on Thursday September 09, 2010 @09:47PM (#33529818)

          "HDR" images don't look unreal. Tonemapped HDR images look unreal.

          You can do the same thing to Low Dynamic Range Images, and they'll look just as unreal. Similarly, you can take an 18-stop HDR image, apply normal image processing techniques, and get realistic looking images.

          The *only* defining aspect of HDR images is the large amount of dynamic range they contain. The fact that people abuse that dynamic range is an aesthetic issue completely separate from HDR.

          It's like saying that Photoshop makes images look fake. *Photoshop* doesn't make images look fake, bad artists make images look fake. You don't have to apply a stock lens flare to your family photo. It won't be too long before all cameras just shoot HDR. The largest application then will be to adjust the exposure at home without worrying about under or over exposing that shot of your friends on the beach.
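
          A toy illustration of that "adjust the exposure at home" idea: once the scene is stored as linear HDR radiance, picking an exposure after the fact is just a multiply and a clip. The EV convention and the simple 2.2 gamma used here are assumptions for the sketch, not anything from the article.

          import numpy as np

          def virtual_exposure(hdr_radiance, ev):
              """Re-expose a linear HDR image after the fact.
              ev is in stops: +1 doubles brightness, -1 halves it."""
              scaled = hdr_radiance * (2.0 ** ev)
              # Clip to the displayable range, then encode with a simple 2.2 gamma.
              return np.clip(scaled, 0.0, 1.0) ** (1.0 / 2.2)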

        • Re: (Score:3, Informative)

          by Prune ( 557140 )
          HDR would look real if displayed as HDR--on an HDR display (Brightside Technologies had demos of the hardware at several SIGGRAPHs). Instead, they display the output of a tone-mapping algorithm that transforms the HDR to LDR for display on a normal monitor that only has a low dynamic range. The only thing they're doing different is that they're using an algorithm to reduce the dynamic range, instead of the camera's sensor, because the sensor does it in a 'dumb' way--by being over- or underexposed, w
      • Where there are bright halos around every transition and your picture has no clear subject.

      • and the net result is it looks like a video game. I mean seriously, did anyone else notice that? The lighting range looks like something from a game, not quite full-bright, but the shadows just seem... wrong.
      • Re: (Score:3, Insightful)

        by Prune ( 557140 )
        They record HDR but then they compress the image to LDR (low dynamic range) for display on a regular monitor. You don't see the HDR display, just the result of the tone-mapping algorithm that transforms the HDR data into an LDR one. This is a common abuse of the term HDR. It's the same thing with the graphics effect in games. The internal processing is HDR, but then it's tone-mapped to LDR for display on a regular monitor, often with the addition of simulated bloom on overexposed areas. It's unfortunat
    • Re:HDR? (Score:4, Informative)

      by treeves ( 963993 ) on Thursday September 09, 2010 @08:11PM (#33529252) Homepage Journal

      It requires post-processing. You combine images shot at bracketed (above and below the "optimum") exposures, in order to get the details in both the brightest and darkest parts of the image which are sometimes lost in high contrast situations. You end up compressing (to use an audio analogy) the brightness range into a smaller range so it can be reproduced on a monitor or paper.
      The post-processing of a LOT of frames requires a lot of processing power and time.
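
      As a hedged sketch of one common way to do that compression: a global Reinhard-style operator that squashes unbounded luminance into [0, 1] for a monitor. The constants are conventional defaults chosen for illustration, not anything from the article.

      import numpy as np

      def reinhard_tonemap(hdr, key=0.18):
          """Compress linear HDR radiance into [0, 1] for display (global operator)."""
          lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
          log_avg = np.exp(np.mean(np.log(lum + 1e-6)))  # geometric mean luminance
          scaled = key * lum / log_avg                   # map the average to mid-grey
          compressed = scaled / (1.0 + scaled)           # approaches 1 asymptotically
          ratio = compressed / (lum + 1e-6)
          return np.clip(hdr * ratio[..., None], 0.0, 1.0)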

    • Re: (Score:3, Insightful)

      by lee1026 ( 876806 )

      Well, a camera can only capture so much of the difference between the brightest parts of the image and the dimmest part of the image. How HDR works is that you take one picture that is extremely dark, and then you take another picture that is extremely bright, and you merge them together so that the resulting picture can capture more of the super bright parts and more of the super dim parts. Now, the problem for video is that it is hard to take the bright shots and the dim shots at the same time, because yo

      • How HDR works is that you take one picture that is extremely dark, and then you take another picture that is extremely bright, and you merge them together

        In Soviet Montage, HDR merges YOU!

        Oh wait... no, you're right. :-D

    • Enter HDR in Google and you don't even have to click Enter.

    • by Qzukk ( 229616 )

      The wikipedia article is pretty good: http://en.wikipedia.org/wiki/High_dynamic_range_imaging#Example [wikipedia.org]

      The idea is that you merge together overexposed photos (which show all the darker details) and underexposed photos (that only show the brightest details) to come up with a picture that has all of the details in it.

    • Re: (Score:3, Interesting)

      by EnsilZah ( 575600 )

      I'm no expert on the subject, but the basics as I understand them: you take several photos at different exposures. That way you have all the details in the dark areas from the overexposed photo and the details in the bright areas from the underexposed photo (which would otherwise be burnt out). You can even use an HDR image for lighting a 3D scene, I guess by analyzing the nonlinear way lighting changes between exposures (this area I'm a bit less clear about)

      It's difficult to do for video since for a still

    • by takev ( 214836 )
      HDR means you use more bits when recording the image. More than the usual 8 bits per color component. One can already do a bit of HDR when you take the raw image from most photo cameras, which have 10 to 14 bits of depth. However, these 10 to 14 bits are linear light (as opposed to gamma corrected for the display), so their dynamic range is not much better.

      The real improvement comes from taking multiple exposures of different lengths of the same subject. Then combine these exposures into a single image; basica
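
      To illustrate the linear-light point above: ordinary 8-bit frames are gamma encoded, so before merging exposures you would first undo the transfer curve. A sketch using the standard sRGB curve, which is an assumption here; camera footage often uses other transfer curves.

      import numpy as np

      def srgb_to_linear(img8):
          """Undo the sRGB transfer curve so pixel values are proportional to light."""
          c = img8.astype(np.float64) / 255.0
          return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)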
    • Re: (Score:3, Informative)

      by ADRA ( 37398 )

      I'm not an expert, but from my limited knowledge:

      HDR is taking frames of varying exposure levels and merging them into a single picture that contains color levels combined from both. It would help in correcting contrast washout in areas of the image that aren't at the target exposure, without needing touch-ups. Taking HDR pictures at multiple exposure levels allows for a richer range of captured detail. When I overexpose in sunlight, I get an effect that takes all detail away from a darker piece of th

    • Basically, the range of light that exists in reality far exceeds the recording capability of most devices, including our eyes.

      Our eyes are actually one of the best at reacting to very dynamic ranges of light.

      Cameras have varying degrees of dynamic range, but most of them, if not all (as far as I'm aware), can't capture the full range of light.

      Put it this way... In real life, light ranges from 0 to 1. 0 being the absolute absence of light, and 1 being the brightest thing possible. (I'm not sure we know what

      • Put it this way... In real life, light ranges from 0 to 1. 0 being the absolute absence of light, and 1 being the brightest thing possible. (I'm not sure we know what that is yet) :)

        Light levels found on earth range from nearly 0 photons per second (due to blackbody radiation you will find a complete lack of photons only in an absolute-zero temperature environment) to at least 10^13 times brighter than the surface of the sun (if you find yourself too close to an H-bomb explosion).

        It is physically impossibl

    • Dynamic range is the ratio between the smallest signal you can detect (under a given set of settings) and the largest.

      If your dynamic range is too small for the scene, then no matter what exposure you choose you will lose detail in some parts of the image.

      Cameras have a much smaller dynamic range than the human eye so scenes that our eyes deal with no problem can pose a problem for cameras.

      High dynamic range imaging gets around this by taking two or more exposures. Longer exposures (or wider apertures or higher sen

  • Very impressive! (Score:3, Interesting)

    by WilliamGeorge ( 816305 ) on Thursday September 09, 2010 @08:08PM (#33529228)

    I've been a long-time fan of HDR photography, and was just thinking about ways that HDR could be implemented in video camcorders as well. Personally I'd like to see a correctly-exposed stream mixed in with the other two, as is common in photography, but even without that the effect is pretty darn cool.

    By the way, in case any camcorder manufacturers are watching, consider this idea: make a video camera with three (or more) times the required number of sensors for the resolution you want to record at. Set the logic in the device up to use three unique sets of sensors inside to pick up three different sets of images, at differing exposure settings. Then have them saved separately so that they can be integrated later on for various editing effects - or have a mode where they are integrated on-the-fly for easier use by non-professionals. I imagine it would be expensive to make such a complex sensor and camera, but it might be easier to manage than multiple cameras as the folks in the article did.

    • Re: (Score:3, Insightful)

      by timeOday ( 582209 )

      By the way, in case any camcorder manufacturers are watching, consider this idea: make a video camera with three (or more) times the required number of sensors for the resolution you want to record at.

      That's crazy. You'd get practically the same effect just by alternately under/over-exposing successive frames. From there you could interpolate whatever level of exposure you wanted without losing too much detail.
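
      A rough sketch of that alternating-exposure idea, assuming linear frames and ignoring the motion compensation a real implementation would need between successive frames; the function and parameter names are made up for illustration.

      def merge_alternating(frame_a, frame_b, exposure_a, exposure_b):
          """Combine two consecutive video frames shot at different exposures.
          Each frame is scaled back to scene radiance by its exposure, then averaged."""
          return 0.5 * (frame_a / exposure_a + frame_b / exposure_b)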

  • Just in case anyone was wondering. It would be nice if editors would get into the habit of making sure that the front page summaries had a definition of these TLAs in at least 10% of the posted articles. TLA == three letter acronym, by the way.
  • by scdeimos ( 632778 ) on Thursday September 09, 2010 @08:11PM (#33529260)
    Wasn't the first HDR video camera back in 1993? Granted, they called it Adaptive Sensitivity [technion.ac.il] back then.
    • by jd ( 1658 )

      The first still HDR camera was developed in the late 1800s. The chances are extremely high that video HDR dates back much earlier than 1993. The chances are extremely high that garage developers had started work on such devices before sound had been introduced.

      • The techniques for HDR in the 1800s involved taking negatives with different exposures and splicing them to create a positive with varying degrees of exposure, i.e., a standard camera using dark-room techniques, rather than the camera doing variable ranges at all.

    • Re: (Score:3, Interesting)

      by forkazoo ( 138186 )

      Spheron had an awesome single-sensor HDR video camera demo at SIGGRAPH this year. It records 20 stops of latitude, and after some processing for debayering and whatnot, you get an EXR sequence. I got to see it live, in person, and stand a few feet away from the camera. The guy running the demo even let me play with some footage in Nuke on the demo laptop. I'm confused about why a hacked up beamsplitter based system would be so noteworthy, when the single-sensor method will suffer less light loss thanks

  • I had my first foray into HDR still photography recently and I have to say I'm very very impressed with the results. Certain night-time scenes look absolutely stunning using 4-5 exposures. Here's some shots by a friend of a friend: http://roache7.deviantart.com/gallery/ [deviantart.com].
  • by T Murphy ( 1054674 ) on Thursday September 09, 2010 @08:14PM (#33529288) Journal
    In the video, there is a part showing a man talking, and eventually he waves his arms around. At that point, you can see some parts of the picture become brighter near his arms- clearly not shadows, so it must be an artifact of the HDR processing. Anyone care to explain what might cause this, or how it might be addressed? I don't know much about HDR so I wouldn't have a clue, but some insight into the technical stuff behind the process would be interesting (and help people like me better learn and appreciate HDR).
    • Re: (Score:3, Informative)

      by EnsilZah ( 575600 )

      I haven't noticed it, and now it's been slashdotted so I can't confirm, but I imagine that if they used two different exposures on the cameras, then on the longer exposure a fast-moving object would be blurred, so at its core it would be darker (because it's always blocking the light) while at the edges it would be lighter (since it's only blocking the light part of the time).
      So I guess it would create edge artifacts because of the mismatch between the short exposure, which has less motion blur and is mostly at the

    • by jd ( 1658 )

      If you take a look at the still photographs at the LOC from the photographer who toured Russia in 1913 or so, you'll see that anything moving splits into its component exposures as a function of the speed of motion. I would imagine something similar is taking place here, albeit to a smaller degree because of the higher speed of the film and not needing to swap filters physically.

    • by Bo'Bob'O ( 95398 )

      I think it was bloom in the higher exposure version.

    • by arcsimm ( 1084173 ) on Thursday September 09, 2010 @09:51PM (#33529846)
      The bright spots are indeed an artifact of the HDR process -- particularly the tone-mapping algorithms. On its own, HDR is basically a method of capturing intensity values that would otherwise fall above or beneath the threshold of a camera's sensitivity. The problem is, when you do that you end up with image data that can't be completely represented within the gamut of a printer or a screen. You could simply display a "slice" out of the data, which results in a regular image at whatever exposure setting you've chosen, or try to "compress" the tone values into your available gamut, which results in a washed-out appearance. This is where tone-mapping comes in. What tone-mapping does is try to compute the correct exposure levels on a per-pixel basis, by comparing each pixel's intensity relative to nearby pixels. Ideally, this results in shadows being brightened to the point where you can see detail in them, and blown-out highlights toned down (analogous to "dodging" and "burning" in terms of old-school darkroom film processing -- the dynamic range of film is much higher than that of photo paper).

      In practice, though, you end up with weird highlights around dark areas, like the ones you saw around the man's arms, because the tone-mapping algorithm is trying to maximize the local contrast in the image. It's brightened up the coat, and so it also brightens nearby pixels to compensate for the reduction in contrast. Some people try to adjust the algorithms to minimize this effect, while others try to maximize it for dramatic effect, or even an oversaturated, impressionistic look -- it's largely an artistic choice, though when done badly it can also be a sign of amateurism. Still others will manually composite multiple exposures to get the benefits of HDR imaging while avoiding its side effects entirely.

      The Wikipedia article on tone-mapping [wikipedia.org] goes into great detail on the different approaches to HDR photography, if you're interested.
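
      A minimal sketch of the local, neighbourhood-relative exposure described above, and of why halos appear: the Gaussian blur that estimates local brightness cannot follow sharp light/dark edges, so bright regions bleed into dark ones. An edge-aware filter (e.g. a bilateral filter) is the usual fix; the constants here are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def naive_local_tonemap(hdr_luminance, sigma=30, base_compression=0.4):
          """Expose each pixel relative to its blurred neighbourhood."""
          log_lum = np.log(hdr_luminance + 1e-6)
          base = gaussian_filter(log_lum, sigma)      # large-scale brightness
          detail = log_lum - base                     # local contrast to keep
          out = np.exp(base_compression * base + detail)
          return out / out.max()                      # halos appear along strong edges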
    • by icegreentea ( 974342 ) on Thursday September 09, 2010 @10:01PM (#33529920)
      You're seeing a moving halo effect. Most tone-mapping processes have trouble with dark-on-light transitions. Basically, in an attempt to 'smooth' out the transition between lightening/darkening, you get the lightening effect bleeding from the dark regions into the lighter regions, creating a halo. In the opening sequence with the buildings, if you look at the right side, with one building in the foreground and the dark side of another building in the background, you can once again see the halo effect. Just go google around HDR images and you'll see it everywhere. It's very hard to get rid of, and simply put, if you run any tone-mapping process on its defaults, you'll end up with them.

      It's basically the result of the software not being able to tell with confidence where the boundary between higher and lower exposure is, so instead it assigns an approximation that "plays it safe" in one direction, and then smears out the boundary. Basically Photoshop's magic wand selection + feathering.
      • by pspahn ( 1175617 )

        This one.

        If you can compose a shot with this in mind, you can help minimize the effect. That's for still shots, though; obviously with video it would be impossible to take into account. It tends to be seen more when the tonemapping is overdone, though I was working on one shot that no matter what I did or how subtle the tonemapping channel was, it was still blindingly obvious. The best solution was to try and get it to blend in somehow with the background. In this case it was a very contrasted sky with thick dark clou

  • It really makes you think...

  • The results are beautiful, in my opinion. HDR looks great when done with restraint. I've even used HDR for work a few times, such as this "portrait of a truck" for a haulage company:
    http://www.flickr.com/photos/meejahor/2073616479/sizes/z/ [flickr.com]

    There's a slight mix-up in the video captions though. Where the captions say underexposed and overexposed, they've got the terms the wrong way round. Probably just a language barrier thing, though, as it's a Russian team.

    • Before anyone jumps in to point out my mistake, the team might not be Russian after all. I didn't know until now that "Soviet Montage" (company name) is a film-making technique.

  • Unimpressed (Score:5, Informative)

    by Ozan ( 176854 ) on Thursday September 09, 2010 @08:37PM (#33529442) Homepage

    The technique is promising, but the provided example video does not demonstrate a true advantage over conventional cinematography. They filmed with two cameras, one overexposing and one underexposing, but they don't have one at the right exposure to compare with the composited HDR images. The city scenes are filmed in daylight, without any areas of high contrast that would make a high dynamic range necessary. The same goes for the people example; they even overdid it to give it a vibrant effect, making it more of an artistic tool than a way of capturing shadows and light naturally.

    They should make a short film with city nighttime and desert scenes; that should be impressive. They should also contact director Michael Mann, who would jump at the opportunity to film in HDR.

    • by XaXXon ( 202882 )

      There is no such thing as a "right exposure".

      That's why they were talking about the dynamic range of the eye - if you expose for the highs, you lose the lows that a human can see and vice versa.

  • Call me a bitter old man, but wake me up when they have cheap HDR displays available to purchase so we don't have to do tonemapping on HDR images. I've been into HDR for a while, and tonemapping KILLS HDR; it makes it look cartoonish.

    There are VERY few cases where I have seen HDR done right. Everyone thinks HDR means using bloom like it's going out of style... /grumpy

    • 99.9% of HDR photos I've seen look like clown puke. Now we'll be able to watch clown puke videos, yay.
    • by pspahn ( 1175617 )

      Your eye just knows what to look for. For everyone else, the tonemapping creates an interesting effect. Sure, it's not always realistic, and is mostly overdone, but it's still interesting when you first see it.

      It's the autotune of photography.

  • But, I have to say I wouldn't be someone who would ever use this. I can see the merit for some stills, I can see some use for documentary, I can see the merit for amateurs wanting to capture a wedding, or for limited VFX scenes in motion pictures, but as a cinematographer this is pretty much the opposite of what I would ever want to achieve.

    Give me chiaroscuro every time. You only have to look at the work of Conrad L Hall, Gordon Willis, Caleb Deschanel or Nestor Almendros to name but a few, to see how b
  • by LoudMusic ( 199347 ) on Thursday September 09, 2010 @09:18PM (#33529662)

    This is so much better than 3D technology. It's even better than high definition video. This is actually the process of creating better images. I am actually really excited about this!

  • At work we routinely create images with 12 bits of dynamic range. It's trivial to map this for 8-bit display. We use monochrome sensors, though, and I don't know if the same dynamic range is available for color.

  • This looks to me like the video equivalent of audio compression - squeezing the life out of the media to make it fit within a certain constraint.

    Thanks but until the entire chain is HDR, I'll pass.

    Ron

  • Is it me or does that look exactly like newer videogames with heavy textures?

    I had no idea what HDR was, so when I started looking at the video I actually thought it was a videogame. Kinda reminds me of COD MW.

    • by 0123456 ( 636235 )

      Is it me or does that look exactly like newer videogames with heavy textures?

      That was my first thought: 'hey, it's Half-Life 2!'

  • This is not HDR (Score:2, Flamebait)

    by Trogre ( 513942 )

    If it was true HDR then we'd need a (non-existent, AFAIK) HDR monitor to see it. This is two exposures compressed down to a standard dynamic range image, aka fake HDR.

    The novel part here is in the simultaneous capturing process of these two exposures.

    • Re:This is not HDR (Score:5, Informative)

      by black3d ( 1648913 ) on Thursday September 09, 2010 @10:48PM (#33530196)

      Incorrect, it's true HDR recording. The process of viewing it on LDR/SDR monitors is tone-mapping, which over the years has been tuned to represent the best known science of what the eyes actually see at once - our retinas already limit us to viewing only a certain range of light at a time.

      In other words, more information is being recorded than your eye can see at once, and you're complaining because when you see it, all that information isn't there? That's a pedantic, unsolvable contradiction.

      A true HDR *display* (unfathomably difficult to imagine; I won't begin to go into the problems with the source of all the light being in one location, while other light is also hitting the eye from the real world outside of the display, making visual processing of the HDR display massively erroneous) would offer no advantage over a tone-mapped image, as your eye still can't see more than a certain range at any given time.

      Tone-mapped SDR images actually produce images with more visible detail *at once* than the eye can distinguish *at once*. Sure, the eye can do things the still image can't, like focus somewhere else, shield out certain bright or dark parts, and readjust automatically to what you're now viewing - I'm not claiming tone-mapping will ever produce as much variance as the eye is capable of - but it DOES bring to light more detail in HDR recorded scenes than the eye could otherwise see at once looking at the same scene.

      • by 0123456 ( 636235 )

        The process of viewing it on LDR/SDR monitors is tone-mapping, which over the years has been tuned to represent the best known science of what the eyes actually see at once - our retinas already make us susceptible to only being able to view certain ranges of light at a time.

        Yet the end result looks far less real than a normal photograph. Odd, that.

        • Re: (Score:3, Interesting)

          by black3d ( 1648913 )

          That's pretty much down to our mental training that a photograph is a realistic representation of lighting in a scene.

          This is similar to the mental effect which makes high frame-rate 60-90fps video look "fake" and less true-to-life to us, who have been watching 25fps movies for decades, despite the opposite being true.

          In truth, printed photographs are terrible representations of light and instead rely on our knowledge of the elements to trick our brain into viewing lit scenes in the context of previous expe

      • Re: (Score:3, Informative)

        by Animaether ( 411575 )

        A true HDR *display* [...] would offer no advantage to a tone-mapped image, as your eye still can't see more than a certain range at any given time.

          I don't think you would have said that if you'd seen the BrightSide display at SIGGRAPH 2005:
          http://en.wikipedia.org/wiki/BrightSide_Technologies [wikipedia.org] ...though I'll agree that ideally you'd have as little ambient light as possible; it was fine on the show floor with tons of different flashing lights around.

        I think I've noted it in a previous discussion on 3D displ

  • Unless they're somehow collecting at least twice as much light (assuming the beam splitting is perfectly efficient), I fail to see how what they're doing really helps, as both of the images (or image streams) are going to wind up being shot at at least twice the ISO that they otherwise would be. That would not be good for sharpness and noise. There had better be a large lens collecting light for these cameras.
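
    For reference, the light-loss arithmetic behind that point, under the simplifying assumption of an ideal 50/50 split:

    import math

    light_fraction = 0.5                       # each camera sees half the light
    stops_lost = -math.log2(light_fraction)    # = 1.0 stop per camera
    iso_multiplier = 1.0 / light_fraction      # each camera needs roughly 2x the ISO
    print(stops_lost, iso_multiplier)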
  • Franken/3D cameras (Score:5, Interesting)

    by gmuslera ( 3436 ) on Thursday September 09, 2010 @11:07PM (#33530290) Homepage Journal
    With frankencamera [stanford.edu] you could do HDR and a lot more in an "intelligent" camera with software. In fact, the first implementation in a mass-consumption device was in the N900: it takes several photos and regulates exposure and other parameters to build the photo in a more parametrizable way than the iPhone can. But I'm not sure that would be enough for HDR video, if it needs the input streams, in real time, to differ at the hardware level. In that case maybe something like this 3D camera [pcr-online.biz] would be needed. And it could give some meaning to such devices... not only shooting in 3D, but in HDR video.
  • by cvd6262 ( 180823 ) on Thursday September 09, 2010 @11:34PM (#33530420)

    Especially the part with the guy talking made me think...

    So someone's found a way to make real life look like Half-Life 2 Episode 2?

  • by Thagg ( 9904 ) <thadbeier@gmail.com> on Friday September 10, 2010 @10:23AM (#33533552) Journal

    For the third of the Fast and Furious movies, we had to film at night in the spectacular Shibuya Square in Tokyo, with its many animated billboards and video screens. I really wanted to get an HDR film of the billboards.

    For the driving green-screen sequences of the film, we had built a plate to mount three cameras, at 0, 45, and 90 degrees, to shoot panoramas driving down the street. To get the nodal points closer together, we had the cameras facing toward each other, with the lenses almost touching. It worked wonderfully.

    By taking the center camera out, and replacing it with a beam-splitter, we had a down-and-dirty HDR rig using the other two cameras. Now, this was HDR on film, not video -- but film already has a very high dynamic range -- so two cameras with very different effective exposures gave us a tremendous dynamic range. In the 'normal' exposure all of the brighter signs were blown out, but on the beam-splitter camera you could see all the details of the structure of the lighted billboards. Quite cool.
