The B&W HDR Experiment

Browncoat

Senior Member
Hi kids...it's been a while since I've posted anything of my own, so I figured I'd let you all in on a little experiment.

You can view a larger version in my gallery, here. B&W photography is made for HDR, in my opinion. Contrast and tonal differences can make a B&W photo incredible, so why not use HDR to bring out those subtle details?

Just wondering what everyone else thinks?

EXIF:
1/200 sec @ f/5.6
ISO 200
LumoPro LP160 fired remote via Cactus v4


5073597262_6a0b90e3ea_z.jpg
 

ohkphoto

Snow White
Anthony, I really love this shot! I've been waiting to see some of your results with your "strobe" equipment. Tell us more about your LumoPro . . . manual setting at what strength (1/1, 1/2, etc.)? Also, which HDR software did you use?

Super job!
 

Browncoat

Senior Member
Thanks for the comments, gang. This is "true" HDR, compiled from 3 shots. My strobe setup was:

  • LumoPro LP160
  • Fired from about 4 ft away via Cactus v4 remote/trigger
  • 1/4 power, 50mm (due to rapid succession of shooting in bracket mode for HDR)

The rest of the post-processing went something like this:

  • WB correction in ACR
  • HDR compiled in Photomatix
  • Some HDR tweaks in Oloneo PhotoEngine
  • Final B&W conversion/sharpening in Photoshop CS4
 

Maxine Abbott

New member
I have not yet ventured into HDR, but over the last few months I've been looking at a lot of HDR pictures, mostly colour ones. This b&w picture is awesome, love it!
 

Browncoat

Senior Member
Wondering what the process is that occurs during HDR. How does it enhance the picture file?

As hi-tech as digital cameras are these days, image sensors are still nowhere near capable of reproducing what the human eye can see. At best, digital cameras survey a scene and do a "best guess" or "average" on tonal differences and lighting. This is especially true if part of your image is dark and another part is light.

Middle to high-end cameras (such as your D90) can shoot in bracket mode. This mode takes 3, 5, or even 7 successive shots of the same scene...each with a different exposure, usually 0.3 EV apart. My photo used 3 shots: 1 overexposed, 1 underexposed, and 1 normal exposure. When combined into a single image with HDR software, the result is a more accurate representation of both light and shadow areas. Using the software, one can choose to be artistic and exaggerate these areas for dramatic effect, or leave them be for a more true-to-life photograph.
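To make the merge step a little more concrete, here's a toy sketch of how bracketing software might fuse three aligned exposures pixel by pixel. The hat-shaped weighting function and the sample pixel values are my own illustrative assumptions; real tools like Photomatix use far more sophisticated camera-response recovery than this:

```python
# Toy sketch of fusing a 3-shot bracket. Pixel values are normalized 0..1.
# The weighting and sample values are illustrative, not Photomatix's method.

def hat_weight(p):
    """Weight pixels near mid-gray highest; clipped shadows/highlights get ~0."""
    return 1.0 - abs(p - 0.5) * 2.0

def fuse(under, normal, over):
    """Fuse three aligned exposures into one image, pixel by pixel."""
    fused = []
    for u, n, o in zip(under, normal, over):
        pixels = [u, n, o]
        weights = [hat_weight(p) for p in pixels]
        total = sum(weights) or 1.0  # avoid divide-by-zero on fully clipped pixels
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

# A highlight that is blown out (1.0) in the overexposed frame gets its
# detail from the underexposed frame, where the same pixel is unclipped.
under  = [0.10, 0.45, 0.70]
normal = [0.30, 0.65, 0.95]
over   = [0.50, 0.85, 1.00]
print(fuse(under, normal, over))
```

The point of the hat weighting is that each frame contributes most where it is best exposed, which is why the fused result keeps detail in both ends of the tonal range.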
 

Browncoat

Senior Member
This may do a better job of explaining HDR. The first image is my "normal" exposure shot. Had I not been using bracket mode, this is what my camera would've given me. As you can clearly see, the second image shows much more detail in both the highlights and the shadows.


test.jpg

5073597262_6a0b90e3ea_z.jpg
 

Ruidoso Bill

Senior Member
Good explanation. I have done the bracketed shots, and when using "Picturenaut" it advises there is not enough variation between exposures. So I tried it with 3 (over, under, and on), and it does create an HDR image, although not as striking as yours. I also don't understand how HDR software can process a single image; any idea what it's doing?
 

Browncoat

Senior Member
I also don't understand how HDR software can process a single image; any idea what it's doing?

I'll do my best to expound on what I've already written...

I once read somewhere that the human eye can see about 14 exposure values (EV). Most cameras, at any given moment, can see 5-8 at best. Any single exposure must therefore sacrifice EV at one end of the range or the other (+ or -); which end ultimately depends on the camera's software and user settings. That's why a single-exposure photograph is referred to as LDR, low dynamic range: some EV has not been captured.
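Since each EV (stop) represents a doubling of light, those figures translate directly into contrast ratios. Taking the 14-stop figure for the eye and 6 stops as the middle of the 5-8 range quoted above:

```python
# Each EV (stop) doubles the light, so a range of n stops corresponds to a
# contrast ratio of 2**n between the brightest and darkest distinguishable tones.
eye_stops = 14      # rough figure for the human eye, as quoted in the post
camera_stops = 6    # middle of the 5-8 stop range quoted for cameras

eye_ratio = 2 ** eye_stops        # 16384:1 contrast
camera_ratio = 2 ** camera_stops  # 64:1 contrast

print(eye_ratio, camera_ratio, eye_ratio // camera_ratio)
```

In other words, on these rough numbers the eye resolves a contrast range roughly 256 times wider than a single camera exposure, which is the gap bracketing tries to close.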

The only way to compensate for this is to take multiple exposures, usually via bracketing mode. Each exposure in the bracket shifts the EV to favor the high, middle, or low tonal range. EV is still being sacrificed in each of these images, but at different ends of the spectrum. Think of these 3 images as film transparencies, or layers.


  • Layer 1: Dark. This exposure would be -0.3, had you taken a single shot. When looking at it in your LCD preview, it would appear dark because it is underexposed.
  • Layer 2: Natural. This exposure would be 0.0 and would be your normal "keeper" shot.
  • Layer 3: Light. This exposure would be +0.3 and appear overexposed.

When we stack our layers on top of each other, we have a complete image that has a high dynamic range (HDR)...basically, a triple exposure that has a wider spectrum of EV than a single one. Because the data is still technically separate, you can do tone mapping in an HDR image and enhance highlights and shadows in a way that you can't do with a single exposure. My guess is that if someone is just using a single image to begin with, the software is attempting to perform tone mapping.
 