I also don't understand how HDR software can process a single image; any idea what it's doing?
I'll do my best to expound on what I've already written...
I once read somewhere that the human eye can see about 14 exposure values (EV). Most cameras, at any given moment, can capture 5-8 at best. Any single exposure must therefore sacrifice EV at one end or the other (+ or -); which end ultimately depends on the camera's software and the user's settings. That is why a single-exposure photograph is referred to as LDR, low dynamic range: some of the scene's EV has not been captured.
The only way to compensate for this is to take multiple exposures, usually via bracketing mode. Bracketing shifts the EV across several shots so that the high, middle, and low tonal ranges are each well exposed in at least one frame. EV is still being sacrificed in every shot, but at different ends of the spectrum. Think of these three images as film transparencies, or layers.
- Layer 1: Dark. This exposure would be -0.3, had you taken a single shot. When looking at it in your LCD preview, it would appear dark because it is underexposed.
- Layer 2: Natural. This exposure would be 0.0 and would be your normal "keeper" shot.
- Layer 3: Light. This exposure would be +0.3 and appear overexposed.
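To make the merging step concrete, here is a minimal sketch in Python with numpy. Everything in it is assumed for illustration: a few synthetic pixel values standing in for the scene, the -0.3/0.0/+0.3 EV brackets from the list above, and a simple well-exposedness-weighted average in linear space as a crude stand-in for what real HDR merge software does.

```python
import numpy as np

# Hypothetical "true" scene radiance for three pixels
# (a shadow, a midtone, and a highlight), values in [0, 1].
radiance = np.array([0.05, 0.4, 0.9])

# Simulate the three bracketed frames at -0.3 / 0.0 / +0.3 EV.
# Each stop of EV is a factor of 2 in light; the sensor clips at 1.0.
evs = np.array([-0.3, 0.0, +0.3])
frames = np.clip(radiance[None, :] * 2.0 ** evs[:, None], 0.0, 1.0)

# Weight each pixel by how well exposed it is (peak at mid-gray),
# so clipped shadows and blown highlights contribute less.
weights = np.clip(1.0 - np.abs(frames - 0.5) * 2.0, 1e-6, None)

# Undo each frame's exposure to get per-frame radiance estimates,
# then take the weighted average across the bracket.
estimates = frames / (2.0 ** evs[:, None])
hdr = (weights * estimates).sum(axis=0) / weights.sum(axis=0)
print(hdr)  # recovers values close to the original radiance
```

The point of the weighting is exactly the "layers" idea above: wherever one frame has sacrificed EV (clipped dark or bright), another frame in the stack still holds usable data, and the merge leans on that frame instead.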
When we stack our layers on top of each other, we have a complete image with high dynamic range (HDR): basically a triple exposure covering a wider spectrum of EV than any single one. Because the data from each exposure is still technically separate, you can tone-map an HDR image and enhance highlights and shadows in a way you can't with a single exposure. My guess is that if someone is starting from a single image, the software is simply attempting tone mapping on that one exposure.
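For what that single-image tone mapping might look like, here is a hedged sketch using Reinhard's classic global operator, L / (1 + L). The pixel values are made up for illustration; real software applies far more elaborate (often local) curves, but the basic idea is the same: compress bright values so they fit the display while leaving shadows nearly untouched.

```python
import numpy as np

# Hypothetical linear-light pixel values from one exposure; the larger
# values represent highlights brighter than the display can show.
linear = np.array([0.02, 0.18, 0.5, 2.0, 6.0])

# Reinhard's global tone-mapping operator: L / (1 + L).
# Dark values pass through almost unchanged; bright values are
# compressed asymptotically toward 1.0 instead of clipping.
tone_mapped = linear / (1.0 + linear)
print(tone_mapped.round(3))
```

Notice that the curve never reaches 1.0, so highlight detail is squeezed rather than thrown away; that squeezing is why single-image "HDR" output can look enhanced even though no extra EV was ever captured.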