RAW bit-depth linked to the 8-bit vs. 16-bit TIF comparison?

jimjamz

New member
Hello all,
I'm starting with HDR photography using Auto Exposure Bracketing (AEB) on my D5100.
I always shoot RAW only, and I post-process my bracketed photos in HDRSoft's Photomatix Pro because the D5100's built-in HDR feature won't process RAW images, only JPEGs. I'm a bit of a purist and don't want to lose any quality where possible. I also get better customisation of my HDR photos with Photomatix.
Not sure whether my question relates to the (restricted?) features of my D5100 or not at this point, so apologies beforehand if it sounds like I don't know what I'm talking about.
I've been taking a few sets of bracketed photos and then loading them directly into Photomatix to produce a 32-bit HDR image (.hdr file).
After doing some post-processing using Tone Mapping, I've saved the results as both a 16-bit and an 8-bit TIF. I've noticed that the 16-bit TIF is twice the size of the 8-bit TIF, with no visual difference between them.
I've used comparison tools such as Beyond Compare to spot differences that the eye cannot see using Tolerance, Mismatch Range and Binary Operation comparisons.
Through all of these comparisons, Beyond Compare reports the images are exactly the same.
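(For anyone wanting to reproduce that check without Beyond Compare, a rough sketch in Python along these lines does the same pixel-level comparison. It assumes the numpy and tifffile packages are installed and uses placeholder filenames rather than my actual files; depending on how Photomatix rounds when writing the 8-bit file, differences of a single level could still show up.)

```python
import numpy as np
import tifffile

# Placeholder filenames -- substitute the actual 16-bit and 8-bit TIFs.
img16 = tifffile.imread("hdr_16bit.tif")   # uint16 values, 0..65535
img8 = tifffile.imread("hdr_8bit.tif")     # uint8 values, 0..255

# Keep only the top 8 bits of each 16-bit value so the two arrays are comparable.
img16_top8 = (img16 >> 8).astype(np.uint8)

diff = np.abs(img16_top8.astype(np.int16) - img8.astype(np.int16))
print("max per-pixel difference:", diff.max())
print("pixels that differ at 8-bit precision:", np.count_nonzero(diff))
```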
AFAIK, on the D5100, the NEF images have a maximum bit-depth of 14-bit. Does this have any relevance to saving my post-processed HDR photos in either 8-bit or 16-bit TIF images and not seeing any difference? Is the bit-depth on my camera restricting any improvements being created using 16-bit TIF images?
As I mentioned above, the two may well have nothing to do with each other. However, if that is the case, what is the difference between 8-bit and 16-bit TIF images?
 

WhiteLight

Senior Member
Quite simple
A 16-bit TIFF stores more information than an 8-bit one, hence the larger file size.
The difference in quality is usually invisible to the naked eye because the changes are minute; for example, tonal gradations are smoother in a 16-bit file than in an 8-bit one.

In an 8-bit image, there are 256 bands of information per channel from black to white.
In a 12-bit image, there are 4,096 bands.
In a 16-bit image, there are 65,536 bands per channel.

As you edit your file, you start losing information from those bands through rounding, so the more bands you have, the more accurately the corrections are preserved.
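A quick way to see that effect, just as a toy sketch in Python (nothing to do with Photomatix itself): darken a smooth gradient, brighten it back, and count how many distinct levels survive at each bit depth.

```python
import numpy as np

def levels_after_round_trip(levels):
    # A smooth gradient that uses every available level.
    gradient = np.arange(levels, dtype=np.float64)
    darkened = np.round(gradient * 0.5)                           # a strong "exposure" edit
    restored = np.clip(np.round(darkened * 2.0), 0, levels - 1)   # undo the edit
    return len(np.unique(restored))

print("8-bit  levels left:", levels_after_round_trip(256))     # roughly 128
print("16-bit levels left:", levels_after_round_trip(65536))   # roughly 32768
```

Both versions lose about half their distinct values, but the 16-bit file still has far more than the 256 levels a screen can show, while the 8-bit file drops to around 128 and can start to band in smooth areas such as skies.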

That is the technical side of it, though in practice the difference is rarely visible to the human eye, be it on screen, in print, or maybe even on a medium-sized billboard.

So, save some space and stick to 8-bit unless you are looking for perfection in every pixel of your image :)
Besides, most output devices are 24-bit, i.e. 8 bits per channel for RGB, so it's usually best to deliver a finished 8-bit image.
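If you do keep a 16-bit master, making an 8-bit copy for delivery at the end is simple. A rough sketch (again assuming numpy and tifffile, with made-up filenames):

```python
import numpy as np
import tifffile

img16 = tifffile.imread("final_16bit.tif")                        # 16-bit working file (placeholder name)
img8 = (img16.astype(np.float64) / 65535.0 * 255.0).round().astype(np.uint8)  # rescale to 0..255
tifffile.imwrite("final_8bit.tif", img8)                          # 8 bits/channel = 24-bit RGB output
```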
 

jimjamz

New member
After reading here, it seems that 16-bit depth would be a waste of time, as the D5100's sensor captures at most 14 bits, hence the 16-bit save would end up looking similar to (or the same as) an 8-bit save.
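For reference, the level counts involved (simple arithmetic, not specific to any camera or software):

```python
# Levels per channel at each bit depth mentioned in this thread.
for name, bits in [("8-bit TIF", 8), ("14-bit NEF", 14), ("16-bit TIF", 16)]:
    print(f"{name}: {2 ** bits:,} levels per channel")
```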

As I said, I saved the same 32-bit .hdr file as both a 16-bit and an 8-bit TIF. Using the picture comparison features in tools such as Beyond Compare, even they couldn't spot any differences between the pixels. These tools are not limited to what a printer or monitor can reproduce, so they give an exact result, state whether the files are the same (even binary equivalence), and would show the differing pixels graphically if the monitor could display them.
 