sonicbuffalo_RIP
Senior Member
I'm thinking you may not have read it right. I was hoping you would think: "Of course". I understood it.
I said the math was right, even the 0.4% was divided right, but the interpretation was questionable. It is not a useful number. We cannot view Raw files (and would not like it if we could).
It requires at least four Raw pixels to convert to one RGB pixel (Bayer demosaicing). The pixel count does remain the same, but the data becomes interpolated. That is a loss of resolution, true of any digital camera (except Foveon - do they still exist?). So the camera JPG suffered this issue too. Any JPG did. They all go through the same process, starting from the original Raw. So FWIW, your 0.4% number is true of a camera JPG or a JPG from Adobe Raw, but it is not the right way to view this conversion of four Raw pixels to one RGB value. It does not take that into account.
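The four-to-one idea above can be sketched in code. This is a toy bilinear demosaic, not any camera's actual algorithm; the RGGB tile layout and the 3x3 neighborhood averaging are assumptions made for illustration. The point it shows is that every output RGB value is interpolated from several surrounding Bayer samples:

```python
import numpy as np

def demosaic_bilinear(raw):
    """raw: 2-D array of Bayer samples (RGGB tiling assumed).
    Returns an HxWx3 RGB array, each missing color interpolated
    from the same-color samples in the 3x3 neighborhood."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=float)

    # Masks marking where each color was actually sampled.
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        chan = np.where(mask, raw, 0.0)
        padded = np.pad(chan, 1)
        padmask = np.pad(mask.astype(float), 1)
        # Sum of known same-color samples in each 3x3 window,
        # divided by how many such samples the window contains.
        num = sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3))
        den = sum(padmask[i:i + h, j:j + w] for i in range(3) for j in range(3))
        rgb[..., c] = num / np.maximum(den, 1)
    return rgb
```

At a location where a color was actually sampled, the window contains only that one sample, so the measured value passes through unchanged; the other two channels at that pixel are pure interpolation, which is the resolution loss being described.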
It is only subsequent processing of the 8-bit JPG that suffers. Subsequent Raw processing is still 12 bits, the same as the camera did it, so really, the difference is that we can see what we are doing, instead of blindly hoping our crude camera settings got it right.
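To put numbers on the bit-depth point: 12 bits gives 4096 tonal levels per channel versus 256 at 8 bits, so an aggressive edit has far more distinct tones to work with. A toy sketch, with made-up scene values and a hypothetical 3-stop (8x) shadow push:

```python
import numpy as np

levels_12 = 2 ** 12   # 4096 tonal levels per channel (12-bit Raw)
levels_8 = 2 ** 8     # 256 tonal levels per channel (8-bit JPG)

# Hypothetical deep-shadow scene values in [0, 0.05].
scene = np.linspace(0.0, 0.05, 1000)

def push_after_quantize(values, levels, gain=8.0):
    """Quantize to the given bit depth, apply a 3-stop push,
    and count how many distinct tones survive."""
    quantized = np.round(values * (levels - 1)) / (levels - 1)
    return np.unique(np.clip(quantized * gain, 0, 1)).size

print(push_after_quantize(scene, levels_8))   # distinct tones left from 8-bit
print(push_after_quantize(scene, levels_12))  # distinct tones left from 12-bit
```

The 8-bit version crushes those shadows into a handful of levels before the push ever happens, which is where banding comes from; the 12-bit data keeps roughly sixteen times as many.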
But in general, at least very often (certainly pertinent to this JPG vs. Raw discussion), the camera-JPG idea is about avoiding any processing at all. Don't know how, don't want to, don't really care, don't want to look at them again - whatever - the common goal is to avoid doing any processing. This practice seems questionable, but in those cases, at least there is no difference in the bit depth.
Which format do you shoot in, Wayne?