Which one to choose? Nikon d3300 or d5200?
Mark F said:

Best general explanation I've read...

Quote:

The whole issue is complicated partly because of JPG files. Every time you save a JPG file, it reapplies the compression. With other formats like TIFF and PSD, any compression is lossless. With JPG you are talking 8-bit files, and that is what most monitors and printers work with. The idea that editing makes images worse is easy to see with JPG.

When you work with uncompressed files, say merging a seven-shot bracketed set into a finished HDR image, you'll generate a 32-bit file along the way to hold the extra dynamic range, then artistically remap that color space down to something that looks fantastic using tone mapping.

You can't see, display, or print a 32-bit file. The better wide-gamut monitors only display 10 bits, and so you see (to my eyes) two extra stops of black and two more of white compared to the best-made 24-bit monitors. In a 32-bit color file, more than half of the color space is shades of white the human eye can never see.

As you edit uncompressed files, the math has to round off the last digits. Even with a 12-bit NEF, that rounding still leaves a LOT of room inside 12 bits that you can never see. Unlike JPG, saving the file doesn't recompress it.

The arguments about JPG files getting worse are well founded and easy to see, but that's not what raw processing is about. The extra bits in a raw file aren't really why raw is better; it's the lack of recompression with each saved generation.

The debate between 12-, 14-, and 16-bit raw files seems to go on all the time. In my mind, even 12 bits leaves a comfortable margin you could lose to rounding before you could see it, plus more that you can't print or otherwise reproduce anyway. Every extra bit doubles that headroom: 14 bits gives you four times as much as 12, and 16 bits sixteen times as much. At 16 bits you could throw away precision all day and never see it. We face a mathematical situation in which we can process more bits than we can see. We will eventually capture more, but we will never see any more. I shoot 14 bits because I can. I used to shoot 12 and couldn't tell the difference.

None of this will make a crappy shot better.

Unquote

Granted, 14-bit lossless gives you a little more wiggle room. But the question is whether or not you need it.
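The generation-loss point in the quote is easy to demonstrate yourself. Here's a minimal sketch, assuming Pillow is installed; the filename, quality setting, and iteration count are all placeholder choices, not from the post:

[CODE]
# Illustrative only: saving a JPEG applies lossy compression every time,
# so error accumulates across generations. "input.jpg" and quality=85
# are hypothetical choices for the demo.
from PIL import Image

img = Image.open("input.jpg").convert("RGB")
for _ in range(10):
    img.save("resaved.jpg", quality=85)  # lossy recompression on every save
    img = Image.open("resaved.jpg")

# A lossless format round-trips unchanged, which is the contrast drawn above:
img.save("resaved.tif")  # Pillow writes TIFF without lossy compression here
[/CODE]

Diffing resaved.jpg against input.jpg after the loop makes the cumulative loss visible, while the TIFF round-trip stays bit-identical.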
[QUOTE="Mark F, post: 322293, member: 12825"] Best general explanation I've read... Quote: The whole issue is complicated partly because of JPG files. Every Time you save a JPG file it reapplies the compression. Withe other formats like TIF, PSD and a host of other formats there is no compression. With JPG you are talking 8 bit files and that is what most monitors and printers work with. The concept of editing making images worse is clear to see with JPG. When you work with uncompressed files like say merging a 7 bracket set into an HDR finished image you'll along the way generate a 32 bit file to hold the extra dynamic range then use a process. You can artistically remap the colors pace down to something that looks fantastic using tone mapping. You can't see or display or print a 32 bit file. The better wide gamut monitors only display 10 bits and so you see (in my eyes) two extra stops of black and two more of white compared to the best made 24 bit monitors. In a 32 bit color file more than 1/2 of all the color space is shades of white the human eye can never see. As you edit uncompressed files they process mathematically and need to round off the end digits. Using 12 bit NEF file rounding still leaves a LOT of room inside 12 bits that you can never see. Unlike JPG processing it's not a recompression process when you save a file. The arguments with JPG files getting worse are pretty well founded and easy to see but it's not what raw processing is about. The extra bits in a raw file isn't really why they are better it's the lack of recompression with each saved generation. The debate between 12, 14, and 16 bit raw files seems to go on all the time. In my mind even with 12 bits you have two orders of magnitude to lose before you could see it then a third one that you can't print or otherwise reproduce. A factor of 1000 for rounding should be enough but if you shoot 14 bits it bumps to 100,000. 16 would boost it to one million. At 16 bite you could lose more than you had and never see it. We face a mathematical challenge in that we can process more bits than we can see. We eventually will capture more but we will never see any more. I shoot 14 bits because I can. I used to shoot 12 and can't tell the difference now. None of this will make a crappy shot better. Unquote Granted, 14 bit lossless gives you a little more wiggle room. But the question is whether or not you need it. Sent from my iPhone using Tapatalk [/QUOTE]
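The 32-bit HDR merge-then-tone-map workflow the quote describes can also be sketched with OpenCV's built-in operators. This is not the author's actual process, and the filenames and exposure times below are hypothetical:

[CODE]
# Sketch of bracket merge -> 32-bit radiance map -> tone-mapped 8-bit output.
import cv2
import numpy as np

files = ["ev-2.jpg", "ev0.jpg", "ev+2.jpg"]          # hypothetical brackets
times = np.array([1/250, 1/60, 1/15], dtype=np.float32)  # made-up exposures
imgs = [cv2.imread(f) for f in files]

# Merge to a 32-bit float radiance map holding the full dynamic range...
hdr = cv2.createMergeDebevec().process(imgs, times)

# ...then tone-map it back down to something an 8-bit screen can display.
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("tonemapped.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))
[/CODE]

The intermediate hdr array is exactly the kind of 32-bit file the quote says you can never view directly; only the tone-mapped 8-bit result is displayable.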
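And the headroom figures in the quote are just powers of two. A quick worked calculation (my arithmetic, not the original poster's):

[CODE]
# Tonal levels per channel at each bit depth, compared with the 256 levels
# of an 8-bit JPEG. Each extra bit doubles the level count.
for bits in (8, 10, 12, 14, 16):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels:>5} levels ({levels // 256}x an 8-bit JPEG)")
[/CODE]

So a 12-bit NEF already carries 16 times the precision of the 8-bit file it will eventually become, 14-bit carries 64 times, and 16-bit 256 times — plenty of rounding headroom during edits at any of those depths.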