Sharpen JPGs in camera, or not?
<blockquote data-quote="WayneF" data-source="post: 253858" data-attributes="member: 12496"><p>Not what anyone will want to hear, but FWIW, it is always said that sharpening should be the last thing done. Meaning specifically, done AFTER any resampling, and done for the specific purpose (screen viewing or printing, etc.)</p><p></p><p>Because 1) any later resampling destroys any previous sharpening (including auto resize done to show a too large image on a too small screen - this is resampling too). </p><p></p><p>When viewing images on the screen, unless we resample them smaller ourself, the viewing software must automatically resample them smaller to screen size. Either way, like from 12 or 24 megapixels to at most 2 megapixels (if even that). This throws away most pixels, destroying any previous sharpening. Technically, sharpening should follow any resampling, for the applicable purpose.</p><p></p><p>Only a few years ago, computers were slower, and image screen viewers did this automatic size resampling using poor (but fast) nearest neighbor resampling (we often saw artifacts of it, so we much preferred to control it ourself, doing the necessary preparation). Today, better software can use better bicubic methods and it comes out better, but any original camera resharpening is of course still lost and overwritten (when resized smaller).</p><p></p><p>And 2) the degree of sharpening needed depends on the use. Whether we are capable enough to resample them ourself or not, then we view those images shown on the screen necessarily at around 100 dpi resolution (pixel density, capability of the viewing device). Whereas if we print them, it is more like 300 dpi resolution - but still less than the original 12 or 24 megapixels. Because of the higher density of this view, printing can use (needs) more sharpening than the screen. Perhaps USM radius 2 or even 3 for printing instead of radius 0.8. So if we are knowledgeable, we sharpen last, for the specific purpose, judged when seen in that actual final purpose. If an image has multiple purposes, that means multiple sharpenings of multiple copies, for specific uses (i.e., done last). </p><p></p><p>Sharpening done once in the camera is actually pointless (at best, unnecessary pixel manipulation), because any viewing or printing resampling destroys it (discards most of the pixels).</p></blockquote><p></p>
[QUOTE="WayneF, post: 253858, member: 12496"] Not what anyone will want to hear, but FWIW, it is always said that sharpening should be the last thing done. Meaning specifically, done AFTER any resampling, and done for the specific purpose (screen viewing or printing, etc.) Because 1) any later resampling destroys any previous sharpening (including auto resize done to show a too large image on a too small screen - this is resampling too). When viewing images on the screen, unless we resample them smaller ourself, the viewing software must automatically resample them smaller to screen size. Either way, like from 12 or 24 megapixels to at most 2 megapixels (if even that). This throws away most pixels, destroying any previous sharpening. Technically, sharpening should follow any resampling, for the applicable purpose. Only a few years ago, computers were slower, and image screen viewers did this automatic size resampling using poor (but fast) nearest neighbor resampling (we often saw artifacts of it, so we much preferred to control it ourself, doing the necessary preparation). Today, better software can use better bicubic methods and it comes out better, but any original camera resharpening is of course still lost and overwritten (when resized smaller). And 2) the degree of sharpening needed depends on the use. Whether we are capable enough to resample them ourself or not, then we view those images shown on the screen necessarily at around 100 dpi resolution (pixel density, capability of the viewing device). Whereas if we print them, it is more like 300 dpi resolution - but still less than the original 12 or 24 megapixels. Because of the higher density of this view, printing can use (needs) more sharpening than the screen. Perhaps USM radius 2 or even 3 for printing instead of radius 0.8. So if we are knowledgeable, we sharpen last, for the specific purpose, judged when seen in that actual final purpose. If an image has multiple purposes, that means multiple sharpenings of multiple copies, for specific uses (i.e., done last). Sharpening done once in the camera is actually pointless (at best, unnecessary pixel manipulation), because any viewing or printing resampling destroys it (discards most of the pixels). [/QUOTE]