Megapixels versus Earthquakes

WayneF

Senior Member
It seems we agree that camera movement destroys detail. Doesn't it make sense that if we destroy the same square millimeters of detail, a 24.2MP sensor suffers more resolution loss than a 12MP sensor of the same size?

OK, in that comparison case, but only because the lesser 12MP image never had the resolution in the first place, blurred or not. :) We cannot say it is better; we just turned the volume down so we don't hear it as well (so to speak). A one-pixel image would be about like not taking the image at all... and either method merely hides the blur rather than removing it. Using fewer, larger pixels, or defocusing the image, are two other ways to reduce the resolution so the damage is less well seen. The detail might be missing, but the image area is always there, and the blur is still covering that image area.

If you shake the camera, you're going to have blur. The size of that blur area is simply not affected by pixel size or resolution. It is affected by sensor size, lens focal length, focus distance, shutter speed, and so on, but never by pixel size.

Techies always seem to think as if the pixels create the image, which loses sight of the truth. The lens creates the image and projects it onto the sensor (including any motion blur); the pixels simply provide the resolution to sample the color of tiny areas of that projected image, in an attempt to reproduce the analog image the lens formed.

We can discuss pixel size with regard to added noise, dynamic range, or reproduction quality, but blur is a property of the projected lens image... the image that we are merely trying to reproduce.
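To put rough numbers on that, here is a minimal Python sketch. The sensor width and blur size are illustrative assumptions, not measured values: the same physical smear covers more pixels on the denser sensor, but it is the same fraction of the frame either way.

```python
# Motion blur is a fixed physical smear in the lens image; pixel count
# only changes how finely that smear is sampled.
# All numbers below are illustrative assumptions.

SENSOR_WIDTH_MM = 23.5   # DX-format sensor width
BLUR_WIDTH_MM = 0.1      # hypothetical motion-blur smear from the lens

for name, width_px in [("12MP-class", 4288), ("24MP-class", 6016)]:
    pixel_pitch_mm = SENSOR_WIDTH_MM / width_px
    blur_px = BLUR_WIDTH_MM / pixel_pitch_mm         # pixels the smear covers
    blur_fraction = BLUR_WIDTH_MM / SENSOR_WIDTH_MM  # fraction of the frame
    print(f"{name}: blur spans {blur_px:.1f} px, "
          f"but is {blur_fraction:.2%} of the frame either way")
```

The 24MP sensor reports more blurred pixels only because it samples the same smear more finely; the smear itself never changes.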
 

Eyelight

Senior Member
Ok. Maybe not so near the edge. Excerpt and link from Nikon below. More to it, but it explains what I imagined, more or less.

"As pixel counts have increased, the size of a single pixel has decreased. For example, if two DX-format cameras; the D40 (image size: 3008 x 2000) and the D3200 (image size: 6016 x 4000) are compared, the pixel size of the D3200 is approximately a quarter of that of the D40. When images are displayed on a computer monitor at 100%, D3200 images are actually displayed approximately 4 times larger than D40 images (area ratio). Even if images are captured under the same conditions and with the same level of hand or camera movement, blur in the D3200 images could effectively be quadrupled when displayed and become more noticeable. For this reason, it can be said that high pixel count cameras are more susceptible to slight movement."


https://support.nikonusa.com/app/an...ing-photos-with-high-resolution-d-slr-cameras
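For anyone who wants to check the quote's arithmetic, here is a small Python sketch using the image widths from the excerpt (the 0.05 mm blur width is an assumed illustrative value):

```python
# Worked numbers for the Nikon excerpt: D40 (3008 px wide) vs
# D3200 (6016 px wide) on the same DX sensor width.
# The blur width is a hypothetical example value.

SENSOR_WIDTH_MM = 23.6
BLUR_MM = 0.05  # same physical smear on both sensors

for name, width_px in [("D40", 3008), ("D3200", 6016)]:
    pitch_mm = SENSOR_WIDTH_MM / width_px
    blur_px = BLUR_MM / pitch_mm
    print(f"{name}: pixel pitch {pitch_mm * 1000:.2f} um, "
          f"blur covers {blur_px:.1f} px")

# The pitch halves, so pixel *area* is about a quarter (the quote's figure),
# and at 100% view (one image pixel per screen pixel) the same smear spans
# twice as many screen pixels, i.e. four times the area, on the D3200.
```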
 

WayneF

Senior Member
Eyelight said:
Ok. Maybe not so near the edge. Excerpt and link from Nikon below. More to it, but it explains what I imagined, more or less.

"As pixel counts have increased, the size of a single pixel has decreased. For example, if two DX-format cameras; the D40 (image size: 3008 x 2000) and the D3200 (image size: 6016 x 4000) are compared, the pixel size of the D3200 is approximately a quarter of that of the D40. When images are displayed on a computer monitor at 100%, D3200 images are actually displayed approximately 4 times larger than D40 images (area ratio). Even if images are captured under the same conditions and with the same level of hand or camera movement, blur in the D3200 images could effectively be quadrupled when displayed and become more noticeable. For this reason, it can be said that high pixel count cameras are more susceptible to slight movement."


https://support.nikonusa.com/app/an...ing-photos-with-high-resolution-d-slr-cameras


Did you notice the specific qualifications listed there? :)

Says "When images are displayed on a computer monitor at 100%, D3200 images are actually displayed approximately 4 times larger than D40 images (area ratio)."

Do you actually do that? It is speaking about enlargement, about directly viewing the pixels - i.e., displaying every pixel at the same size on the screen even though the pixels were not the same size on the sensor (which is generally a rare way to view images).

Or, if you instead resample both images to maybe 1000 or 1600 pixels wide to fit the monitor screen for viewing, or resample both smaller to print a 6x4 inch print, then you have a drastically different situation (the normal one, in fact, and more the general rule): the blur size and the resolution would be the same for both cameras in that smaller, resampled, same-size state.
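A quick sketch of that resampled case, reusing the assumed sensor width and blur size from the example above: scale both images to the same 1600-pixel output width, and the blur comes out identical.

```python
# Resample both cameras' images to the same output width and compare
# the blur in output pixels. Sensor width and blur size are assumed
# illustrative values, as before.

SENSOR_WIDTH_MM, BLUR_MM, OUT_WIDTH_PX = 23.6, 0.05, 1600

for name, sensor_px in [("D40 (3008 px)", 3008), ("D3200 (6016 px)", 6016)]:
    blur_sensor_px = BLUR_MM / (SENSOR_WIDTH_MM / sensor_px)
    scale = OUT_WIDTH_PX / sensor_px      # resampling factor to fit the screen
    blur_out_px = blur_sensor_px * scale  # blur width after resampling
    print(f"{name}: {blur_out_px:.1f} px of blur in the {OUT_WIDTH_PX} px output")

# Both print ~3.4 px: at equal output size the blur is identical,
# because it was the same fraction of the frame all along.
```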


 

Dave_W

The Dude
Myth: Diffraction and Motion Blur Worsen With More Megapixels

The idea that diffraction blur increases as you increase photosite density (e.g., using "more megapixels") is rooted in the very same misunderstanding that I described in the original post - failing to take into consideration differences between the way images are seen at 100% magnification on the screen and how they are seen in real final images, such as prints or on-screen JPEGs. If you look for diffraction blur in a 100% magnification crop from a 21MP image on your computer screen and then look for it in a 100% crop from a 12MP image, the diffraction blur will appear to be "larger" in the first case than in the second.
But it isn’t.
Imagine some very gross diffraction in which (to use loose terminology) the "blur" from diffraction is 1% of the width of the frame. (This would be absolutely horrible blur, far beyond what you'll see in the real world - but 1% is a convenient value for this explanation.) Since the lens produces the blur, not the sensor, this "1% blur width" will be the same whether the image is projected onto a piece of 35mm film, an 8MP full-frame sensor, or a 21MP full-frame sensor. In fact, for the thought experiment, imagine that you make photographs with all three media. Now make three prints at whatever size you prefer - say 16" x 24" for the sake of having a real size in mind. The "1% blur width" will be 1% of 24" in all three prints. In other words, there is no difference in the amount of diffraction among the prints due to different recording media or different photosite densities.
The situation with motion blur is essentially the same. The crucial issue is what portion of the image the blur covers. If it is, say, 1/10,000 of a frame width, the blur will be 1/10,000 of the print width no matter how many photosites you use - ignoring for a moment the fact that no current full-frame DSLR can resolve 1/10,000 of the width of the frame. But suppose the motion blur is grosser - perhaps 1/100 of the frame width. It will be 1/100 of the picture width in all three cases, independent of the film/sensor characteristics.
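The quoted thought experiment reduces to one line of arithmetic, sketched here in Python with the numbers from the passage:

```python
# A blur that is 1% of the frame width is 1% of any print width,
# no matter what medium recorded the image.

PRINT_WIDTH_IN = 24.0   # the 16" x 24" print from the example
BLUR_FRACTION = 0.01    # blur as a fraction of frame width

for medium in ("35mm film", "8MP full frame", "21MP full frame"):
    print(f"{medium}: blur is {BLUR_FRACTION * PRINT_WIDTH_IN:.2f} in on the print")

# 0.24 in for all three media: photosite density never enters the calculation.
```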
 

Eyelight

Senior Member
We're not talking about the same thing, and we haven't been since the first post. But that's okay.

My apologies that I failed to describe the concept adequately in the first post, and that my subsequent attempts were no better than the first. Hey, at least I'm consistent. :)
 