Is there any difference between a photo taken using the monochrome setting on my camera compared to a photo taken in colour and then switched to monochrome on the computer afterwards?
Thanks
I imagine Nikon knows how to convert the camera's color image to grayscale. There is one "correct" way to do it, and then a few creative ways.
The earliest television standard (the NTSC specifications, 1941) defined how B&W TV cameras should record color. Not with direct monochromatic sensors (essentially a desaturation method), which gives unacceptable results, but based on how the human eye perceives color. Specifically, the eye perceives green as brightest, red next, and blue as not very bright at all. So green should become a lighter gray tone, and blue a darker gray tone, rather than the color simply being ignored.
The NTSC formula is called Luminosity:

Luminosity = 0.3 × Red + 0.59 × Green + 0.11 × Blue

It computes the one grayscale luminosity value. The three coefficients add to 1.0, but the colors are weighted.
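The weighting is easy to see in a few lines of Python (a minimal sketch; the function name and sample values are just for illustration):

```python
def luminosity(r, g, b):
    """NTSC luminosity: weighted sum matching perceived brightness."""
    return round(0.3 * r + 0.59 * g + 0.11 * b)

# Pure green of full intensity reads much lighter than pure blue:
print(luminosity(0, 255, 0))  # 150 -> a fairly light gray
print(luminosity(0, 0, 255))  # 28  -> a dark gray
```

A plain average (85 for both pixels above) would have made the two colors identical grays, which is exactly what the eye does not see.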
This is also how B&W panchromatic film records color: by luminosity. There is just one correct way, in regard to our eyes. Early scanners used monochromatic sensors, which was fine for black text but miserable for copying color work. Photo detectors are in fact monochromatic, but now they have color filters over them, separating the light that hits them into the three RGB colors. So they definitely ARE color sensors.
Photo editors, however, offer different methods for converting color to grayscale. One always present is the Grayscale menu, the obvious choice, which does it the right way: the Luminosity formula above. Desaturate is another way: simply ignore the color and compute one gray tonal value. This has no real useful value. The Photoshop Channel Mixer is another popular creative way: you weight each of the three RGB channels any way you want. That is not a straight conversion, however; it is creative editing of the tones.
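A Channel Mixer conversion can be sketched as arbitrary user-chosen weights (this sketch and its sample pixel are hypothetical, not Photoshop's actual code):

```python
def channel_mix(r, g, b, wr, wg, wb):
    """Creative grayscale: user-chosen channel weights, clipped to 0-255."""
    v = round(wr * r + wg * g + wb * b)
    return max(0, min(255, v))

# A "red filter" look: red weighted heavily, blue suppressed,
# darkening blue skies the way a red lens filter does on B&W film.
sky = channel_mix(80, 120, 220, 0.9, 0.2, -0.1)
print(sky)  # 74 -> the blue-heavy sky pixel comes out dark
```

With weights 0.3, 0.59, 0.11 this degenerates to the standard Luminosity conversion; anything else is tonal editing.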
Here is a sample. The color image only contains tonal values of 150 or 250 (out of 255). Some samples are one color at 150 or 250. Some are combinations of two colors at 150 and 250, maybe (250, 0, 150) or (150, 150, 0), etc. Inquisitive readers can open it in their editor and investigate it with the eyedropper color sampler.
The middle image uses the photo editor's regular Grayscale menu, which uses the standard NTSC TV formula. I did not check, but I would be dumbfounded if the camera did not use the same method. It is the classic method considered the correct way to do it: the same brightness as the eye sees color. The Grayscale menu is the intended, obvious best choice. It is a MODE, not just an effect.
The bottom image uses the Desaturate method, just ignoring color and computing one value. The results are NOT the same: (150, 150, 0) comes out 75, and (250, 0, 250) comes out 127 (in all three channels, so it is still RGB, not grayscale at all). Simple division, ignoring color. Basically, green comes out darker and blue brighter than the eye sees them. And notice there are ONLY TWO RESULTS in this sample, 75 or 127 (because there are only two color tonal values). It is a miserable grayscale conversion, and it seems really hard to make any case for Desaturate.
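One common Desaturate implementation is HSL lightness, (max + min) / 2; exact rounding varies between editors, so this sketch may differ by a point or two from the numbers above:

```python
def desaturate(r, g, b):
    """HSL-lightness desaturate: ignores hue entirely, (max + min) / 2."""
    return (max(r, g, b) + min(r, g, b)) // 2

# Very different colors collapse to the same handful of grays:
print(desaturate(150, 150, 0))   # 75
print(desaturate(250, 0, 250))   # 125
print(desaturate(0, 0, 250))     # 125 -> same gray as magenta above
```

Luminosity would have kept those last two apart (bright magenta versus dark blue), which is the whole point of weighting the channels.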
Grayscale does tend to be flat; it has no color to dress it up or to create differences. But the way to address that is with contrast, not by screwing up the color conversion. The Levels control (the standard histogram) is the classic tool for this. The White Point should be lowered to where the bright data actually starts, and the Black Point raised to where the dark data actually starts. Or both maybe a little past the start points, if the clipped areas are not important detail. This gives whiter whites and blacker blacks, which is contrast: more dramatic, and the results are invariably good. The goal is some pure white areas and some pure black areas. Ansel Adams did that via dodging and burning in, but digital Levels are much easier now. What's hard about that?
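The Black Point / White Point adjustment is just a linear remap of the tonal range. A minimal sketch (the point values 40 and 220 are made-up examples, not recommendations):

```python
def levels(v, black, white):
    """Remap tones so `black` maps to 0 and `white` maps to 255,
    clipping anything outside the range (a linear contrast stretch)."""
    out = (v - black) * 255 / (white - black)
    return max(0, min(255, round(out)))

# With Black Point 40 and White Point 220:
print(levels(40, 40, 220))    # 0   -> pure black
print(levels(220, 40, 220))   # 255 -> pure white
print(levels(130, 40, 220))   # 128 -> midtones spread further apart
```

Pixels below the Black Point clip to pure black and those above the White Point clip to pure white, which is why the points belong where the data actually starts (or only slightly past it).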