Would not 12-bit raw actually be 36-bits per pixel? At least at the editing stage........ 12 bits of each color channel. JPEG is only 8 bits per color.
I didn't realize this was a serious discussion...
No, raw is 12 bits per pixel, but each raw pixel is only one color (a red OR green OR blue sensor cell in a Bayer pattern). Raw is NOT three RGB colors per pixel, not until color is interpolated from adjacent pixels to create the 24-bit JPG, which is RGB.
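To picture the Bayer pattern just described, here is a toy sketch (the 6x4 size and RGGB layout are just my example; actual sensors vary) showing that each sensor cell records one value for exactly ONE color:

```python
# Toy RGGB Bayer layout: each cell is ONE color, one 12-bit value per cell.
width, height = 6, 4
rows = []
for y in range(height):
    # Even rows alternate R,G; odd rows alternate G,B (the common RGGB layout)
    row = [("R" if x % 2 == 0 else "G") if y % 2 == 0
           else ("G" if x % 2 == 0 else "B")
           for x in range(width)]
    rows.append(" ".join(row))
print("\n".join(rows))
```

Note there are twice as many green cells as red or blue, which mimics the eye's greater sensitivity to green.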
24 megapixels as 12-bit raw is 24 x 1,000,000 x 1.5 bytes per pixel = 36 million bytes = 34.3 MB of data.
24 megapixels as 24-bit JPG is 24 x 1,000,000 x 3 bytes per pixel = 72 million bytes = 68.7 MB of data. JPG file compression then makes a much smaller file, perhaps 1/4 to 1/12 that size on disk, but it becomes 68.7 MB again when uncompressed into memory.
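A quick back-of-the-envelope check of those two numbers (12 bits = 1.5 bytes, 24 bits = 3 bytes, and MB here meaning 2^20 bytes):

```python
megapixels = 24
pixels = megapixels * 1_000_000

raw_bytes = pixels * 12 / 8   # 12-bit raw: 1.5 bytes per pixel
jpg_bytes = pixels * 24 / 8   # 24-bit RGB: 3 bytes per pixel, uncompressed

print(f"12-bit raw: {raw_bytes / 2**20:.1f} MB")   # 34.3 MB
print(f"24-bit RGB: {jpg_bytes / 2**20:.1f} MB")   # 68.7 MB
```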
Calculator at: Pixels, Printers, Video - What's With That?
So JPG pixels are RGB color interpolated from at least two adjacent raw pixels, which is arguably a lower spatial resolution. But raw pixels are only one color each (lower detail than we might imagine), and of course raw is otherwise unusable until this JPG is created.
This part seems a pointless debate, since the camera sensor is always raw. Raw is what the camera creates: one 12-bit color per pixel. The only choice is whether we have the camera create the JPG from it, or wait and let our raw software do the same thing later. The advantage of shooting raw and waiting is that it lets us see the image first, instead of having to guess in advance about white balance or color profile settings. So we can SEE, decide, and know (and even change our minds) about the best settings, and do this while still working in 16 bits in computer memory. Either way, we end up with 24-bit JPG files, but we retain our raw advantages and can go back anytime and make different choices.
If you want to look at a JPG as being 24-bit color because you're combining all three 8-bit color channels, you have to do the same for the color channel information stored in the raw file. Three 12-bit color channels would result in 36-bit color by your method. Not to mention most of us who shoot raw are probably shooting 14-bit raw files, which would result in 42-bit color by your method. Just because you can't "view" the color doesn't mean the color information of a raw file isn't there.
Raw does have three 12-bit color channels, but these "channels" span the spatial resolution of at least three adjacent sensor pixels (one red, one green, and one blue, which is three different pixels). Each sensor pixel is only one color. At least three spatial pixels are interpolated into each one JPG pixel, which is then RGB. This recreates full color in each pixel, but it loses spatial resolution: every output pixel's detail from raw is distributed among three image pixels. And this always happens, regardless of when the JPG is created.
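The idea of several single-color sensor cells combining into one RGB pixel can be sketched very roughly like this (my own illustration, NOT any camera's actual demosaic algorithm: I simply collapse each 2x2 RGGB block into one RGB pixel; real raw converters use fancier interpolation that keeps the full pixel count):

```python
import numpy as np

rng = np.random.default_rng(0)
mosaic = rng.integers(0, 4096, size=(4, 4))   # 12-bit values, ONE per sensor cell

# RGGB layout: R at (even,even), G at (even,odd) and (odd,even), B at (odd,odd)
r = mosaic[0::2, 0::2]
g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2   # average the two green cells
b = mosaic[1::2, 1::2]

# Each output RGB pixel is built from four single-color cells,
# so the result has half the spatial resolution in each direction.
rgb = np.stack([r, g, b], axis=-1)
print(rgb.shape)   # (2, 2, 3)
```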
The Sigma Foveon sensors are an exception: their three stacked layers of cells give three color layers, like color film layers, so three colors per pixel. I think there are other disadvantages, though.