A picture taken at sunset in San Marino last week, processed for display on the screen.
While this was obviously common sense for important painters such as Georges de La Tour, Rembrandt, and probably – I believe – for Georges Seurat, few people realize how heavily pictures – and today’s digital pictures – have to be distorted in order to be displayed or printed by our imaging devices.
Professionals in the field of photography know about this too, of course, since it is their daily routine – but they usually don’t give it much thought.
I am talking about the fact that when we experience real scenes in the real world, outside – like me when I took this picture in San Marino at sunset – the colours we see are very bright and intense, and the contrast between the different visible parts of the landscape is very high. Roughly speaking, the contrast ratio between the lightest and darkest parts of the scene was between one hundred and two hundred to one here, and the colours in the sky were much more vivid than what we see in the poor picture above.
When we use a digital camera, the data measured by the sensor is converted: it is mathematically flattened to fit the limits of a computer screen, for example.
But if we keep the original data (what photographers call the “RAW” picture), we actually have access to a much more accurate light measurement – even if it doesn’t “look good” on the screen, the value of each pixel is proportional to the amount of light in the original scene, plus the electronic noise of the camera’s sensor, plus the distortion of the lens, plus the irregularity of the sensor itself (usually a bit more efficient from left to right, or the opposite).
This is what the RAW image looks like on the screen:
Raw linear picture: pixel values are theoretically proportional to the amount of light in the original scene, but noise and optics interfere a bit.
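That “flattening” step – from linear sensor values to display values – can be sketched as a gamma encoding. This is a simplification (real camera pipelines also apply white balance, tone curves and more), and the sample values below are made up:

```python
import numpy as np

# Hypothetical linear sensor values, normalised to [0, 1]
# (in a real RAW file these would come from the camera sensor).
linear = np.array([0.001, 0.01, 0.05, 0.18, 0.5, 1.0])

def linear_to_srgb(v):
    """Apply the standard sRGB transfer function ('gamma encoding')."""
    v = np.clip(v, 0.0, 1.0)
    return np.where(v <= 0.0031308,
                    12.92 * v,
                    1.055 * v ** (1 / 2.4) - 0.055)

display = linear_to_srgb(linear)
# Dark linear values are lifted a lot: 1% of the light in the scene
# ends up at roughly 10% of the display range.
```

This is why a linear RAW file looks so dark on screen: the display expects gamma-encoded values, and the untouched linear data sits mostly in the bottom of the range.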
When we print the picture, it’s even worse. While some colours will appear more intense than on the screen (cyan, yellow…), others will fade, and the overall contrast will drop. I have a quite good printer, but still: the white of the media reflects 90.73% of the light that a perfect white would reflect, and the deepest black of the printer reflects 3.85% of it. That means the maximum contrast ratio on a print is 23.56 – while for the original scene it’s between 100 and 200! That’s roughly 4 to 8 times less…
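The contrast arithmetic above can be checked in a few lines (the two reflectance figures come from the text; the rest is just ratios):

```python
# Reflectance figures from the text: the paper white reflects 90.73%
# of the light a perfect white would, the printer's deepest black 3.85%.
white_reflectance = 0.9073
black_reflectance = 0.0385

# Maximum contrast ratio achievable on the print, about 23.6:1.
print_contrast = white_reflectance / black_reflectance

# A scene contrast of 100:1 to 200:1 exceeds it by roughly 4x to 8x.
loss_low = 100 / print_contrast
loss_high = 200 / print_contrast
```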
Now, what we can do – and what nobody does – is to colorimetrically render, as exactly as possible, a fragment of the picture within the limits of what the printer can reproduce accurately. The amount of light will probably still be lower than in the original scene, but the relations between the colours will be proportional to those in the original scene.
When we do this, we get as close as we can to making an exact copy of (a fragment of) the original scene. The better the camera sensor, the better the copy. What do we see? The colours are much closer to the original sunset. When we want the entire sky, we have to darken the picture, and then more of the ground disappears. Why? Because when we darken the sky further we ask the printer to print blacker than its black, which is impossible. The same goes if we want more of the ground: we ask the sky to become lighter than the media, which is unprintable…
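A minimal numeric sketch of that trade-off, with made-up scene luminances and the printable range derived from the reflectance figures above:

```python
import numpy as np

# Hypothetical scene luminances (relative units): dark ground, bright sky.
scene = np.array([1.0, 2.0, 20.0, 120.0, 200.0])

# Printable range, from the measured reflectances: black ~3.85%,
# paper white ~90.73% of a perfect diffuser.
BLACK, WHITE = 0.0385, 0.9073

def render(scene, exposure):
    """Scale the scene by an exposure factor, then clip to the
    printable range - anything outside that range is lost."""
    return np.clip(scene * exposure, BLACK, WHITE)

# Expose for the sky: the brightest value maps to paper white...
for_sky = render(scene, WHITE / scene.max())
# ...and the darkest ground values collapse onto the printer's black.
```

Exposing for the ground instead would push the sky values above paper white, where they clip just the same: the printable window can slide, but it cannot stretch.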
La Tour understood this very well back in 1642… That’s why he hid the flame with the little girl’s hand. The fact is not a well-kept secret! :)
Georges de La Tour, Saint Joseph charpentier (detail), 1642, Musée du Louvre
Below are some examples of this, within the limits of my printer:
(Click on the pictures to zoom!)
The principle is the same except for the light: instead of its colour temperature, its dominant wavelength changes
I played around with the measured spectra of my Epson and simulated material colours that strongly switch hue when the colour temperature of the light source changes. The materials are physically impossible, because their spectra are either emissive (more than 100% reflection, like fluorescent materials) or have negative values. The results are funny since they look like natural colours, close to some exotic flowers or to some “birds of paradise”.
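The “impossible” test is simply whether the reflectance stays within [0, 1] at every wavelength; a sketch, with invented sample spectra:

```python
import numpy as np

def is_physical_reflectance(spectrum):
    """A reflectance spectrum is physically realisable (for a
    non-fluorescent material) only if every value lies in [0, 1]."""
    spectrum = np.asarray(spectrum, dtype=float)
    return bool(np.all(spectrum >= 0.0) and np.all(spectrum <= 1.0))

# A plausible matte material vs. two 'impossible' ones:
ok = is_physical_reflectance([0.1, 0.4, 0.7])         # plausible
emissive = is_physical_reflectance([0.2, 1.3, 0.5])   # >100%: 'fluorescent'
negative = is_physical_reflectance([0.2, -0.1, 0.5])  # negative reflection
```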
Recently at the LIMA conference (http://lima-project.fr/programme-de-la-journee-industrielle-lima/), Noël Richard was talking about colour differences across the visible EM spectrum and said that the DeltaE L*a*b* (1976) formula was suited to compute them. I am not sure I did the right thing in the picture above (it’s still a bit complicated and obscure to me…), but this picture, as it would be printed by my Epson, should contain something close to a rendering of equally-colour-different EM bands. The picture contains 31 bands, like the “keys” of a colour piano… The background is a 10% relative luminance neutral grey and each “key” has the same power.
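The DeltaE L*a*b* (1976) difference mentioned by Noël Richard is simply the Euclidean distance between two colours in L*a*b* space:

```python
import math

def delta_e_1976(lab1, lab2):
    """CIE 1976 colour difference: Euclidean distance in L*a*b*."""
    dL = lab1[0] - lab2[0]
    da = lab1[1] - lab2[1]
    db = lab1[2] - lab2[2]
    return math.sqrt(dL * dL + da * da + db * db)

# Two mid-lightness colours differing only in a* and b*:
d = delta_e_1976((50.0, 10.0, 10.0), (50.0, 13.0, 14.0))  # -> 5.0
```

Spacing the 31 bands so that neighbours are equally colour-different then amounts to keeping this distance constant between consecutive “keys”.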
Notice that the bands are very close in the yellow, and even closer in the blue-green areas: that’s where we are supposedly most sensitive to wavelength differences.
Notice, on the other hand, that the bands are far apart in the red area.
Original spectral data: 31 gaussians with a variance of 1 nm (click on the diagram to zoom)
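Spectra like these could be generated along the following lines. This is a sketch: the band centers (spread evenly over 400–700 nm) are my assumption, as is reading “variance of 1 nm” as σ² = 1 nm²; only the count of 31 gaussians and the equal power per band come from the text above:

```python
import numpy as np

wavelengths = np.arange(380.0, 781.0, 1.0)  # visible range, 1 nm steps
centers = np.linspace(400.0, 700.0, 31)     # 31 band centers (assumed)
variance = 1.0                              # nm^2, per the caption

def gaussian_band(center):
    """Narrow gaussian spectral band centred on `center`."""
    g = np.exp(-(wavelengths - center) ** 2 / (2.0 * variance))
    return g / g.sum()  # normalise so each 'key' carries the same power

bands = np.stack([gaussian_band(c) for c in centers])  # shape (31, 401)
```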