Back to the physics of light and colorimetry
Spectral decomposition of light
Like any wave phenomenon, light can be described by its wavelength. A light ray or beam may have one specific wavelength (it is then called monochromatic), or be a blend of several wavelengths. More generally, a light ray is a blend of an infinite number of wavelengths, described by a spectrum: the distribution of light power over wavelength. The picture below shows the sunlight spectrum as seen from the earth’s surface:
Why do spectral calculations?
When a light beam with a known spectrum hits a surface, calculating the spectrum of the reflected light in a given direction is rather simple: we take the reflection spectrum of the surface and multiply it, wavelength by wavelength, by the incident light spectrum. This is fully physically correct, and rather simple. This is how light-surface interaction is computed in any realistic light simulation software.
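This wavelength-by-wavelength multiplication can be sketched in a few lines. The sampled spectra below are made-up illustrative values, not measured data:

```python
# Computing a reflected spectrum wavelength by wavelength.
# Coarse 5-sample spectra for illustration only (made-up values).

wavelengths = [450, 500, 550, 600, 650]   # nm
incident    = [0.8, 1.0, 1.0, 0.9, 0.7]   # incident light power at each wavelength
reflectance = [0.1, 0.2, 0.6, 0.8, 0.9]   # surface reflection spectrum (a reddish surface)

# The reflected spectrum is simply the per-wavelength product.
reflected = [i * r for i, r in zip(incident, reflectance)]
print(reflected)
```

A real simulation would use a much finer wavelength sampling, but the operation at each sample stays exactly this simple product.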
Why not do spectral calculations?
While light simulation tools generally perform spectral calculations, virtual image renderers generally do not. They use a simplified light decomposition model, based on the way the human eye perceives light.
In standard lighting conditions (daylight or good artificial lighting), light is captured by three types of retina cells called cones. Each cone type, named L, M and S, is sensitive to different wavelengths, as shown on the graph below. The three cone types produce three nerve signals, L, M and S, which the brain interprets in terms of light intensity and color.
This means that, using only three numbers, called a tristimulus value, it is theoretically possible to represent any color visible to the human eye. Therefore, instead of computing and storing light intensities for every wavelength, image renderers use only three values, such as red, green and blue (RGB), XYZ or LMS components. This saves both computing time and memory.
However, this is not physically correct. The color of the light hitting the retina can indeed be fully described by three numbers. But the interaction of light with an arbitrary surface cannot: materials have unique spectral responses, which cannot be captured by only three color channels. Therefore, to provide fully correct colorimetric results, all light transport calculations should be done wavelength by wavelength, and the resulting spectra reaching the virtual eye (or camera) should be converted to tristimulus values only at the very last step of the calculation.
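The final spectral-to-tristimulus step can be sketched as follows. The color matching functions here are crude single-Gaussian stand-ins for the real CIE 1931 curves, an assumption made for illustration only, not tabulated CIE data:

```python
import math

# Converting a spectrum to a tristimulus value at the very last step.
# The matching functions below are rough Gaussian approximations (assumed
# peaks near 600, 550 and 450 nm), NOT the tabulated CIE 1931 functions.

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def cmf(wl):
    """Toy color matching functions (x-like, y-like, z-like)."""
    return (gaussian(wl, 600, 40), gaussian(wl, 550, 45), gaussian(wl, 450, 25))

def spectrum_to_xyz(wavelengths, power):
    """Integrate the spectrum against the three matching functions."""
    X = Y = Z = 0.0
    for wl, p in zip(wavelengths, power):
        xb, yb, zb = cmf(wl)
        X += p * xb
        Y += p * yb
        Z += p * zb
    return X, Y, Z

wls = list(range(400, 701, 10))
flat = [1.0] * len(wls)            # a flat "white" spectrum
print(spectrum_to_xyz(wls, flat))  # one tristimulus value for the whole spectrum
```

The key point is the direction of the data flow: the full spectrum is carried through the simulation, and the reduction to three numbers happens only here, at the observer.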
Example: dispersion in glass
Basics of light dispersion
When light enters a transparent refractive material, such as glass, diamond or water, its direction changes. This phenomenon, called refraction, is governed by the Snell-Descartes law: the direction of the refracted ray depends on the refractive index of the material. The refractive index itself generally varies with wavelength: for instance, the following link shows refractive index versus wavelength for a few glass types. As a result, each wavelength is redirected differently by the surface, which leads to a spectral decomposition of the light ray.
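This can be sketched with the Snell-Descartes law and a two-term Cauchy dispersion formula, n(λ) = A + B/λ². The A and B coefficients below are approximate values for a BK7-like crown glass, an assumption for illustration:

```python
import math

# Per-wavelength refraction using the Snell-Descartes law and a Cauchy
# dispersion formula n(λ) = A + B/λ². Coefficients are BK7-like
# approximations (an assumption), B in nm².

A, B = 1.5046, 4200.0

def refractive_index(wavelength_nm):
    return A + B / wavelength_nm ** 2

def refraction_angle(incidence_deg, wavelength_nm, n_outside=1.0):
    """Angle of the refracted ray inside the glass, in degrees."""
    n = refractive_index(wavelength_nm)
    sin_t = n_outside * math.sin(math.radians(incidence_deg)) / n
    return math.degrees(math.asin(sin_t))

for wl in (450, 550, 650):  # blue, green, red
    print(wl, "nm ->", round(refraction_angle(45.0, wl), 3), "deg")
```

Because the index is higher at short wavelengths, the blue ray is bent more strongly than the red one, which is exactly the mechanism that spreads white light into a rainbow inside a prism.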
Simulation of prism dispersion with Ocean
The following image illustrates the principle of light dispersion, applied to a glass prism lit by a beam of white light from a D65 source:
Now, instead of using a D65 white light source, the simulation will be done using two orange light sources, with the following spectra:
The result for the sodium vapor light source is shown below. The decomposition is nearly monochromatic: apart from the main orange ray, only a faint blue secondary emission ray is visible.
For the red-green source, the result is totally different, despite the light having the same color before the prism. The red and green primaries are well separated:
These results are physically correct, and could not have been obtained with a tristimulus-based image renderer. The two orange light sources have identical RGB values, orange corresponding approximately to R=1, G=0.5 and B=0. A tristimulus renderer therefore cannot distinguish the two light sources, even before computing the refraction on the prism. It has to make assumptions, and cannot render the prism correctly with both illuminants: the resulting image will be exactly the same for both. Depending on its internal assumptions, it will provide a good result for either the first source or the second, but not for both.
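The failure mode can be sketched as follows. The two line spectra below are simplified, made-up stand-ins for the sources of the experiment, and the Cauchy-like glass index is an assumption, as before:

```python
import math

# Why a tristimulus renderer cannot tell the two orange sources apart.
# Each source is described by its emission lines (nm, relative power);
# these are simplified, made-up line spectra for illustration.
sodium_like = [(589, 1.0)]               # nearly monochromatic orange
red_green   = [(550, 0.5), (650, 1.0)]   # red + green mix, same perceived orange

def refracted_angle(wl_nm, incidence_deg=45.0):
    # Snell-Descartes law with a BK7-like Cauchy index (an assumption).
    n = 1.5046 + 4200.0 / wl_nm ** 2
    return math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / n))

# A spectral renderer refracts each emission line separately:
for name, spectrum in (("sodium-like", sodium_like), ("red-green", red_green)):
    angles = [round(refracted_angle(wl), 3) for wl, _ in spectrum]
    print(name, "->", angles)
```

The sodium-like source exits as a single ray, while the red-green source splits into two well-separated rays. A renderer that stored only the shared RGB triple (approximately 1, 0.5, 0) has no spectral data left from which to predict this difference.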
Non-spectral image renderers are not physically correct and cannot provide reliable results, as this simple example shows. More generally, reliable colorimetry predictions cannot be achieved with non-spectral calculations. Light sources must be described spectrally, as well as materials, and the simulation tool must convert the spectral values reaching the observer to tristimulus only at the very last step.
A spectral tool has another advantage: it is not restricted to rendering and simulating human vision, but can also calculate other useful data, such as UV exposure, radiative heating, and much more.