Color Constancy is the ability to perceive the colors of objects invariant to the color of the light source. This ability is generally attributed to the Human Visual System, although the exact details remain uncertain. An example of the ability is shown in the figure to the right: the same flower is depicted four times, each rendered under a different light source. As can be seen, the color of the flower depends strongly on the color of the light source.

Computational Color Constancy can follow different paths to maintain a stable color appearance across light sources. One common path, which is now believed not to mimic the human visual system but is very common among *computational* models, approaches the problem in two phases. First, based on several assumptions, the color of the light source is estimated from an input image. Then, using this estimated illuminant, the input image is corrected so that it appears to have been taken under a canonical (e.g. white) light source. This approach is outlined in the figure to the left: the original image is recorded under a blueish light source; assuming uniform illumination across the image, the estimated light source (the blue pane) is used to correct every pixel of the input image. The output image is the ultimate goal, but the main focus of this website is on the first phase: illuminant estimation.
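As a concrete sketch of this first phase, below is a minimal Python implementation of the classic Grey-World estimator, one specific choice among the assumption-based methods; the toy scene and its values are made up for illustration only:

```python
import numpy as np

def grey_world(img):
    """Grey-World illuminant estimate.

    Assumes the average reflectance in the scene is achromatic, so any
    color cast in the mean RGB is attributed to the light source.
    img: (H, W, 3) array of linear RGB values.
    Returns a unit-norm RGB vector for the estimated illuminant.
    """
    e = img.reshape(-1, 3).mean(axis=0)
    return e / np.linalg.norm(e)

# Toy scene: grey surfaces with varying shading under a blueish light.
rng = np.random.default_rng(0)
shading = rng.uniform(0.5, 1.5, size=(4, 4, 1))
img = shading * np.array([0.4, 0.5, 0.8])
e_hat = grey_world(img)
```

Because every surface in this toy scene is grey, the per-pixel shading cancels out in the mean and the estimate recovers the light-source direction exactly; on real images the Grey-World assumption holds only approximately.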

# Image Formation

One common assumption in the literature on color constancy is Lambertian shading, i.e. assuming an image consists of only matte, dull surfaces. In this case, an image **f** = (*f*_{R}, *f*_{G}, *f*_{B})^{T} is composed of the multiplication of three terms, i.e. the color of the light source *I*(**x**, λ), the surface reflectance properties *S*(**x**, λ) and the camera sensitivity function **ρ**(λ):

$$ f_c(\mathbf{x}) = m(\mathbf{x}) \int_\omega I(\mathbf{x}, \lambda)\, S(\mathbf{x}, \lambda)\, \rho_c(\lambda)\, d\lambda, $$

where `c` ∈ {R, G, B}, `ω` is the visible spectrum, *m*(**x**) is the Lambertian shading, `λ` is the wavelength of the light and **x** is the spatial coordinate in the image.

Further assumptions include a spectrally uniform light source, i.e. *I*(**x**, λ) = *I*(λ) for all locations **x** in the image. Then, the observed color of the light source **e** depends on the spectrum of the light and the camera sensitivity function:

$$ \mathbf{e} = \begin{pmatrix} e_R \\ e_G \\ e_B \end{pmatrix} = \int_\omega I(\lambda)\, \boldsymbol{\rho}(\lambda)\, d\lambda. $$

Since there are two unknown variables (the surface reflectance function *S*(**x**, λ) and the color of the light source **e**) and only one known variable (the image values **f**), the estimation of **e** is an under-constrained problem. A common approach to solving this problem is to make further assumptions, for instance on the distribution of image colors or on the set of possible light sources. These assumptions will further be explained here.
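The image-formation model can be sketched numerically by replacing the integral with a discrete sum over sampled wavelengths. In the toy example below all spectra (light source, surface and camera sensitivities) are made-up Gaussian curves, not measured data:

```python
import numpy as np

# Sample the visible spectrum (approx. 400-700 nm) at 5 nm steps.
lam = np.arange(400, 701, 5, dtype=float)

def gaussian(center, width):
    """Toy spectral curve; real spectra are measured, not Gaussian."""
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

# Made-up camera sensitivities rho_c(lambda) for c in {R, G, B}.
rho = np.stack([gaussian(600, 40), gaussian(540, 40), gaussian(460, 40)])

I_spec = gaussian(470, 60) + 0.3   # blueish light source I(lambda)
S_surf = np.ones_like(lam)         # spectrally flat (white) surface S(lambda)
m = 1.0                            # Lambertian shading term m(x)

# f_c = m * integral over omega of I(lambda) S(lambda) rho_c(lambda) dlambda
dlam = lam[1] - lam[0]
f = m * np.sum(I_spec * S_surf * rho, axis=1) * dlam
```

Even though the surface here is perfectly white, the blue channel of `f` comes out largest, illustrating how the observed pixel values confound surface reflectance with the color of the light source.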

# Image Correction

Once the chromaticity of the light source is known, the input image can be corrected. The transformation that converts an input image, recorded under an unknown light source, into an output image that appears to be recorded under a canonical light source is called chromatic adaptation. Chromatic adaptation is often modeled using a linear transformation, which in turn can be simplified to a diagonal transformation when certain conditions are met. On this website, the diagonal model is used to correct the input images; alternatives include CIECAT02 (part of CIECAM02) and linearized Bradford. The diagonal model is given by:

$$ \mathbf{f}_t = M_{u,t}\, \mathbf{f}_u, $$

where **f**_{u} is the image taken under an unknown light source, **f**_{t} is the same image transformed so that it appears as if it was taken under a canonical light source, and `M`_{u,t} is a diagonal matrix which maps colors that are taken under an unknown light source `u` to their corresponding colors under the canonical illuminant `t`.
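A minimal sketch of this diagonal correction, assuming the estimated illuminant **e** is already given and the canonical light is white; normalizing so that overall intensity is preserved is an implementation choice here, not part of the model:

```python
import numpy as np

def diagonal_correction(f_u, e):
    """Apply the diagonal model f_t = M_{u,t} f_u to every pixel.

    f_u : (H, W, 3) image recorded under the unknown light source u.
    e   : (3,) estimated RGB of that light source.
    Each diagonal entry scales one channel so that e itself maps to an
    achromatic color; the e.sum()/3 factor keeps overall intensity.
    """
    e = np.asarray(e, dtype=float)
    M = np.diag((e.sum() / 3.0) / e)
    return f_u @ M.T  # per-pixel matrix multiplication

# A scene that reflects a blueish illuminant maps to grey after correction.
e = np.array([0.4, 0.5, 0.8])
f_u = np.full((2, 2, 3), e)
f_t = diagonal_correction(f_u, e)
```

Because `M` is diagonal, each channel is corrected independently; a full 3x3 chromatic-adaptation transform such as linearized Bradford would instead mix channels before scaling.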