Chapter 5

Introduction to colour science

Sophie Triantaphillidou

All images © Sophie Triantaphillidou unless indicated.

INTRODUCTION

This chapter is an introduction to the fundamentals of colour description, measurement, evaluation and appearance. Knowledge of theories and current practices related to these topics is paramount in comprehending how colour is recorded, reproduced and managed in imaging media, subjects that will be introduced in later chapters. Colour is evaluated in different ways. Physicists describe variations in the intensities of wavelengths of visible radiation. Colorimetrists describe the amounts of reference primaries which, when additively mixed, match a particular reference light. Sensory scientists describe an observer’s response as a result of stimulating their visual sensory mechanism. There is no single meaning or definition attached to the word ‘colour’. Although the word is understood by everyone, if we are asked to define colour, the answer is not so obvious. Further, if asked to describe a particular colour, for instance ‘blue’, it is difficult to do so without using an example, such as ‘blue is the colour of the sea’.

Regardless of how we interpret the word, colour exists because of the way our visual system interprets light of different wavelengths. It is not merely a physical phenomenon but a psychophysical phenomenon (i.e. a sensory response resulting from a physical stimulus) or simply an aspect of visual perception. Working with colour imaging has made the classification of colour and its description in terms of numbers essential. Several systems have been devised over the years for this purpose. The Commission Internationale de l’Éclairage (CIE, or International Commission on Illumination) methods are widely adopted and will be the main focus of this chapter. The CIE has, since its inception in 1913, developed standards and procedures of metrology in the fields of lighting, vision, colour and, more recently, imaging.

THE PHYSICS OF COLOUR

We have already seen, in Chapter 2, that white light can be dispersed by means of a prism into light of different hues – violet, indigo, blue, green, yellow, orange and red, ‘all the colours of the rainbow’ – and that these hues correspond to different wavelengths. The wavelengths of visible electromagnetic radiation range from approximately 380 to 780 nm. In practice, light is never made of a single wavelength but of a narrow band or a large combination of wavelengths. Light consisting of a single wavelength – or a narrow band of wavelengths – is highly saturated in colour. These colours are referred to as spectral colours, or spectral hues. This is an opportunity for us to note that ‘hue’ and ‘colour’ are not the same thing, as we will see next, although the terms are frequently used interchangeably.

While Newton identified the seven spectral hues listed above, his descriptions were slightly different from those understood today. In particular, he described what we would today call ‘blue’ as ‘indigo’, and what we would call ‘blue–green’ (or ‘cyan’) as ‘blue’.

A revised set of names is shown in Figure 5.1, which indicates the spectral hues and the corresponding wavelength bands.

COLOUR TERMINOLOGY

In this section we provide some important colour terminology that is used throughout this book. The definitions presented are based on definitions from the CIE International Lighting Vocabulary and from R.W.G. Hunt’s publications (see Bibliography).

image

Figure 5.1   Seven hues identified in the visible spectrum, together with their approximate wavelength ranges.

It is widely accepted that colours have three main perceptual attributes, relating to the response of the observer to a colour, and this is why colour is often referred to as being three-dimensional:

•   The term hue refers to the appearance of a colour being defined as similar to one of the perceptual primary colours, red, yellow, green and blue, or to a combination of two of them. Achromatic colours are perceived colours devoid of hue (i.e. neutrals) and chromatic colours are perceived colours possessing a hue.

•   Colourfulness denotes the extent to which the hue is apparent. Colourfulness is therefore zero for achromatic colours, low for pastel colours, high for oil paints and very high for spectral colours. For most luminance levels, the perceived colourfulness increases with increased luminance.

•   Brightness (in the past referred to as luminosity) denotes the extent to which a colour appears to exhibit more or less light. It is very high for light sources, high for whites, medium for greys and browns, and low for blacks. Colours viewed in high levels of illumination generally look brighter than when viewed in low levels of illumination. For example, your coloured T-shirt appears more colourful under bright sunlight than on a dull day.

Colours may be seen and judged in relation to other colours (known as related colours), or in isolation (unrelated colours), and perceptual attributes may be defined according to each situation.

A real-life example of unrelated colours is traffic lights seen at night. Most imaging applications deal with related colours, since colours in images are intermingled and are judged in relation to one another.

Hue, colourfulness and brightness are attributes of both related and unrelated colours; they all refer to the absolute level of perception. In contrast, the perceptual attributes of chroma and lightness are defined only for related colours and they refer to relative levels of perception, or more specifically the judgements are relative to ‘similarly illuminated areas’:

•   Chroma is defined as the colourfulness of an area judged in proportion to the brightness of a similarly illuminated area that appears to be white (or highly transmitting). Chroma is therefore a relative colourfulness, i.e. relative to white.

•   Lightness is defined as the brightness of an area judged in proportion to the brightness of a similarly illuminated area that appears to be white (or highly transmitting). Lightness is therefore a relative brightness, i.e. relative to white.

The adjectives bright and dim are used in connection with brightness, light and dark in connection with lightness, and strong and weak in connection with chroma.

Another perceptual attribute that relates to colourfulness is saturation, defined as the colourfulness of an area judged in proportion to its own brightness (rather than that of white). Saturation is also a relative attribute, but it is relative to another attribute of the object itself, and hence it can be used for both related and unrelated colours.

Figure 5.2 provides an example of the perceptual colour attributes introduced in this section. The same scene is presented under two different illumination conditions: on the left the scene is lit by bright direct sunlight and on the right by diffuse light on a dull winter day. The colours in the scenes will have the same lightness and chroma under both viewing conditions, since both lightness and chroma are relative attributes; they are judged with respect to the white in the scene. They will also have the same saturation, since saturation is judged with respect to the area’s own brightness. In contrast, the brightness and the colourfulness of the colours will be higher on the bright sunny day than on the dull day, since these two attributes refer to absolute levels of perception.

In colour science we deal with the perceptual nature of the stimulus and we employ objective measures to communicate the subjective impression. It is therefore essential to distinguish between perceptual (subjective) and objective aspects of colour, and the relevant subjective and objective terms. All terms mentioned up to this point in the section are subjective terms. Common relevant objective terms are listed in Table 5.1. Some of these will be introduced in detail later in this chapter.

THE COLOUR OF OBJECTS

Objects are visible because of the light they reflect or transmit. Coloured objects appear coloured because they absorb some of the wavelengths incident upon them and reflect or transmit others. A red roof illuminated by white light, for example, appears red because it reflects wavelengths that correspond to red hues more strongly than wavelengths that correspond to blue or green hues, which are absorbed by the roof’s material.

image

Figure 5.2   Computer images simulating colours seen on a bright sunny day (a) and on a cloudy day (b). Lightness, chroma and saturation will be the same in both conditions, whereas brightness and colourfulness will be higher on the bright sunny day.

Table 5.1   Perceptual colour attributes and relevant objective measures

image

Thus, the colour of objects exists because of the interaction of three components: (1) the spectral quality of the light source illuminating the object; (2) the physical and chemical properties of the object, which modulate the electromagnetic energy coming from the source; and (3) the human visual system. The modulated electromagnetic energy coming from the object is imaged by the eye, detected by the photoreceptors and processed by neural mechanisms to produce the perception of colour. These components, according to Fairchild (2004), form the ‘triangle of colour’, shown in Figure 5.3. It is important to note that the light source interacts not only with the object but also with the human visual system, as the spectral output and intensity of the light source play a vital role in the colour appearance of objects through adaptation (see Chapter 4 and later in this chapter).

Spectral absorptance, reflectance and transmittance

Absorption, reflection and transmission are physical phenomena occurring when light interacts with matter. The radiant energy absorbed, reflected or transmitted by an object can be described by a graph in which the absolute or relative absorptance, reflectance or transmittance is plotted versus wavelength (the spectral absorptance, reflectance or transmittance), as shown in Figure 5.4. This representation, often referred to as the object spectrum, is the equivalent for the object of the relative spectral power distribution for self-luminous emitting media, such as light sources (see Chapter 3). The amounts of absorbed, reflected and transmitted radiant power must sum to the radiant energy incident on an object:

image

Figure 5.3   The triangle of colour.

Adapted from Fairchild (2004)

image

Figure 5.4   Relative spectral reflectance, transmittance and absorptance of a photographic slide exposed to red light.

image

Figure 5.5   Spectral reflectances of common objects.

Φ(λ) = A(λ) + R(λ) + T(λ)

where Φ(λ) is the incident radiant energy in radiant flux, A (λ) is the absorbed flux, R (λ) is the reflected flux and T (λ) is the transmitted flux. Since the last three quantities sum to the incident energy, they are typically measured and presented in relative terms, such as the ratios of absorbed, transmitted or reflected flux to the incident flux, or as percentages (i.e. ratio × 100). Note that ratio measurements are the subject of spectrophotometry, which is the measurement of ratios of radiometric quantities (see Chapter 2). The spectral reflectances of some common coloured objects are illustrated in Figure 5.5.
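These ratio measurements can be sketched in a few lines of Python; the flux values below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical measured fluxes at a single wavelength (arbitrary units).
incident_flux = 100.0
absorbed_flux = 55.0
reflected_flux = 30.0
transmitted_flux = 15.0

# Relative (ratio) quantities, as measured in spectrophotometry.
absorptance = absorbed_flux / incident_flux
reflectance = reflected_flux / incident_flux
transmittance = transmitted_flux / incident_flux

# Conservation of radiant energy: the three ratios must sum to 1.
assert abs(absorptance + reflectance + transmittance - 1.0) < 1e-9

# Expressed as percentages (ratio x 100), as is common for object spectra.
print(f"A = {absorptance:.0%}, R = {reflectance:.0%}, T = {transmittance:.0%}")
```

Repeating this calculation at each sampled wavelength yields the spectral curves of Figure 5.4.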

The interaction of electromagnetic energy with objects is not merely a spectral phenomenon but depends also on the viewing angle of the observer (or the measuring instrument) with respect to the object, as well as on the angle of illumination – together, the illumination and viewing geometry. A classic example is that of glossy versus matte photographic paper. Due to the geometrical characteristics of the surface of glossy paper, light is reflected specularly (see Chapter 6) and colours appear more vivid, or saturated, compared to colours on matte paper, from which light is reflected in a diffuse way.

CIE STANDARD ILLUMINATING AND VIEWING GEOMETRIES

Illumination and viewing geometries are significant when the colour of objects and material is measured. The CIE has established two pairs of standard illumination and viewing geometries for measuring the reflectance of objects, illustrated in Figure 5.6. These are:

image

Figure 5.6   Standard viewing geometries.

1.   Diffuse/normal (d/0) and normal/diffuse (0/d). In the diffuse/normal geometry the object is illuminated in a diffuse way, that is from all angles, and measured from an angle normal (i.e. perpendicular) to the object’s surface. An integrating sphere is used to provide diffuse illumination; the measuring device is set perpendicular to the measuring surface. In the normal/diffuse geometry the object is illuminated at an angle normal to its surface and measured using an integrating sphere. Measurements made with these two reverse geometrical arrangements normally produce the same result. They are measurements of total reflectance and are used to provide the reflectance of objects, defined previously as the ratio of reflected over incident light energy.

2.   45/normal (45/0) and normal/45 (0/45). In the 45/normal geometry the object is illuminated with beams of light incident at an angle of 45° from the normal and is measured from the normal. In the opposite arrangement, normal/45, the object is illuminated at an angle normal to its surface and the measurements are made using beams at a 45° angle from the normal. These two geometrical arrangements are used in applications where colour objects may have various degrees of gloss, for example photographic papers. In measurements obtained by these arrangements, the ratio of reflected over incident light energy is very small, since only a small fraction of the reflected beam is recorded. They are usually employed when the reflectance factor (see later) of a colour is the objective. In this case the reflectance of the object is compared to the perfect diffuser, a theoretical medium that is both a perfect reflector (i.e. has 100% reflectance) and a perfect Lambertian emitter (i.e. produces equal flux in all directions).

Diffuse measurements are the norm these days; 45/0 and 0/45 geometries can give problems with directional surface textures, for example photographic prints with matte, pearl and similar textures.

CIE STANDARD ILLUMINANTS AND SOURCES

The CIE also recommends a number of spectral power distributions (see Chapter 3), the CIE standard illuminants, for use in colour measurements (see examples in Figure 5.7). Some of these illuminants correspond to ‘real light sources’ whereas some others represent ‘aim’ distributions. It is important to distinguish between ‘source’ and ‘illuminant’: a source represents a physical light source with a given spectral power distribution whereas an illuminant is an aim spectral power distribution, purposely defined to serve colour measurements with spectral data specified by the CIE. The most commonly employed CIE illuminants are:

1.   CIE illuminant A. This represents light from a Planckian radiator (see Chapter 2) at a temperature of 2856 K. It is used when incandescent illumination is of interest.

2.   CIE illuminant C. This represents a phase of daylight (i.e. sunlight plus skylight) with a correlated colour temperature (CCT) of 6774 K.

3.   CIE daylight series. Illuminants that have been statistically defined from a large number of measurements of real daylight. The most commonly used are: D65, with a CCT of 6504 K, the recommended illuminant when daylight measurements are of interest, commonly used in photographic and imaging applications; D55, with a CCT of 5500 K, known as the sensitometric daylight, for which daylight colour films are balanced; D50, with a CCT of 5000 K, often used in graphic arts applications.

4.   CIE fluorescent series. There are 12 in total, representing spectral power distributions for various fluorescent sources.

5.   CIE illuminant E. This so-called equal energy illuminant, or equi-energy spectrum, is defined as a source whose relative spectral power is equal to 100.0 at all wavelengths. There is no physical source that emits equal power at all wavelengths in the visible spectrum. This illuminant is of interest for mathematical use in colorimetry (see later).
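Since illuminant A is defined by a Planckian radiator at 2856 K, its relative spectral power distribution can be approximated directly from Planck’s law (see Chapter 2). A minimal sketch using standard physical constants; the normalization to 100 at 560 nm follows the usual CIE convention for relative SPDs:

```python
import math

# Physical constants (SI units).
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temperature_k):
    """Spectral radiance of a Planckian (blackbody) radiator."""
    a = 2.0 * h * c**2 / wavelength_m**5
    b = math.exp(h * c / (wavelength_m * kB * temperature_k)) - 1.0
    return a / b

# Relative SPD of illuminant A across the visible range, normalized to
# 100 at 560 nm (the CIE convention for relative spectral power).
T_A = 2856.0
ref = planck_radiance(560e-9, T_A)
spd = {nm: 100.0 * planck_radiance(nm * 1e-9, T_A) / ref
       for nm in range(380, 781, 10)}

# An incandescent-like SPD rises steadily towards the red end of the
# spectrum, as Figure 5.7 shows for illuminant A.
assert spd[700] > spd[560] > spd[400]
```

The monotonic rise towards long wavelengths is what gives incandescent illumination its characteristic ‘warm’ appearance.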

MODELS OF COLOUR VISION

The last component in the triangle of colour is the human observer. The CIE has defined two so-called standard observers. But before we introduce them, it is worth revisiting the models of human colour vision.

The trichromatic theory of colour vision is based on the Young–Helmholtz theory and on the experimental work of Maxwell (see Chapters 1, 2 and 4), and states that all possible colours can potentially be matched by superimposing combinations of three light stimuli of different wavelengths. This so-called additive matching is possible because of the spectral responsivities of the three cone types found in the human retina, often denoted L(λ), M(λ) and S(λ) – L standing for long wavelength, M for middle wavelength and S for short wavelength (see Chapter 4). Figure 5.8 illustrates the spectral responsivities of the L, M and S cones.

In parallel with the development of the trichromatic theory, a theory based on opponent colour signals – light–dark, red–green and yellow–blue – was proposed by Hering (a German physiologist, born in 1834, who researched colour vision and spatial perception) and was supported by subjective observations on colour appearance. Hering, who among other visual phenomena observed that reddish-greenish or yellowish-bluish hues cannot be perceived simultaneously, proposed that there are three types of visual receptors with bipolar responses to light–dark, red–green and yellow–blue.

Today, the contemporary colour vision theory, known as stage theory, involves two stages. The first stage is trichromatic, as described by the Young–Helmholtz theory. The three colour-separated images, however, are not transmitted directly to the brain. Instead, the neurons of the retina encode the colours into opponent signals. In the second stage the outputs of the three cone types are summed (L + M + S) to produce an achromatic response that matches the CIE V(λ) distribution (see Chapter 4). The separation of the cone signals also permits the construction of the red–green (L − M + S) and yellow–blue (L + M − S) opponent signals. The transformation from L, M, S to opponent signals serves to disassociate luminance and colour information. An illustration of the encoding of cone signals into opponent signals is shown in Figure 5.9.
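The second-stage encoding can be sketched with the stage-theory combinations A = L + M + S, R–G = L − M + S and Y–B = L + M − S (as in Fairchild’s formulation; real retinal weightings are more complex, so this is only illustrative):

```python
def lms_to_opponent(l, m, s):
    """Second-stage opponent encoding of cone responses (illustrative
    unweighted combinations): A = L+M+S, R-G = L-M+S, Y-B = L+M-S."""
    return (l + m + s,    # achromatic (luminance-like) signal
            l - m + s,    # red-green opponent signal
            l + m - s)    # yellow-blue opponent signal

# A long-wavelength (reddish) stimulus drives the red-green channel
# towards its "red" pole and the yellow-blue channel towards "yellow".
a, rg, yb = lms_to_opponent(0.9, 0.3, 0.05)
assert rg > 0 and yb > 0
```

The achromatic signal carries the luminance-like information, while the two difference signals carry the colour information – the dissociation discussed in the text.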

image

Figure 5.7   Relative spectral power distributions of CIE standard illuminants A, D65 and F2.

image

Figure 5.8   Spectral responsivities of the L, M and S retinal cones.

image

Figure 5.9   Schematic illustration of the encoding of L, M and S cone signals into achromatic (A), red–green (R–G) and yellow–blue (Y–B) opponent colour signals.

Adapted from Fairchild (2004)

The three opponent pathways have individual spatial characteristics. This can be seen in Figure 5.10, which shows representative contrast sensitivity functions (CSFs; see also Chapter 4) for the luminance (i.e. achromatic) and the red–green and yellow–blue (i.e. chromatic) visual channels. The luminance spatial CSF typically peaks between 3 and 5 cycles per visual degree, and approaches zero at 0 cycles per degree and again at approximately 50 cycles per degree. The two chromatic spatial CSFs are of a low-pass nature (i.e. they only maintain lower spatial frequencies) and have significantly lower cut-off frequencies. Clearly, the visual system is more sensitive to small spatial changes in luminance contrast than to small changes in chromatic contrast.

The achromatic and chromatic pathways also have individual temporal characteristics, with the luminance pathway having higher overall contrast sensitivity and sensitivity that extends to higher temporal frequencies. Note that the temporal contrast sensitivity is mostly relevant to moving (i.e. time-varying) images such as digital video, which can be delivered at different frame rates. The disassociation of luminance and colour information can be seen as an advantage in compression, encoding and transmission of imaging information, in that chromatic information can be compressed to a higher degree than luminance information without that compression being noticeable (see example in Figure 5.11).
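A toy illustration of this advantage: subsampling the two chromatic channels of a one-dimensional ‘scanline’ while keeping luminance at full resolution, as common video encodings do. This is only the bookkeeping of a hypothetical scheme, not a real codec:

```python
def subsample(channel, factor):
    """Keep every `factor`-th sample of a 1-D channel (toy chroma subsampling)."""
    return channel[::factor]

# Toy 1-D "scanline": one luminance and two chromatic channels.
luminance   = list(range(16))          # kept at full resolution
red_green   = list(range(16))
yellow_blue = list(range(16))

# Store the chromatic channels at a quarter of the resolution: the low
# cut-off frequency of the chromatic CSFs makes the loss hard to notice.
rg_small = subsample(red_green, 4)
yb_small = subsample(yellow_blue, 4)

samples_before = 3 * len(luminance)
samples_after = len(luminance) + len(rg_small) + len(yb_small)
assert samples_after == 24   # half the original 48 samples
```

Figure 5.11 shows the visual counterpart: chrominance compressed 10:1 against a full-resolution luminance channel is barely noticeable.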

image

Figure 5.10   Typical spatial and temporal contrast sensitivity functions for the luminance and the two chromatic visual channels. Top: spatial contrast sensitivities. Bottom: temporal contrast sensitivities.

From Fairchild (2004); reproduced with permission of Wiley-Blackwell

image

Figure 5.11   Top left: original image. Top right: reconstructed image with a full-resolution luminance channel and chrominance channels compressed at a ratio of 10:1. Bottom left: luminance channel. Bottom right: red–green and yellow–blue chrominance channels.

Original image from Kodak Master PhotoCD

THE BASICS OF COLORIMETRY

Colorimetry deals with methods of specifying, measuring and evaluating colour. When we speak about colour we generally refer, as we saw earlier, to a visual experience. When we refer to the colour of objects, we generally use subjective terms, such as dark or light, white, grey or black; we use names of hues such as red, yellow, green or blue, or we refer to rich or pale colours. It is therefore essential to specify and measure colour in relation to these subjective attributes as seen by a typical observer. Such an observer is defined by the CIE.

One of the roles of CIE colorimetry is to describe with a set of values – usually three – any given colour stimulus from its spectral power distribution, while taking into account a ‘standard’ human observer. The values defining the colour stimulus are the mathematical coordinates of a corresponding colour space. The latter is defined as an n-dimensional – usually three-dimensional – geometrical model, where colours are specified by their vector coordinates or colour components. An example of two different colour spaces is illustrated in Figure 5.12, with the same colour being defined by its colour coordinates in each space.

Note that the spectral power distribution here refers to different things for different types of stimuli. For self-luminous stimuli – light sources, displays, etc. – it is the spectral radiance or the relative spectral power distribution, P(λ). For reflecting or transmitting stimuli, i.e. objects, it is the product of the spectral reflectance or transmittance of the object, R(λ) (the object spectrum), and the spectral radiance or relative spectral power distribution of the light source or illuminant of interest, P(λ) (i.e. the product P(λ)R(λ)).

image

Figure 5.12   The same colour is represented by different colour coordinates in two different colour spaces.

image

Figure 5.13   Trichromatic matching set-up and the bipartite visual field used in colour matching experiments.

The CIE methods of colorimetry are based on rules of matching colours using additive colour mixtures. The rules of additive colour mixture, known as Grassmann’s laws, involve the combination of a number of light stimuli (i.e. light sources) reaching the eye’s retina simultaneously to match the sensation produced by a monochromatic light stimulus (Figure 5.13). The three laws state:

1.   Three independent variables are necessary and sufficient for specifying a colour mixture. Mathematically this principle of the so-called trichromacy can be expressed by:

C ≡ X(X) + Y(Y) + Z(Z)        (Eqn 5.1)

where X, Y and Z are the so-called tristimulus values of the colour stimulus C, and (X), (Y) and (Z) denote unit amounts of the reference stimuli, called primaries, used in the colour mixture. According to trichromacy, any colour can be matched by certain amounts of three primaries; thus the amounts (i.e. tristimulus values) and the primaries (i.e. reference stimuli) allow the specification of colour.

2.   Stimuli evoking the same colour appearance produce identical results in additive colour mixture, regardless of their spectral compositions. Take two colour stimuli C1 and C2 with specifications:

C1 ≡ X1(X) + Y1(Y) + Z1(Z)
C2 ≡ X2(X) + Y2(Y) + Z2(Z)        (Eqn 5.2)

If:

X1 = X2,  Y1 = Y2  and  Z1 = Z2

then:

C1 ≡ C2        (Eqn 5.3)

This principle implies that stimuli with different spectral characteristics (i.e. spectral power distributions, spectral reflectance or spectral transmittance) may produce the same colour match. This phenomenon is referred to as metamerism and the stimuli that evoke the same match are called metamers (see later for types of metamerism).

3.   If one component of a colour mixture changes, the colour of the mixture changes in a corresponding manner. This third law establishes the proportionality (Eqn 5.4) and additivity (Eqn 5.5) of the stimulus metric for colour mixing. If Eqn 5.2 is true then according to the third law the following are also true:

kC1 ≡ kX1(X) + kY1(Y) + kZ1(Z)        (Eqn 5.4)

where k is a constant factor by which the radiant power of the colour stimulus is increased or decreased, while its relative spectral power distribution/reflectance/transmittance remains the same.

If:

C1 ≡ C2  and  C3 ≡ C4,  then  C1 + C3 ≡ C2 + C4        (Eqn 5.5)

Also, if:

C1 + C3 ≡ C2 + C4  and  C3 ≡ C4,  then  C1 ≡ C2

where C1, C2, C3 and C4 are four different colour stimuli. The symbol ‘+’ in this context indicates additive colour mixture.
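Proportionality and additivity together mean that tristimulus values behave linearly: scaling a stimulus scales its tristimulus triple, and an additive mixture sums the triples component-wise. A minimal sketch with hypothetical tristimulus values:

```python
def scale(c, k):
    """Proportionality (Eqn 5.4): scaling the radiant power of a stimulus
    by k scales its tristimulus values by the same factor."""
    x, y, z = c
    return (k * x, k * y, k * z)

def mix(c1, c2):
    """Additivity (Eqn 5.5): an additive mixture of two stimuli has
    tristimulus values equal to the sums of their components."""
    return tuple(a + b for a, b in zip(c1, c2))

# Hypothetical tristimulus triples for two colour stimuli.
c1 = (20.0, 30.0, 10.0)
c2 = (5.0, 15.0, 25.0)

assert scale(c1, 2.0) == (40.0, 60.0, 20.0)
assert mix(c1, c2) == (25.0, 45.0, 35.0)
```

This vector behaviour is what allows the colour matching functions of the next section to be combined linearly.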

Colour matching functions and the CIE standard observers

According to Grassmann’s first law, by means of additive mixtures of three stimuli it is possible to match all the colours of the spectrum (i.e. the spectral colours). When this is done the result is presented by three curves, referred to as colour matching functions, generally denoted r̄(λ), ḡ(λ) and b̄(λ) (Figure 5.14). These are curves of the radiant power of three primary lights – e.g. R, G and B – per wavelength λ, required to produce by additive mixture a colour sensation equal to a unit power of monochromatic light of wavelength λ. Mathematically, this is expressed by:

Cλ ≡ r̄(λ)(R) + ḡ(λ)(G) + b̄(λ)(B)        (Eqn 5.6)

where R, G and B are the primaries and r̄(λ), ḡ(λ) and b̄(λ) the amounts of the respective primaries, i.e. they are the tristimulus values for the spectral colours produced with the specific primaries.

image

Figure 5.14   The CIE 1931 RGB colour matching functions.

The units of the radiant power of the three stimuli are not physical quantities but are chosen arbitrarily. They are chosen so that the mixture (i.e. the addition) of one unit of each of the three primary stimuli matches a specific white, the equal-energy white, with equal radiant power at all ‘visible’ wavelengths. This hypothetical white, denoted as SE, is very important in colorimetry. If a different white or units were chosen, the curves would have different relative heights but not a different shape. If different primaries were chosen the shapes of the functions would differ.

The first CIE colour matching functions were defined in 1931 (Figure 5.14) and were referred to as the RGB colour matching functions, r̄(λ), ḡ(λ) and b̄(λ), or the CIE standard observer. They were derived by averaging the (similar-looking) responses of a relatively small number of non-defective colour observers, who took part in two separate trichromatic matching experiments using a 2° visual field (see Figure 5.13). It is important to note here that, before the 1980s, it was not possible to measure the cone responses (shown in Figure 5.8); trichromatic matching, although it did not give the cone response functions themselves, gave the colour matching functions, which are linear combinations of them and which are suitable for the specification of colour. These standard observer data consist of colour matching functions for the following three primary stimuli: R (red), G (green) and B (blue), of wavelengths 700, 546.1 and 435.8 nm respectively. The units of the primaries were defined so that equal amounts of the three stimuli were required to match light from the equal-energy illuminant, SE. The luminances of these stimuli, LR, LG and LB, at colorimetric unity were in the ratios 0.17697 to 0.81240 to 0.01063.

Soon after the definition of the RGB colour matching functions, the CIE decided to transform the RGB to another set of primaries, the XYZ primaries, which are still used today in modern colorimetry. This transformation was intended to eliminate the negative values in the RGB colour matching functions (which indicate negative radiant powers – a physical impossibility) and to force one of the colour matching functions to equal the CIE photopic luminous efficiency function, V(λ), which was defined by the CIE in 1924 (see Chapter 4). The negative values were removed by a straightforward mathematical transformation defining a set of imaginary primaries that can be used to match all physically possible colour stimuli (see Figure 5.18). The resulting x̄(λ), ȳ(λ) and z̄(λ) functions are related to the r̄(λ), ḡ(λ) and b̄(λ) functions by:

x̄(λ) = (0.49000 r̄(λ) + 0.31000 ḡ(λ) + 0.20000 b̄(λ)) / 0.17697
ȳ(λ) = (0.17697 r̄(λ) + 0.81240 ḡ(λ) + 0.01063 b̄(λ)) / 0.17697
z̄(λ) = (0.00000 r̄(λ) + 0.01000 ḡ(λ) + 0.99000 b̄(λ)) / 0.17697        (Eqn 5.7)

Setting one of the colour matching functions equal to V(λ) served the purpose of incorporating photometry (see Chapter 2) into CIE colorimetry. The adopted transformation made the colour matching functions x̄(λ), ȳ(λ) and z̄(λ) positive across the visible spectrum and set ȳ(λ) = V(λ). The CIE 1931 colour matching functions for the XYZ primaries represent the CIE 1931 2° standard colorimetric observer; they are illustrated in Figure 5.15. Another colorimetric observer was recommended by the CIE in 1964, the 10° standard colorimetric observer, which is often employed when defining colour using wider visual fields – also illustrated in Figure 5.15. It is related to the X10, Y10 and Z10 primaries, which are only slightly different from the XYZ primaries.
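The RGB-to-XYZ step is a fixed 3 × 3 linear transformation of the colour matching functions. The sketch below applies the classical 1931 coefficients as commonly tabulated, with the conventional division by 0.17697 (the luminance coefficient of the R primary); treat the exact numbers as quoted rather than derived here:

```python
# Classical CIE 1931 RGB -> XYZ coefficients (rows: x-bar, y-bar, z-bar),
# in the common normalization where the matrix is divided by 0.17697.
M = [
    [0.49000, 0.31000, 0.20000],
    [0.17697, 0.81240, 0.01063],
    [0.00000, 0.01000, 0.99000],
]
SCALE = 1.0 / 0.17697

def rgb_cmf_to_xyz(r_bar, g_bar, b_bar):
    """Transform colour matching function values at one wavelength."""
    rgb = (r_bar, g_bar, b_bar)
    return tuple(SCALE * sum(m * v for m, v in zip(row, rgb)) for row in M)

# The middle row carries the luminance ratios 0.17697 : 0.81240 : 0.01063,
# which is how y-bar is forced to equal V(lambda). For a pure unit r-bar
# input, y-bar = 0.17697/0.17697 = 1 and z-bar = 0.
x, y, z = rgb_cmf_to_xyz(1.0, 0.0, 0.0)
assert abs(y - 1.0) < 1e-9 and abs(z) < 1e-9
```

Applying this function wavelength by wavelength turns the curves of Figure 5.14 into the all-positive curves of Figure 5.15.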

Calculating tristimulus values from spectral data

Once the tristimulus values of all spectral colours are defined, the next step in colorimetry is to derive tristimulus values for any given colour. The tristimulus specification for any set of primaries is built on Grassmann’s laws of additivity and proportionality, and uses (1) the spectral information of the light source or illuminant, (2) the spectral information of the object, and (3) the corresponding set of colour matching functions – i.e. a standard observer. Remember: the triangle of colour!

image

Figure 5.15   The CIE 1931 2° (thick line) and CIE 1964 10° (thin line) standard colorimetric observers.

Thus, for the CIEXYZ system, each of the XYZ tristimulus values is mathematically obtained by integrating over the range of visible wavelengths – for example, λ from 400 to 700 nm – the product of the spectrum of the source or illuminant, P(λ), the object spectrum, R(λ), and one of the CIE 1931 colour matching functions (see also Figure 5.16):

X = k Σ P(λ) R(λ) x̄(λ) Δλ
Y = k Σ P(λ) R(λ) ȳ(λ) Δλ
Z = k Σ P(λ) R(λ) z̄(λ) Δλ        (Eqn 5.8)

where k is a normalizing constant which is defined differently for relative and absolute colorimetry, and Δλ in the summations represents the interval over which the spectra and CMFs are sampled. Naturally, the object spectrum, R (λ), is excluded in the integrals in Eqn 5.8 in the case of self-luminous stimuli.

In absolute colorimetry, used mostly for self-luminous stimuli, k is set at 683 lumens W−1, which is the maximum spectral luminous efficacy, and the illuminant spectrum, P(λ), must be in radiometric units (i.e. W m−2 sr−1 nm−1 – see Chapter 2) corresponding to the photometric units required. In this case the tristimulus value Y becomes equal to the luminance L of the stimulus and colorimetry is made compatible with photometry. In relative colorimetry, k is defined by:

k = 100 / Σ P(λ) ȳ(λ) Δλ        (Eqn 5.9)

This normalization in relative colorimetry results in tristimulus values in the range 0–100: k is chosen so that Y = 100 for the light source, or equivalently for an object whose reflectance or transmittance spectrum, R(λ), is equal to 1.0 at all wavelengths. The Y tristimulus value of an object then gives the luminance factor of the object (i.e. how luminous it is with respect to the perfect diffuser), expressed as a percentage.
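Eqns 5.8 and 5.9 amount to weighted sums over sampled spectra. A minimal sketch in Python; the three-sample spectra below are made up purely to exercise the arithmetic, and real use would substitute measured P(λ), R(λ) and the tabulated colour matching functions at a proper sampling interval:

```python
def tristimulus(P, R, xbar, ybar, zbar, dl):
    """Relative colorimetry: X, Y, Z per Eqn 5.8, with k per Eqn 5.9
    chosen so that Y = 100 for the perfect diffuser (R = 1 everywhere)."""
    k = 100.0 / sum(p * y * dl for p, y in zip(P, ybar))
    X = k * sum(p * r * x * dl for p, r, x in zip(P, R, xbar))
    Y = k * sum(p * r * y * dl for p, r, y in zip(P, R, ybar))
    Z = k * sum(p * r * z * dl for p, r, z in zip(P, R, zbar))
    return X, Y, Z

# Made-up 3-sample spectra (illustrative only, not real data).
P    = [80.0, 100.0, 90.0]   # illuminant SPD samples
R    = [0.2, 0.5, 0.8]       # object spectral reflectance
xbar = [0.3, 1.0, 0.2]       # stand-ins for the CIE colour matching
ybar = [0.1, 1.0, 0.6]       # functions, sampled at the same
zbar = [1.2, 0.1, 0.0]       # wavelengths as P and R

X, Y, Z = tristimulus(P, R, xbar, ybar, zbar, dl=10.0)

# The perfect diffuser (R = 1 at every wavelength) must give Y = 100.
_, Y_white, _ = tristimulus(P, [1.0] * 3, xbar, ybar, zbar, dl=10.0)
assert abs(Y_white - 100.0) < 1e-9
```

Dropping R(λ) from the sums (or setting it to 1) handles the self-luminous case mentioned above.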

In some cases, for example in graphic arts and colour reproduction industries or in gamut mapping techniques, the Y tristimulus value is set to 100 for the white of the paper rather than the light source (see Chapter 23). This is the practice of normalized colorimetry and should be differentiated from relative colorimetry, which deals with colour measurements.

image

Figure 5.16   Graphical representation of the derivation of CIEXYZ tristimulus values for a surface colour with spectral reflectance R, using an illuminant with relative spectral power distribution P and the CIE 2° colorimetric observer.

The CIE specifies that the range of summation in Eqns 5.8 and 5.9 is essential to the tristimulus specification. It recommends that the summations be performed at a sampling interval Δλ of 5 nm over the range 380–780 nm, resulting in 81 samples. However, many instruments, such as the commercial spectrophotometers used to measure spectral data, use a Δλ of 10 nm over a range from 400 to 700 nm, resulting in 31 samples.

The definition of colour by three tristimulus values can be represented by plotting each tristimulus component along one of three orthogonal axes of a ‘tristimulus colour space’. This space is perceptually non-uniform: equal distances within it do not correspond to equal perceived differences between colours.

Example relative CIEXYZ tristimulus values for the Gretag Macbeth Color Checker Chart for CIE illuminant C are shown in Figure 5.17.

Chromaticity diagrams

In order to provide a method of representation of colours in a more convenient two-dimensional diagram instead of a rather ‘unreadable’ three-dimensional space, chromaticity diagrams were developed. The colour of the stimulus is represented by its chromaticity coordinates, which are derived through a normalization of the tristimulus values of the stimulus that removes luminance information. Chromaticity coordinates for the CIEXYZ 1931 system are defined by:

$$x = \frac{X}{X+Y+Z}, \qquad y = \frac{Y}{X+Y+Z}, \qquad z = \frac{Z}{X+Y+Z} \qquad (5.10)$$

where x, y and z are the chromaticity coordinates of a colour stimulus with tristimulus values equal to X, Y and Z. Since x, y and z represent normalized quantities (i.e. x + y + z = 1), only two (usually x and y) are necessary to define the colour:

$$z = 1 - x - y$$

image

Figure 5.17   Relative CIEXYZ tristimulus values for the Gretag Macbeth Color Checker Chart for CIE illuminant C.

Be aware that chromaticity coordinates represent a three-dimensional phenomenon with only two variables. Thus, to fully specify a colour one of the tristimulus values also has to be reported. Usually this is the Y tristimulus, which relates to the luminance (in absolute colorimetry) or the reflectance/transmittance factor (in relative colorimetry) of the colour, and the colour is then specified by the triplet x, y, Y. It is possible to recover X and Z from chromaticities and luminance by:

$$X = \frac{x}{y}\,Y, \qquad Z = \frac{1-x-y}{y}\,Y$$
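The chromaticity normalization and the recovery of X and Z from the x, y, Y triplet can be sketched as a round trip; the tristimulus values below are purely illustrative.

```python
# Chromaticity coordinates (Eqn 5.10) and recovery of X and Z from x, y, Y.

def xyz_to_xy(X, Y, Z):
    """Project tristimulus values onto the chromaticity plane."""
    s = X + Y + Z
    return X / s, Y / s

def xyY_to_XYZ(x, y, Y):
    """Invert the projection: X = (x/y)Y, Z = ((1 - x - y)/y)Y."""
    X = x / y * Y
    Z = (1.0 - x - y) / y * Y
    return X, Y, Z

X, Y, Z = 41.24, 21.26, 1.93          # illustrative tristimulus values
x, y = xyz_to_xy(X, Y, Z)
X2, Y2, Z2 = xyY_to_XYZ(x, y, Y)
print(round(X2, 2), round(Y2, 2), round(Z2, 2))   # recovers 41.24 21.26 1.93
```

The round trip is lossless because the chromaticity projection discards only an overall scale, which the retained Y value restores.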

The CIE 1931 x, y diagram lacks uniformity in that perceptual differences between colours do not correspond to equal distances in the diagram. The CIE currently recommends the CIE 1976 uniform chromaticity scales (UCS) diagram, with the u′, v′ chromaticities defined by:

$$u' = \frac{4X}{X+15Y+3Z}, \qquad v' = \frac{9Y}{X+15Y+3Z}$$

The third chromaticity coordinate, w′, is equal to 1 – u′ – v′.
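The u′, v′ transformation above is a simple projective mapping of the tristimulus values; a minimal sketch:

```python
# CIE 1976 UCS chromaticities: u' = 4X/(X + 15Y + 3Z), v' = 9Y/(X + 15Y + 3Z).

def uv_prime(X, Y, Z):
    d = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / d, 9.0 * Y / d

# For the equi-energy stimulus (X = Y = Z) the denominator is 19Y:
u, v = uv_prime(1.0, 1.0, 1.0)
print(round(u, 4), round(v, 4))   # 4/19 ~ 0.2105, 9/19 ~ 0.4737
```

The third coordinate, w′ = 1 − u′ − v′, needs no separate computation.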

Both the x, y and u′, v′ chromaticity diagrams are shown in Figure 5.19. The outer curved boundary in the diagrams is called the spectral locus and is defined by the chromaticity coordinates of the spectral colours; the corresponding wavelengths are shown around the locus line. All visible colours are represented in the area between the spectral locus and the purple boundary, the straight line connecting the two ends of the spectral locus, where there are no corresponding spectral colours, since ‘purples’ are only produced by mixing red and blue light. Colours nearer the spectral locus are more saturated than colours located in the central areas of the diagrams, which tend to zero saturation (i.e. lack of hue – neutrals or achromatic colours). The chromaticity coordinates of the white point, i.e. of the light source illuminating the colour objects of interest or of the illuminant used in the measurements, should be plotted along with the chromaticity coordinates of the colours. A change of white point will alter the position of all colours in a chromaticity diagram, except the spectral colours (which are derived with respect to SE), since the source or illuminant is taken into account in the calculation of the tristimulus values of all colour stimuli (Eqn 5.8).

image

Figure 5.18   The derivation of XYZ primaries with respect to the RGB 1931 chromaticity diagram.

image

Figure 5.19   The spectral loci on x, y and u′, v′.

The colour spacing provided by the CIE 1976 UCS chromaticity diagram is nearly uniform: the ratio between the largest and smallest distances representing equal perceptual differences between colours is only about 4:1, as shown in Figure 5.20 – a large improvement over the x, y diagram, in which the same ratio can be as high as 20:1. Unfortunately, the CIE x, y diagram is still widely employed, especially in the digital imaging literature and in software representing colour gamuts of imaging systems (see Chapter 23). Its use often leads to misinterpretations, such as in the example illustrated in Figure 5.21, which shows the RGB primaries of two RGB colour spaces (see Chapter 23) in both the CIE x, y and u′, v′ diagrams. The positions of the respective primaries do not differ greatly, as shown in the uniform u′, v′ chromaticity diagram. In the x, y diagram, however, the green primary of colour space B is positioned much further from the green primary of colour space A, giving a false impression of a much greater colour gamut for colour space B. In general, the comparison of gamuts of imaging systems in chromaticity diagrams can be misleading due to the lack of the ‘third dimension’; they should therefore be interpreted with caution.

image

Figure 5.20   Visually equal chromaticity steps at constant luminance on the CIE x, y and u′, v′ chromaticity diagrams.

From Hunt (2004); reproduced with permission of Wiley-Blackwell

CIE uniform colour spaces and colour differences

In 1976 the CIE proposed two colour spaces, the CIELUV and the CIELAB spaces, that unlike the CIEXYZ system extend tristimulus colorimetry to three-dimensional spaces with dimensions that correlate with perceived hue, lightness and chroma (i.e. three main perceptual attributes of colour). This was accomplished by incorporating elements to account for visual chromatic adaptation (see later in the chapter) and for the non-linear visual response to light energy (see Chapter 4). Accounting for the non-linear visual response provided visual uniformity to the colour spaces and allowed the measurement of visually meaningful colour differences between two colour stimuli, by taking Euclidean differences between two points in these spaces.

The derivation of the colour coordinates of CIELUV and CIELAB spaces is not dissimilar. Both spaces employ a common uniform lightness scale, L*, which, when combined with two colour coordinates (u* and v* for CIELUV, or a* and b* for CIELAB), provides a three-dimensional colour space. Here we will present only the CIELAB space, which is the predominant CIE colour space used in colour imaging applications.

The CIE 1976 (L*, a*, b*) colour space, referred to as CIELAB, is defined by normalizing the tristimulus values, X, Y and Z, of the colour to those of the white of the source or illuminant, Xn, Yn and Zn, and then subjecting the ratios to a cube-root function. This models the relationship between physical energy measurements and perceptual responses at all but very low levels of illumination, where the relationship is approximately linear:

$$L^* = 116\,f\!\left(\frac{Y}{Y_n}\right) - 16 \qquad (5.13a)$$
$$a^* = 500\left[f\!\left(\frac{X}{X_n}\right) - f\!\left(\frac{Y}{Y_n}\right)\right] \qquad (5.13b)$$
$$b^* = 200\left[f\!\left(\frac{Y}{Y_n}\right) - f\!\left(\frac{Z}{Z_n}\right)\right] \qquad (5.13c)$$

where f(x) is defined differently for very low and for normal and high ratios:

$$f(x) = x^{1/3} \quad \text{for } x > 0.008856 \qquad (5.13d)$$
$$f(x) = 7.787\,x + \frac{16}{116} \quad \text{for } x \le 0.008856 \qquad (5.13e)$$

image

Figure 5.21   RGB primaries for two different RGB spaces, A (primaries represented by circles) and B (primaries represented by triangles), in both x, y and u′, v′ chromaticity diagrams. The diamond represents the white point, D65.

The ratios X/Xn, Y/Yn and Z/Zn are rarely smaller than 0.01 in imaging applications and therefore the function 5.13d is mostly employed in Eqns 5.13a, b and c.
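The XYZ-to-CIELAB conversion of Eqn 5.13, including the two-part f() for low and normal ratios, can be sketched as follows; the white-point values used in the example are illustrative.

```python
# XYZ -> CIELAB (Eqn 5.13), with the two-part f(): cube root above the
# threshold 0.008856 = (6/29)^3, linear segment 7.787*t + 16/116 below it.

def f(t):
    return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0

def xyz_to_lab(X, Y, Z, Xn, Yn, Zn):
    """Convert tristimulus values to L*, a*, b*, given the white Xn, Yn, Zn."""
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b = 200.0 * (fy - fz)
    return L, a, b

# The reference white itself maps to L* = 100, a* = b* = 0 (illustrative white):
L, a, b = xyz_to_lab(95.05, 100.0, 108.9, 95.05, 100.0, 108.9)
print(L, a, b)   # 100.0 0.0 0.0
```

Normalizing to the white point is the crude chromatic-adaptation step built into CIELAB: any stimulus equal to the adopted white lands on the achromatic axis.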

The measure L* is a correlate for perceived lightness; it ranges from 0.0 for black to 100.0 for a diffuse white, and can exceed 100.0 only in cases of specular highlights. The measures a* and b* are approximate correlates of perceived red–greenness and yellow–blueness respectively; they take both positive (a* for red, b* for yellow) and negative (a* for green, b* for blue) values, and are both 0.0 for achromatic stimuli (i.e. neutrals). The L*, a*, b* dimensions can be considered as the Cartesian coordinates of a three-dimensional colour space, in which highly chromatic colours are located at the extremities of the space and less chromatic colours towards the centre, near the L* axis.

From a* and b*, predictors for perceived chroma, C*ab, and hue angle in degrees, hab, are derived, in which case the colour space can be represented in terms of cylindrical coordinates, as illustrated in Figure 5.22:

$$C^*_{ab} = \left(a^{*2} + b^{*2}\right)^{1/2}, \qquad h_{ab} = \arctan\!\left(\frac{b^*}{a^*}\right)$$

Achromatic stimuli have C*ab equal to 0.0. hab ranges from 0° (located on the positive a* axis) to 360°.
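Computing the cylindrical coordinates from a* and b* is a matter of Pythagoras and a four-quadrant arctangent mapped into 0–360°; the sample values are illustrative.

```python
# Chroma and hue angle from a*, b*: C*ab = sqrt(a*^2 + b*^2),
# hab = atan2(b*, a*) folded into the range 0-360 degrees.
import math

def chroma_hue(a, b):
    C = math.hypot(a, b)                       # C*ab
    h = math.degrees(math.atan2(b, a)) % 360.0 # hab, four-quadrant
    return C, h

C, h = chroma_hue(-30.0, 30.0)     # illustrative sample in the green-yellow quadrant
print(round(C, 2), round(h, 1))    # 42.43 135.0
```

Using `atan2` rather than `atan(b/a)` is important: it places the hue angle in the correct quadrant and avoids division by zero when a* = 0.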

image

Figure 5.22   CIELAB cylindrical coordinates.

It is important to note that the location of all colours in the CIELAB space, including the spectral colours, depends on the white of the illuminant (due to the normalization of the tristimulus values of the colour to the white of the source or illuminant). Also, the perceptual unique hues (i.e. red, yellow, green and blue) do not align directly with the a* and b* axes. Under daylight illumination they are located at hue angles of approximately 24° (red), 90° (yellow), 162° (green) and 246° (blue), but under different illuminants they lie at slightly different hue angles.

Colour differences between pairs of colour stimuli, ΔE*ab, in perceptual units are measured as the Euclidean distances between the Cartesian coordinates of the two stimuli (Eqn 5.14a), or they can be expressed in terms of lightness, chroma and hue differences (Eqns 5.14b and c):

$$\Delta E^*_{ab} = \left[(\Delta L^*)^2 + (\Delta a^*)^2 + (\Delta b^*)^2\right]^{1/2} \qquad (5.14a)$$
$$\Delta E^*_{ab} = \left[(\Delta L^*)^2 + (\Delta C^*_{ab})^2 + (\Delta H^*_{ab})^2\right]^{1/2} \qquad (5.14b)$$
$$\Delta H^*_{ab} = \left[(\Delta E^*_{ab})^2 - (\Delta L^*)^2 - (\Delta C^*_{ab})^2\right]^{1/2} \qquad (5.14c)$$

Typical perceptibility tolerances (i.e. just-perceptible differences) for uniform colour patches are as low as approximately 1.0 ΔE*ab unit, but for complex scenes (such as complex images, where colours are intermingled) they usually range between 2.5 and 4.0 ΔE*ab units, depending on the medium and the level of luminance.
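The Euclidean distance of Eqn 5.14a is trivial to compute; the pair of coordinates below is illustrative, chosen so the difference works out to a round number.

```python
# CIELAB colour difference (Eqn 5.14a): Euclidean distance in L*, a*, b*.
import math

def delta_e_ab(lab1, lab2):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# Illustrative pair: component differences of (1, 2, 2) units give dE = 3,
# i.e. clearly visible on a uniform patch, near threshold in a complex image.
dE = delta_e_ab((50.0, 10.0, 10.0), (51.0, 12.0, 12.0))
print(dE)   # 3.0
```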

The CIELAB space was designed so that colour differences are perceptually uniform throughout the colour space, i.e. equal perceptual differences between pairs of stimuli correspond to equal distances in the space. This is not entirely achieved in practice: research in recent decades has revealed discrepancies between observed and measured differences. An example is given by lines of constant perceived hue plotted in a CIELAB a*, b* plane, as shown in Figure 5.23; these lines are curved, particularly for the blue and red hues. In an effort to compensate for these non-uniformities, the CIE has published two newer colour-difference formulae, CIE94 and CIEDE2000. The latter, which is the current CIE recommendation, is described in Appendix B.

Metamerism and types of metameric matches

According to Grassmann’s second law, stimuli with different spectral characteristics may produce the same colour match. Stimuli that match in colour appearance but differ in spectral composition are known as metamers; the phenomenon is referred to as metamerism and the match between two such colours as a metameric match. Metamers have the same colorimetry for a given illuminant and observer. Metamerism is an essential element in colorimetry – indeed, it is what makes colorimetry possible: the colour matching functions defining the standard observer derive from the fact that colours can be matched by additive mixtures of three selected primaries. Spectral colours cannot be matched directly by such mixtures, because they possess the highest attainable purity. In the colour matching experiment this problem is solved by adding one primary to the monochromatic stimulus (i.e. the reference side of the match) to desaturate it, so that the mixture of the remaining primaries (i.e. the test side of the match) can match it. Mathematically, a metameric match can be described for the CIEXYZ system by:

image

Figure 5.23   Lines of constant perceived hue plotted in the CIELAB a*, b* plane.

Adapted from Hung and Berns (1995); reproduced with permission of R.S. Berns

$$X = \sum_{\lambda} s_1(\lambda)\,\bar{x}(\lambda)\,\Delta\lambda = \sum_{\lambda} s_2(\lambda)\,\bar{x}(\lambda)\,\Delta\lambda$$
$$Y = \sum_{\lambda} s_1(\lambda)\,\bar{y}(\lambda)\,\Delta\lambda = \sum_{\lambda} s_2(\lambda)\,\bar{y}(\lambda)\,\Delta\lambda \qquad (5.15)$$
$$Z = \sum_{\lambda} s_1(\lambda)\,\bar{z}(\lambda)\,\Delta\lambda = \sum_{\lambda} s_2(\lambda)\,\bar{z}(\lambda)\,\Delta\lambda$$

where X, Y and Z are the CIE 1931 tristimulus values. Note that constant k in the definition of the tristimulus values (see Eqn 5.8) is dropped here, or, rather, incorporated in the stimulus functions, s1(λ) and s2(λ), of the two stimuli. The functions s1(λ) and s2(λ) may represent:

•  For emissive stimuli, the spectral power distributions, P1(λ) and P2(λ), of two different sources or illuminants (i.e. s1(λ) = P1(λ) and s2(λ) = P2(λ)). This is the case where two different sources have the same tristimulus values and appear to have the same colour. Typically, one source will have a continuous spectrum and the second a selective narrowband spectral power distribution. Note that, if the two sources illuminate a spectrally selective object, the object will not generally appear the same.

•  For objects, the product of the spectral reflectance or transmittance of two objects, R1(λ) and R2(λ), and the spectral radiance distribution (or the relative spectral power distribution) of the light source or illuminant that illuminates them, P(λ) (i.e. s1(λ) = P(λ)R1(λ) and s2(λ) = P(λ)R2(λ)). In this case, if a different illuminant is used the colours of the objects will probably not match.

•  For objects, the product of the spectral reflectance or transmittance of two objects, R1(λ) and R2(λ), and the spectral radiance distributions (or the relative spectral power distributions) of two different light sources or illuminants, P1(λ) and P2(λ) (i.e. s1(λ) = P1(λ)R1(λ) and s2(λ) = P2(λ)R2(λ)). This is the rare case where different coloured objects illuminated by different illuminants match in appearance.

image

Figure 5.24   Spectral reflectances of a metameric pair under illuminant D50. The spectral curves of metameric spectra cross each other at least three times.

In most cases a metameric match is particular to one illuminant (illuminant metamerism) and one observer (observer metamerism); when either the illuminant or the observer is replaced, the match ceases to exist. Hence, if in Eqns 5.15 the CIEXYZ colour matching functions are replaced with another set, the matches will break down. A metameric match may hold under a second illuminant if the spectral reflectances or transmittances of the two stimuli are equal at three or more wavelengths. Observer metamerism cannot be eliminated in practical situations, but the CIE 2° colorimetric observer satisfactorily represents real, non-colour-defective observers. Figure 5.24 presents a metameric pair of reflectance spectra under illuminant D50.

In 1953 Wyszecki indicated that the spectral power distribution of a stimulus, s(λ), consists of a fundamental function, s0(λ), which is inherently associated with the tristimulus values, and a function, k(λ), called the metameric black, unique to each metamer, which has tristimulus values (0, 0, 0) and therefore contributes nothing to the colour specification. The metameric black is invisible to the eye; when it is added to any spectral power distribution s0(λ), the resulting spectrum s(λ) = s0(λ) + k(λ) is a metamer of s0(λ).
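Wyszecki’s decomposition can be demonstrated numerically. The four-sample ‘colour matching functions’ below are toy values chosen so that the metameric black is easy to verify by hand; they are not real CIE data.

```python
# Sketch of Wyszecki's metameric black: a spectrum k with tristimulus (0, 0, 0);
# adding it to any stimulus s0 yields a metamer s = s0 + k.
# Toy 4-sample "CMFs", chosen for easy hand-checking -- NOT real CIE data.

xbar = [1.0, 0.0, 1.0, 0.0]
ybar = [0.0, 1.0, 0.0, 1.0]
zbar = [1.0, 1.0, 0.0, 0.0]

def tristimulus(s):
    """Discrete tristimulus sums of spectrum s against the toy CMFs."""
    return tuple(sum(si * ci for si, ci in zip(s, cmf))
                 for cmf in (xbar, ybar, zbar))

k  = [1.0, -1.0, -1.0, 1.0]          # metameric black: integrates to (0, 0, 0)
s0 = [2.0, 2.0, 2.0, 2.0]            # a flat stimulus
s  = [a + b for a, b in zip(s0, k)]  # a visibly different spectrum...

print(tristimulus(k))                     # (0.0, 0.0, 0.0)
print(tristimulus(s0) == tristimulus(s))  # True -- s0 and s are metamers
```

With only three tristimulus sums constraining four spectral samples, a one-dimensional family of metameric blacks exists; real spectra, sampled at dozens of wavelengths, admit a far larger family, which is why metamers are so common.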

THE APPEARANCE OF COLOURS

Having spent a large part of this chapter on CIE colorimetry, we should now appreciate that the appearance of colours is affected by the conditions under which they are viewed (i.e. the viewing conditions – see later in the chapter), such as the illumination level and colour, the background colours, the degree of visual adaptation and the extent to which the illuminant is discounted, as well as the spatial structure of the stimuli. While the CIE system of colorimetry is extremely useful, it does not account for the viewing conditions: two stimuli with identical CIEXYZ tristimulus values will only match under very specific viewing conditions. For instance, it is not usually appropriate to represent colours on a reflection print by the same tristimulus values as on an LCD. In this section we introduce a number of visual mechanisms and phenomena that limit the validity of simple colorimetry and show that tristimulus values require further treatment for a successful colour match.

Visual adaptation and related mechanisms

We saw in Chapter 4 that the full range of luminances to which our visual system is sensitive spans roughly 13 orders of magnitude (from approximately 10−6 to 106 cd m−2). However, not all these levels can be perceived simultaneously. The human visual system is capable of decreasing or increasing its sensitivity according to the mean level of illumination, i.e. light and dark adaptation. It is due to light adaptation that we do not see the stars during the day, although they are present in the sky: the mean level of luminance in daytime is several orders of magnitude higher than at night, and we simply cannot perceive them. Conversely, it is due to dark adaptation that, although we cannot see objects in a dark room when we first come in from a well-lit place, we start distinguishing them after several minutes.

The processes of light and dark adaptation have a considerable impact on the perceived colour of objects, but a third type of adaptation is far more important: chromatic adaptation, the ability of the visual system to adjust – to a large extent – its colour sensitivity according to the colour of the illumination. This can be thought of as the visual system’s ability to keep the white point in a scene more or less consistently white, independently of the colour of the illuminating light source, and shift all the colours in the scene relative to it. It generally applies to the range of black-body illuminants.

When reading a novel we always consider the pages of the book as being white, independently of whether we read in our bedroom, most often lit by incandescent illumination, which is predominantly yellow, or in the park under daylight, which is predominantly blue. Further, when we look for a familiar garment in our house, we can often identify it despite changes in the colour of the illumination. Yet the amount of light reflected from such objects, as well as their tristimulus values, is quite different under the different sources. For example, the Z tristimulus value changes by a factor of 3 when the illuminant changes from A to D for a sample that looks white under both sources. Figure 5.25 illustrates a book illuminated by daylight and by incandescent light, as seen by a system incapable of chromatic adaptation.

The examples above indicate that chromatic adaptation is important in maintaining the perceived colour of objects despite changes in illuminant. This phenomenon is known as colour constancy, which is also assisted by two other mechanisms: memory colour (the ability to recognize objects through a ‘typical’ colour associated with them) and discounting the illuminant (the ability to perceive the colour of objects after interpreting the colour of the illuminant and discounting it). All of these have a considerable influence on the appearance of the colour of objects.

Another situation where chromatic adaptation is evident is in (negative) after-images, which occur when the eye’s cones are over-stimulated by a constant stimulus and, as a result, adapt by decreasing their sensitivity. After-images result from independent changes in the sensitivity of the cones and are manifested as colours complementary to the original stimulus. A cyan after-image is therefore produced when the eye is over-stimulated by a red stimulus, which fatigues the red-sensitive photoreceptors so that their sensitivity decreases and they produce a weaker signal. After-images are ‘seen’ when the eyes, after over-stimulation, are diverted to a relatively neutral colour, such as a white area: the weakened red response in the white area produces the cyan after-image. Similar explanations hold for all observed after-images. Figure 5.26 illustrates an example of after-images.

The equivalent of visual light and dark adaptation in photographic and digital imaging systems is automatic exposure: instead of the photographer deciding the aperture and shutter speed settings, the camera’s software sets them after using a light sensor to measure the mean light intensity at the image plane (see Chapters 11, 12 and 14). Similarly, the equivalent of chromatic adaptation is automatic white balance, available in digital cameras only. Colour films are balanced for specified lighting and colour temperatures, i.e. they cannot ‘adapt’ their sensitivity to the colour of the illumination; any mismatch can be compensated with suitable colour-balancing filters, typically placed over the lens or over the light source (see Chapter 3). Automatic white balance in digital cameras is achieved by the camera’s software, but it is not always effective (see Chapter 14). Unsuccessful white balance can create unpleasant blue, orange or green colour casts, which are particularly damaging in portraits, since the correct rendering of skin tones is of great importance to image quality. White balance in digital imaging is achieved via a white point conversion or a chromatic adaptation transformation, introduced later in the chapter.

image

Figure 5.25   A book illuminated by daylight (a) and incandescent light (b) as recorded by an imaging system incapable of chromatic adaptation.

Original image taken from a Kodak PhotoCD

The adaptation mechanisms mentioned in this section are all parts of the general mechanisms that make us less sensitive to a visual stimulus (or indeed most physical stimuli) when the physical intensity of the stimulus is increased. Visual adaptation is very important in imaging applications because images are viewed under a range of different illuminations, with luminance and colour characteristics often different from those in the original scene.

image

Figure 5.26   Illustration of after-images. Fix your eye on the black triangle in the middle for 30 seconds and then look at white paper. What colour are the after-images?

image

Figure 5.27   Examples of simultaneous contrast. (a) The grey patches in the centre of each square are physically identical, but appear lighter as the background becomes darker. (b) The pairs of green, red and grey patches are physically identical but appear yellower and darker on the blue background compared to a bluer and lighter appearance on the yellow background.

Other colour appearance phenomena

In this section we discuss briefly some more important visual phenomena that are due to changes in the viewing conditions. More details and a more extensive list can be found in Fairchild (2004).

Simultaneous contrast causes a stimulus to shift in colour appearance when the colour of its background is changed (Figure 5.27). The shift in appearance follows the opponent theory of colour vision: a light background will cause a stimulus to appear darker, a green background will cause it to appear redder, a blue background will cause it to appear yellower, and so on. Crispening is the increase in the perceived magnitude of colour differences between two stimuli of similar hue when the background on which they are viewed is itself similar in hue to the stimuli (Figure 5.28). Spreading is the apparent mixture of a colour stimulus with its surround at the point of spatial fusion, i.e. where the stimulus and the background are no longer resolved individually but blend together. Examples of spreading are half-tone dots and CRT coloured phosphors, which cannot be resolved beyond a certain viewing distance despite being individual colour points. In all these phenomena, which are directly related to the spatial structure of the stimuli, the colorimetry of the colour stimuli is unchanged but their appearance differs.

The Bezold–Brücke hue shift is the phenomenon in which the hue of a monochromatic stimulus changes with its luminance. Although it is widely assumed that hue is specified merely by wavelength, this is not true; hue does not remain constant as the luminance level changes. Another phenomenon related to hue changes is the Abney effect, in which hue changes with colorimetric purity: it has been demonstrated that, when white light is added to a monochromatic stimulus, its perceived hue shifts. The Helson–Judd effect states that under highly chromatic illuminants bright objects appear to have the same hue as the illuminant, whereas dark objects appear opposite in hue.

In CIE colorimetry the Y tristimulus value defines the luminance, or the luminance factor, of a stimulus, and it is assumed that perceived brightness is a function of Y alone. The Helmholtz–Kohlrausch effect invalidates this assumption: at constant luminance, the perceived brightness of a stimulus increases with increasing saturation, making brightness dependent not only on the luminance but also on the chromaticity of the stimulus.

image

Figure 5.28   An example of crispening. The pairs of red patches are identical on all backgrounds.

Further, the Hunt and Stevens effects describe phenomena in which the colourfulness and the contrast of stimuli, respectively, increase with luminance. This is apparent on bright sunny days, when colours appear vivid and of high contrast. Likewise, Bartleson and Breneman in the 1960s observed that the perceived contrast in images varies with luminance level and surround (see the example in Figure 5.29). The Stevens and Bartleson–Breneman effects are taken into account in many imaging applications. For example, because perceived contrast decreases with luminance, motion-picture films and colour slides, viewed in dark environments, are designed with high physical, sensitometric contrast to compensate. The same is true of images transmitted for television, which are usually viewed in dim environments (see Chapter 21).

AN INTRODUCTION TO CATs AND CAMs

Colour appearance models (CAMs) aim to extend basic colorimetry by specifying the perceived colour of stimuli under a wide range of viewing conditions. They provide mathematical formulae to transform physical measurements of the stimulus and the viewing environment into correlates of the perceptual attributes of colour, such as lightness, chroma and hue.

The most important step in colour appearance modelling is accounting for visual chromatic adaptation. This is achieved using chromatic adaptation transformations (CATs) (also called chromatic adaptation models). CAT is the computation (or modelling) of the corresponding colours under a reference illuminant for a stimulus defined under a test illuminant. Corresponding colours are stimuli that maintain their appearance when viewed under different viewing conditions. For example, a colour with (XYZ)1 tristimulus values in one set of viewing conditions might appear the same as another colour, specified with (XYZ)2 tristimulus values in another set of viewing conditions. (XYZ)1, (XYZ)2 and the respective viewing conditions represent a pair of corresponding colours.

A generalized CAT

Most chromatic adaptation models are based on the hypothesis of Johannes von Kries (1902), which postulates that the cones adjust their individual sensitivities independently and in a linear fashion. For example, under a red light the cones sensitive to longer wavelengths become less sensitive, while the others are little affected. Von Kries provided a specific set of equations which today represent what is referred to as a von Kries transformation. A generalized CAT is presented below. Readers should refer to Fairchild (2004), Kang (2006), Westland (2004) and other relevant publications for detailed descriptions of several modern models, such as Nayatani’s, Hunt’s, Fairchild’s, the Retinex theory and the BDF transform.

image

Figure 5.29   The perceived lightness and contrast of the black-and-white photograph changes with the background luminance.

Based on the von Kries hypothesis, the tristimulus values of a stimulus for two different illuminants, a source and a destination illuminant, are related by a linear transformation:

$$\begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix} = T \begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix}$$

where T represents the transformation, X1, Y1 and Z1 are the tristimulus values of the stimulus under the source illuminant and X2, Y2 and Z2 those under the destination illuminant. To model accurately the physiological mechanisms of chromatic adaptation, the stimulus needs to be expressed in terms of cone responsivities, L, M and S, instead of the CIEXYZ tristimulus values. Cone responsivities can be approximated by a linear transformation of the CIEXYZ tristimulus and thus the conversion matrix T takes the form:

$$T = M_{\mathrm{CAT}}^{-1}\, D\, M_{\mathrm{CAT}}$$

where matrix MCAT converts the sample tristimulus values into cone responsivities. The diagonal transformation, D, is then applied to transform the cone responsivities under the source illuminant (L1, M1, S1) into those under the destination illuminant (L2, M2, S2). Finally, a second linear transformation, MCAT−1, converts them back to tristimulus space. The diagonal matrix, D, has non-zero elements only on its diagonal, which contains the destination-to-source ratios of the cone responsivities:

$$D = \begin{bmatrix} L_2/L_1 & 0 & 0 \\ 0 & M_2/M_1 & 0 \\ 0 & 0 & S_2/S_1 \end{bmatrix}$$

The matrix MCAT is specific to each model. In the current CIE colour appearance model it converts the sample tristimulus values into ‘sharpened’ cone responsivities, which are similar to cone responsivities. Differences among the various CATs are also due to the elements of the diagonal transformation, D, in which the viewing conditions are taken into account. The generalized transformation can therefore be expressed by:

$$\begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix} = M_{\mathrm{CAT}}^{-1}\, D\, M_{\mathrm{CAT}} \begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix}$$
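The generalized von Kries transform above can be sketched in a few lines. The 3 × 3 matrix used here is the Bradford matrix, one common choice of MCAT, adopted as an assumption for illustration (other CATs, such as CAT02, use different matrices); the white-point values are likewise illustrative.

```python
# Generalized von Kries CAT: XYZ2 = M^-1 . D . M . XYZ1, where
# D = diag(L2/L1, M2/M1, S2/S1) is built from the two white points.
# M_CAT below is the Bradford matrix -- an illustrative choice, not the
# only one; other CATs use different sharpened-cone matrices.
import numpy as np

M_CAT = np.array([[ 0.8951,  0.2664, -0.1614],
                  [-0.7502,  1.7135,  0.0367],
                  [ 0.0389, -0.0685,  1.0296]])

def von_kries_adapt(xyz, white_src, white_dst):
    lms_src = M_CAT @ np.asarray(white_src)   # cone responses of source white
    lms_dst = M_CAT @ np.asarray(white_dst)   # cone responses of destination white
    D = np.diag(lms_dst / lms_src)            # diagonal scaling of cone signals
    return np.linalg.inv(M_CAT) @ D @ M_CAT @ np.asarray(xyz)

# Sanity check: the source white itself must map onto the destination white.
w_src = np.array([109.85, 100.0, 35.58])   # illustrative warm (tungsten-like) white
w_dst = np.array([95.05, 100.0, 108.9])    # illustrative daylight white
adapted = von_kries_adapt(w_src, w_src, w_dst)
print(adapted)   # approximately [95.05, 100.0, 108.9]
```

Mapping white to white is the defining property of a von Kries transform: scaling each cone channel by the destination-to-source ratio guarantees it by construction.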

Colour appearance models (CAMs)

Although CATs extend tristimulus colorimetry towards prediction of colour appearance they cannot be used to describe the actual appearance of a stimulus. Appearance attributes have been defined in Table 5.1 as brightness, colourfulness and hue (i.e. absolute), and lightness, chroma, saturation and hue (i.e. relative).

According to the CIE Technical Committee (TC1-34): ‘A colour appearance model (CAM) is any model that includes predictors of at least the relative colour appearance attributes of lightness, chroma and hue. For a model to include reasonable predictors of these attributes, it must include some form of chromatic adaptation transform. Models must be more complex to include predictors of brightness, colourfulness or to model luminance dependent effects such as the Stevens or the Hunt effects.’ This definition allows the CIELAB and CIELUV colour spaces to be considered as CAMs, since both include a form of chromatic adaptation (i.e. the normalization of the tristimulus values to the white) and have predictors for lightness, chroma and hue. However, they lack the sophistication to predict luminance-dependent effects and background and surround effects, and they do not include correlates for brightness and colourfulness.

The first colour appearance model was introduced by Hunt (1982, 1985), and almost in parallel one was introduced by Nayatani (1986). CIECAM97s (1997) was the first CAM recommended by the CIE and was an interim model; it was a great success and led to progress in colour appearance modelling. The currently recommended CIECAM02 (2002) is simpler and more effective than the previous CIE model. Table 5.2 presents the input and output data for CIECAM02. CAMs are computationally intensive, and they are of great importance for cross-media image rendition. Readers should refer to the references mentioned earlier in this section for extended reading on CAMs.

COLOUR REPRODUCTION

The colour reproduction of various photographic and imaging media is introduced in separate chapters, dedicated to specific processes and media. In general, colours can be reproduced either in self-emitting media (such as displays) or by using reflected or transmitted light (such as in prints and slides). As we saw in Chapter 1, there are two types of colour reproduction: additive and subtractive. A brief reminder: additive colour systems use additive mixing of chosen primaries to produce all colours, in a similar fashion to the colour matching experiments. For example, in an LCD display, red, green and blue filtered sub-pixels are used to reproduce all possible colours for each displayed pixel (see Chapter 15). Since the sub-pixels are very small, they are visually unresolved at a typical viewing distance, and the light from the LCD light source transmitted through them is additively mixed in our retina. When reflected light is used, colour reproduction is predominantly achieved by subtracting light energy from the source spectrum illuminating the medium. One example of a subtractive colour system is the photographic print, which uses a cyan dye that absorbs (i.e. subtracts) long wavelengths corresponding to reddish hues, a magenta dye that absorbs middle wavelengths corresponding to greenish hues, and a yellow dye that absorbs short wavelengths corresponding to bluish hues.

Table 5.2   Input and output data for CIECAM02 (c, Nc, FLL and F are all constants provided for different viewing conditions)

INPUT DATA

X, Y, Z: Relative tristimulus values of the colour stimulus in the source conditions
LA: Luminance of the adapting field (cd m−2)
Xw, Yw, Zw: Relative tristimulus values of white
Yb: Relative luminance of the background
c: Impact of surround
Nc: Chromatic induction factor
FLL: Lightness contrast factor
F: Degree of adaptation factor

OUTPUT DATA (APPEARANCE CORRELATES)

J: Lightness
Q: Brightness
C: Chroma
s: Saturation
M: Colourfulness
h: Hue angle
H: Hue composition
aM, bM: Cartesian colour coordinates derived from colourfulness and hue
aC, bC: Cartesian colour coordinates derived from chroma and hue
as, bs: Cartesian colour coordinates derived from saturation and hue
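One of the first steps inside CIECAM02 is a chromatic adaptation transform (CAT02), which maps the tristimulus values of the stimulus from the source viewing conditions towards the adapted state. The sketch below shows only that step, simplified to full adaptation (D = 1) rather than the model's full degree-of-adaptation formula; the matrix coefficients are the standard published CAT02 values.

```python
# Sketch of the CAT02 chromatic adaptation step used inside CIECAM02,
# simplified to full adaptation (D = 1): a von Kries-type scaling carried
# out in a "sharpened" RGB space. Matrix values are the standard CAT02
# coefficients (forward and published rounded inverse).

M_CAT02 = [[ 0.7328, 0.4296, -0.1624],
           [-0.7036, 1.6975,  0.0061],
           [ 0.0030, 0.0136,  0.9834]]

M_CAT02_INV = [[ 1.096124, -0.278869, 0.182745],
               [ 0.454369,  0.473533, 0.072098],
               [-0.009628, -0.005698, 1.015326]]

def mat_vec(m, v):
    """3x3 matrix times 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def cat02_adapt(xyz, xyz_src_white, xyz_dst_white):
    """Map XYZ seen under the source white to the corresponding XYZ
    under the destination white, assuming full adaptation."""
    rgb   = mat_vec(M_CAT02, xyz)
    rgb_s = mat_vec(M_CAT02, xyz_src_white)
    rgb_d = mat_vec(M_CAT02, xyz_dst_white)
    rgb_c = [c * d / s for c, s, d in zip(rgb, rgb_s, rgb_d)]
    return mat_vec(M_CAT02_INV, rgb_c)

# Adapting the source white itself must reproduce the destination white.
d65 = [95.047, 100.0, 108.883]    # CIE illuminant D65 white point
a   = [109.850, 100.0, 35.585]    # CIE illuminant A white point
print(cat02_adapt(d65, d65, a))   # ~[109.85, 100.0, 35.585]
```

The full model goes on to compute the appearance correlates of Table 5.2 (J, Q, C, s, M, h, H) from the adapted signals, using the viewing-condition constants c, Nc, FLL and F.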

The aim of colour reproduction in many imaging applications is a ‘faithful’ reproduction of colour appearance. Faithful may mean different things in different applications, and in some cases it may be impossible. For example, it is very difficult to achieve in a printed image the high luminance range of an outdoor scene on a bright sunny day, since photographic paper has only a limited dynamic range (see Chapters 21 and 22). Similarly, it is impossible to reproduce the gamut of all surface colours on a computer monitor, since the physical primaries of the monitor are a limiting factor. Furthermore, a faithful reproduction might not be the reproduction preferred by most viewers.

Objectives of colour reproduction

R.G.W. Hunt in the early 1970s defined six different objectives of colour reproduction that are introduced in this section. The decision on which reproduction is appropriate depends on various criteria, including the purpose of the reproduction, restrictions imposed by the physics of imaging systems, illumination and viewing constraints.

Spectral colour reproduction refers to the reproduction where the spectral reflectance curves of the reproductions are the same as those of the original colour object. In this case the colour matching is independent of the illuminant or observer (i.e. no metameric matches are involved). Colours are matched under daylight, tungsten, fluorescent and other illuminations, and appear the same to all observers, independently of their colour vision. Spectral colour reproduction is desirable for colours in mail-order catalogues, for example, where the appearance of colours needs to be maintained for all viewing conditions; another example is the reproduction of garment colours. In traditional film photography, the cyan, magenta and yellow dyes used cannot achieve spectral colour reproduction. In image displays, such as computer and television displays, the spectral emission curves of the phosphors in cathode ray tubes, as well as the spectral transmittances of the colour filters of liquid crystal displays (see Chapter 15) are such that the relative spectral power distributions of the displayed colours are markedly different from those of the originals.

Colorimetric colour reproduction is defined as reproduction in which the colours have CIE chromaticities and relative luminances equal to those of the original, but the spectral power distributions may differ. In the case of photographic reflection prints, original and reproduction illuminants should have the same chromaticities. The colorimetry is usually carried out relative to a well-lit reference white in the original and relative to its reproduction in the picture to assure equal relative luminances. The same is valid for images displayed on electronic displays. Colorimetric colour reproduction is therefore useful in imaging, but we need to bear in mind that the appearance of colours can be affected, sometimes markedly, by the intensity of the illuminant (as well as other factors mentioned earlier) and thus colorimetric colour reproduction does not necessarily imply equality in the appearance of colours in the original and in the image.

In exact colour reproduction, the chromaticities, the relative luminances and the absolute luminances of the colours are equal in the original and the reproduction. Thus, differences in the luminance intensity existing in the colorimetric colour reproduction are eliminated here. This type of reproduction is also referred to as absolute colorimetric colour reproduction. In practice, the observer will rarely see the same colours in an exact colour reproduction and in the original. The appearance differences will be mainly due to viewing environment (surround, angle of subtense – see later), glare, visual adaptation state and spectral power distributions of the illuminants not being identical.

If a television studio is lit by tungsten light and the scene is reproduced with colorimetric accuracy in a viewing situation where the ambient light is daylight, the result will look too yellow to the viewers. Equivalent colour reproduction may be the appropriate type of reproduction in this example. It is defined as reproduction where the chromaticities and the relative and absolute luminances of the colours are such that, when seen in the picture-viewing conditions, they have the same appearance as in the original scene. Similarly, in corresponding colour reproduction the chromaticities and the relative luminances of the colours are such that, when viewed in the picture-viewing conditions, they have the same appearance as the colours in the original would have, if they had been illuminated to produce the same average luminance levels as those of the reproduction. The corresponding colour reproduction is related to the equivalent colour reproduction in the same manner that the colorimetric colour reproduction is to the exact colour reproduction. Both equivalent and corresponding colour reproductions are referred to as appearance colour reproductions.

Finally, preferred colour reproduction is defined as the reproduction where the colours depart from equality in appearance between original and reproduction in order to give a more pleasing reproduction, while the absolute or relative white in the original is maintained. There is considerable evidence that for many commonly encountered colours, such as skin colours and the colours of grass, blue sky and blue water, observers prefer reproductions that depart from the real-life colours (see also Chapter 19). Preferred colour reproduction may be the aim in many photographic applications, such as advertising photography, but the objective is rather difficult to achieve using automated reproduction procedures. In consumer digital imaging, imaging system manufacturers provide colour profiles optimized to produce ‘pleasing’ reproductions, which are built to satisfy observer/consumer opinion on the reproductions of commonly photographed subjects (details in Chapter 26).

According to H. Che-Li (2005), it is useful to divide any colour reproduction into three main categories: (1) subjective colour reproduction, which specifies what a desired reproduction should be according to the visual impression; (2) psychophysical colour reproduction, which converts the subjective criteria specified in (1) into physically quantifiable criteria; and (3) objective colour reproduction, which deals with calibrating and controlling imaging devices to achieve the desired reproduction in terms of physical quantities.

INSTRUMENTS USED IN COLOUR MEASUREMENT

There are three types of measurements in colour imaging: spectral, colorimetric and density measurements. In this section we introduce instruments used for spectral and colorimetric measurements. Densitometers, employed to measure densities from photographic prints and films and other hard-copy output, are discussed in Chapter 8.

The most complete description of a colour stimulus is its absolute spectral power distribution, which is a description of optical radiation (spectral radiance or spectral irradiance) as a function of wavelength. A spectroradiometer is used for this purpose (Figure 5.30). A spectroradiometer consists of a set of collecting optics that collimate the light coming from the stimulus, forming a beam that reaches a monochromator. A diffraction grating (or an optical prism) inside the monochromator disperses the light into its spectral components. Selected narrow bands (or ideally single wavelengths) from the incident light are focused on a single detector or multiple detectors, sampled and recorded. Modern spectroradiometers use charge-coupled device (CCD) arrays for recording because of their nearly linear input-to-output characteristics. A lamp with a known, calibrated spectral output is used for calibration of the instrument. It is possible to equip a spectroradiometer with a telescopic type of imaging lens, in which case the spectral radiant power can be measured – through a narrow aperture – from a very small area in the scene. This is a useful feature when the stimuli are complex scenes, and also for display point measurements – i.e. telespectroradiometry. Spectroradiometers typically measure over the range 380–780 nm, at a spectral resolution of 1–10 nm. The CIE indicates that for most spectral measurements a 2 nm sampling interval is sufficient. Spectroradiometers are employed for measuring both emissive and reflective spectra. For reflective spectra, an appropriate light source (white, with a smooth spectrum) is used to illuminate the subject.

image

Figure 5.30   Spectroradiometer.
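The sampled spectrum a spectroradiometer records is converted to tristimulus values by discrete summation, e.g. X = k·Σ S(λ)·x̄(λ)·Δλ over the sampling interval Δλ. The sketch below illustrates only the arithmetic: the five-sample "spectrum" and "colour matching function" arrays are toy placeholder numbers, not real CIE 1931 data, and the normalization (Y = 100 for this stimulus) is chosen purely for the example.

```python
# Sketch: tristimulus values from a sampled spectral power distribution by
# discrete summation, X = k * sum(S * xbar) * dlambda, and similarly for Y, Z.
# The CMF samples below are TOY placeholder numbers, not real CIE 1931 data;
# real calculations use tabulated CMFs, typically at 1-5 nm intervals.

DLAMBDA = 10  # sampling interval in nm (spectroradiometers use 1-10 nm)

S    = [0.8, 1.0, 1.2, 1.1, 0.9]       # measured spectral power (toy values)
xbar = [0.3, 0.2, 0.1, 0.05, 0.02]     # placeholder CMF samples
ybar = [0.04, 0.06, 0.09, 0.13, 0.17]
zbar = [1.7, 1.5, 1.2, 0.9, 0.6]

def tristimulus(spectrum, cmf, k, dl=DLAMBDA):
    """Discrete approximation of the tristimulus integral."""
    return k * sum(s * c for s, c in zip(spectrum, cmf)) * dl

# Normalizing constant chosen here so that Y = 100 for this stimulus.
k = 100 / (sum(s * c for s, c in zip(S, ybar)) * DLAMBDA)
X = tristimulus(S, xbar, k)
Y = tristimulus(S, ybar, k)
Z = tristimulus(S, zbar, k)
print(round(Y, 6))  # 100.0
```

The CIE's remark that a 2 nm interval suffices for most measurements reflects the fact that this summation converges quickly for smooth spectra; spiky sources such as fluorescent lamps benefit from finer sampling.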

For many imaging applications only the relative spectral power distributions of the stimuli are necessary. These can be measured using a spectrophotometer, an instrument that measures relative light flux with respect to the human luminous efficiency (see Chapter 2 – radiometry versus photometry). A spectrophotometer is a similar instrument to the spectroradiometer but has its own light source. It is not used to measure emissive stimuli, but only spectral reflectances or transmittances from objects. Spectrophotometers are employed to measure spectra from prints and film targets for calibrating printers and scanners, but are not used for displays. The measurement geometries for the sensor/illumination arrangement (see earlier in the chapter) may vary with application. The instrument is supplied with calibration targets, consisting of white and mid-grey tiles and a light trap for setting the black point (i.e. noise level).

Direct colorimetric measurements can be obtained with colorimeters that measure tristimulus values (CIEXYZ) and report these, or luminance and chromaticity (Y, x, y), or values from related colour spaces such as CIELAB, without having to measure spectral power distributions. Colorimeters use special colour filters in front of a light detector, which allow a device sensitivity that approximately matches the CIE 1931 colour matching functions and therefore have a relative response similar to that of the CIE 2° standard observer. Colorimeters are less expensive than spectroradiometers or spectrophotometers and can take faster measurements. Some colorimeters have an internal light source for measuring colour from reflective objects, but most measure only emissive stimuli or externally illuminated objects. Tristimulus values for samples under different illuminants might be an available function. Most colorimeters are hand-held instruments, such as those used for CRT measurements. Larger, more expensive devices are also available that employ telescopic optics and small apertures, such as those used for LCD measurements (Chapters 15 and 23).
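The (Y, x, y) values a colorimeter reports are a simple projection of the tristimulus values it measures, using the standard chromaticity relations x = X/(X + Y + Z) and y = Y/(X + Y + Z). A minimal sketch, using the D65 white point as an example:

```python
# Sketch: converting measured tristimulus values to luminance plus
# CIE 1931 chromaticity coordinates, as reported by a colorimeter.

def xyz_to_Yxy(X, Y, Z):
    """Return luminance Y and chromaticity coordinates (x, y)."""
    s = X + Y + Z
    return Y, X / s, Y / s

# Tristimulus values of the CIE D65 white point (Y normalized to 100).
Y, x, y = xyz_to_Yxy(95.047, 100.0, 108.883)
print(round(x, 4), round(y, 4))  # 0.3127 0.329
```

Since the projection discards the absolute scale of X and Z, the pair (x, y) locates the stimulus in the chromaticity diagram while Y carries the luminance information.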

VIEWING CONDITIONS

Colour perception is heavily affected by the environment in which the colour stimulus is viewed. Thus, when considering related colour stimuli (i.e. colours that are not viewed in isolation), or colour scenes and images, it is essential to describe appropriately the viewing environment or viewing conditions. The vocabulary used to describe the individual visual fields that comprise the overall viewing conditions is specified in Figure 5.31. This vocabulary is an essential component of colour appearance modelling:

image

Figure 5.31   The viewing environment in related colours.

•  Colour stimulus. The colour element of interest. It is considered as a uniform colour patch of 2° angular subtense. In imaging of spatially complex scenes it is difficult to say whether the stimulus is a pixel, a cluster of pixels or the entire image. The latter is sometimes assumed, but it is a simplification. At present, there is no universally agreed definition of the stimulus for complex scenes. When using the CIE's currently recommended CIECAM02 – designed for colour management and imaging applications – most applications assume that each pixel can be treated as a separate stimulus.

•  Proximal field. The intermediate environment of the colour stimulus, extending typically for about 2° from its edge in most or all directions. Again, in imaging it is very difficult to separate the stimulus from its proximal field. In most applications, the proximal field is assumed to be the same as the image background.

•  Background. The visual field extending 10° from the edge of the proximal field, in most or all directions. When the proximal field is considered part of the background, the latter is regarded as extending from the edge of the colour stimulus. In imaging, the background is usually considered as the area extending 10° from the entire image. The specification of background is very important in colour and image appearance, as illustrated in Figure 5.29.

•  Surround. The field outside the background, considered as the entire environment in which the stimulus is viewed. In imaging applications, the surround usually falls into one of three categories: dark, dim or average.
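The angular extents quoted above (a 2° stimulus, a background extending 10°) depend on both the physical size of the patch and the viewing distance. A small sketch of the standard visual-angle formula, θ = 2·arctan(size / (2 × distance)):

```python
# Sketch: angular subtense of a stimulus, theta = 2 * atan(size / (2 * d)).
# Used to decide, for instance, whether a patch falls within the 2-degree
# stimulus field or the 10-degree background field.

import math

def angular_subtense_deg(size, distance):
    """Visual angle in degrees of a stimulus of the given size seen at the
    given viewing distance (both in the same units)."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

# A 2 cm patch viewed from 57.3 cm subtends almost exactly 2 degrees.
print(round(angular_subtense_deg(2, 57.3), 2))  # 2.0
```

The same patch viewed from half the distance subtends roughly twice the angle, which is why viewing distance must be specified alongside image size when describing viewing conditions.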

The viewing environment, as described here, is used in colour and image appearance modelling. Although it is a simplification of the total environment in which related stimuli and images are viewed, it includes the most important factors affecting colour appearance.

BIBLIOGRAPHY

Che-Li, H., 2005. Introduction to Colour Imaging Science. Cambridge University Press, UK.

Fairchild, M.D., 2004. Color Appearance Models, second ed. Wiley, USA.

Grum, F.C., Bartleson, C.J., 1984. Optical Radiation Measurements. Academic Press, USA.

Hung, P.C., Berns, R.S., 1995. Determination of constant hue loci for a CRT gamut and their predictions using color appearance spaces. Color Research and Application 20, 285–295.

Hunt, R.W.G., 1998. Measuring Colour, third rev. ed. Fountain Press, UK.

Hunt, R.W.G., 2004. The Reproduction of Colour, sixth ed. Wiley, Chichester, UK.

Jacobson, R.E., Ray, S.F., Attridge, G.G., Axford, N.R., 2000. The Manual of Photography, ninth ed. Focal Press, Oxford, UK.

Kang, H.R., 2006. Computational Color Technology. SPIE Press, USA.

Reinhard, E., Khan, E.A., Akyüz, A.O., Johnson, G., 2008. Color Imaging Fundamentals and Applications. A.K. Peters, USA.

Westland, S., Ripamonti, C., 2004. Computational Colour Science using MATLAB. Wiley, USA.

Wright, W.D., 1969. The Measurement of Colour, fourth ed. Hilger, London, UK.

