Chapter 14

Digital cameras and scanners

Elizabeth Allen and Efthimia Bilissi

All images © Elizabeth Allen and Efthimia Bilissi unless indicated.

DIGITAL STILL CAMERAS

As we have seen in earlier chapters, conventional silver halide photographic systems produce images by the recording of a light intensity function as a latent image, which is a light-induced chemical change in the silver halide emulsion. The image is rendered visible and permanently stored as a result of film development and fixation; therefore, the image sensor in silver halide camera systems is also the storage medium. Each image requires a separate image sensor (film frame) and the characteristics of the imaging material may be changed from frame to frame by changing the photographic emulsion.

In an electronic camera system the image sensor converts a light intensity function to an electronic signal. The process differs fundamentally from that of a silver halide system in that the image sensor is a capture and conversion device only and is a permanently fixed component of the camera. The camera therefore requires a separate storage device in addition to the image sensor. As described in Chapter 1, the first prototype electronic still camera, the Mavica, was announced by Sony in August 1981. Such early electronic cameras were analogue devices, storing the image function as a video signal on some type of magnetic storage media. Digital still cameras (DSCs) similarly rely on a sensor which converts a light intensity function to an electronic signal, but they output a digital signal by passing it through an analogue-to-digital converter (ADC). The signal is then encoded and stored on some form of digital storage device. See Chapter 7 for information on image formation and Chapter 9 for further information on image sensors.

Because digital cameras have a fixed sensor, it is important to be able to change various characteristics of the sensor, such as its sensitivity (equivalent to ISO film speed) and its response to illuminants with different spectral characteristics (equivalent to the colour balance of film). This is achieved by manipulating the output signal either before or after analogue-to-digital conversion.

Many of the functions, features and architecture of digital still cameras are similar to those of conventional film cameras. They consist of a lens, shutter and viewfinder system, a means of determining exposure and focusing, automated features and a data display. Many of these features are covered in detail in Chapter 11. Therefore, this chapter mainly concentrates on the aspects of digital cameras that differ from those of film cameras.

DIGITAL STILL CAMERA ARCHITECTURE

There is an enormous range of different types of digital camera. The main categories of camera designs, which may be broadly classified based on image format and camera features, are described later in the chapter. A block diagram of the generic architecture of a digital still camera is illustrated in Figure 14.1. Many of the mechanical and optical components of digital cameras may be similar to their analogue equivalents, especially in digital single-lens reflex cameras (SLRs). The characteristics of digital cameras are determined by a combination of the hardware (sensor, and optical and mechanical components), the camera firmware, the camera system controller and the image-processing algorithms applied in the digital signal processor (DSP).

image

Figure 14.1   Block diagram of a generic digital still camera.

The software in the DSP tends to provide higher-level image processing, which may be affected by user settings, some of which may be bypassed if capturing RAW images. By contrast, the camera firmware is software embedded in the camera’s read-only memory, which often provides lower-level processing, but which is essential to the basic functioning of the camera. Firmware programs control the microprocessors and circuits, which in turn control, for example, the LCD screen, autofocus function, sensor and buffers, hence the direct connection with the camera system controller in Figure 14.1. Firmware updates are often released by camera manufacturers to improve performance or add functionality.

The camera architecture may be broadly divided into three subsystems. The optical and mechanical subsystems define how the original scene is captured; these are often similar in structure to those in film-based camera systems, with some small differences. The analogue front-end consists of the image sensor, analogue pre-processor and ADC, which capture and process the analogue signal before converting it to a digital signal for further processing. The digital back-end consists of the DSP, camera system controller, LCD display and various other components. The digital circuits apply various image processes and image compression if required, before storing the image in a suitable format (see Chapters 17 and 29 for information on file formats used in digital cameras and compression methods). Generally, the processed images will be stored on a removable memory card. They may also be downloaded directly to a computer via a universal serial bus (USB) or other interface.

Automatic exposure, autofocus (AF) and automatic white balance control are performed by the camera system controller based on image signals generated in the DSP when the shutter release is depressed halfway. In compact digital cameras these signals usually come from the image sensor. Digital SLRs may have separate sensors for AF and auto-exposure.

The camera controls available to the user will be dependent on the type of camera. Digital SLRs, for example, may be similar in the design and positioning of external controls to equivalent film SLRs by the same manufacturer. An information display LCD is used to apply many camera settings, including image resolution, compression method, capture colour space, white balance and features such as red-eye reduction.

Many DSCs have an electronic viewfinder in the form of a thin-film transistor LCD display as well as an optical viewfinder, although some cameras have an electronic viewfinder only. For the purposes of framing and focusing, when the shutter release is depressed halfway, a sub-sampled image is captured, processed and stored in a dynamic random access memory (DRAM) buffer, and a thumbnail is output to the LCD screen for instant review.

In digital SLRs, however, the LCD display only plays back captured images and does not function as a viewfinder. The low resolution of the image and the fact that the image is often viewed in bright ambient lighting conditions can be a problem in evaluating the image exposure and colour reproduction, although the resolution and colour rendering of such LCD displays have improved in recent years. Many cameras have provision for the optional display of the image histogram alongside a reduced version of the captured image. This provides a much more accurate method of exposure evaluation and allows the user to identify clipped highlights or shadows and adjust the exposure to compensate. Additionally, some cameras allow the display of out-of-gamut colours or clipped highlight or shadow areas on the image itself (termed gamut warning; the pixels will usually be masked with a specific highly saturated colour).

IMAGING OPTICS

The imaging optics consist of the imaging lenses, an infrared (IR) cut filter and an optical low-pass filter.

The reader should refer to Chapter 6 to gain an understanding of geometrical optics, and to Chapter 10 for the design and characteristics of lenses. However, it is useful to summarize below a few important points about lenses for digital cameras.

Digital sensors are currently, in the majority of cases, smaller in dimensions than the main film formats. The lens focal length required to provide a ‘standard’ field of view (an angle of around 50°; see Chapter 6) corresponds approximately to the diagonal of the sensor and is therefore much shorter than those for equivalent film formats. Aspheric lens elements and materials with high refractive indices are used to produce smaller lenses, particularly for compact cameras. Many digital cameras (almost all consumer digital cameras) make use of zoom lenses. In film compacts, the zoom lenses are normally retrofocus in design (see Chapter 10). Digital image sensors fitted with microlenses (see Chapter 9) require that the light rays exiting the pupil are nearly parallel to the optical axis. For zoom lenses, this must apply across the full range of focal lengths. The configurations of groups of lens elements are reversed compared to retrofocus lenses; these are known as telecentric lenses.

The small sensors and shorter focal lengths result in a small depth of focus and much larger depth of field. This is particularly true for digital compacts, in which the sensor dimensions are smallest, meaning that it can be difficult to isolate subjects using a shallow depth of field.

In many cases the lenses used in film single-lens reflex cameras (SLRs) may be transferred across to digital SLRs. However, if the sensor size is smaller, as described in Chapter 9, their effective focal length will be increased, as they image a smaller area from the centre of the image circle.

This change in focal length is often described as the ‘effective focal length’ or the ‘equivalent focal length’ (relative to full-frame 35 mm cameras). The relationship may also be expressed as a ‘crop factor’ (sometimes termed the ‘focal length multiplier’), which is used to multiply the lens focal length to find its equivalent focal length. The crop factor (CF) is most commonly calculated as: CF = diagonal of the 35 mm frame / diagonal of the sensor. The crop factor and equivalent focal lengths for a range of lenses used with a 1.8 crop-factor image sensor, typical of the type used in a number of currently available semi-professional digital SLR cameras, are given in Table 14.1.
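
As a minimal worked sketch of this calculation in Python (the sensor dimensions below are hypothetical, chosen to give a crop factor of approximately 1.8):

```python
import math

def crop_factor(sensor_w_mm, sensor_h_mm):
    """Crop factor relative to the 24 x 36 mm full-frame format."""
    full_frame_diagonal = math.hypot(24.0, 36.0)            # ~43.3 mm
    sensor_diagonal = math.hypot(sensor_w_mm, sensor_h_mm)
    return full_frame_diagonal / sensor_diagonal

cf = crop_factor(13.3, 20.0)                 # ~1.80 for this assumed sensor
for focal_length_mm in (24, 35, 50, 100, 200):
    print(f"{focal_length_mm} mm lens -> ~{focal_length_mm * cf:.0f} mm equivalent")
```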

It is clear from the table that what would be a ‘standard’ lens on a 35 mm frame (focal length 50 mm) becomes a telephoto lens with a 1.8 crop-factor sensor, because it images only the central area of the frame, and therefore the field angle of view is reduced. To achieve a field angle of view similar to that of the standard lens with this sensor requires a much shorter focal length of between 24 and 35 mm, which is close to the diagonal dimension of the sensor.

This has some impact on the image. First, imaging from the centre of the lens may reduce some distortion effects as a result of lens aberrations, which are worse near the periphery. Secondly, the same desired image framing on both the full frame of film and the smaller image sensor may be achieved in two ways: either a shorter focal length lens can be used with the smaller image sensor, which will produce the same equivalent focal length; or if using the same lens on both, the camera with the smaller sensor will be further away from the subject. In either case, at the same aperture, the depth of field in the image will be greater for the image produced with the smaller sensor (see Figure 6.10).

Various surfaces in an optical system reflect light, producing a loss of contrast and ghosting artefacts, which are minimized by the use of anti-reflection coatings on lens elements. The problem is more pronounced in digital cameras, due to the highly reflective surface of the sensor and additional elements such as the IR filter (see below). Additionally, materials with high refractive indices used in the lens elements result in increased reflection. These factors necessitate better multi-layer anti-reflection coatings for digital lenses and mean that some reduction in image quality may be apparent when using a lens designed for a film camera.

As described in Chapter 9, the inherent IR sensitivity of the image sensor necessitates the use of an IR filter, which in the majority of cases is an absorption filter, although IR reflection filters and combined absorption and reflection filters are sometimes used. The IR filter is usually positioned behind the imaging lens, but may be attached directly to the lens. The filter is removable in a limited range of digital SLRs. This removal is a delicate procedure, during which care must be taken to avoid damage to the sensor and accumulation of dust in the camera body. Alongside the specialist high-end IR digital cameras developed for use in scientific applications, some of the larger mainstream camera manufacturers have also released versions of their digital SLRs without an IR filter for scientific and applied imaging applications.

Table 14.1   Equivalent focal lengths for lenses used with a sensor smaller than the 35 mm full-frame format (24 × 36 mm)

image

The optical low-pass filter (OLPF), sometimes called an anti-aliasing filter, is used to attenuate frequencies beyond the Nyquist limit of the sensor, to prevent aliasing (see Chapter 7). It is positioned directly in front of the image sensor. The filter is most commonly constructed of a birefringent material, usually quartz, although calcite and lithium niobate are also used. Thin plates of the material are sandwiched together. Birefringent materials have two refractive indices, which are dependent upon the polarization and direction of travel of a ray of light through the material (see Chapter 2). When an incident ray enters the material, it is split into two rays, which are refracted by different amounts and therefore emerge with a small separation. As a result of this, a point of light is slightly blurred in one direction. Each thin plate will separate the rays in different directions. By combining several different plates, several images of the same point will be produced, blurring in different directions. The most common type is the four-spot anti-aliasing filter, which produces four separate images of a point. The separation between the spots is determined by the thickness of each plate. To prevent aliasing, a ray of light must be separated so that the two rays fall on consecutive pixels. If a sinusoid is imaged at the Nyquist frequency, every point will be blurred across two pixels, therefore producing a modulation of zero (Figure 14.2).

image

Figure 14.2   The results of using a four-spot anti-aliasing filter on the image of a sinusoid at the Nyquist frequency of the sensor. The luminance at each peak of the sinusoid is imaged at two consecutive pixels, meaning that every pixel receives the same amount of light, producing a modulation of zero.
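
The numbers involved follow directly from the pixel pitch. A brief worked example (the 6 μm pitch is an assumed value for illustration):

```python
# To blur a point across two consecutive pixels, the OLPF plates must
# separate the two rays by one pixel pitch, cancelling modulation at
# the Nyquist limit of the sensor.
pixel_pitch_mm = 0.006                                # assumed 6 um pitch
nyquist_cycles_per_mm = 1 / (2 * pixel_pitch_mm)
print(f"Nyquist limit: {nyquist_cycles_per_mm:.0f} cycles/mm")      # ~83
print(f"Required spot separation: {pixel_pitch_mm * 1000:.0f} um")  # 6
```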

The combination of larger sensor sizes with very small pixel pitches now available requires lenses with higher resolving power than those required for film-based systems. This is particularly true for lenses to be used with larger-format digital backs. The modulation transfer function (MTF) (see Chapters 7, 19 and 24) of the imaging optics is the product of the MTF of the imaging lens and the MTF of the OLPF (and also the MTFs of other imaging components, such as sensor microlenses if they are used). The lens must be designed to ensure good performance at frequencies below and close to the Nyquist frequency. This requires a reasonable MTF level over the range of frequencies up to 80% of the Nyquist frequency. A lens with high resolving power is required to achieve this; therefore, the lens MTF will normally have a higher cut-off frequency than the Nyquist frequency of the sensor.
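
This product relationship can be sketched numerically. In the fragment below the lens MTF is an assumed Gaussian curve and the |cos(πfd)| term is the idealized response of a birefringent plate with spot separation d equal to the pixel pitch; both are illustrative choices, not measurements of any real lens:

```python
import numpy as np

pitch = 0.006                                    # assumed pixel pitch, mm
nyquist = 1 / (2 * pitch)                        # ~83 cycles/mm
f = np.linspace(0, 2 * nyquist, 400)             # spatial frequency axis

mtf_lens = np.exp(-(f / (1.6 * nyquist)) ** 2)   # hypothetical lens MTF
mtf_olpf = np.abs(np.cos(np.pi * f * pitch))     # zero at the Nyquist frequency
mtf_system = mtf_lens * mtf_olpf                 # MTF of the imaging optics

idx = np.argmin(np.abs(f - 0.8 * nyquist))
print(f"System MTF at 0.8 x Nyquist: {mtf_system[idx]:.2f}")
```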

IMAGE SENSOR

Both charge-coupled devices (CCDs) and complementary metal oxide semiconductor (CMOS) sensors are employed in digital still cameras. Chapter 9 covers the structure and characteristics of both types of sensor in detail. Until recently the CCD was predominant, but cameras with CMOS sensors are now becoming more widespread. There is much variation in sensor size, pixel pitch and numbers of pixels, leading to a diverse range of camera designs. Whereas film cameras are classified into three main types based on common image formats, digital cameras of equivalent design may be very different in terms of sensor size and resolution. Table 14.2 provides some examples of the variation of sensor dimensions and types currently available in a range of digital still cameras.

The term full-frame format in the camera type description in Table 14.2 does not refer to the sensor readout architecture (see Chapter 9 and below for information on sensor architecture). In this case, it is a term which has found common use within the imaging community to describe cameras with a sensor that has the same dimensions as an equivalent film-based system. A few high-end digital SLRs and medium-format backs have this equivalent sensor size, whereas the majority of digital cameras rely on sensors that are smaller than the corresponding film format.

For compact cameras, small sensor dimensions are not really an issue. At the time of writing, there are pixel sizes available in consumer cameras of 1.4 μm or less, resulting in sensors containing 10–12 million pixels. The resolution of these cameras is more than adequate for images to be output to screen or print. A disadvantage of using smaller pixels, as described later in the chapter, is increased noise, and thus a lower signal-to-noise ratio (the fill factor of the pixel is lower) and reduced dynamic range compared to sensors employing larger pixels. Additionally, the higher noise levels require more noise reduction processing in the DSP, which often reduces sharpness in the final image. However, the small sensor allows the use of lenses with shorter focal lengths and therefore facilitates more compact camera designs. A 2/3 inch sensor, for example, with dimensions 6.6 × 8.8 mm, has a diagonal of only 11 mm, indicating a focal length of close to this value for a standard field of view of approximately 50°.

Table 14.2   Sensor dimensions and formats commonly employed in digital still cameras

image

As described earlier, for cameras with removable lenses, there is a significant advantage in using a sensor size equivalent to the relevant film format, as the lenses used in film-based systems may then be transferred across to digital camera systems, and the focal length and depth-of-field characteristics will be the same for both systems. Of course, the other main advantage with a larger sensing area is the space for more pixels. The increase in the manufacturing cost of larger sensors means that full-frame cameras are therefore high-resolution professional systems at the top of the price range for each format.

SHUTTER SYSTEMS

The method used to control exposure time in DSCs is dependent on the type of sensor and the sensor readout architecture. The main difference is in the use of a mechanical (usually focal-plane) shutter or an electronic shutter.

As described in Chapter 9, several readout architectures are used in the CCD image sensor. Full-frame CCDs (Figure 9.10a) are the simplest in design, using the majority of the area of the chip for image sensing, after which the charge is read out line by line into a horizontal serial CCD register. Frame-transfer CCDs, depicted in Figure 9.10b, consist of an image-sensing area similar to a full-frame array, but have a charge storage area shielded from the light, where charge is stored before being read out. Interline transfer CCDs have shielded strips for vertical charge readout between strips of pixels (Figure 9.10d). The charge is read out into these strips after the integration period and read off line by line into the horizontal serial CCD register while the next image is being captured. The fourth method is a combination of two of the others, the frame–interline CCD.

In the case of the full-frame CCD, the photosensitive area must be completely shielded from light during the readout, otherwise light-generated charge carriers will continue to accumulate as the charge is being transferred off the sensor, resulting in smear artefacts. Therefore, a mechanical shutter must be used. Full-frame CCD sensors are typical of professional digital SLRs, which tend to be constructed similarly to their film-based equivalents and therefore include mechanical shutters of similar design (see Chapter 11 for details of mechanical shutter systems).

Many consumer cameras use interline transfer CCDs (ITCCDs). There are two methods for reading out the signal from these CCDs: progressive scan, in which the entire charge is transferred into the vertical shift register simultaneously, or interlaced scan, where odd and even horizontal lines are read out separately. In the case of interlace scanning, the integration period for all pixels begins at the same point, but the odd and even fields are read out sequentially after the integration period has ended. This requires the use of a mechanical shutter to shield the sensor from light during the readout period. An ITCCD reading out by progressive scan does not require shielding and may therefore use an electronic shutter. After exposure, a transfer gate pulse is applied to the vertical electrode and the signal is read out. The time difference between the start of integration and the point at which the transfer gate pulse is applied becomes the shutter speed. The use of an electronic shutter has several advantages: it does not suffer from wear and tear or the inaccuracies of a mechanical shutter and the shutter speed can be more precisely controlled. This allows the use of much higher shutter speeds (a super high-speed shutter of 1/10,000 second is possible) than achievable using a mechanical shutter. However, this must be balanced against the inefficiency in terms of image-sensing area of using an interline transfer CCD as opposed to a full-frame or frame-transfer CCD. The extra space required for the vertical charge readout area reduces the potential fill factor of the pixels.

There are also different possible readout architectures for CMOS image sensors. Pixel serial readout architecture selects pixels one at a time by row and column and reads them out and processes them sequentially, allowing full X–Y addressing. Column parallel readout, which is used in the majority of CMOS image sensors, reads pixels in a row out simultaneously and processes the entire row (for example, suppressing fixed pattern noise and applying ADC) in parallel. Pixel parallel readout performs the signal processing on each pixel in parallel using a processor element contained within the pixel, and then transfers the set of compressed signals for all pixels simultaneously. The configuration of the pixels in pixel parallel sensors is much more complex and the pixels larger. This method is therefore not used in the majority of digital still cameras, but is confined to specialist high-speed applications. CMOS sensors require a reset scan in which shutter pulses scan the pixel array before readout. The shutter pulse corresponds to the beginning of the exposure, which is completed when the pixel is read out. This is a form of electronic rolling shutter. Because this shifts the exposure period between pixels, it is not suitable for capturing still images. Therefore, a mechanical shutter is generally used to control exposure in DSCs using CMOS sensors. In this case the reset pulses are completed in a relatively short time or simultaneously; then exposure begins and is completed when the mechanical shutter is closed.

DYNAMIC RANGE IN DIGITAL CAMERAS

As described in Chapter 12, the optimum exposure is dependent on the relationship between subject luminance range and the dynamic range of the camera. It is also dependent on scene content and the intentions of the photographer, to determine which areas of the scene are most important in the image and require optimal tone reproduction, possibly at the expense of other areas.

The dynamic range of the sensor, prior to analogue-to-digital conversion, is defined by the ratio between the full-well capacity of the pixel (the saturation level, or maximum number of electrons that the pixel can trap) and the noise floor (the minimum charge detectable above the sensor noise), given in Eqn 9.1. The full-well capacity is related to pixel size, meaning that sensors with larger pixels (for example, those in digital SLRs compared to those in compact cameras) will have a larger capacity and therefore a higher potential dynamic range. The noise floor depends upon the sensor type and is highly variable.
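
Expressed numerically, with hypothetical but plausible values for full-well capacity and noise floor:

```python
import math

# Dynamic range (Eqn 9.1) as the ratio of full-well capacity to noise
# floor, expressed in stops; both values below are assumed for illustration.
full_well_electrons = 40000
noise_floor_electrons = 12
ratio = full_well_electrons / noise_floor_electrons
print(f"Dynamic range: {ratio:.0f}:1, i.e. {math.log2(ratio):.1f} stops")
# Dynamic range: 3333:1, i.e. 11.7 stops
```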

The coarseness of quantization in the analogue-to-digital conversion is a limiting factor in dynamic range. The precision of the converter defines the maximum number of levels that can be reproduced and therefore the theoretical maximum dynamic range without the limitations of noise. In an unrendered RAW file 10-, 12- or 14-bit linear quantization may be employed. As the number of levels that can be encoded is equal to 2 raised to the power of the bit depth, the maximum contrast ratio achievable by a 12-bit quantizer, for example, is 1:4096 (2^12 = 4096), which corresponds to 12 stops.

In linear quantization (which is most common in RAW files from digital cameras), however, each increase of luminance level by one stop corresponds to a doubling of light intensity and a doubling of the number of output levels allocated by the ADC. The minimum step level of quantization required to prevent visible contouring in the lowest intensity areas of the image (see Chapter 21) defines the minimum number of levels to be allocated to the lowest exposure zone. As the number of levels allocated to each subsequent stop/zone increases from the previous one by a factor of 2, the maximum number of stops represented and the usable dynamic range are lower than the bit depth of the quantizer in practice.
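
The halving of levels from stop to stop is easy to tabulate. A short sketch for a 12-bit linear quantizer:

```python
bits = 12
levels_remaining = 2 ** bits                  # 4096 levels in total
for stop in range(1, bits + 1):               # counting down from saturation
    in_this_stop = levels_remaining - levels_remaining // 2
    print(f"stop {stop} below saturation: {in_this_stop} levels")
    levels_remaining //= 2
# The brightest stop receives 2048 levels, the next 1024, and so on down to
# a single level; if the deepest shadows need more levels than this to avoid
# contouring, the usable range is fewer stops than the bit depth.
```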

Additional factors influence the dynamic range of the camera when compared to that of the sensor. As described in Chapter 9, the process of analogue-to-digital conversion results in quantization noise due to rounding errors, raising the overall noise level. Subsequent processing further reduces the final usable dynamic range of the camera. Signal amplification, for example, when using an ISO speed other than the native ISO of the sensor, also amplifies noise; the application of non-linear mapping functions in gamma correction, or to mimic a film-like characteristic curve (to provide smoother gradation and detail in shadow areas and smoother gradation in highlights), while improving perceived contrast, may reduce the effective captured luminance range. The final dynamic range of the camera is likely to be closer to 5–9 stops (10 stops is possible for some medium-format cameras, assuming a higher precision ADC).

Since the dynamic range of many scenes will exceed that available to the camera, it is necessary to expose the scene so that the important information is correctly exposed, which may mean sacrificing some detail at either end of the tonal range. It should be noted that the dynamic range of the sensor will not be fully utilized in a rendered image. JPEG images, for example, are limited to 8 bits. Unrendered RAW data may retain more of the sensor dynamic range, allowing some adjustments to exposure at the post-processing stage before the image is rendered to 8 or 16 bits – this is discussed and illustrated in Chapter 25. ‘Exposing to the right’, as described in Chapter 12, is a technique used to optimize the tonal distribution in shadow areas in RAW files. This is achieved as a result of careful exposure and post-processing, without clipping the highlights, as highlight clipping is significantly more problematic visually than shadow clipping. Simple auto-exposure methods cannot determine what is important in terms of scene content. Thus, for certain types of scenes, spot metering of specific image zones or the use of sophisticated programmed exposure modes is critical to achieve satisfactory results.

High-dynamic-range imaging (HDR imaging) provides a potential solution to the problem of low-dynamic-range sensors. Described in Chapter 12, HDR imaging combines multiple images of the same scene bracketed at different exposures into a high-bit-depth file. Some of the tonal range of each image is used for the final image, which is then compressed into an 8- or 16-bit-depth image format. The technique is at a relatively early stage of development and there are limitations to its use: it requires a static subject for bracketing and a reasonable degree of post-processing. At the time of writing, however, a number of compact cameras and one digital SLR have recently been released boasting built-in HDR imaging technology. The Fuji Super CCD SR described in the next section was a sensor using a very early adaptation of the principles of HDR imaging.

COLOUR CAPTURE

It is important to note that the sensor itself is a monochromatic device. It simply captures light and converts it to an electronic signal. As we have seen in previous chapters, capturing colours in photographic film requires that the incoming radiation is analysed into different spectral bands. This is achieved by sensitizing separate emulsion layers to bands of different wavelengths, which, upon processing, form layers of dyes to allow the generation of a colour image. Similarly, digital image sensors require some mechanism for the separation of light into different wavelength bands. There are various methods employed in digital cameras to achieve colour separation, as introduced in Chapter 9 and summarized below.

image

Figure 14.3   (a) Three-sensor camera. (b) Sequential colour camera.

Three-sensor cameras

These early digital cameras employ a beamsplitter, usually a dichroic prism, to separate the incoming light into three components (see Figure 14.3a). These are directed to three separate sensors, filtered for red, green and blue wavelengths respectively. Each sensor produces a monochromatic record of the filtered light only, corresponding to one of the three channels of the image. The images from the three separate sensors require careful registration to produce the full colour image. Because each sensor produces a full-resolution record of the image, these cameras do not suffer from the chromatic aliasing typical of cameras using Bayer arrays (see below). The use of three sensors, however, means that these cameras are both expensive and bulky.

Sequential colour cameras

These cameras capture successive red, green and blue filtered exposures, using a colour filter wheel or a tunable LCD filter to separate the light into the three components (see Figure 14.3b). The image is then formed by a combination of the three resulting images. As for three-sensor cameras, each channel is captured at the full resolution of the sensor, resulting in very-high-quality images. However, the colour sequential method is only suitable for static subjects, because the three channels are captured at slightly separate times and any misregistration of the subject will result in colour fringes at edges. An additional problem is that of illumination, which may vary during the successive exposures. Therefore, this approach is confined to a limited number of professional studio cameras.

Scanning backs for large-format cameras have similar limitations to colour sequential cameras. Most commonly they employ a trilinear CCD array, which scans across the image format. All three channels are captured at the same time in this case, so misregistration of the three channels is not a problem, but subject movement may result in image distortion. Inconsistent illumination produces changes in exposure across the image plane.

Colour filter array (CFA) cameras

Nearly all commercially available digital cameras (other than those using a Foveon sensor, described in the next section) use a colour filter array positioned directly in front of a single sensor, capturing separate wavelength bands to individual pixels. Each pixel therefore contributes to only one of the colour channels and the values for the other two channels at that pixel must be interpolated. The process of interpolating colour values is known as demosaicing. If rendered images are being produced by the camera, which is most commonly the case, then demosaicing is performed by the camera digital signal processor. Alternatively, if RAW camera data are to be output (in unrendered camera processing), then demosaicing will usually be performed later during RAW conversion (see Chapters 17, 25 and 26).

A number of different CFA patterns have been developed, but the two most frequently employed are the Bayer array and the complementary mosaic pattern. These are shown in Figure 14.4a and b (see also Figure 9.21).

The Bayer array is the most common, consisting of red, green and blue filters, with twice the number of green to red and blue filtered pixels. As described in Chapter 9, the spectral sensitivity of the green filtered pixels most closely corresponds to the peak luminance sensitivity of the human visual system, hence the higher allocation, providing better luminance discrimination. This results in a higher Nyquist frequency for the green channel than that of the red and blue channels. Differing Nyquist frequencies produce different amounts of aliasing across the three channels, appearing as chromatic aliasing at high spatial frequencies. The effect, which is indicated by colour fringing, is counteracted by the use of the OLPF described earlier in this chapter.

The complementary mosaic pattern, used in some digital still cameras, consists of equal numbers of cyan, magenta, yellow and green filtered photosites. Because complementary filters absorb less light than primary filters, these CFAs are more sensitive to light than RGB filter patterns. However, more colour correction is required with complementary filters, leading to an increase in noise and reduced signal-to-noise ratio at higher illumination levels; therefore, RGB filter patterns are more common.

image

Figure 14.4   Colour filter arrays. (a) Bayer array. (b) Complementary mosaic pattern. (c) RGBE filter pattern (Sony). (d) Example RGBW filter pattern (Kodak).

Other combinations of filters may also be used, for example the red, green, blue and ‘emerald’ pattern (RGBE), similar to the Bayer array, but with a blue–green ‘emerald’ filter replacing some of the green filters (Figure 14.4c), with the aim of more closely matching the spectral response of the sensor to that of the human visual system. RGBE arrays are used in some Sony cameras. Another fairly recent development is the RGBW (red, green, blue, white) pattern (an example is depicted in Figure 14.4d). Kodak announced several RGBW patterns in 2007, to be made available in some cameras in 2008. The ‘white’ filter elements are actually transparent, or panchromatic, transmitting all visible wavelengths and increasing the amount of light detected. The data from the remaining RGB filtered pixels is processed using a Bayer demosaicing algorithm.

The Super CCD™

The Super CCD was first announced by Fujifilm in 1999 and currently available Fuji compact and bridge cameras employ the eighth generation version of this proprietary sensor. A form of mosaic array, it is based on octagonal rather than rectangular pixels, which allows the pixels to be diagonally mapped (Figure 14.5), with smaller horizontal and vertical pitches than traditional mosaic arrays with equivalent pixel numbers. The increase in horizontal and vertical resolution is at the expense of diagonal resolution (although this is less important to the human visual system and therefore the architecture still remains an improvement on conventional mosaic sensors). The fourth generation of these sensors, announced in 2003, diversified into the Super CCD HR (‘high-resolution’) sensor, as depicted in Figure 14.5a, and the Super CCD SR (‘high-dynamic-range’) sensor (Figure 14.5b). The latter sensor has two photodiodes per pixel, of different sizes and sensitivity. The larger ‘primary’ photodiode has high sensitivity but a relatively low dynamic range and caters for dark and medium intensities, while the smaller ‘secondary’ photodiode has low sensitivity but a very large dynamic range. Both photodiodes are exposed at the same time, but the two signals are read out consecutively, before being combined in the camera DSP into a high-dynamic-range image. The combined output of the two photodiodes gives four times the dynamic range of a conventional photodiode. The fifth and sixth generations of the sensor improved upon performance at high ISOs. The very latest version, the Super CCD EXR, announced in September 2008, uses a new arrangement of colour filters to ‘bin’ the output from two consecutive pixels filtered with the same colour, producing effective double-sized pixels, to combine the advantages of both the earlier HR and SR technologies (Figure 14.5c).

image

Figure 14.5   Fuji Super CCD™ sensors. (a) The Super CCD HR. (b) The Super CCD SR. (c) The Super CCD EXR.

The Foveon™ sensor

As mentioned in Chapter 9, there is another method of colour reproduction, developed by Foveon Inc, and currently used in a limited number of digital cameras. This approach is implemented through the design of the sensor itself rather than through the camera architecture, and is based upon absorbing different wavebands of light in different layers of the sensor, similarly to multi-layer colour film.

The Foveon X3 three-layer silicon image sensor was announced in 2002 and uses stacked silicon photodiodes, manufactured using a standard CMOS processing line. Different wavelengths of light penetrate to different depths in silicon; therefore, when light is absorbed by the sensor it produces electron–hole pairs proportional to the absorption coefficient (see Chapter 7). Short wavelengths will produce more electron–hole pairs near the sensor surface, while long wavelengths penetrate the deepest before producing charge carriers. By burying photodiodes at different depths in the silicon, different wavelengths are captured at these depths, therefore acquiring all three colour channels at every pixel. The spectral sensitivity of the sensor is determined by the depth of the photodiodes. Figure 14.6 illustrates the technology.

In CFA cameras using a Bayer array, only 50% of the pixels (the green-sensitive pixels) contribute to the luminance signal. However, there is luminance information in the red channel and a smaller amount in the blue channel. In cameras using the Foveon X3 sensor, this information may be captured at every photosite across all three channels, and linearly combined to produce a very-high-quality luminance signal. Without the interpolation required by a CFA, the luminance image produced may be significantly sharper than that of a CFA camera of an equivalent number of pixels, and there is not the problem of chromatic aliasing characteristic of CFA cameras (although the use of the OLPF and improvements in demosaicing techniques mean that this is less of an issue with most current cameras using Bayer arrays). At the time of writing, the most recent camera releases using the Foveon X3 sensor include the Sigma SD14 (a digital SLR released in March 2007) and the Sigma DP1 (a compact camera launched in the spring of 2008).

image

Figure 14.6   Foveon™ sensor technology.

RENDERED VERSUS UNRENDERED CAMERA PROCESSING

For the majority of imaging applications the requirement is for fully processed and colour rendered images to be output and stored in a standard file format. In this case the camera settings will be used to capture the raw data from the sensor. In a camera using a colour filter array for colour separation, the data will be demosaiced, white balance correction applied, and tone and colour rendering performed to provide an output-referred image. The image will then be sharpened, compressed and stored, most commonly in the Exif/JPEG file format using lossy compression, although some cameras also allow storage as JPEG 2000 or uncompressed TIFF files. See Chapter 17 for details on file formats and Chapter 29 for information on image compression.

The option to output unrendered image data has been a more recent development in digital cameras. In this case the raw and relatively unprocessed data from the sensor is optionally losslessly compressed, stored and output in a usually proprietary RAW file format. The image data is then opened in RAW conversion software, and the processing that would be performed by the camera, such as white balancing, exposure adjustments, sharpening and even changes in resolution, is applied prior to or in conjunction with colour demosaicing to produce a final output-referred image (in most cases). This affords the user a high degree of control over the rendering of the image and allows fine-tuning of exposure using more of the available dynamic range from the sensor. See Chapter 25 for information about RAW capture workflow.

EXPOSURE DETERMINATION AND AUTO-EXPOSURE

The factors influencing the necessary exposure for a scene captured using a digital camera are the same as those for film-based systems: a combination of scene illumination levels and reflectances within the scene (subject luminance range) and the sensitivity of the sensor, which together define the required lens aperture and shutter speed.

The sensor will have a native sensitivity, or nominal speed. This may be specified using several different approaches, which are defined in ISO standard 12232:2006 and described in Chapter 20. However, the majority of digital cameras have a range of ISO speed settings which may be selected by the user, the lowest of which usually relates to the sensitivity of the sensor. As for a film camera, changing to a faster ISO speed setting will alter the range of shutter speed and aperture combinations specified by the camera system controller from an exposure reading, to reduce the exposure to the sensor. In the case of a digital sensor, however, the sensitivity of the sensor is not actually changed and the reduced exposure produces a reduced output signal. Instead, the analogue or digital gain is adjusted to amplify the signal, but with an associated amplification of the noise in the signal. Although noise-processing algorithms continue to develop, this is a limiting factor in the use of speed settings above ISO 400 in many digital cameras.

Once an ISO speed setting has been selected by the user, the camera must take exposure readings from the scene. As mentioned earlier, exposure readings in compact cameras usually come from the sensor itself, whereas for digital SLRs, a separate sensor may be used. The various different exposure metering modes used in cameras are discussed in detail in Chapter 11 and types of exposure measurement are covered in Chapter 12.

During auto-exposure (AE) in DSCs, the image-sensing area or the separate AE sensor may be divided into non-overlapping segments, or AE windows. The signal from each AE window is converted into luminance values, which are passed to the AE signal-processing block. For each segment, the mean and peak luminance values (or sometimes the variance) are calculated, and these are passed to the camera system controller. An AE control application program evaluates all of these values to establish the optimum exposure and set the required shutter speed.
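
A minimal sketch of this measurement stage is below; the window grid size and the synthetic luminance data are assumptions for illustration only:

```python
import numpy as np

def ae_window_stats(luminance, rows=4, cols=4):
    """Mean and peak luminance for each non-overlapping AE window."""
    h, w = luminance.shape
    stats = []
    for r in range(rows):
        for c in range(cols):
            win = luminance[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            stats.append((win.mean(), win.max()))
    return stats

# Synthetic 12-bit luminance data standing in for a real sensor readout
lum = np.random.default_rng(0).integers(0, 4096, (480, 640)).astype(float)
for i, (mean, peak) in enumerate(ae_window_stats(lum)[:3]):
    print(f"window {i}: mean = {mean:.0f}, peak = {peak:.0f}")
```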

AUTOFOCUS CONTROL

There is a range of different systems used for focus control, including electronic distance measurement using an infrared-emitting diode, active range measurement using an ultrasound pulse transmitter, and phase detection autofocus, all of which are described in Chapter 11.

A further method, used extensively in digital still cameras, uses focus adjustment to maximize the high-frequency components in an image. This approach uses an algorithm implemented in the digital signal processor on an image acquired from the sensor; therefore, the camera does not require an external autofocus sensor. It is based upon the assumption that high-frequency components in the image, which correspond to edges and fine details, will increase to a maximum when the image is in focus. A digital band-pass filter, which passes frequencies in a band while attenuating other frequencies (see Chapter 28 for information on filters implemented in the frequency domain), is applied to the acquired image and the output from the filter is integrated to produce an absolute value. The absolute value is used by the camera system controller to adjust the position of the lens until a peak output from the filter is achieved (see Figure 14.7). The resonant frequency (peak frequency) of the filter depends upon the lens system being used.

image

Figure 14.7   Digital integration focus value curve. The curve is obtained by integrating the output from the AF band-pass filter at each focus position and taking absolute values of the result.
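
The focus metric itself is straightforward to sketch. Here a 3 × 3 Laplacian kernel stands in for the camera's band-pass filter (the real filter and its peak frequency depend on the lens system, as noted above):

```python
import numpy as np

def focus_value(image):
    """Integrated absolute output of a crude band-pass (Laplacian) filter."""
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]], dtype=float)
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):                        # direct 3 x 3 convolution
        for dx in range(3):
            out += kernel[dy, dx] * image[dy:dy + h - 2, dx:dx + w - 2]
    return np.abs(out).sum()

rng = np.random.default_rng(1)
sharp = rng.random((100, 100))                 # stand-in for an in-focus image
blurred = (sharp[:-1, :-1] + sharp[1:, :-1]    # 2 x 2 box blur: defocused stand-in
           + sharp[:-1, 1:] + sharp[1:, 1:]) / 4
print(focus_value(sharp) > focus_value(blurred))   # True: value peaks at best focus
```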

ANALOGUE PROCESSING

The analogue signals from the sensor are first processed in an analogue pre-processor, which performs various functions, depending upon the sensor. These may include correlated double sampling (CDS) to suppress reset noise and to reduce fixed pattern noise, automatic gain control, and generation of a reference black level for the reproduced images. The reference black is obtained from shaded pixels on the sensor, which produce a signal as a result of thermally generated dark current. This generated signal is then subtracted from the signals from the pixels in the sensing area of the array. Black level generation may be performed for each colour separately. Additionally, white balance may be performed on the analogue signal by using different gain settings for different colour channels, although this is most commonly performed on the digitized signal. The analogue front end may also control the operation of the image sensor by generating timing pulses. The signal is then digitized in the analogue-to-digital converter (ADC). Chapter 9 discusses amplification and analogue-to-digital conversion in detail.

DIGITAL PROCESSING

The digital signal output from the ADC consists of unrendered RAW data. To produce a rendered (and viewable) image, a significant amount of image processing must be performed to adjust and optimize the image. Additionally, some processes are applied in the digital circuits of the camera to adjust the camera response, for example auto-exposure, autofocus and auto-white balance. As stated previously, the majority of digital still cameras currently available are single-sensor CFA cameras; therefore, the rest of the chapter refers to the processing typical of these cameras.

The image processing functions will depend upon the features of the camera, but may include:

•  Demosaicing

•  White balance and colour correction

•  Tone and gamma correction

•  Digital filtering (non-adaptive and adaptive, noise removal and sharpening)

•  Zooming and resizing of images

•  Digital stabilization

•  Red-eye removal

•  Face detection

•  Image encryption

•  Image compression

•  Formatting and storage.

The order in which processes are performed varies. The particular algorithms selected will have an impact throughout the image-processing pipeline. Most image processes may be performed before or after demosaicing (red-eye removal and face detection are exceptions and are performed after colour interpolation). Because the CFA data is effectively greyscale prior to demosaicing, processes performed beforehand will operate on only one channel as opposed to three, representing significant computational savings. However, some processes will produce improved results once the data has been demosaiced, although they may enhance other artefacts, which must then be corrected. Even if unrendered RAW data is to be output, because RAW files are currently proprietary (see Chapter 17), the image processing applied to the RAW data, prior to colour interpolation, will vary significantly between manufacturers.

Further computational savings may be achieved by implementing processes together. Generally, this may be achieved if the individual processes are applied in a similar manner. Such an approach can improve the performance of the individual image-processing steps, suppress artefacts and enhance the resulting image quality. Examples of processes which may be applied jointly include image smoothing and sharpening, which are both filtering operations, or colour correction and white balancing. Additionally, processes such as denoising and resizing may be implemented together with the demosaicing process.

Many of the relevant image-processing algorithms are covered in detail in the last two chapters of this book; therefore, only brief descriptions are provided in this chapter.

Colour demosaicing

A range of demosaicing algorithms exist for various different CFA patterns, with varying levels of complexity. The simplest approach uses bilinear interpolation (see Chapters 23 and 25) in which a colour value is interpolated by taking averages from the four closest (two horizontal and two vertical) values of the same colour. This is computationally efficient, but reduces sharpness and increases the visibility of chromatic aliasing artefacts.

An alternative adaptive approach, edge-sensing demosaicing, can improve results. This method calculates the green (luminance) component using the same neighbourhood as bilinear interpolation (the two closest horizontal and the two closest vertical green pixels), but first classifies the missing pixel according to whether it is part of an edge or a uniform area. This is achieved by evaluating the absolute differences in the horizontal and vertical directions and comparing them to a threshold. If both horizontal and vertical differences are below the threshold, then the pixel is classified as uniform. If one or other of the differences is above the threshold, then the pixel is classified as part of an edge with a gradient in that direction (Figure 14.8). Pixels identified as uniform are calculated using bilinear interpolation of the four values. Those identified as part of an edge with a horizontal gradient are calculated as an average of the two vertical values, and similarly those with a vertical gradient are averaged between the horizontal values. This ensures that values are not averaged across an edge, which would smooth the image. Once the luminance component has been calculated, it is used to calculate the missing red and blue values. For a blue value, for example, the values of B/G on either side of it horizontally are calculated, and the value of B/G for the central missing value is taken as an average between them, from which the missing B value can be derived.

image

Figure 14.8   Pixel classification in edge-sensing demosaicing interpolation. (a) |G4 − G2| < T, |G3 − G1| > T, G5 = average(26, 23) = 25. (b) |G4 − G2| > T, |G3 − G1| < T, G5 = average(47, 46) = 47. (c) |G4 − G2| < T, |G3 − G1| < T, G5 = (average(119, 107) + average(107, 106))/2 = 110.
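
The classification logic can be sketched as follows (a minimal Python fragment; the threshold T is a tuning parameter, and the mapping of the left/right and up/down neighbours to G1–G4 in Figure 14.8 is an assumption):

```python
import numpy as np

def interpolate_green(mosaic, y, x, T=10):
    """Edge-sensing estimate of the missing green value at (y, x)."""
    g_left, g_right = mosaic[y, x - 1], mosaic[y, x + 1]
    g_up, g_down = mosaic[y - 1, x], mosaic[y + 1, x]
    dh = abs(g_left - g_right)            # horizontal difference
    dv = abs(g_up - g_down)               # vertical difference
    if dh < T and dv < T:                 # uniform area: bilinear average
        return (g_left + g_right + g_up + g_down) / 4
    if dh > T:                            # horizontal gradient: average vertically
        return (g_up + g_down) / 2
    return (g_left + g_right) / 2         # vertical gradient: average horizontally

mosaic = np.array([[0.0,  60.0,  0.0],
                   [100.0,  0.0, 20.0],
                   [0.0,  62.0,  0.0]])
print(interpolate_green(mosaic, 1, 1))    # 61.0: not averaged across the edge
```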

Many more advanced methods may be used, some based on adaptive approaches similar to those above, using a filtering operation. Other methods model demosaicing as a reconstruction process, creating a mathematical model based upon assumptions about a previous image or about the correlation between colour channels and providing a solution to the reconstruction problem based upon this model. Another approach, used by some algorithms, models the entire image formation process as a series of colour transformations which account for the effects of the CFA, lens distortions and noise. These algorithms then calculate the most likely output image from the measured CFA values.

Setting white balance

As described in Chapter 5, the human visual system performs chromatic adaptation, adjusting the sensitivity of the different types of cone receptors, to maintain colour constancy, ensuring that white objects will continue to appear white despite changes in the spectral quality of the illuminant. The equivalent process in digital cameras is white balance, which estimates the white point of the light source and adjusts the image accordingly. Different approaches may be used. As described earlier, some cameras adjust the analogue gain of the channels producing low signal values in the analogue pre-processor as the signals are read off the sensor. More commonly, however, white balance is applied to the signal after it has been digitized, in the digital signal processor.

White balance may be implemented by the user in various ways. The white balance may be set manually, usually from a series of preset illuminants or colour temperatures in the menu settings. In this case the preset selected will define which chromatic adaptation transformation (CAT; see Chapter 5) is used by the camera during the rendering of the image. This approach may be adequate in a situation where the colour temperature of the illuminant reaching the camera can be measured externally and with accuracy. However, where preset illuminants are used it is important to note that the colour temperature of some illuminants varies significantly from unit to unit and over time (see Chapter 3). Mixed illumination will further alter the spectral quality of the light reaching the sensor, meaning that the results may not be accurate.

Alternatively, a custom white balance may be used, in which an area of white or neutral, such as a grey test target, is included in a test shot of the same scene. This is then selected and used by the camera to adjust the white balance in subsequent images. This method may also be used to perform white balance during the processing of RAW files in a RAW conversion application. The success of the approach will depend upon the size and position of the neutral target in the frame.

The third approach is automatic white balancing by the camera. This may be achieved using separate RGB filtered photodiodes on the front of the camera, which measure the light source. A more commonly implemented (and often more accurate) approach is to estimate the white balance from the colour gamut of the scene using a captured image. The light source is estimated by measuring the colour distribution of the image and correlating it with entries in a colour gamut database created for typical scenes and light sources. The image is divided into segments (between 20 and 100) and average RGB values are calculated for each segment. These are then converted to colour difference signals for analysis against values produced by different light sources. Once the scene illuminant or white point is established, the RGB signals may be amplified by appropriate amounts to white balance the captured image.
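
The gamut-correlation step depends on a proprietary scene database, but the surrounding machinery (segment averages in, channel gains out) can be sketched; in the fragment below a simple grey-world assumption stands in for the database lookup:

```python
import numpy as np

def awb_gains(rgb, segments=8):
    """Estimate channel gains from segment averages (grey-world stand-in)."""
    h, w, _ = rgb.shape
    means = []
    for r in range(segments):
        for c in range(segments):
            seg = rgb[r*h//segments:(r+1)*h//segments,
                      c*w//segments:(c+1)*w//segments]
            means.append(seg.reshape(-1, 3).mean(axis=0))
    scene_avg = np.mean(means, axis=0)      # average R, G, B over all segments
    return scene_avg[1] / scene_avg         # gains normalizing channels to green

rng = np.random.default_rng(2)
image = rng.random((64, 64, 3)) * np.array([1.2, 1.0, 0.7])   # warm colour cast
gains = awb_gains(image)
balanced = np.clip(image * gains, 0.0, 1.0)
print(gains)    # roughly [0.83, 1.00, 1.43]: red attenuated, blue amplified
```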

After, or at the same time as, white balance correction, colour correction may be applied to compensate for cross-colour bleeding in the filter, using a 3 × 3 matrix to provide corrected RGB values. Additionally, the image may be converted to the YCbCr colour space, which is a necessary step if the image is to be output as a JPEG compressed file. Chapter 23 provides details of this transformation.
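
Applying such a matrix is a single linear operation per pixel. The coefficients below are purely illustrative (real matrices are derived from measurements of each sensor's filter set); each row sums to 1 so that neutral greys are preserved:

```python
import numpy as np

ccm = np.array([[ 1.6, -0.4, -0.2],     # illustrative correction matrix;
                [-0.3,  1.5, -0.2],     # each row sums to 1.0 so greys
                [-0.1, -0.5,  1.6]])    # pass through unchanged

def colour_correct(rgb, matrix):
    """Apply a 3 x 3 colour-correction matrix to an RGB image."""
    flat = rgb.reshape(-1, 3)
    return (flat @ matrix.T).reshape(rgb.shape)

grey = np.full((2, 2, 3), 0.5)
print(colour_correct(grey, ccm))        # still 0.5 everywhere: neutrals kept
```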

Digital zoom, resizing and cropping

Digital or electronic zoom is based on digital signal processing of the captured image, as opposed to optical zoom, which is achieved by shifting the position of lens elements to change the focal length of the lens. The image is magnified by interpolating new pixel values between the existing ones. The concept of interpolation was introduced earlier in this chapter in relation to the demosaicing process, which is a special case where missing colour values are calculated. Interpolation is more generally applied in digital zoom and various other resampling operations throughout the imaging chain, where the spatial dimensions of the image are altered, for example in enlargement, or in the correction of geometric distortion.

Where optical zoom magnifies the image and may reveal further fine detail, this is not the case with digital zoom. The optical zoom operates prior to image capture, therefore increasing the Nyquist limit of the original image. The total number of pixels will not change, but a smaller area of the original scene will be captured by that number of pixels. Digital zoom, however, is applied after image capture. As with all interpolation processes, because they are based upon an averaging of existing pixels, the magnified image may appear blurred, and may display other interpolation artefacts. The three main methods of non-adaptive interpolation are nearest neighbour, bilinear interpolation and bicubic interpolation. In practice, the latter method is most commonly used as it produces fewer artefacts than the other two, but the computationally simpler bilinear interpolation may be used if speed is an issue. Interpolation methods and their associated artefacts are discussed in Chapter 25, and examples are illustrated in Figures 25.3 and 25.4. The reduction in image quality by digital zoom means that this method tends to be used in the lower-priced end of the consumer market, mainly for camera phones and compact cameras, although some mid-range cameras may offer both optical and digital zoom features.

The image may also be resized down, or an image of a lower resolution than the sensor may be output. This is a down-sampling process. The simplest approach is to drop every other pixel, which will produce a lower-resolution image of the same scene. However, this method is rather detrimental to image quality. A better approach is to low-pass filter the image prior to down-sampling. Low-pass filters, which are described in Chapters 27 and 28, effectively reduce high frequencies within the image, resulting in a blurred image compared to the original, but after down-sampling the blurring will not be noticeable. If the image is to be resized to a non-integer ratio, then an interpolation step will also be necessary. In this case separate low-pass filtering may not always be needed, as the interpolation operation is itself a blurring or low-pass operation, but the actual implementation will vary from manufacturer to manufacturer.
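
The filter-then-decimate approach may be sketched as follows for a factor-of-two reduction, using a small separable binomial kernel as the low-pass filter; practical scalers use larger, more carefully designed filters:

import numpy as np

def downsample_half(img):
    # img: greyscale float array (H, W)
    # A small separable low-pass (binomial) filter suppresses frequencies
    # above the new, lower Nyquist limit before decimation
    k = np.array([0.25, 0.5, 0.25])
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    return blurred[::2, ::2]       # keep every other pixel in each direction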

Noise reduction

There are a range of different causes of noise in a digital camera. Noise sources and their characteristics are described in Chapter 24. They may be broadly classified into noise associated with the quantum nature of the signal itself, fluctuations independent of the signal (for example, as a result of thermal generation, or the quantization process) and those as a result of defects in individual sensor elements.

The aim in designing the analogue front end of the camera must be to reduce noise in the signal as much as possible before CFA interpolation. False colour noise, and colour phase noise, which are generated when using a digital white balance and show up as colour shifts in dark areas of the image, may be reduced by increasing the resolution of the ADC, or using analogue white balance (described in the earlier section on analogue processing). Cross-colour noise, which is characteristic of CCD sensors using CFAs, is caused by cross-colour bleeding between adjacent colour filters and may be corrected using a 3 × 3 matrix on the RGB values, as described earlier.

Another significant source of noise is the variability of dark current across the sensor. Recall that the reference black-level calibration is performed by averaging the dark current signal from optical black or shaded pixels on the sensor, and that this value is then subtracted from the pixel signals. The dark current may vary across the sensor, however, as a result of thermal generation. In a CCD, the dark noise level gradually increases from the beginning to the end of sensor readout. In some cameras the sensor is cooled to reduce dark current and the associated noise. Further, the pixel-to-pixel variations in dark level may be determined by the manufacturer and used as a mask to alter the subtracted offset value for the reference black across individual pixels early in the signal-processing pipeline.
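
A simplified sketch of this black-level subtraction is shown below; the optional per-pixel dark mask stands in for the manufacturer-measured map just described:

import numpy as np

def subtract_black_level(raw, optical_black, dark_mask=None):
    # raw: sensor values as a float array (H, W)
    # optical_black: values from the shielded border pixels of the same frame
    offset = optical_black.mean()
    if dark_mask is not None:
        # Per-pixel dark-level map, measured by the manufacturer
        offset = offset + dark_mask
    return np.maximum(raw - offset, 0.0)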

Further noise reduction may be performed using digital filtering processes. As many of the image processes implemented in the DSP, such as tone and colour correction, amplify the signal, and consequently the noise, noise reduction is usually performed prior to these processes. Additionally, the presence of noise may reduce the accuracy of the interpolation algorithms used in demosaicing, particularly if an adaptive method such as the edge-sensing interpolation algorithm described above is being used. Better results may thus be obtained by noise reduction prior to CFA interpolation, using a greyscale filtering process. This is achieved by collecting together non-interpolated pixels from each filter colour and treating each channel as a grey-scale image. Low-pass (linear) filters or median (non-linear) filters are typically used. The operations and characteristics of both are discussed in Chapter 27.
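
The sketch below illustrates such greyscale filtering of non-interpolated CFA data, assuming an RGGB Bayer pattern: each of the four filter positions is collected into its own sub-image and median filtered independently, so that no colour mixing occurs before demosaicing.

import numpy as np

def median3x3(ch):
    # 3 x 3 median filter on a single greyscale channel, edges replicated
    p = np.pad(ch, 1, mode='edge')
    stack = [p[i:i + ch.shape[0], j:j + ch.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def denoise_bayer(raw):
    # Treat each CFA position (RGGB assumed) as its own greyscale image,
    # filter it, and write the result back without mixing colours
    out = raw.astype(float)
    for dy in (0, 1):
        for dx in (0, 1):
            out[dy::2, dx::2] = median3x3(raw[dy::2, dx::2].astype(float))
    return out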

Sharpening

Sharpening is performed to counteract the blurring effects of the optical system and any interpolation processes. Blurring in the optical system is a combination of the optical limitations of the lenses and the deliberate blurring introduced by the anti-aliasing filter over the sensor. Sharpening is usually implemented using a variation of unsharp masking, known as high-boost filtering. Unsharp masking was originally a darkroom method used for sharpening images and involves the subtraction of a blurred version of the image from the original to enhance high frequencies. In high-boost filtering, an image containing only high frequencies, output from an edge detection filter, is added to an amplified version of the original. The high-frequency image is usually obtained using a Laplacian second-derivative edge-detection filter, which is a two-dimensional linear convolution filter, and the original image is amplified by multiplying it by a constant. Refer to Chapter 27 for details of all of these operations. The extent and design of the filter and the amplification factor will determine the degree of sharpening in the resulting image and the level of associated sharpening artefacts.
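
A minimal high-boost filter may be sketched as follows; the kernel is the common four-neighbour Laplacian and the amplification constant A is an illustrative assumption:

import numpy as np

def high_boost_sharpen(img, A=1.2):
    # img: greyscale float array (H, W), values 0-1
    # Laplacian (second-derivative) edge-detection kernel
    lap = np.array([[ 0, -1,  0],
                    [-1,  4, -1],
                    [ 0, -1,  0]], dtype=float)
    p = np.pad(img, 1, mode='edge')
    # Convolve by summing shifted, weighted copies of the image
    edges = sum(lap[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
                for i in range(3) for j in range(3))
    # Amplified original plus the high-frequency (edge) image
    return np.clip(A * img + edges, 0.0, 1.0)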

TONE/COLOUR RENDERING

The signal output from the image sensor is in a sensor image state, i.e. the values are entirely device dependent. Sensor image data are not viewable, fundamentally because the values have not yet been interpreted. The process of converting the data into a state in which they may be understood and reproduced as colour values by other devices such as displays or printers is known as colour rendering. Colour rendering converts the image to an output-referred image state, where its values are specified in terms of their reproduction on a real or virtual output device. The rendering may be implemented in the digital signal processor in the camera or, if unrendered RAW data are output, will be applied at RAW conversion. Colour rendering is a very complex procedure, involving multiple stages and is considered in detail in Chapter 23. It may also involve the use of colour profiles, if International Color Consortium (ICC) colour management is being implemented. This is the subject of Chapter 26. In terms of image processing, colour rendering will usually involve a tonal mapping stage, known as gamma correction (see Chapter 21). This is a process in which the (usually) linear output of the sensor is transformed using a non-linear function, which is most commonly implemented using a look-up table. The various stages in colour rendering may be performed using look-up tables or matrix transformations of the demosaiced RGB values from the sensor. White balancing and colour correction are implemented as part of the colour rendering process. The reader should refer to the relevant chapters identified above for more on these subjects.
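
As a simple illustration of the look-up-table approach, the sketch below builds an 8-bit gamma-correction LUT and applies it with a single indexing operation; the exponent of 1/2.2 is merely a typical example of a rendering curve, not the tone curve of any particular camera:

import numpy as np

def gamma_lut(gamma=1 / 2.2):
    # One entry for each possible 8-bit code value
    x = np.arange(256) / 255.0
    return np.round(255.0 * x ** gamma).astype(np.uint8)

lut = gamma_lut()
# rendered = lut[linear_image]   # maps every pixel through the curve at once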

CAMERA TYPES

Film-based camera system designs are mainly differentiated by image format, viewfinder system and complexity of camera components.

There are a diverse range of designs and price brackets for digital camera systems. The increased capabilities of desktop computers and the rapid growth in the use of the Internet have helped to stimulate the development of hardware and software for digital photography and to provide widespread application for digital images across a global network. The technology has evolved through the efforts of research and development from many different disciplines. Alongside the well-known manufacturers of photographic equipment, camera systems have been produced by manufacturers new to the market, previously involved in other types of technology.

The wide variety of digital camera designs have evolved in response to (and initiated) different methods and applications of imaging. Consider the rapid development of digital cameras incorporated into mobile phones. Early camera phone models consisted of very-low-resolution CMOS sensors and rather basic features, but models available at the time of writing may be integrated into sophisticated communications devices, which incorporate features such as network connectivity, email capability, video streaming and large high-resolution touch screens. These cameras have sensor resolutions up to 8 megapixels, built-in flash, manual and automatic focusing systems and various other advanced features, such as face detection and video capture. Improvements in image quality as a result of higher pixel counts, better optics and advanced image processing mean that camera phones have found widespread application as an alternative to compact digital cameras. This is in part because consumers are more likely to carry a mobile phone than a separate camera with them, but also because many phones are now capable of communicating image data via Bluetooth technology or broadband networks.

There are a variety of digital camera designs used for many different applications. These range from low-cost ‘toy’ cameras used mainly for capturing images for websites, up to full-frame medium-format cameras and large-format scanning backs aimed at the professional photographer, with prices running into tens of thousands of pounds. The features of the main camera formats are summarized below and examples are illustrated in Figure 14.9.

Compact digital cameras

These are by far the most popular type of digital camera among consumers. They are designed to be small, portable and easy to use, with many automatic features. Some advanced models offer a limited range of interchangeable lenses and optical converters, although the majority of these cameras have non-interchangeable zoom lenses. The LCD monitor display commonly functions as a viewfinder, and an optical viewfinder may be omitted, to keep costs and size down.

image

Figure 14.9   Types of digital camera. (a) Ultracompact. (b) High-end compact. (c) Bridge. (d) Digital SLR. (e) Medium-format digital camera and digital back.

(a)©iStockPhoto/bbee2000, (b)©iStockPhoto/Ronen, (c)©iStockPhoto/hayesphotography/Mark Hayes (d)©iStockPhoto/Joss/Jostein Hauge, (e)©iStockPhoto/Nikada

The very smallest models are often marketed as sub-compacts or ultra-compacts. These tend to be the most automated, most of them lacking optical viewfinders. With dimensions of around 100 mm × 55 mm × 22 mm and weights between 100 and 200 g, they are truly pocket-sized. Sub-compacts and ultra-compacts commonly employ CCD image sensors; with some current models boasting over 10 million effective pixels, resolution is more than adequate for consumer applications. With fewer manual controls the emphasis is on automatic features, with many advanced shooting modes and preset scene modes. Many of these cameras have a range of automatic exposure modes and also allow manual exposure setting. Some models offer functions such as macro and movie modes, and stitch assist for the production of panoramic images. Although many sub-compacts will only output JPEG compressed images, some of the newer models also output RAW files.

High-end compact cameras are sometimes labelled as prosumer compacts (professional consumer), indicating that they are marketed towards serious amateurs or professional photographers. Although small, they are bulkier than the majority of digital compacts and the external design is often more traditional, with black camera bodies and external controls for exposure and focusing more consistent with those of digital SLRs. They tend to have fewer automated features and an emphasis on improving image quality through more expensive optical components, often also incorporating image stabilization to reduce camera shake. With CCD or CMOS sensors containing up to 12 million pixels, sensor resolution rivals that of many digital SLRs, although sensor sizes are usually significantly smaller. Some of these cameras have fixed rather than zoom lenses; alternatively, they may have very-high-ratio optical zooms.

Bridge digital cameras

The name for this class of digital camera originates from their design, which is meant to provide an intermediate step between compact cameras and digital SLRs. Like high-end digital compacts, they are often marketed as prosumer cameras. They are similar in dimensions, weight and body shape to digital SLRs, but without removable lenses or the single-lens reflex viewfinder, relying on live preview of images on the LCD display and either an optical or electronic viewfinder. They have smaller sensors than digital SLRs, of dimensions more typical of compact cameras (a 1/2.5 inch sensor, with dimensions 5.8 × 4.3 mm, is typical). As a result, the lenses are smaller than those of digital SLRs, allowing a very wide range of focal lengths to be incorporated into the non-removable zoom lens. Recent models allow up to ×20 optical zoom, providing a range of focal lengths of around 20–500 mm (equivalent 35 mm focal lengths). For this reason some bridge cameras are classified as superzoom cameras. Sensor resolutions of around 10 million pixels are currently available and they include the same level of automated and advanced features as compacts alongside many of the manual controls of digital SLRs. Like compacts, many of the newer models output RAW files as well as compressed JPEGs. Some models also offer functions such as high-speed burst shooting modes, which allow the capture of many consecutive frames per second (13 frames per second, at reduced resolution, is quoted for one current model), a feature less common in compacts. The future of this type of camera design is not certain, however, due to the development of advanced features in compact cameras and the significant reductions in the cost of digital SLRs, particularly of the semi-professional models (see below), compared to a few years ago.

Digital single-lens reflex (DSLR) cameras

These cameras are very similar in design to equivalent 35 mm film cameras and, as already mentioned, some manufacturers have maintained the positioning and design of external controls from their film cameras to ease the transition for photographers from film to digital imaging. As with conventional SLRs, they use a mechanical mirror system and a pentaprism to provide an image in an optical viewfinder, and they are aimed at the advanced amateur and professional photographer markets. The main difference from film SLRs is the use of a digital sensor and the LCD display which, unlike compact and bridge cameras, functions only as a playback device of captured images and not as a viewfinder. They incorporate the majority of features in terms of focus, exposure and exposure metering modes of film SLRs and have a similar range of accessories. They may also include many of the advanced features specific to digital cameras, such as the ability to apply custom or preset picture ‘styles’ to the image (e.g. to change saturation for the requirements of different types of scene content or to produce monochrome images). They do not, however, have the large range of automated functions characteristic of compact cameras.

Sensor dimensions are larger than those used in compact and bridge cameras (see Table 14.2), and CMOS sensors are more common than in those camera classes, although many digital SLR models still employ CCDs. Sensor resolutions in recent models range from around 10 million up to 24.6 million pixels for some full-frame sensor formats which, as described earlier, are equivalent in dimensions to 35 mm film.

Digital SLRs fall into two broad classes. Semi-professional digital SLRs, aimed at the serious amateur, are significantly cheaper than professional digital SLRs (the majority of these are 35 mm equivalents, although there are some medium-format models available, described in the next section). In the UK at the time of writing, the former category are generally under £1000, whereas the cost of a professional SLR may run to several thousands of pounds for the camera body alone. The lower price of semi-professional DSLRs is reflected in their smaller sensors, in the lower optical quality of the lenses that accompany them and in the build quality of the camera body, which tends to be manufactured from cheaper and less resilient materials than that of the higher-end professional cameras. The smaller sensor size means that the effective focal length of lenses from film SLRs is increased. Otherwise, they have similar features to their more expensive counterparts, but may have a wider range of automated features to appeal to the consumer market.

Digital cameras and camera backs for medium and large format

It is difficult and expensive to manufacture large-area digital sensors without significant imperfections. This has meant that the development of digital cameras for the production of images equivalent to those of medium- and large-format film cameras has been much slower than for smaller formats. This is also partly due to the fact that photographers using larger image formats occupy a relatively small section of the professional market. These camera systems are prohibitively expensive for all but the professional photographer, and in general are significantly more expensive than their film equivalents.

A number of different options are available for digital medium format. These include medium-format digital SLRs with interchangeable lenses, digital backs to be used with existing medium-format camera systems and digital medium-format camera systems with similar design and viewfinder arrangements to medium-format film cameras.

Medium-format digital SLRs have larger sensors than 35 mm equivalent digital SLRs and use medium-format lenses. They vary in design; some have a very similar camera body to smaller format digital SLRs, with a fixed pentaprism viewfinder on top of the camera body which houses the sensor and reflex mirror system. An example is the earliest model, Mamiya’s ZD, which was announced at Photokina in 2004. Others look more like traditional medium-format camera systems and are modular in design, allowing the selection of different viewfinders, for example. Both types are relatively compact compared to other digital medium-format camera systems. They typically employ a 48 mm × 36 mm CCD sensor, with sensor resolutions up to 50 million pixels in models announced in 2008.

Medium-format camera systems have camera bodies based on the design of those in traditional medium-format camera systems, with interchangeable viewfinders (45°, 90° and waist level) and lens diaphragm shutter systems. The sensor is housed in a digital back, which in some cases is compatible with many different medium- and large-format cameras. Current models have CCD sensors which range from square format 36 × 36 mm, with 16 million effective pixels, up to true wide-frame sensors of 56 × 36 mm with 56 million effective pixels in recently announced models. As mentioned above, the digital backs may be sold separately for use with existing camera systems.

As already mentioned, some of the digital backs using area arrays for medium-format may also be used with large-format cameras. The other option for large format is the use of a scan back, which may be integrated into a camera system or may be modular for use with view cameras. Scan backs employ trilinear CCDs and operate by scanning across the image area. They are therefore only suited to still-life subjects and reproduction work. Resolutions of up to around 400 million pixels are available at the very top end of the market. Even at this resolution they do not fully cover the dimensions of 4 × 5 inch format film (102 × 127 mm), being closer to 3 × 4 inch (76 × 102 mm), but more than match the resolution required.

Specialist digital cameras

Cameras for industrial and scientific applications are offered by specialist manufacturers. These high-performance cameras are designed for specific purposes. The image quality requirements of the applications for which they are designed, which involve specific techniques under particular imaging conditions, determine the camera designs and are very different from the needs of standard consumer and commercial imaging applications. For example, in some cases the sensors used may not offer the same resolution as those used in consumer cameras, but the design and performance of the sensor under specific conditions will be optimal. Much emphasis is of course placed upon optical design, as many criteria will be the same as for film cameras and imaging devices for scientific applications.

Industrial cameras may be used for applications as diverse as automated optical inspection, metrology, flat-panel inspection, traffic management, biometrics, three-dimensional imaging and many others for different industries. Original equipment manufacturer (OEM) components are also available to be embedded in systems by other manufacturers (embedded OEM). Scientific cameras are designed for extended vision applications, such as digital microscopy, for use in medical and forensic imaging. Surveillance applications require specialist high-resolution cameras with superior performance under low light levels and may use specially designed sensor architecture and advanced image-processing techniques. In digital cameras for astrophotography, the emphasis is on fast frame rates and extremely low noise. As well as the advantages in terms of flexibility and speed offered by digital image sensors, many of these applications use digital image processing extensively to further expand the capabilities of the imaging systems.

IMAGE SCANNERS

Digitization of photographic originals, either on film or print, is carried out using image scanners. The first scanners developed were the Murray and Morse scanner in 1941 and the Hardy and Wurzburg scanner in 1948, aiming to produce continuous-tone photographic plates.

image

Figure 14.10   A flatbed scanner.

©iStockphoto.com/Hofpils

There are several different types of scanner, the most common being: drum scanners, used for scanning film and transparencies; flatbed scanners (see Figure 14.10), used for scanning printed photographs, documents and, with some models, film; and dedicated film scanners. A more recent type of scanner technology, the Flextight scanner, combines some aspects of both drum scanners and film scanners. All-in-one devices consist of a flatbed scanner and printer combined in one unit.

When a material is scanned, it is illuminated by a suitable light source and the transmitted or reflected light from the scanned material is captured by a digital sensor. The sensor’s voltages are converted to digital values by the analogue-to-digital (A/D) converter (see Figure 14.11).

Colour scanners usually have three rows of CCD sensor elements (i.e. three linear CCDs) covered with red, green and blue (RGB) filters, to separate the image into three colour channels. Colour separation may alternatively be achieved using an unfiltered linear CCD sensor and a light-emitting diode (LED) RGB array which provides a separate red, green and blue flash for each scanning line. An exception is the multi-spectral scanner described later in this chapter.

In scanners that employ three filtered rows of CCDs, the filters have narrowband characteristics and their red, green and blue peaks are selected to provide a reasonable match to the characteristics of the dyes of a range of hard-copy/film original media. For example, in scanners designed mainly for digitizing photographic images, the RGB peaks of the filters closely match the absorption of the subtractive cyan, magenta and yellow dyes typical of photographic paper. In practice, the effect of metamerism (see Chapter 5) may occur in some cases, where two colours that appear visually different in the hard-copy image result in the same RGB values in the digitized image.

The method of image capture, type of sensor and light source depend on the type of scanner. Temporal stability is an important characteristic of the light sources used in scanners and depends on the type of source. Good stability for the duration of an image scan is essential; variations in the source can be compensated either by correcting the reflectance or transmittance values of the scanned image or by adjusting the power supply of the source, but both approaches increase the cost of the scanner. The source's spectral power distribution is an important parameter in colour reproduction. In early scanners fluorescent lamps were used as light sources. They were later replaced by cold cathode fluorescent lamps (CCFLs) and more recently by xenon arc lamps and lasers. At the time of writing, white LED lamps have been introduced in some models, minimizing the required warm-up time.

Non-uniformity of the illumination produced by the light source, together with variation in lens transmittance and variation in the sensitivity of individual elements in the digital sensor, may introduce spatial non-uniformity at a pixel level, which is very difficult to correct in the digital image. For this reason scanners are calibrated using targets specifically designed for the purpose, and the results are used to calculate individual values of electronic gain for each pixel, i.e. the analogue-to-digital units (ADU) per electron. This information is stored in the memory of the scanner and is applied to the digital output image during the scanning process, ensuring that the response of all pixels is the same. Further variation in the scanned image may result from various other sources of noise present within the system (see Chapters 9 and 24).
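
The principle of this per-pixel gain correction may be sketched as follows, assuming a single scan of a uniform calibration target; a real calibration would also average several scans and account for dark signal:

import numpy as np

def per_pixel_gain(flat_scan):
    # flat_scan: scan of a uniform calibration target, float array (H, W)
    # Elements that responded weakly receive a gain above 1, and vice versa
    return flat_scan.mean() / flat_scan

def apply_gain(raw_scan, gain):
    # Equalizes the response of all pixels in subsequent scans
    return raw_scan * gain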

image

Figure 14.11   The basic components of a scanner, where Io(λ) is the spectral distribution of the scanner illumination, Tf(λ) is the spectral transmittance (or reflectance) of the medium, To(λ) is the spectral transmittance of the optical lenses, D(λ) is the sensitivity of the imaging sensor, V are the image sensor voltages and PV are the converted digital values (or pixel values).

Adapted from Triantaphillidou (2001)

It should be noted that scanners may be affected by extreme temperatures and humidity. A suitable location for the scanning device must therefore be selected to minimize the effect of environmental conditions. The use of a continuous power supply is also recommended because scanner components may be damaged by power surges. The scanner should therefore be attached to an uninterruptible power supply (UPS) device.

TYPES OF SCANNERS

Drum scanners

In a drum scanner the original material, transparent or reflective, is mounted around a clear drum and illuminated by a high-intensity xenon or tungsten-halogen lamp (Figure 14.12). When scanning transparent materials, the light source is located inside the drum, while for scanning reflective materials it is located outside the drum. The light is focused on the original material and rotation of the cylinder at high speed causes the focused light spot to move along the cylinder and progressively scan the original. The focused light that passes through (or is reflected from) the original material reaches a set of dichroic mirrors. These mirrors divide the light into red, green and blue (RGB) components via RGB filters and divert it to a sensor unit with photomultiplier tubes (PMTs). PMTs are devices which detect photons. They are sensitive to a range of wavelengths from the short ultraviolet (UV) wavelengths to the far infrared (IR). Each detected photon produces a current pulse, and these pulses together form the analogue signal output by the PMTs. These data are converted into a digital form using an ADC. The number of quantized levels depends on the bit depth of the ADC (as described earlier in the chapter, the number of levels = 2^b, where b is the number of bits allocated per pixel). Cooling of the PMTs is essential to reduce dark current (see Chapter 9).

image

Figure 14.12   In drum scanners the original hard-copy material is mounted around a clear drum. The light is focused on the hard-copy material and the cylinder rotates. The focused light spot moves along the cylinder and progressively scans the original. Dichroic mirrors divide the light into red, green and blue components and divert it to a sensor unit with PMTs. The data from the PMTs are converted into a digital form by an analogue-to-digital (A/D or ADC) converter.

A limitation of drum scanners is the requirement that the original is flexible and can be mounted around the scanner’s cylinder. However, for the majority of photographic materials this is not an issue. Bending the original allows very-high-resolution scanning; these scanners represent the professional end of the market in terms of both scanned image quality and cost.

Flatbed scanners

The original primary function of flatbed scanners was to scan reflective materials, but many currently available models also scan transparent materials. They employ either a CCD sensor or a contact image sensor (CIS) instead of photomultiplier tubes (see Chapter 9). A flatbed scanner with a CCD sensor contains a linear array of CCD elements.

Models that record colour information with three passes (for red, green and blue) contain a CCD sensor with a single row of elements. Each pass records one colour channel. The colour of the light is changed after each pass, either by switching RGB light sources or by using white light and switching RGB filters. One-pass image capture employing an unfiltered array has also been employed with the use of RGB lamps which flash sequentially during the scanning of each row. Models that record colour information with one pass use a trilinear CCD array (with three rows of elements). In this case the CCD elements in each row are filtered red, green and blue (Figure 14.13a).

It should be noted that there are variations in the CCD technology employed; for example, high-resolution flatbed scanners may employ a six-row CCD, an array with double rows of CCD elements for each of the red, green and blue channels. In this case, high resolution is achieved by the overlapping of CCDs instead of employing CCD elements with very small dimensions, which helps to reduce the level of noise typically associated with small sensing elements. During scanning, each line of the original is scanned by both rows. The data from both rows are then combined to give a single red, green and blue output for each line. An example of this technology is the Canon Hyper CCD (Figure 14.13b).

image

Figure 14.13   (a) A CCD trilinear array with RGB filtered elements. (b) A CCD array with double RGB rows of CCD elements.

When scanning reflective materials the hard copy is placed face down on the top glass plate (Figure 14.14). During the scanning process a linear light source, which can be a fluorescent tube or a halogen lamp, together with a mirror, moves down the length of the hard copy. The light is reflected by the hard-copy image and then by the mirror. With the aid of a second mirror it is directed on to the CCD array via a lens unit with a fixed magnification. The magnification depends on the size of the hard-copy image and the size of the sensor.

Because the original hard copy is placed on a glass plate, thorough removal of any dust or marks on the glass and the hard-copy surface is essential, otherwise they will appear on the digitized image. Although they can be removed later using imaging software, productivity may be adversely affected if a large number of images are scanned daily. Scratches on the print (or film) can be removed digitally or, in some scanners, they can be corrected using a scratch reduction feature described later in this chapter.

image

Figure 14.14   Cross-section of a flatbed scanner.

Flatbed scanners for reflective materials can be converted to scan transparent materials using a light-transmitting optical system which is embedded in the scanner cover. This system provides good-quality images when large-format transparencies (up to 200 × 250 mm) are scanned. For 35 mm film the optical resolution of the scanner should be greater than 2000 pixels per inch for good results. Optical resolution is discussed in more detail later in this chapter.

A flatbed scanner with a contact image sensor (CIS) is slimmer and lighter compared to a scanner with a CCD sensor. A CIS consists of red, green and blue light-emitting diodes (LEDs) which illuminate the image at a 45° angle, and a row of CCD or CMOS sensors which capture the reflected light via a lens array located above the sensors (see Figure 14.15). The width of the sensor row is equal to the width of the scanning area. With the use of CIS technology there is no need for an optical system, lamps or filters. For this reason the scanner has lower power consumption and reduced manufacturing costs. With a CIS, geometrical distortions of the scanned image, which may be introduced by the lens and mirrors in a CCD scanner, are eliminated. The colour gamut of a CIS scanner depends on the spectral output of the LEDs rather than the RGB filters.

Film scanners

Scanning of films is carried out by dedicated (film/transparency transmission) scanners, which also employ a CCD sensor. These scanners give higher-quality images than the flatbed scanners described above, due to their higher dynamic range and resolution. As with scanners for reflective materials, the CCD sensor is linear, comprising one row of unfiltered CCD elements (for capturing colour with three passes) or three filtered rows of CCD elements (for one pass). In most scanners the sensor is stationary and the film moves across the sensor. In the case where the scanner comprises an area CCD array, the image information is recorded with a series of exposures with red, green and blue light. It should be noted that when films are scanned, the emulsion should face the sensor to eliminate any diffusion of the image. Diffusion may occur as a result of light passing through the thicker film base after going through the emulsion. In addition, when the emulsion faces the sensor, the optics of the scanner are focused on the emulsion.

image

Figure 14.15   A scanner with CIS technology is based on a row of CCD or CMOS sensors equal to the width of the scanning area. They capture the light reflected by the original material via a lens array.

Flextight scanners

Flextight scanners employ a CCD image sensor and a magnetic flexible holder which bends the original around a virtual drum. The bending of the original helps to improve the scanning resolution, and there is no glass between the original and the sensor, which helps to improve the quality of the scanned image. The CCD array remains stationary while the original (which may be negative, positive or print media) is rotated. Illumination of the original is performed by a cold cathode light tube, which emits very low levels of IR radiation and therefore heat. The design of the scanner enables a simpler arrangement of the lens, without the use of mirrors. When different formats are scanned, the lens zooms accordingly to ensure that the CCD resolution corresponds to the width of the original. Another feature of the Flextight scanner is that the intensity of the light source changes depending on the density of the original. The maximum resolution of this type of scanner is, at the time of writing, 8000 ppi. For this reason, and because of the high dynamic range of these scanners, very-high-quality results may be achieved when scanning medium- and large-format films. These scanners are significantly more expensive than other types of desktop scanner and are therefore aimed at the professional market, but provide an affordable alternative to drum scans.

Multi-spectral scanners

Image capture in three colour channels (RGB) has limitations when there is a need to derive colorimetric data from the digital image, because the RGB values of the image depend on the scanner characteristics. Solutions such as colour correction via scanner characterization (see Chapter 23) provide accurate results only for the specific media for which the scanner was characterized. For specialized applications where colour fidelity is crucial, multi-spectral imaging is proposed. It was initially developed for remote sensing applications, where the spectrum captured may exceed the visible spectrum.

In multi-spectral imaging colour is captured in more than three channels using suitable narrowband filters. It should be noted that the amount of data produced by this method is much larger than that from scanners capturing an image in three channels, which has an effect on processing time and image storage. The number of channels in which spectral data are captured depends on the system. The digitization of works of art is an example of an application employing multi-spectral scanning, as its larger colour gamut provides more information on the characteristics of the original, often necessary for restoration purposes.

SCANNER CHARACTERISTICS

Sampling and resolution

When a hard-copy photographic image is scanned, its continuous tones are represented in the output digital image by an array of digital values. The value of each element of the array represents a sample of the reflectance of the original at a corresponding discrete location of that image. The array, however, has a finite number of values, so sampling a spatially continuous-tone image means that some of its spatial detail may be lost. As described in previous chapters, determination of the sampling rate which avoids significant information loss is provided by the Whittaker–Shannon sampling theorem. According to that theorem, the spatial frequencies that can be fully recovered from the discrete samples are the ones that are below the Nyquist frequency (see Chapter 7). Frequencies above the Nyquist produce aliasing.

The resolution of a scanner is related to its sampling rate and is measured in pixels per inch (ppi) or samples per inch (spi). Resolution is sometimes, incorrectly, quoted in dots per inch (dpi); this term more correctly refers to output resolution, such as that of a printer or display. It should be noted that there may be a difference between the optical resolution of a scanner and the resolution quoted by the scanner manufacturer. The optical resolution depends on the number of pixels in the scanner's sensor and the pixel pitch. The quoted resolution may be either the optical or an interpolated resolution, in which interpolation is used to increase the number of pixels in the image (see Chapter 25). However, because the data in the interpolated pixels are computed from neighbouring pixels, the image does not contain any additional detail. As described in the earlier section on digital cameras, the quality of the image depends on the interpolation algorithm, and the interpolation method used by the scanner driver may not be quoted by the manufacturer. If an image with a resolution different from the optical resolution of the scanner is required, it is usually preferable to scan the original at the optical resolution and then apply a suitable interpolation method using imaging software. This provides better control over the quality of the final digital image, and is the approach used in workflows where the main requirement is to obtain and maintain optimal image quality.

The optical resolution depends on the type of scanner. A dedicated film scanner has a typical optical resolution in the range of 1000–4800 ppi and a drum scanner in the range of 8000–12,000 ppi. At the time of writing, the optical resolution of a flatbed scanner can be up to 6400 ppi. In most cases, the optical resolution is quoted by manufacturers using two numbers, for example 1200 × 2000 ppi. The first number refers to the optical resolution of the linear array image sensor. The second number, which may differ from the first, is the resolution in the direction perpendicular to the linear array.

It is important to understand that the optimal resolution for scanning may also depend on the resolution of the output media as well as the image quality requirements of the workflow. If the requirement from the workflow is for efficiency and speed of processing for a particular output, then scanning at a resolution higher than that of the output media will result in a large file size with information that will not be used by the output device. The large file size requires greater computational time during image processing and it occupies more space on the computer’s hard disk. Although for a small number of images this may not appear to be significant, it may have an adverse effect on efficiency and productivity when large numbers of images are scanned, manipulated and stored daily. Calculation of the scanning resolution in this case is carried out by taking into account the physical dimensions of both the hard copy and the output image and the resolution of the output device:

Rs = Ro × (Sd / So)

where Rs is the required scanning resolution, Ro is the output device resolution, So is the original size and Sd is the desired size.
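
For example, a 6 × 6 cm original that is to be reproduced at 30 × 30 cm on a 300 dpi printer should be scanned at Rs = 300 × (30/6) = 1500 ppi.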

Dynamic range

The dynamic range (or density range) of a scanner is dependent on its sensor and represents the range of density values that it can distinguish and capture. It is measured as the difference between the optical density of the darkest shadows (Dmax) and the optical density of the brightest highlights (Dmin). The dynamic range, DR, can be expressed as (see also Chapter 21):

DR = Dmax − Dmin

The values of the dynamic range are in a logarithmic scale and can range from 0 (Dmin) to around 4.0 (Dmax). Scanner technical specifications may include the Dmax value instead of the dynamic range of the scanner. It should be noted, however, that there are losses due to the analogue-to-digital conversion which reduce the measured dynamic range of a scanner. Disparity between the dynamic range of the scanner and the original hard copy has an effect on the range of tones that will be represented in the digital image. When the original image has a higher dynamic range than the scanner, some of its tones will be clipped. Most reflective scanners nowadays have higher dynamic range than the printed material.
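
For example, a scanner with a Dmin of 0.2 and a Dmax of 3.4 has a dynamic range of 3.4 − 0.2 = 3.2.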

Some manufacturers relate the dynamic range of a scanner to the bit depth of the analogue-to-digital conversion. As seen earlier in the chapter, the computed dynamic range, DRcomp, relative to the bit depth, is given by the following equation:

DRcomp = log10(2^n) = n log10(2)

where n is the bit depth of the A/D conversion. Using this equation, a high bit depth results in high computed dynamic range.
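
For example, a 12-bit conversion gives a computed dynamic range of log10(2^12) ≈ 3.6, while a 16-bit conversion gives log10(2^16) ≈ 4.8.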

This number, however, merely represents the number of tones that the scanner is capable of reproducing. As for digital cameras, it does not take into account the dynamic range of the sensor, which may be higher or lower, nor any effects from the analogue components which may reduce the final dynamic range of the digital image. For this reason, a number of methods for measuring the dynamic range of a scanner have been developed. These methods employ a test chart with greyscale patterns; the greyscale must have a density range similar to that of the scanned material. It should be noted that the dynamic range of the scanner’s output image depends on the material that is scanned. Several ways of determining the Dmin and Dmax of the scanner for measuring its dynamic range have been proposed. The International Standards Organization (ISO) has published the standard ISO 21550:2004 on measuring the dynamic range of scanners. In this standard the Dmin is defined as ‘the minimum density where the output signal of the luminance opto-electronic conversion function (OECF) appears to be unclipped’. The Dmax is defined as ‘the density where the signal-to-noise ratio (SNR) is 1’. The OECF relates input values and output values of the scanner (see Chapter 21). The SNR is determined by the equation:

SNRi = (gi × Ti) / σi

where σi is the standard deviation of the density of patch i, gi is the incremental gain of patch i and Ti is the transmission level of patch i. The index i runs over the grey patches, with imin the lightest and imax the darkest patch.

Incremental gain is defined in the ISO standard as the rate of change in the output level divided by the rate of change in the input density. The dynamic range is calculated individually for the red, green and blue channels. To report a single value for dynamic range (DR) the individual values for the red, green and blue should be weighted as follows:

DR = 0.2125 × DR(R) + 0.7154 × DR(G) + 0.0721 × DR(B)

where DR(R) is the dynamic range for the red, DR(G) is the dynamic range for the green and DR(B) is the dynamic range for the blue channel.

Bit depth

As previously mentioned, the analogue values from the image sensor are converted to discrete n bits per pixel digital values via the ADC. The bit depth (number of bits per pixel) defines the number of output grey levels per pixel (see Chapter 1). For a colour scanner, the bit depth defines the number of colours that can be reproduced. At present most models offer output bit depths up to 48 bits (16 bits per channel). Increase in the bit depth, however, results in additional quantization levels, increasing the file size. This requires more memory and results in increased processing time of the image. To prevent this, 48-bit images may be reduced to 24-bit colour output via the scanner software. The levels chosen are those which produce visually equal changes in brightness. Depending on the application and the available storage space, it may be preferable to store digital images in 48-bit depth. This could provide more options for future processing of the image (see Chapter 25).
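
At 48 bits, for example, each channel has 2^16 = 65,536 levels, giving around 2.8 × 10^14 possible colour values, compared with 256 levels per channel and around 16.7 million colours for 24-bit output.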

Scanning speed

Scanning speed is determined by the resolution of scanning in the direction perpendicular to the sensor array; a high resolution results in a low scanning speed. Scanning speed varies between models and may affect productivity when large numbers of images are scanned on a daily basis. A flatbed scanner, for example, may need 14 seconds to scan an A4 colour print at 300 ppi and 25 seconds to scan the same print at 600 ppi. When scanning film there are also differences in speed between flatbed and dedicated film scanners: the flatbed scanner may need 35–50 seconds for a 35 mm film, positive or negative, while a film scanner may need around 20–50 seconds. The scanning speed may also be affected by the ADC. In the past, the time needed to store the image also affected the overall scanning speed, but the faster data storage available nowadays means that this no longer has a significant effect.

Image transfer

There are four methods by which a scanner can be connected to a computer: parallel port, universal serial bus (USB), small computer system interface (SCSI) and FireWire (also known as IEEE-1394). The parallel port connection is rarely used today because, at a transfer rate of 70 KB per second, it is the slowest of the alternatives; it has now been largely replaced by the faster USB connection. The latest version, USB 2.0, is capable of transfer speeds of up to 60 MB per second, much higher than the 1.5 MB per second of the older USB 1.1, and is the most common connection standard at present. The SCSI connection is a faster connection with very high data rates; for example, the Ultra SCSI standard provides data rates as high as 160 MB per second. It can, however, be complicated to configure because it requires either an SCSI controller or an SCSI card in the computer. An advantage of the SCSI connection is that multiple devices can be connected to a single SCSI port; for example, up to eight devices can be connected in SCSI 2.

FireWire is used by scanners that have very high output resolutions which need faster transfer rate due to the high volume of data. It is faster than USB 1.0 and is comparable to earlier SCSI and to USB 2.0.

Scanner drivers

Scanner manufacturers provide the user with drivers, programs which provide control over the settings of the device. An additional option provided by the manufacturer is the control of the scanner via a TWAIN driver (TWAIN is not an acronym), an interface between the scanner hardware and the imaging software. This enables communication between the scanner and different imaging software applications. Third-party scanning software is also available. In some cases, the scanner driver may give more flexibility compared to scanning via an imaging software application using the TWAIN driver.

image

Figure 14.16   The process of acquisition, gamma correction and output performed during scanning.

Adapted from Triantaphillidou (2001)

Several options are available in the driver for setting the scanning parameters. These parameters may include sharpening, resolution, bit depth (colour, greyscale, black and white), colour balance, colour saturation, brightness, contrast and gamma correction (see Chapter 21). The option of adjusting gamma correction may be provided either by setting a gamma value (which is usually the inverse of the effective gamma applied in scanning) or by adjusting the curves for the three RGB channels, individually or combined. Gamma correction is often applied to a 12-bit or a 16-bit signal which is then down-sampled to 8 bits (Figure 14.16). This is explained in more detail in Chapter 21.
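
The advantage of applying the correction at high bit depth may be illustrated with a short sketch, in which the gamma curve is evaluated over all 65,536 16-bit input codes before quantization to 8 bits, avoiding the posterization that would result from applying the curve to 8-bit data directly. The gamma value shown is illustrative only.

import numpy as np

def scanner_gamma_16_to_8(raw16, gamma=1 / 2.2):
    # Build a LUT over all 65,536 16-bit input codes, then quantize to 8 bits
    x = np.arange(65536) / 65535.0
    lut = np.round(255.0 * x ** gamma).astype(np.uint8)
    return lut[raw16]              # raw16: uint16 array from the scanner ADC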

The driver usually provides the option to use colour management and profiles, and some drivers allow the use of custom profiles. When films are scanned, additional parameters need to be set, including the type of film (black and white, positive or negative) and format. Because the characteristics of different photographic dyes vary, some scanners may allow the user to specify the brand of the film so that suitable profiles can be used. Additional features may also be included, such as red-eye reduction, colour restoration or shadow correction. Another feature is the removal of artefacts in the digital image caused by dust and scratches on the film. The dust or scratch is first detected by irradiating the film with IR radiation; the artefact is then located and removed using image processing, combining information from the surrounding pixels obtained when the film is illuminated with white light. Selection of the output colour space (for example, sRGB, Adobe RGB – Chapter 23) may be available. The output digital RGB image can be saved via the driver as a TIFF, bitmap, JPEG or, in some scanners, PNG file (see Chapter 17).

BIBLIOGRAPHY

Brown, D.S., 2008. Image Capture Beyond 24-Bit RGB. Available from: http://www.imaging.org/resources/web_tutorials/Image_Capture/image_capture.cfm (accessed 15 August 2008).

Dougherty, E.R., 1999. Electronic Imaging Technology. SPIE Press, Bellingham, WA USA.

Gonzales, R.C., Woods, R.E., 2002. Digital Image Processing. Prentice-Hall, New Jersey.

Gunturk, B.K., Glotzbach, J., Altunbasak, Y., Schafer, R.W., Mersereau, R.M., 2005. Demosaicking: color filter array interpolation. IEEE Signal Processing Magazine, January, 44–54.

Hunt, R.W.G., 2004. The Reproduction of Colour, sixth ed. John Wiley, Chichester UK.

International Standards Organization (ISO), 2004. ISO 21550:2004 Photography – Electronic Scanners for Photographic Images – Dynamic Range Measurements.

Jacobson, R.E.J., Ray, S.F.R., Attridge, G.G., Axford, N.R., 2000. The Manual of Photography, ninth ed. Focal Press, Oxford UK.

Keys, R.G., 1981. Cubic convolution interpolation for digital image processing. IEEE Transactions on Acoustics, Speech, and Signal Processing 29, 1153–1160.

Langford, M.L., Bilissi, E., 2007. Langford’s Advanced Photography, seventh ed. Focal Press, Oxford UK.

Lee, J., Jung, Y., Kim, B., Ko, S., 2001. An advanced video camera system with robust AF, AE, and AWB control. IEEE Transactions on Consumer Electronics 47 (3), 694–699.

Lukac, R., 2008. Single-Sensor Imaging, Methods and Applications for Digital Cameras. CRC Press, Boca Raton, FL USA.

Nakamura, J., 2006. Image Sensors and Signal Processing for Digital Still Cameras. CRC Press, Boca Raton, FL USA.

Parulski, K., Rabbani, M., 2000. Continuing evolution of digital cameras and digital photography systems. IEEE International Symposium on Circuits and Systems, 28–31 May, Geneva, Switzerland.

Ramanath, R., Snyder, W.E., Yoo, Y., Drew, M.S., 2005. Color image processing pipeline: a general survey of digital still camera processing. IEEE Signal Processing Magazine, January, 40–44.

Sharma, G., 2003. Digital Color Imaging Handbook. CRC Press, Boca Raton, FL USA.

Stroebel, L.D., Current, I., Compton, J., Zakia, R.D., 2000. Basic Photographic Materials and Processes, second ed. Focal Press, Oxford UK.

Sturge, J.M., Walworth, V., Shepp, A. (Eds.), 1989. Imaging Processes and Materials – Neblette’s, eighth ed. John Wiley, New York, USA.

Triantaphillidou, S., 2001. Aspects of Image Quality in the Digitisation of Photographic Collections. Ph.D. thesis, University of Westminster, Harrow, UK.

Vrhel, M., Saber, E., Trussell, H.J., 2005. Color image generation and display technologies, an overview of methods, devices, and research. IEEE Signal Processing Magazine, January, 23–33.

Wueller, D., 2002. Measuring scanner dynamic range. Society for Imaging Science and Technology (IS&T) PICS Conference, Portland, OR, pp. 163–166.
