Essential camera controls

There are a number of mechanical and electronic controls that are found on most broadcast camcorders. The position on the camera of some of these controls will vary between make and model but their function will be similar. The essential operational controls have a significant effect on the recorded image and their function must be understood if the operator is to fully exploit the camcorder’s potential. Each switch and function is discussed as a separate topic in the following pages but their effects may interact with other control settings (e.g. white balance and colour temperature filters). The newcomer to digital camerawork should experiment with a range of settings and observe the effect on the recorded image. Camerawork is a craft not a science and a video camera is simply a means to a production end.

Exposure, for example, can be described as a procedure that satisfies certain technical requirements. In those terms it appears to be a mechanical procedure that is objective and unaffected by the person carrying out the operation. But quite often exposure is customized to a specific shot and the engineering quest to produce a perfect electronic reproduction of a subject may be ignored to satisfy more important subjective criteria. Subjective production choices influence whether a face is in semi-silhouette or is fully lit and exposed to see every detail in the shadows.

It is this ‘subjective’ element that often confuses someone new to camerawork in their search for the ‘right way’ to record an image. A useful first aim is to achieve fluency in producing television pictures that are sharp, steady, with a colour balance that appears normal to the eye and with a contrast range that falls within the limitation of a television display. The pictures should be free from noise and with a continuity in skin tones between shots of the same face. Similar basic aims are required in recording audio. When this is achieved, the means of customizing an image or audio to serve a specific production purpose can be learnt. Quite often the standard basic picture can be produced with the aid of the auto features on the camera. To progress beyond this point an understanding of how to manipulate the operational controls needs to be gained.

Operational controls

There are a few controls that need adjustment or at least checking on their existing setting before every shot. These include:

■  tape remaining and state of battery charge;

■  focus, exposure, colour temperature correction filter position, white balance setting;

■  gain, shutter speed and timecode;

■  adjustment of audio recording level.

The camera set-up controls such as gamma, onset of zebra setting, auto exposure settings, etc., are unlikely to be regularly adjusted unless there is a production need. They can be left at their factory setting until more experience is gained in video recording.

Typical camera control layout

[Figure]

Auto and manual controls

If you have ever used a spell checker on a word processor and found the auto device consistently queries correct words such as your address then you will understand why it is important to intelligently use and monitor the auto features available on a camcorder. A spell checker can be taught new words but the auto features on a camera remain as originally programmed although digital processing now allows much more flexibility in customizing auto facilities. An auto interpretation of what is the main subject in the frame may frequently be at odds with your own requirements. This is why a camcorder used for broadcast production purposes should have manual override on all auto facilities as there are going to be many occasions when the cameraman needs to take control and customize the image the way he/she wants it and not the way the auto feature is programmed to deliver. Auto control is a form of computer control and is stubbornly unintelligent in many situations.

Auto-exposure (see pages 82–3) has advantages and can sometimes accommodate rapid changes in light level faster than any manual correction. But the downside to this is the obvious visual change in the picture. For example, an interviewee may be wearing a white dress in sunlight. Using manual exposure it may be possible to find a compromise aperture setting that provides detail in the dress with acceptable skin tones. If cloud obscures the sun, the light level falls and a reasonable face exposure may need the lens opened by nearly a stop compared to the bright sunlight condition. If this is done, as soon as the sun appears the dress will burn out and the lens needs to be stopped down. Some cameras provide a ‘soft’ transition between two auto-exposure conditions (see page 40), so that the sudden auto-exposure adjustment is not so noticeable. Opening up and stopping down can be done manually to avoid abrupt exposure changes but the cameraman will need to be very attentive to follow a number of rapid light level changes.

Audio auto-gain can be convenient in one-man operations but produces too rapid an attenuation of the main subject sound if a loud background sound occurs. The voice level of a street corner interviewee using audio auto-gain can be pushed down to unintelligibility if a lorry passes close to the live side of the microphone. Obviously manual control is the better option in this situation. But if covering an item where a loud explosion is going to occur (e.g. a twenty-one-gun salute for a VIP), where there is no opportunity to manually check audio levels, auto audio level should accommodate the sudden very loud noise. Deciding between auto and manual control depends on circumstance, and neither operational technique should be used continually as a matter of habit.

Manual control

[Figure]

The voice level of the reporter is manually set. The approaching aircraft will eventually force the sound level into the clip level (see page 202) but the reporter’s voice level will remain consistent throughout the shot. Television is often a compromise and by staging the item at the end of the runway the appropriate shot is obtained at the expense of far from acceptable sound.

Good auto control

[Figure]

To follow someone from an exterior which has been correctly white balanced with a daylight colour correction filter into a tungsten-lit interior would produce a big change in the appearance of the picture – it would become very yellow. One technique is to work on auto-exposure and switch between two white balance positions A and B which have been set on the same filter in exterior and interior. Many digital cameras have an ‘auto white balance correction’ and this can achieve a better adjustment when moving between light of different colour temperature. The colour balance is instantly adjusted moving into tungsten light although the adjustment to exposure may still be noticeable.

Lens controls

The three main operational controls on the lens are focus, zoom and the lens aperture (f number).

■  Focus: Sharp focus of the image is achieved by looking at the viewfinder image and either manually adjusting the focusing ring on the lens or by connecting a cable to the lens and controlling the focus by a servo control mounted on a pan bar. Usually there is a switch on the zoom rocker module marked M (for manual control) and S (for servo control). On some DV format cameras there is the option of auto-focusing. As with other auto facilities, its value is limited and will depend on the shot and on whether it is sufficiently well designed and set up that it does not continually hunt for focus.

As we have discussed on page 52, the zoom must be prefocused on its narrowest angle before recording a zoom into a subject, and the back focus of the lens must be correctly aligned (see page 49) for the sharpest focus to be maintained throughout the zoom.

If the required object in frame is closer to the lens than its minimum object distance (MOD, see page 47), it may be possible to bring the subject into sharp focus by adjusting the macro ring on the back of the lens. The macro device alters several lens groups inside the lens and prevents the zoom operating as a constant-focus lens. After use, the macro ring should always be returned to its detent position and locked.

■  Zoom: Altering the variable lens angle on the zoom (zooming) can be achieved either by means of the rocker switch servo (or a pistol grip if fitted beneath the lens), or manually by using the zoom lever on the lens. Switching between the two methods of use is by a switch marked S (servo) and M (manual). In a similar way to control of focus, the zoom servo can also be controlled by a hand control attached to a pan bar. Pan bar servo control of zoom and focus is used when a larger viewfinder (possibly 5-inch display) replaces the monocular viewfinder.

■  Aperture: Opening up or closing down the lens aperture changes the light level reaching the CCDs and is therefore the prime method of controlling exposure. Control of the aperture is via a switch marked M (manual control), A (auto-exposure, which is electronically determined) or R (remote – when the camera’s image quality is adjusted remotely from the camera to match the pictures produced by other cameras at the event).

■  Extender: If the lens is fitted with an extender, the range of focal lengths from widest to narrowest lens angles can be altered dependent on the extender factor.

Typical zoom lens controls

[Figure]

■  Record button/lens electrics: The cable connecting the lens to camera varies between makes and models of equipment. It is important to match the correct lens cable to the camera it is working with so that the correct drive voltages are connected to avoid lens motors being damaged. Conversion cables are available if lens and camera cable pins do not match.

■  Lens hood: A ray shield is essential for limiting the effect of degradation from flares. It is designed for the widest lens angle of the zoom but can be augmented on a narrower lens angle by the judicious use of gaffer tape if flare on this angle is a problem. Matte box and effects filters are discussed on page 62.

■  UV filter – skylight filter: This absorbs short wavelength ultraviolet (UV) rays that the eye cannot see. On a clear day these rays produce a bluish green cast to foliage. A zoom lens has so many lens components that almost all ultraviolet light is absorbed inside the lens. A UV filter is still advisable as a protection filter screwed on to the front of the lens to prevent damage or dirt reaching the front element.

Effects filters

Digital video has an enormous potential for manipulating the appearance of the image in post-production, but there is also the opportunity, when appropriate, to refashion the recorded image at the time of acquisition (see Scene files, page 100). Alongside the need for faithful reproduction, photographers have often attempted to customize the image to suit a particular emotional or aesthetic effect. In their hands the camera was not a ‘scientific’ instrument faithfully observing reality, but the means of creating a subjective impression. One simple method is the use of filters, usually placed in a filter holder/matte box positioned in front of the lens. These are used for various reasons such as to control light, contrast or part of the subject brightness, to soften the image or to colour the image. Most camcorders also have one or two filter wheels fitted between lens and prism block carrying colour correction filters and/or neutral density filters.

Altering the appearance of the image

Filters fall into three main groups – colour correction filters (see Colour temperature, page 64), neutral density filters (used as a control in exposure) and effects and polarizing filters. They all alter the quality of light reaching the CCDs, but whereas the first two groups attempt to make the correction invisibly, effects filters are intended to be visually obvious in their impact on the appearance of the image. They are employed to change the standard electronic depiction of a subject. Filters can be chosen to bring about a number of visual changes, including a reduction in picture sharpness, a reduction in picture contrast, the lightening or ‘lifting’ of blacks, highlight effects such as halos or star bursts, and a modified rendition of skin tones. Many of these effects are not reversible in post-production although improvement in picture matching can be attempted.

Factors that affect the filter influence

Many of the effects filters such as black and white dot, frosts, nets, fog, soft and low contrast achieve their results by introducing a varying degree of flare. The potential influence on the picture is identified by grading the effects filter on a scale of 1 to 5 where 1 has the smallest effect and 5 has the largest. Some filters are also available in smaller, more subtle increments and are graded as 1/8th, 1/4 or 1/2. Choosing which grade and which filter to use depends on the shot, lens angle, aperture and the effects of under- or over-exposure. In general, effects filters work more effectively on longer lenses and at wider apertures (e.g. f 2 and f 2.8). To ensure continuity of image over a long sequence of shots it may be necessary to vary the grade of filter depending on lens angle, camera distance and aperture. It is prudent to carry out a series of tests varying the above settings before production commences. Filters mounted on the front of the lens are affected by stray light and flares which can add to the degradation.

Filters that affect the blacks in an image

A strong black makes a picture appear to have more definition and contrast. Diffusion filters reduce the density of blacks by dispersing light into the blacks of an image. This effectively reduces the overall contrast and creates an apparent reduction in sharpness. White nets provide a substantial reduction in the black density and whites or any over-exposed areas of the image tend to bloom. If stockings are used stretched across the lens hood, the higher their denier number (mesh size) the stronger the diffusion. A fog filter reduces black density, contrast and saturation. The lighter parts of the scene will appear to have a greater fog effect than the shadows creating halos around lights. A double fog filter does not double the fog effect and possibly creates less of a fog than the standard fog filter, but does create a glow around highlights. Because of lightening of the blacks, the picture may appear to be over-exposed and the exposure should be adjusted to maximize the intended effect. Low contrast filters reduce contrast by lightening blacks and thereby reducing the overall contrast. Strong blacks appear to give the image more definition and low contrast filters may appear soft. They are sometimes employed to modify the effects of strong sunlight. Soft contrast filters reduce contrast by pulling down the highlights. Because blacks are less affected, soft contrast filters appear sharper than low contrast filters and do not create halation around lights. As highlights are reduced the picture may appear under-exposed.

Effect on highlights

Some filters cause points of light, highlights or flare to have a diffused glow around the light source. The black dot filter limits this diffusion to areas around the highlights and avoids spreading into the blacks. Super frost, black frost, promist, black promist and double mist diffusion filters work best on strong specular light or white objects against a dark background. The weak grades leave the blacks unaffected, providing an apparently sharp image, whilst the strong grades cause a haze over the whole image which leaks into the black areas of the picture. Black, white and coloured nets have a fine mesh pattern causing a softening of the image and a reduction in the purity and intensity of the image’s colour. This ‘desaturated’ look is thought by some to give video images more of a film appearance.

■  Neutral density filters: These reduce the amount of light reaching the lens and can be used to produce a specific f number and therefore depth of field.

■  Graduated filters: These can help to control bright skies by having a graduated neutral density at the top of the filter falling to clear at the bottom. The graduation can be obtained as a hard or a soft transition. There are also filters with a graduated tint to colour skies or the top part of the frame. They are positioned in the matte box for optimum effect but once adjusted the camera can rarely be tilted or panned on shot without disclosing the filter position.

■  Polarizing filters: These reduce glare reflections, darken blue skies and increase colour saturation. They are useful for eliminating reflections in glass such as shop windows and cars, and when shooting into water. The filter must be rotated until the maximum reduction of unwanted reflection is achieved. This changes the colour balance (e.g. it can affect the ‘green’ of grass) so a white balance should be carried out when the correct filter position has been determined. Moving the camera (panning or tilting) once the polarizing filter is aligned may reduce or eliminate the polarizing effect.

■  Star and sunburst filters: These produce flare lines or ‘stars’ from highlights. Star filters are cross hatched to produce 2, 4, 6, 8 or 10 points whilst sunburst produces any number of points. They are more effective when placed between lens and prism block and can produce an unwanted degradation of definition when in front of the lens.

Colour temperature

Two sounds cannot be combined to produce a third pure sound but as we have discussed in Light into electricity (page 12), by combining two or more colours a third colour can be created in which there is no trace of its constituents (red + green = yellow). The eye acts differently to the ear. The eye/brain relationship is in many ways far more sophisticated than a video camera and can be misleading when attempting to analyse the ‘colour’ of the light illuminating a potential shot.

The camera has no brain

In discussing the conversion of a colour image into an electrical system by the three-filter system we overlooked this crucial distinction between how we perceive colour and how a camera converts colour into an electrical signal. Human perception filters sensory information through the brain. The brain makes many additions and adjustments in deciding what we think we see, particularly in observing colour.

A ‘white’ card will appear ‘white’ under many different lighting conditions. Without a standard reference ‘white’, a card can be lit by pink, light blue or pale green light and an observer will adjust and claim that the card is ‘white’. The card itself can be of a range of pastel hues and still be seen as white. The brain continually makes adjustments when judging colour. A video camera has no ‘brain’ and makes no adjustment when the colour of light illuminating a subject varies. It accurately reproduces the scene in the field of view. A person’s face lit by a sodium street lamp (orange light) will be adjusted by the brain and very little orange will be seen. The camera will reproduce the prevailing light and when displayed on a screen the face will have an orange hue.

Colour temperature

Because of the fidelity with which the camera reproduces colour, it is important to have a means of measuring colour and changes in colour. This is achieved by using the Kelvin scale – a measure of the colour temperature of light (see opposite). Across a sequence of shots under varying lighting conditions, we must provide continuity in our reference white. Just as the brain makes the necessary adjustment to preserve the continuity of white, we must adjust the camera when the colour temperature of the shot illumination changes (see White balance, page 68).

Colour temperature

[Figure]

Blue skylight 9500–20,000K
Overcast sky 6000–7500K
HMI lamps 5600K
Average summer sunlight 5500K
Fluorescent daylight tubes* 5000K
Early morning/late afternoon 4300K
Fluorescent warm white tubes* 3000K
Studio tungsten lights 3200K
40–60 watt household bulb 2760K
Dawn/dusk 7000K
Sunrise/sunset 2000K
Candle flame 1850–2000K
Match flame 1700K
*All discharge sources are quoted as a correlated colour temperature, i.e. it ‘looks like’, for example, 5000K.

 

A piece of iron when heated glows first red and then, as its temperature increases, changes colour through yellow to ‘white hot’. The colour of a light source can therefore be conveniently defined by comparing its colour with an identical colour produced by a black body radiator (e.g. an iron bar) and identifying the temperature needed to produce that colour. This temperature is measured using the Kelvin scale** (K), whose unit is the same size as the Centigrade degree but whose zero is offset by 273 (e.g. 0° Centigrade = 273 Kelvin). This is called the colour temperature of the light source although strictly speaking this only applies to incandescent sources (i.e. sources glowing because they are hot). The most common incandescent source is the tungsten filament lamp. The colour temperature of a domestic 40–60 watt tungsten bulb is 2760K, while that of a tungsten halogen source is 3200K. These are not necessarily the operating temperatures of the filaments but the colour temperature of the light emitted. Although we psychologically associate red with heat and warmth and blue with cold, as a black body radiator becomes hotter its colour temperature increases and the light it emits becomes bluer.

**Kelvin scale: The physicist William Thomson, 1st Baron Kelvin, first proposed an absolute temperature scale defined so that 0K is absolute zero, the coldest theoretical temperature (−273.15°C), at which the energy of motion of molecules is zero. Each kelvin is equivalent in size to a Celsius degree, so that the freezing point of water (0°C) is 273.15K, and its boiling point (100°C) is 373.15K.

Colour temperature correction filters

Colour camera processing is designed to operate in a tungsten-lit scene. Consequently the outputs from the red, green and blue CCDs are easily equalized when white balancing a scene lit with tungsten lighting (3200K). When the camera is exposed to daylight, it requires significant changes to the red channel and blue channel gains to achieve a ‘white balance’. Many cameras are fitted with two filter wheels which are controlled either mechanically, by turning the filter wheel on the left-hand side at the front of the camera, or by selecting the required filter position from a menu displayed in the viewfinder. The filter wheels contain colour correction and neutral density filters and possibly an effects filter. The position of the various filters varies with camera model.

Variation in colour temperature could be compensated for without the use of colour correction filters by adjusting the gain of each channel. With some colour temperatures, however, this would require a large increase in gain in one channel, increasing the noise to an unacceptable level (see page 88 for the relationship between gain and noise).

Cameras could be normalized to the colour temperature of tungsten or daylight. Because of the greater light levels of daylight, most cameras are designed to be operated with no colour correction under tungsten. The 3200K filter position is a clear glass filter whereas a 5600K filter (with no ND) is a minus blue filter to cut out the additional blue found in daylight. All colour correction filters decrease the transmission of light and therefore the minus blue filter cuts down the light (by approximately one stop) where most light is available – in daylight. A white balance is required after changing filter position.

In addition to the colour correction filter for daylight, many cameras also provide neutral density filters in the filter wheel. Neutral density filters are used when there is a need to reduce the depth of field or in circumstances of a brightly lit location.

Filter selection

Set the filter to match the colour correction filter appropriate to the light source and light intensity.

■  3200K filter: When this filter position is selected an optical plain glass filter is placed between the lens and the prism block to maintain back focus. Although the position is marked as 3200K, no colour correction filter is used because the camera is designed to work in a tungsten lit environment with sufficient gain variation of red and blue to cope with ‘tungsten’ colour temperature.

■  5600K filter: This position in the filter wheel has the required colour temperature correction filter needed for daylight exposure. The minus blue filter reduces the transmission of blue light but also reduces the exposure by about one stop. This is not usually a problem as daylight light levels are considerably higher than the output of tungsten light.

The colour of the sky

[Figure]

Sunlight is scattered as it passes through the atmosphere and, combined with cloud cover and the earth’s orbit around the sun, the perceived ‘colour’ of the sky is constantly changing. At midday, visible solar radiation is scattered by air molecules, particularly at the blue end of the spectrum where 30–40 per cent of blue light is dispersed, producing a ‘blue’ sky. The scattering decreases to a negligible amount at the red end. At sunrise/sunset, when light from the sun passes through a greater amount of atmosphere, light scattering occurs across the whole of the spectrum and the sky appears redder. The amount of sunlight scattered by the earth’s atmosphere depends on wavelength, how far the light has to travel through the atmosphere, and atmospheric pollution.
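The wavelength dependence described above is commonly modelled as Rayleigh scattering, which varies inversely with the fourth power of wavelength. The quick calculation below, using nominal wavelengths for blue and red light chosen purely for illustration, shows why the blue end of the spectrum dominates the scattered light.

```python
blue_nm, red_nm = 450.0, 650.0           # nominal wavelengths, for illustration only
scatter_ratio = (red_nm / blue_nm) ** 4  # Rayleigh scattering varies as 1/wavelength^4
print(round(scatter_ratio, 1))           # ~4.4: blue is scattered far more than red
```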

A common combination of colour correction and neutral density filters is:

position (1) 3200K sunrise/sunset/tungsten/studio
position (2) 5600K + 1/4ND (neutral density) exterior – clear sky
position (3) 5600K exterior/cloud/rain
position (4) 5600K + 1/16ND (neutral density) exterior exceptionally bright

NB 1/4ND is a filter with a transmission of 1/4 or 25%, i.e. 0.6 ND not 0.25ND!
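The relationship between the marked ND fraction, the density number and the light lost in stops can be checked with a little arithmetic. The sketch below (plain Python, not tied to any camera) treats the marked fraction as a transmission and derives the base-10 density and the equivalent number of stops.

```python
import math

def nd_density(transmission: float) -> float:
    """ND density is the base-10 logarithm of 1/transmission."""
    return math.log10(1.0 / transmission)

def stops_lost(transmission: float) -> float:
    """Each stop halves the light, so stops lost is the base-2 logarithm."""
    return math.log2(1.0 / transmission)

for t in (1 / 4, 1 / 16):
    print(f"transmission {t:.4f} -> ND {nd_density(t):.2f}, {stops_lost(t):.1f} stops")
# transmission 0.2500 -> ND 0.60, 2.0 stops
# transmission 0.0625 -> ND 1.20, 4.0 stops
```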

White balance

Whenever there is a change in the colour temperature of the light illuminating a potential shot it is necessary to adjust the white balance of the camera. Some cameras will compensate for colour temperature variation but often it is necessary to carry out a white balance set-up. The white balance switch may be marked ‘auto white balance’ but this does not mean the camera will automatically compensate unless the specified white balance procedure is carried out. To successfully white balance a simple routine needs to be followed:

■  Select the correct filter for the colour temperature of light being used (e.g. tungsten 3200K or daylight 5600K).

■  Select either white balance position A or B. These positions memorize the setting achieved by the white balance. On each filter position there can be two memories (A or B). If preset is selected no setting will be memorized. This position always provides the factory default setting of 3200K.

■  Fill the frame with a white matte card that is lit by the same lighting as the intended shot. Make sure the card does not move during the white balance and that there are no reflections or shading on the card. Avoid any colour cast from surrounding light sources and ensure that you white balance with the main source of shot illumination and that the card is correctly exposed.

A progress report may appear in the viewfinder during white balance, including a number indicating the colour temperature that has been assessed. If this is much higher or lower than your anticipated estimate of the colour temperature then check the white card position and the other requirements of white balance. It could also indicate that the camera is either not properly lined up or is malfunctioning. During the white balance procedure, the auto-iris circuit adjusts exposure to make the output of the green signal correct, then the gains of the red and blue channels are adjusted to equal the output of green (see Figure 1 page 13). This establishes the ‘white’ of the card held in front of the lens as the reference ‘white’ and influences all other colour combinations. The fidelity of colour reproduction is therefore dependent on the white balance procedure.
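The gain adjustment described above can be illustrated in software. The sketch below assumes linear RGB pixel values from the white card region have already been captured into an array; a real camera performs this equalization in its internal signal processing, so the function names and data layout here are purely illustrative.

```python
import numpy as np

def white_balance_gains(card_rgb: np.ndarray) -> tuple[float, float]:
    """card_rgb: linear RGB pixels from the white card region, shape (N, 3).
    Returns the red and blue gains that equalise those channels to green."""
    r_mean, g_mean, b_mean = card_rgb.reshape(-1, 3).mean(axis=0)
    return float(g_mean / r_mean), float(g_mean / b_mean)

def apply_gains(image: np.ndarray, r_gain: float, b_gain: float) -> np.ndarray:
    """Apply the channel gains to a whole frame (H, W, 3), leaving green alone."""
    balanced = image.astype(float)
    balanced[..., 0] *= r_gain   # red channel
    balanced[..., 2] *= b_gain   # blue channel
    return balanced
```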

Black balance

Many cameras do not require a manual black balance and this adjustment is carried out automatically when required. Black balance sets the black levels of the R, G and B channels so that black has no colour cast. It is normally only required if the camera has not been in use for some time, if the camera has been moved between radically different air temperatures, if the surrounding air temperature has significantly altered or, on some cameras, when the gain selector values have been changed. If a manual black balance needs to be done, first white balance to equalize the gains, then black balance, then white balance again.

Light output

In tungsten light sources, light is produced from the heating effect of an electric current flowing through the tungsten filament. The visible spectrum from these sources is continuous and produces a smooth transition of light output between adjacent wavelengths. The intensity of light output will vary if the current is altered (dimming) which also affects the colour temperature although this is usually kept to within acceptable limits (see Continuity of face tones, page 76).

A discharge light source produces light as a byproduct of an electrical discharge through a gas. The colour of the light is dependent on the particular mixture of gas present in the glass envelope or by the phosphor coating of the fluorescent tube. Discharge light sources that are designed for film and television lighting such as HMIs are not as stable as tungsten but have greater efficacy, are compact and produce light that approximates to daylight.

Pulsed light sources

Fluorescent tubes used in the home, office and factory, and neon signs, do not produce a constant light output but give short pulses of light at a frequency depending on the mains supply (see Shutter and pulsed lighting, page 90).

[Figure]

Light output of a typical daylight-type fluorescent tube. Normal eyesight does not register the high intensity blue and green spikes, but they give a bluish green cast to a tungsten-balanced camera.

In recent years the development of improved phosphors has made the fluorescent tube (cold light) acceptable for television and film. Phosphors are available to provide tungsten matching and daylight matching colour temperatures. High frequency operation (>40 kHz) results in a more or less steady light output.

Colour rendition index (Ra)

A method of comparing the colour fidelity and consistency of a light source has been devised using a scale of 0 to 100. The colour rendition index (Ra) can be used as an indication of a source’s suitability for use in television production, with an Ra of 70 regarded as the lower limit of acceptability for colour television.

Viewfinder

The monocular viewfinder is the first and often the only method of checking picture quality for the camcorder cameraman. The small black and white image (on DV format cameras often a colour LCD, liquid crystal display) has to be used to check framing, focusing, exposure, contrast and lighting. It is essential, as the viewfinder is the main guide to what is being recorded, to ensure that it is correctly set up. This means aligning the brightness and contrast of the viewfinder display. Neither control directly affects the camera output signal. Indirectly, however, if the brightness control is incorrectly set, manual adjustment of exposure based on the viewfinder picture can lead to under- or over-exposed pictures. The action of the brightness and contrast controls therefore needs to be clearly understood.

■  Brightness: This control sets the correct black level of the viewfinder picture by altering the viewfinder tube bias. Unless it is correctly set up, the viewfinder image cannot be used to judge exposure. The brightness control must be set so that any true black produced by the camera is just not seen in the viewfinder. If, after a lens cap is placed over the lens and the aperture is fully closed, the brightness is turned up, the viewfinder display will appear increasingly grey and then white. This obviously does not represent the black image produced by the camera. If the brightness is now turned down, the image will gradually darken until the line structure of the picture is no longer visible. The correct setting of the brightness control is at the point when the line structure just disappears and there is no visible distinction between the outside edge of the display and the surrounding tube face. If the brightness control is decreased beyond this point, the viewfinder will be unable to display the darker tones just above black and distort the tonal range of the image. There is therefore only one correct setting of the brightness control which, once set, should not be altered.

■  Contrast: The contrast control is in effect a gain control. As the contrast is increased the black level of the display remains unchanged (set by the brightness control) whilst the rest of the tones become brighter. This is where confusion over the function of the two viewfinder controls may arise. Increasing the contrast of the image increases the brightness of the image to a point where the electron beam increases in diameter and the resolution of the display is reduced. Unlike the brightness control, there is no one correct setting for the contrast control, other than that an ‘over-contrasted’ image may lack definition and appear subjectively over-exposed. Contrast is therefore adjusted for an optimum displayed image which will depend on picture content and the amount of ambient light falling on the viewfinder display.

■  Peaking: This control adds edge enhancement to the viewfinder picture as an aid in focusing and has no effect on the camera output signal.

Setting up the viewfinder

1  Select aspect ratio if using a switchable format camera. Check that the viewfinder image is in the selected aspect ratio.

2  Switch CAMERA to BARS or place a lens cap on the lens.

3  Check the picture in the viewfinder and then reduce contrast and brightness to minimum.

4  Increase brightness until just before the raster (line structure) appears in the right-hand (black) segment of the bars.

5  Adjust contrast until all divisions of the bars can be seen.

6  Use the bars to check viewfinder focus and adjust the focus of the viewfinder eyepiece to produce the sharpest picture possible.

7  With the camera switched to produce a picture, recheck contrast with correctly exposed picture. Although the contrast control may occasionally need to be adjusted depending on picture content and ambient light change, avoid altering the brightness control.

8  Set peaking control to provide the minimum edge-enhancement that you require to find focus and adjust the eyepiece focus to achieve maximum sharpness of the viewfinder image. Adjust the position of the viewfinder for optimum operating comfort.

Aspect ratios and safety zones

[Figure]

With the introduction of widescreen digital TV and the use of dual format cameras (see Widescreen, page 106), programme productions may be shot in 16:9 aspect ratio but transmitted and viewed on a 4:3 television receiver. To ease the transition between the two aspect ratios, many broadcasters use a compromise 14:9 aspect ratio for nominally 4:3 sets, but transmit the whole of the 16:9 frame to widescreen sets.

This requires the cameraman to frame up a 16:9 picture with these competing requirements in mind. Any essential information is included in the 14:9 picture area although the whole of the 16:9 frame may be transmitted in the future. A 14:9 graticule superimposed on the 16:9 viewfinder picture reminds the cameraman of this requirement. For the foreseeable future, actuality events such as sport may be covered for dual transmission – 16:9 and 14:9 – and therefore framing has to accommodate the smaller format if some viewers are not to be deprived of vital action.
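As a rough guide to what the 14:9 graticule protects, the arithmetic below computes the width of the 14:9 area inside a 16:9 frame. The 1920 × 1080 raster used in the example is an assumption for illustration only; the proportions hold for any 16:9 frame.

```python
def safe_area_14_9(frame_w: int, frame_h: int) -> tuple[int, int]:
    """Width of the 14:9 protected area inside a 16:9 frame of the same
    height, and the margin lost from each side of the full frame."""
    safe_w = round(frame_h * 14 / 9)
    margin = (frame_w - safe_w) // 2
    return safe_w, margin

print(safe_area_14_9(1920, 1080))   # (1680, 120): 120 pixels cropped from each side
```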

Viewfinder indicators

There are usually a number of indicators available to be displayed in the viewfinder in addition to menus which provide information and adjustment to the camera (see Menus, page 98). These include:

• A red cue light or icon when recording.

• Tape remaining time with a visual warning when tape is close to its end.

• Battery indicator will also warn a few minutes before the battery voltage drops below the minimum level needed to operate the camera/recorder and will remain continually lit when battery voltage is inadequate.

• Gain, shutter, contrast control, selected filter, white balance preset, audio metering, etc. can also be displayed plus error or ‘OK’ messages when white balancing and fault reports such as tape jammed, humidity, etc.

Exposure

When viewing a film or television image, it is often easy to accept that the two-dimensional images are a faithful reproduction of the original scene. There are many productions (e.g. news, current affairs, sports coverage, etc.) where the audience’s belief that they are watching a truthful representation unmediated by technical manipulation or distortion is essential to the credibility of the programme. But many decisions concerning exposure involve some degree of compromise as to what can be depicted even in ‘factual’ programmes. In productions that seek to interpret rather than to record an event, manipulating the exposure to control the look of a shot is an important technique.

As we have discussed in Colour temperature (page 64), human perception is more complex and adaptable than a video camera. The eye/brain can detect subtle tonal differences ranging, for example, from the slight variations in a white sheet hanging on a washing line on a sunny day to the detail in the deepest shadow cast by a building. The highlights in the sheet may be a thousand times brighter than the shadow detail. The TV signal is designed to handle (with minimum correction) no more than approximately 40:1 (see Contrast range, page 74).

But there is another fundamental difference between viewing a TV image and our personal experience in observing a subject. Frequently, a TV image is part of a series of images that are telling a story, creating an atmosphere or emotion. The image is designed to manipulate the viewer’s response. Our normal perceptual experience is conditioned by psychological factors and we often see what we expect to see; our response is personal and individual. A storytelling TV image is designed to evoke a similar reaction in all its viewers. Exposure plays a key part in this process and is a crucial part of camerawork. Decisions on what ranges of tones are to be recorded and decisions on lighting, staging, stop number, depth of field, etc., all intimately affect how the observer relates to the image and to a sequence of images. The ‘look’ of an image is a key production tool.

A shot is one shot amongst many and continuity of the exposure will determine how it relates to the preceding and the succeeding images (see Matching shots, page 156).

Factors which affect decisions on exposure include:

■  the contrast range of the recording medium and viewing conditions;

■  the choice of peak white and how much detail in the shadows is to be preserved;

■  continuity of face tones and the relationship to other picture tones;

■  subject priority – what is the principal subject in the frame (e.g. a figure standing on a skyline or the sky behind them?);

■  what electronic methods of controlling contrast range are used;

■  the lighting technique applied in controlling contrast;

■  staging decisions – where someone is placed affects the contrast range.

Exposure overview

■  An accurate conversion of a range of tonal contrast from light into an electrical signal and back into light requires an overall system gamma of approximately 1.08 (see Gamma and linear matrix, page 102).

■  Often the scene contrast range cannot be accommodated by the five-stop handling ability of the camera and requires either the use of additional lamps or graduated filters, or the compression of highlights.

■  The choice of what tones are to be compressed is decided by the cameraman by altering the iris, by a knowledge of the transfer characteristics of the camera or by the use of highlight control.

■  Automatic exposure makes no judgement of what is the important subject in the frame. It exposes for average light levels plus some weighting to centre frame. It continuously adjusts to any change of light level in the frame.

■  Exposure can be achieved by a combination of f number, shutter speed and gain setting. Increasing gain will increase noise (see page 88). Shutter speed is dependent on subject content (e.g. the need for slow motion replay, shooting computer displays, etc., page 90). F number controls depth of field and is a subjective choice based on shot content (see below).
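The trade-off described in the last point above can be put into figures. The sketch below uses the usual approximations that a stop is a doubling of light and that roughly 6 dB of video gain is equivalent to one stop; the function names are illustrative only and not part of any camera’s terminology.

```python
import math

def stops_from_aperture(n_from: float, n_to: float) -> float:
    """Opening up from f/n_from to f/n_to gains 2 * log2(n_from / n_to) stops."""
    return 2 * math.log2(n_from / n_to)

def stops_from_shutter(t_from: float, t_to: float) -> float:
    """A longer exposure time (in seconds) gains log2(t_to / t_from) stops."""
    return math.log2(t_to / t_from)

def stops_from_gain(db: float) -> float:
    """Roughly 6 dB of video gain is equivalent to one stop (at a noise cost)."""
    return db / 6.0

# Example: stopping down from f/2.8 to f/4 loses about one stop,
# which +6 dB of gain approximately restores.
print(stops_from_aperture(2.8, 4.0) + stops_from_gain(6.0))   # close to 0
```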

Depth of field

Choosing a lens aperture when shooting in daylight usually depends on achieving the required exposure. Depth of field is proportional to f number and if the in-focus zone is a significant consideration in the shot composition (e.g. the need to have an out of focus background on an MCU of a person or, alternatively, the need to have all the subject in focus) then the other factors affecting exposure may be adjusted to achieve the required stop, such as:

■  neutral density filters;

■  altering gain (including the use of negative gain);

■  altering shutter speed;

■  adding or removing light sources.

There is often more opportunity to achieve the required aperture when shooting interiors by the choice of lighting treatment (e.g. adjusting the balance between the interior and exterior light sources), although daylight is many times more powerful than portable lighting kits (see Lighting levels, page 176).

Sometimes there may be the need to match the depth of field on similar sized shots that are to be intercut (e.g. interviews). Lens sharpness may decrease as the lens is opened up but the higher lens specification required for digital cameras usually ensures that even when working wide open, the slight loss of resolution is not noticeable. Auto-focus and anti-shake devices usually cause more definition problems, especially when attempting to extend the zoom range with electronic magnification.
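For readers who want to estimate the in-focus zone rather than judge it in the viewfinder, the standard thin-lens depth of field approximation can be sketched as below. The circle of confusion value is an assumption that depends on sensor format and viewing conditions, so treat the output as a guide only.

```python
def depth_of_field(focal_mm: float, f_number: float, subject_m: float,
                   coc_mm: float = 0.02) -> tuple[float, float]:
    """Approximate near and far limits of acceptable focus, in metres.
    coc_mm is the circle of confusion; 0.02 mm is an assumed value only."""
    s = subject_m * 1000.0                                   # work in millimetres
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm       # hyperfocal distance
    near = h * s / (h + (s - focal_mm))
    far = h * s / (h - (s - focal_mm)) if s < h else float("inf")
    return near / 1000.0, far / 1000.0

# A 50 mm setting focused at 3 m: the in-focus zone at f/2 is a fraction
# of the zone at f/8, which is why aperture choice shapes the composition.
print(depth_of_field(50, 2.0, 3.0))   # roughly 2.9 m to 3.1 m
print(depth_of_field(50, 8.0, 3.0))   # roughly 2.5 m to 3.7 m
```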

Contrast range

Every shot recorded by the camera/recorder has a variation of brightness contained within it. This variation of brightness is the contrast range of the scene. The ratio between the brightest part of the subject and the darkest part is the contrast ratio. The average exterior contrast ratio is approximately 150:1 but it can be as high as 1000:1. Whereas the contrast ratios of everyday locations and interiors can range from 20:1 to 1000:1, a video camera can only record a scene range of approximately 32:1. Peak white (100%) to black level (3.125%) is equivalent to five stops. The contrast range can be extended by compressing the highlights using a non-linear transfer characteristic when translating light into the television signal (see Electronic contrast control, page 80).
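The figures quoted above follow from simple stop arithmetic: each stop doubles the light, so five stops span a 32:1 range and black level sits at 100/32 per cent of peak white.

```python
stops = 5
contrast_ratio = 2 ** stops                  # 32:1
black_level_percent = 100 / contrast_ratio   # black level as a % of peak white
print(contrast_ratio, black_level_percent)   # 32 3.125
```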

The result of recording a contrast range greater than the camera can handle is that highlights of the scene will appear a uniform white – details in them will be burnt out – and the darker tones of the scene will be a uniform black. The limiting factor for the reproduction of the acquired image is ultimately the display monitor on which it is viewed. The design, set-up and viewing conditions of the viewer’s display monitor and the design of the signal path to the viewer all affect the final contrast range displayed. The darkest black that can be achieved will depend on the amount of light falling on the screen. The viewer also has the ability to increase the contrast on their set which will distort any production preference of image contrast.

The majority of image impairment in discriminating between tones occurs in highlights such as windows, skies, etc., but whether such limitations matter in practice will depend on how important tonal subtlety is in the shot. Loss of detail in a white costume may be noticeable but accepted as a subjective expression of a hot sunny day. A sports arena where a stadium shadow places half the pitch in darkness and half in bright sunlight may cause continuous exposure problems as the play moves in and out of the shadow. Either detail will be lost in the shadows or detail in sunlight will be burnt out.

Often, exposure is adjusted to allow the contrast range of the scene to be accurately reproduced on the recording. The aim is to avoid losing any variation between shades and at the same time to maintain the overall scene brightness relationships. Achieving the correct exposure for this type of shot therefore requires reproducing the detail in the highlights as well as in the shadows of the scene. Additionally, if a face is the subject of the picture then the skin tones need to be set between 70 and 75 per cent of peak white (may be a wider variation depending on country and skin tones; see Exposure continuity, page 76).

Alternatively, many productions require images that create a visual impression. The ‘correct’ exposure is less a matter of accurately reproducing a contrast range than the technique of setting a mood. Selecting the exposure for this type of shot is dependent on choosing a limited range of tones that creates the desired atmosphere that is appropriate to the subject matter.

The eye

[Figure]

The eye perceives gradations of brightness by comparison. It is the ratio of one apparent brightness to another (and in what context) that determines how different or distinct the two appear to be. The just noticeable difference between the intensity of two light sources is discernible if one is approximately 8 per cent greater/lesser than the other, regardless of them both being of high or low luminous intensity (see Measurement of light, page 173). The amount of light entering the eye is controlled by an iris and the eye is also equipped with two types of receptor cells: rods, which respond to dim light, and cones, which respond to normal lighting levels. For a given iris opening, the average eye can accommodate a contrast range of 100:1, but visual perception is always a combination of eye and brain. The eye adapts fast to changing light levels and the brain interprets the eye’s response in such a way that it appears as if we can scan a scene with a very wide contrast range (e.g. 500:1), and see it in a single glance.
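Taking the figures above at face value, a back-of-envelope calculation suggests how many distinct brightness steps the eye can separate within its 100:1 range. This is only a rough reading of the quoted numbers, not a formal result from visual science.

```python
import math

jnd_ratio = 1.08        # ~8 per cent just noticeable difference
contrast_range = 100.0  # range handled at a single iris opening

steps = math.log(contrast_range) / math.log(jnd_ratio)
print(round(steps))     # about 60 distinguishable brightness steps
```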

The lens

The aperture of a zoom lens is the opening through which light is admitted. The maximum aperture is limited by the design of the lens (see Ramping, page 49). Adjusting the aperture controls the amount of light that reaches the CCDs. Aperture is identified by f number, the ratio of the focal length of the lens to the diameter of the effective aperture, and is an indication of the amount of light passing through the lens and therefore an exposure control. An aperture set to f 2 on a 50 mm lens would have an effective aperture of 25 mm. Increasing the f number (stopping down to f 4) reduces the amount of light entering the camera. A wider aperture (opening up to f 1.4) lets in more light (see Depth of field, page 48). Different designs of lenses may have variations in the configuration of lens elements and type of glass and may operate with identical f numbers but admit unequal amounts of light. The T number takes into account the transmittance of the lens, so lenses with the same T number will give the same image brightness.
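The f number arithmetic in the paragraph above can be verified directly, and the square-law relationship explains why each full stop in the familiar series roughly halves the light; the snippet below is purely illustrative.

```python
def effective_aperture_mm(focal_mm: float, f_number: float) -> float:
    """f number = focal length / diameter of the effective aperture."""
    return focal_mm / f_number

print(effective_aperture_mm(50, 2.0))   # 25.0 mm, as in the text

# Light admitted varies inversely with the square of the f number:
for n in (1.4, 2.0, 2.8, 4.0):
    print(f"f/{n}: {(1.4 / n) ** 2:.2f} of the light at f/1.4")
```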

[Figure]

Exposure continuity

A ‘talking head’ is probably the most common shot on television. Because the audience is usually more critical in their judgement of correct reproduction of skin tones, video pictures of faces are the most demanding in achieving correct exposure and usually require exposure levels that are high but are free from burn-out in highlight areas. The reflectivity of the human face varies enormously by reason of different skin pigments and make-up. In general, Caucasian face tones will tend to look right when a ‘television white’ of 60 per cent reflectivity is exposed to give peak white. White nylon shirts, white cartridge paper and chrome plate, for example, have reflectivities above the 60 per cent that corresponds to TV peak white. Without highlight compression (see page 80), these materials would lack detail if the face was exposed correctly. Average Caucasian skin tones reflect about 36 per cent of the light. As a generalization, face tones are approximately one stop down on peak white. If a scene peak white is chosen that has a reflectivity of 100 per cent, the face tones at 36 per cent reflectivity would look rather dark. To ‘lift’ the face to a more acceptable level in the tonal range of the shot, a scene peak white of 60 per cent reflectivity is preferable. This puts the face tone at approximately half this value or one stop down on the peak white.
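The one-stop-down rule of thumb can be checked against the reflectivity figures given above. The calculation below ignores gamma correction, which lifts the displayed level of mid tones, but it shows why choosing a 60 per cent scene white keeps an average face roughly one stop below peak rather than nearly one and a half stops down.

```python
import math

face_reflectivity = 36.0   # per cent, average Caucasian skin (from the text)
scene_white = 60.0         # per cent reflectivity chosen to sit at peak white

print(round(math.log2(scene_white / face_reflectivity), 2))   # ~0.74 of a stop down
# Against a 100 per cent reflectance peak white the face sits much lower:
print(round(math.log2(100.0 / face_reflectivity), 2))         # ~1.47 stops down
```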

Continuity of face tones

An important consideration when shooting the same face in different locations or lighting situations is to achieve some measure of continuity in face tones. There is a small amount of latitude in the colour temperature of light on a face. When white balanced to daylight sources (5500K), a 400K variation can be tolerated without being noticeable. There is a smaller latitude of colour temperature (150K) when white balanced to tungsten sources (3200K). As the viewer is unaware of the ‘true’ colour of the setting, a greater variation in colour temperature is acceptable in wide shots of the setting.

Continuity of face tone can be achieved if it is exposed to same value in different shots. The problem comes with variation in background and a variation in what is chosen as peak white. For example, a person wearing dark clothing positioned beside a white marble fireplace in a panelled room in one shot could have an exposure that set the marble surround at or near peak white so that the face was one stop down on this with the surrounding panelling showing no detail. If a reverse shot was to follow of the same person now seen only against dark panelling, the exposure could be set to show detail in the panelling and ‘lift’ the picture but this would push the face to a lighter tone than the preceding shot. To maintain continuity of face tone, the same exposure setting for the face would be needed as in the previous shot, with no tone in the frame achieving peak white. It may be necessary to control or adjust the contrast range of the shot to preserve the priority of the face tone at an acceptable level. In the above example, a more acceptable picture would be to light the background panelling to a level that showed some detail and texture whilst maintaining face tone continuity.

The ‘film’ look

A cinema screen is a highly reflective surface and the audience watch the giant projected images in a darkened auditorium. A television set is a small light box that emits its picture, usually into a well-lit room, often lit by high intensity daylight. The initial brightness ratio between black and white tones of a subject before a video camera often exceeds the dynamic range of any display monitor. This problem is exacerbated (because of the regulations imposed on the design of the transmission path) by a lower contrast ratio handling ability than is theoretically possible and because television is viewed in less than favourable lighting conditions.

These are simply the differences in viewing conditions between film and video. There are also a number of inherent differences in technology and technique. When film is recorded on 35 mm emulsion it can achieve (if desired) a much higher resolution and contrast range (e.g. some film negative can handle 1000:1) than is possible with standard video broadcasting. Video systems such as 24P 1080 lines (see Widescreen, page 106) are attempting to provide a transparent match between film and video but, to date, HDTV systems are making slow progress with consumers.

Two of the key distinctions between film and video are the use of detail enhancement in video to compensate for its lower resolution compared to film, and video’s handling of highlights and overloads. Digital acquisition and processing allows more selective and subtle control of detail enhancement (see page 87) and CCDs with 600,000 pixels have such good resolution they hardly need contour correction. Digital acquisition allows manipulation of the soft shoulder of the video transfer characteristic to mimic the D log E (density versus the logarithm of exposure) curve of a film negative transfer characteristic. Digital video acquisition also allows the manipulation of gamma and linear matrix to customize the image (see Scene files, page 100). A ‘non-video’ look is attempted by techniques such as adjusting aperture correction, contour, auto knee, gamma, detail correction, and limited depth of field.

There are many attempts by video to imitate the film look. But which film? The deep focus and wide angle shots of Citizen Kane (1940)? The use of a long lens, shooting against the light with out of focus background and misty blobs of foreground colour of Un Homme et une Femme (1966)? The amber glow of Days of Heaven (1978), where cinematographer Nestor Almendros used the ‘magic hour’ (actually only 20–25 minutes) after the sun has set each day to produce a sky with light but no sun giving very soft-lit pictures without diffusion?

There are endless variations of style in the history of film making and contemporary fashion often dictates the ‘look’ at any particular time. Possibly there is a misunderstanding about the so-called ‘film look’ and the standard video ‘look’ that ignores the link between production budget and working techniques. Many feature films made for cinema release have a much larger budget than a video programme made for one or two transmissions. Money and customary film-making conventions allow, for example, a production technique that spends time staging action for a prime-lens camera position, developing a shot that is precisely lit and framed, with camera movement that is perfect or there is a retake. The stereotypical multi-camera video production usually has ‘compromise’ written all over it, basically because television requires 24 hours to be filled day after day across a wide range of channels in the most cost-effective way possible: zooming to accommodate unrehearsed action, compromise lighting for three- or four-camera shooting, and a recording or transmission schedule that allows little or no time to seek production perfection. In addition, much television content has neither the high audience appeal nor the high-profile performers presented with the pace, tension and gloss achieved by editing over an extended post-production process. The ‘look’ of film may be more about the techniques of acquisition than the technology of acquisition.

Contrast control

The control of the contrast range can be achieved by several methods:

■  The simplest is by avoiding high contrast situations. This involves selecting a framing that does not exceed the 32:1 contrast range the camera can handle. This obviously places severe restrictions on what can be recorded and is frequently not practical.

■  A popular technique is to stage the participants of a shot against a background avoiding high contrast. In interiors, this may mean avoiding windows and daylight or closing curtains or blinds; in exteriors, it may mean avoiding shooting people in shadow (e.g. under a tree) against brightly lit backgrounds or the skyline.

■  If luminaires or reflectors of sufficient power and numbers are available they can be used to lighten shadows, modify the light on faces (see Lighting a face, page 180), etc. Even a lamp mounted on a camera can be a useful contrast modifier at dusk on an exterior or in some interior situations.

Staging

The rule of thumb that claims you should expose for the highlights and let the shadows look after themselves may give bright colourful landscapes but becomes very limited advice when shooting a face lit by sunlight under a cloudless summer sky. There may be more than three to four stops difference between the lit and the unlit side of the face. Ways of controlling the contrast in a shot need to be found if there is to be no loss of detail in highlights or shadows. A simple but effective method is to frame the shot to avoid areas of high contrast. Stage people against buildings or trees rather than the sky if there are bright sunlight conditions. Avoid direct sunlight on faces unless you can lighten shadows. With interiors, use curtains or blinds to reduce the amount of light entering windows and position people to avoid a high-contrast situation.

The problem of a bright sky can be controlled by an ND graduated filter if the horizon allows and other important elements of the shot are not in the top of frame. Low contrast filters and soft contrast filters may also help (see Effects filters, page 62).

Avoid staging people, if possible, against an even white cloud base. Either the overcast sky is burnt out or the face is in semi-silhouette if exposure for detail in the clouds is attempted.

Methods of altering contrast range by additional lamps or reflector boards are discussed in Lighting a face, page 180.

Portable waveform monitor

images

Diagrammatic representation of waveform supered on picture.

A portable waveform test measurement device will allow a waveform signal to be superimposed on a monitor screen. For example, a particular face tone signal level can be marked by a cursor. When the same signal level is required for another shot with the same face, the exposure, lighting, etc., can be adjusted so that the signal level signifying the face tone matches up to the memory cursor.

Electronic contrast control

As we discussed in Charge-coupled devices (page 18), the CCDs in the camera respond to light and convert the variations of brightness into variations in the electrical signal output. There is a minimum light level required to produce any signal (see Gain, noise and sensitivity, page 88). Below this level the camera processing circuits produce a uniform black. This is called black clip (see figure opposite). As the intensity of light increases, the signal increases proportionally until a point is reached when the signal is limited and no further increase is possible even if the light intensity continues to increase. This point is called the white clip level and identifies the maximum allowable video level. Any range of highlight tones above this level will be reproduced as the peak white tone where the signal is set to be clipped. Variation in the brightness of objects will only be transferred into a video signal if they are exposed to fall between the black clip level and white clip level.

This straight line response to light is modified to allow a greater range of brightness to be accommodated by reducing the slope of the transfer characteristic at the top end of the CCD’s response to highlights (see figure opposite). This is called the knee of the response curve and the point at which the knee begins and the shape of the response above the knee alters the way the video camera handles highlights. By reducing the slope of the transfer characteristic a greater range of highlights can be compressed so that separation remains and they do not go into ‘overload’ above the white clip level and become one featureless tone. If the shape of this portion of the graph is more of a curve, the compression of highlights is non-linear and the transition to overload is more gradual.

Modifying this transfer slope provides the opportunity to alter the gamma of the camera (see Gamma and linear matrix, page 102) and a method of handling high-contrast scenes which exceed the 32:1 contrast range of the standard video camera response.
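The black clip, white clip and knee described above can be illustrated with a short numerical sketch. This is not any manufacturer's processing; the knee point, knee slope and clip levels below are illustrative values only, chosen to show how highlights above the knee are compressed rather than lost in the clipper.

    import numpy as np

    def transfer_characteristic(light, knee_point=0.85, knee_slope=0.2, white_clip=1.0):
        """Illustrative camera transfer characteristic with a knee.
        'light' is relative scene exposure, where 1.0 would normally produce peak white."""
        signal = np.clip(light, 0.0, None)        # black clip: no output below black level
        compressed = knee_point + (signal - knee_point) * knee_slope
        signal = np.where(signal > knee_point, compressed, signal)  # reduced slope above the knee
        return np.minimum(signal, white_clip)     # white clip: maximum allowable video level

    # Highlights up to about 1.6x normal peak exposure retain some separation
    # instead of being reproduced as one featureless clipped tone:
    for exposure in (0.5, 0.9, 1.2, 1.6):
        print(exposure, round(float(transfer_characteristic(exposure)), 3))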

Exposing for highlights

A highlight part of the shot (e.g. white sheets on a washing line in bright sun) which may produce a signal five times peak white level can be compressed into the normal video dynamic range. With the above example this means that the darker areas of the picture can be correctly exposed whilst at the same time maintaining some detail in the sheets.

If someone were standing in a room against a window and it was necessary to expose for both the exterior detail and the face, it would not be possible, without additional lighting or filtering the windows, to reproduce detail in both face and exterior. Using highlight compression, the highlights outside the window would be squashed and, although their relative brightness to each other would not be faithfully reproduced, the compression would allow detail across a greater range to be recorded.

images

The ‘knee’ which is introduced in the camera head amplifiers progressively compresses highlights which otherwise would be lost in the peak white clipper. It extends the camera’s response to a high contrast range but with some loss of linearity. Many cameras also provide a black stretch facility which helps to reveal detail in the black areas, but will also introduce some extra noise.

Variable slope highlight control

images

Variable knee point highlight control

images

Transient highlights

One type of contrast control uses average feedback and avoids unnecessary compression by not responding to transient high intensity light such as car headlamps. If highlight compression is used with a normal contrast range scene (below 40:1) there is the risk that highlights will be distorted and the compression may result in a lower contrast reproduction than the original. Low contrast pictures have little impact and are visually less dynamic.

Adjusting exposure

There are various ways of deciding the correct exposure:

■  using the zebra exposure indicator in the viewfinder (see page 84);

■  manually adjusting the iris setting whilst looking at the viewfinder picture;

■  using the auto iris-exposure circuit built into the camera.

Many cameramen use a combination of all three and some cameramen use a light meter.

Manual adjustment

The simplest method is to look at the viewfinder picture (make certain that it is correctly set up – see page 70) and turn the iris ring on the lens to a larger f number (this reduces the amount of light reaching the CCDs) if the brightest parts of the scene have no detail in them, or to a smaller f number (increasing the amount of light reaching the CCDs) if there is no detail in important parts of the subject. In some situations there may be insufficient light even with the iris wide open to expose the shot. Check that no ND filter is being used and then either switch in additional gain (see page 88), change the shutter if it is set to a faster speed than 1/50th or 1/60th (depending on country) or add additional light (see Lighting topics, pages 172–85).

If you are uncertain about your ability to judge exposure with this method (and it takes time to become experienced in all situations) then confirm your exposure setting by depressing the instant auto-exposure button which gives the camera’s auto-exposure estimation of the correct f number. When the button is released you can either stay at the camera setting or return to your manual estimation.

The camera as light meter

A television camera has been called the most expensive light meter produced. If auto-exposure is selected, the feedback to the iris can be instantaneous and the auto circuit will immediately stop down the lens if any significant increase of scene brightness is detected. Auto-exposure works by averaging the picture brightness (see figures opposite) and therefore needs to be used intelligently. In some cameras, different portions of the frame can be monitored and the response rate to the change of exposure can be selected. In general, expose for the main subject of the shot and check that the auto-iris is not compensating for large areas of peak brightness (e.g. overcast sky) in the scene.

The lens iris is controlled by the highest reading from the red, green or blue channel and therefore the auto circuit reacts whenever any colour combination approaches peak signal. This auto decision making about exposure may have disadvantages as well as advantages. The rate of response of the auto-iris system needs to be fast enough to keep up with a camera panning from a bright to a dark scene, but not so responsive that it instantly over- or under-exposes the picture for a momentary highlight brightness (e.g. a sudden background reflection).

images

(a)

images

(b)

images

(c)

Auto-exposure averages the scene brightness and in the lighting conditions in Figure (a) has selected f 5.6 to expose 18 per cent reflectance mid-tone grey. If the light increases (Figure (b)) the auto-exposure circuit now selects f 11 to expose for a mid-tone grey. If the light decreases (Figure (c)), f 2.8 is chosen to expose for a mid-tone grey.
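As a rough check of the f numbers quoted above: the light passed by the lens is proportional to 1/f², so holding the same mid-tone exposure when the scene illumination changes means scaling the f number by the square root of the change in light. A minimal sketch of that arithmetic, starting from the f 5.6 setting in the figures:

    import math

    def new_f_number(current_f, light_ratio):
        """f number needed to hold the same exposure when scene light changes by light_ratio.
        Light through the lens is proportional to 1/f^2, so f scales with sqrt(light_ratio)."""
        return current_f * math.sqrt(light_ratio)

    print(round(new_f_number(5.6, 4.0), 1))    # light quadruples: approximately f11 (close two stops)
    print(round(new_f_number(5.6, 0.25), 1))   # light falls to a quarter: f2.8 (open two stops)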

Auto-exposure problems

A common problem with auto-exposure occurs when an interview is being recorded and auto-exposure has been selected and left on. The interviewee may be correctly exposed at the start of the interview but if any highly reflective object enters the background of the frame then the auto-exposure circuit may be triggered and will stop down the iris to expose for detail in it. The interviewee’s face will be underexposed. Additionally, changing light conditions such as intermittent sunlight may require significant and rapid changes in face exposure, which can be intrusive and visible if controlled by a rapid auto-exposure response. If using auto-exposure, check that the rate of pan is in step with the ability of the auto-iris to change exposure. Some cameras have switchable auto-iris response rates to suit the changing requirements of camera movement. If working with a camera for the first time check that the auto-iris is correctly aligned.

Zebra exposure indicators

The zebra pattern is a visual indicator in the viewfinder when areas of the picture have reached a certain signal level. If the zebra exposure indicator is switched on, those elements of the image that are above this pre-set level are replaced by diagonal stripes in the picture. The cameraman can respond by closing the iris to adjust the exposure until part or all of the zebra diagonals have been removed.

Onset of zebra level

The level at which the zebra indicator is triggered is obviously a critical factor in this method of assessing exposure and can be adjusted to suit particular operational preferences. Some camera designs have their zebra stripe indicator driven by the luminance signal. The zebra stripe warning is then only valid for nearly white subjects and exposure of strongly coloured areas may go into over-exposure without warning. Other systems use any of the red, green or blue outputs which exceed the selected signal level to trigger the zebra indicator.

Selecting zebra level

The exposure point at which the zebra indicator is triggered can be a personal operational preference but criteria to consider when setting that point are:

■  If there is a ‘pool’ of cameras in use then that point should be standard on all cameras.

■  The onset point should be close to full exposure but should warn before full burn-out occurs.

■  The zebra stripe indicator should not obscure important picture information such as the face but it should indicate when flesh tones are in danger of going into over-exposure.

Some UK zebra onset levels are 90–95 per cent for RGB-driven systems and 68–70 per cent for luminance systems, but the final limiting factor on exposure level is loss of detail, either to noise in the blacks or burnout in the peak white clipper. Both losses are irrecoverable.
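A minimal sketch of the zebra logic described above, assuming signal levels expressed as a percentage of peak white. The single-threshold form models a peak-white warning; the band form models the face-tone indicator. Real cameras may derive the trigger from luminance or from any of the R, G or B channels.

    import numpy as np

    def zebra_mask(level_percent, onset, upper=None):
        """Return True where the diagonal zebra pattern would be drawn.
        'level_percent' is the video level as a percentage of peak white.
        With only 'onset', everything at or above that level is striped;
        with 'upper' as well, only the band between the two levels is striped."""
        level = np.asarray(level_percent)
        if upper is None:
            return level >= onset
        return (level >= onset) & (level <= upper)

    levels = np.array([30.0, 72.0, 85.0, 98.0])       # sample picture levels (% of peak white)
    print(zebra_mask(levels, onset=95))               # peak-white warning: [False False False  True]
    print(zebra_mask(levels, onset=70, upper=80))     # face-tone band:     [False  True False False]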

Adjusting f number

Light through the lens can be controlled by the aperture or by ND filters. The f number is defined as the ratio between focal length and the effective diameter of the lens aperture (see Depth of field, page 48).

The f number is not an accurate indication of the speed of the lens because the f number formula is based on the assumption that the lens transmits 100 per cent of the incident light. Because of the variation in the number of elements in the lens and the variation in lens design, different lenses may have different transmittance. Two lenses with the same f number may transmit different amounts of light to the prism block.
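The definition above can be put into a short sketch. The T-stop correction shown is the general photographic relationship between f number and transmittance, not a figure for any particular lens; the focal length, aperture diameter and transmittance values are illustrative only.

    import math

    def f_number(focal_length_mm, aperture_diameter_mm):
        """f number = focal length / effective diameter of the lens aperture."""
        return focal_length_mm / aperture_diameter_mm

    def t_stop(f_num, transmittance):
        """Effective stop of a lens that transmits only a fraction of the incident light."""
        return f_num / math.sqrt(transmittance)

    print(round(f_number(50, 12.5), 1))    # 50 mm focal length, 12.5 mm aperture: f4.0
    print(round(t_stop(4.0, 0.8), 2))      # 80% transmittance: T4.47, about a third of a stop slower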

images

A nine-step wedge chart + a ‘super black’ in the centre. The peak white wedge has 60% reflectance. The wedge tones are graded as a % of peak white of the video signal and form equal changes in signal output when displayed on a waveform monitor. The background tone is 60% of peak white representing a tone with 18% reflectivity. The wedges are displayed with a gamma correction of 0.4.

images

With a zebra setting triggered by any part of the signal going above 95%, only nearly peak white picture areas such as the 100% wedge on the grey scale will display the diagonal pattern.

images

With a zebra setting triggered by any part of the signal falling between 70% (approximate reflectivity of the average Caucasian face tone) and 80%, the appropriate step wedge will display the diagonal pattern.

Using a properly lined-up and exposed camera on a grey scale will give a rough indication of how the zebra setting has been set up. The signal displayed on a waveform will allow more accurate measurement. On most cameras, the zebra setting can be adjusted to suit individual requirements.

If a grey scale with a 32:1 contrast is correctly exposed, a reduction in exposure by five stops will reduce the signal to almost zero, confirming the five-stop dynamic range of a camera with no electronic contrast control.

Production requirements

It is easy to chase after the ‘correct’ exposure for a shot and lose sight of the purpose of the production. News and factual programmes have the fairly simple visual requirement of ‘see it and hear it’. This usually requires a technique that produces a neutral record of the event with the least discernible influence of the cameraman’s attitude to the material. The choice of exposure is based on the aim of providing the clearest possible image of the action. The ‘correct’ exposure in these circumstances is the one that produces clarity of image. Other programme genres have more diverse production aims. It is not simply a question of finding the ‘correct’ exposure but of finding an exposure that reflects a mood, emotion or feeling.

The appearance of the image has an important influence on how the audience reacts to the visual message. The choice of lens, camera position and framing play a crucial part in guiding that response. The choice of exposure and the resultant contrast of tones is another powerful way to guide the viewer’s eye to the important parts of the frame.

The cameraman can manipulate contrast by the choice of exposure and gamma setting (see page 100), to produce a range of different images such as:

■  stark contrasty pictures suggesting a brutal realism;

■  pictures with no highlights or blacks – simply a range of greys;

■  low key pictures with a predominance of dark tones and heavy contrast;

■  high key pictures with a predominance of light tones, little depth and contrast.

If the production is aiming at a subjective impression of its subject, then the choice of the style of the image will require a high degree of continuity. The viewer will become aware if one shot does not match and may invest the content of that shot with special significance as apparently attention has been drawn to it.

A television engineer may have a preference for pictures that have a peak white with no highlight crushing, a good black with shadow detail and an even spread of tones throughout the image. Along with other correct engineering specifications, this is often called a technically acceptable picture. Using a low contrast filter, flares and filling the shot with smoke may lift the blacks to dark grey, eliminate any peak white, cause highlights to bloom and definition to be reduced. It may also be the exact requirement for the shot at that particular production moment. Remember that the term broadcast quality comes from the engineering requirements for the video signal, not the way the picture looks. There are no hard-and-fast rules to be followed in the creative use of exposure. Although resultant images may lack sparkle because of low contrast and fail to use the full contrast range that television is capable of providing, if the pictures create the required mood, then the aims of the production have been satisfied.

Image enhancement and contour correction

As we have discussed, the resolution of the video image is partly limited by the CCD design (although this is continuously improving), and partly by the constraints of the video signal system. In most camera/recorder formats, image enhancement is used to improve picture quality. One technique is to raise the contrast at the dark-to-light and light-to-dark transitions, to make the edges of objects appear sharper, both horizontally and vertically. This is done electronically by overshooting the signal at the transition between different tones to improve the rendering of detail. This edge enhancement is often applied to high-frequency transitions and can be controlled by adjusting various processing circuits such as aperture correction, contour, detail correction, etc.

It is important to remember that the degree of artificial enhancement is controllable with, for example, separate controls for vertical and horizontal enhancement. When overdone, artificial enhancement of picture resolution is often the main distinguishing characteristic between a video and a film image. Because an audience may connect this type of image quality with multi-camera coverage of sport or actuality events, an electronic image is often paradoxically considered more realistic and credible. The amount of enhancement is a subjective value and will vary with production genre and production taste. It is difficult to remove image enhancement in post-production although it may be added.

Detail enhancement and skin tone detail

The degree of electronic manipulation of edge detail is variable, but one limiting factor in the amount of enhancement that can be used is the adverse effect on faces. When pictures are ‘over-contoured’ skin detail can appear intrusive and unnatural; every imperfection is enhanced and becomes noticeable.

To overcome this problem, some cameras provide for selective reduction in skin detail to soften the appearance of faces. While variable electronic ‘sharpening’ or image enhancement may be applied to the overall shot, skin tone detail control allows for the separate handling of the specific degree of enhancement on any selected facial tones within that scene.

This is achieved by a circuit that separates facial skin colour from all other colours in a given shot, and its electronic detail level can be reduced without affecting other areas of the picture. The specific skin colour to be treated in this way is selectable and can be memorized to follow movement or recalled for subsequent shots. Some cameras have as many as three independent skin tone detail circuits.

Gain, noise and sensitivity

Camera sensitivity is usually quoted by camera manufacturers with reference to four interlinking elements:

1.  A subject with peak white reflectivity.

2.  Scene illumination.

3.  f number.

4.  Signal-to-noise ratio for a stated signal.

It is usually expressed as the resulting f number when the camera is exposed to a peak white subject of 89.9 per cent reflectance lit by 2000 lux, quoted together with the signal/noise ratio. For most current digital cameras this is at least f 8 or better with a signal/noise ratio of 60 dB. This standard rating is provided to allow different camera sensitivities to be compared and is not an indication of how much light or at what stop the camera should be used (see page 176).

Noise

The sensitivity of the camera could be increased simply by greater amplification of weak signals, but this degrades the picture by adding ‘noise’ generated by the camera circuits. The signal/noise ratio is usually measured without contour or gamma correction. As manufacturers vary in the way they state camera sensitivity, comparison between different models often requires a conversion of the specification figures. In general, with the same f number, the higher the signal/noise ratio and the lower the scene illuminance (lux), the more sensitive the camera.

Gain

The gain of the head amplifiers can be increased if insufficient light is available to adequately expose the picture. The amount of additional gain is calibrated in dBs. For example, switching in +6 dB of gain is the equivalent of opening up the lens by one f stop, which would double the amount of light available to the sensors. The precise amount of switched gain available differs from camera to camera. A camera may have a +9 dB and +18 dB switch with an additional +24 dB available from a pre-set inside the camera. Other camera designs allow a user pre-set to programme the value of each step of switchable gain. Some cameras allow a specific f number (aperture priority) to be selected and then automatically increase gain if the light level decreases. This may increase noise to an unacceptable level without the cameraman being aware of how much gain is switched in. Cameras may have a negative gain setting (i.e. a reduction in gain). This reduces noise and is a way of controlling depth of field without the use of filters. For example, an exposure is set for an aperture setting of f 2.8 with 0 dB gain. If 6 dB of negative gain is switched in, the aperture will need to be opened to f 2 to maintain correct exposure and therefore depth of field will be reduced.

Gain and stop comparison

+3 dB is equivalent to opening up 0.5 stop.

+6 dB is equivalent to opening up 1 stop.

+9 dB is equivalent to opening up 1.5 stops.

+12 dB is equivalent to opening up 2 stops.

+18 dB is equivalent to opening up 3 stops.

+24 dB is equivalent to opening up 4 stops.

The extra amplification brings a corresponding decrease in the signal-to-noise ratio and results in an increase in noise in the picture. For an important news story shot with insufficient light, this may be an acceptable trade-off.
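A minimal sketch of the relationship tabulated above: 6 dB of gain is equivalent to one stop, so the equivalent stop change is dB divided by 6 and the equivalent light factor is 2 raised to that number of stops.

    def gain_to_stops(gain_db):
        """Equivalent aperture change in stops (6 dB is roughly one stop)."""
        return gain_db / 6.0

    def gain_to_light_factor(gain_db):
        """Equivalent change in light reaching the sensors (each stop doubles the light)."""
        return 2 ** (gain_db / 6.0)

    for db in (3, 6, 9, 12, 18, 24, -6):
        print(f"{db:+3d} dB = {gain_to_stops(db):+.1f} stops (x{gain_to_light_factor(db):.2f} light)")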

Calculating the ASA equivalent for a video camera

A video broadcast camera/recorder with a good auto-exposure system is in effect a very reliable light meter. Most cameramen use a combination of manual exposure, instant auto-exposure and/or the zebra exposure indicator (see pages 84–5), but some cameramen with a film background feel more comfortable using a light meter to check exposure level. In order to do this, the sensitivity of the camera requires an equivalent ASA rating, a scale on which doubling the ASA number corresponds to a decrease of one stop. There are several methods of determining the rating, including the following formula:

■  The sensitivity of the video camera is quoted using a stated light level, signal-to-noise level, a surface with a known reflectance value and with the shutter set at 1/50 (PAL working).

■  Japanese camera manufacturers use a standard reflectance of 89.9% as peak white, while UK television practice is to use a 60% reflectance value as peak white; therefore an illuminance level of 3000 lux must be used when transposing a rating quoted at 2000 lux with 89.9% reflectance to 60% peak white working.

The formula below is for 60% reflectance, 1/50th shutter speed.

images

*(10.76 lux = 1 foot candle, e.g. 3000 lux/10.76 = 278.81 ft candles)
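The book's own formula appears in the figure above. As an indication of the arithmetic only, the sketch below uses the standard incident-light exposure equation with a meter calibration constant of about 250 lux-seconds; the constant and the resulting figure are assumptions for illustration, not the text's values.

    def asa_equivalent(f_num, illuminance_lux, shutter_seconds=1/50, meter_constant=250):
        """Indicative ASA rating from the incident-light exposure equation
        E = C * N^2 / (S * t), rearranged to S = C * N^2 / (E * t),
        where C is the incident meter calibration constant (about 250 lux.s)."""
        return meter_constant * f_num ** 2 / (illuminance_lux * shutter_seconds)

    # A camera rated at f8 for 60% reflectance working (3000 lux), 1/50th shutter:
    print(round(asa_equivalent(8, 3000)))   # about 270, i.e. in the region of ISO 250-320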

Year-on-year video camera sensitivity has increased. In the last few years, a negative gain setting has begun to appear on cameras. Why not have the negative figure as 0 dB gain? One reason may be that manufacturers like to advertise their cameras as more sensitive than their competitors and a higher notional ‘0 dB’ allows a smaller stop. Another suggestion is that on a multi-camera shoot, lights can be set for a 0 dB exposure and thereafter, if a lower depth of field is required, negative gain can be switched in and the iris opened without resorting to the use of an ND filter.

Image intensifiers

When shooting in very low light levels (e.g. moonlight) image intensifiers can be fitted between camera and lens to boost sensitivity. The resultant pictures lack contrast and colour but produce recognizable images for news or factual programmes.

Electronic shutters

One complete frame of the standard PAL television signal is made up of two interlaced fields with a repetition rate of 50 fields per second (25 complete frames per second). The CCD scans the image 50 times a second which is the ‘normal’ shutter speed of a PAL video camera. CCDs can be adjusted to reduce the time taken to collect light from a field (see figure opposite), and reducing the length of the read-out pulse is equivalent to increasing the shutter speed. This electronic shutter reduces the time the CCDs are exposed to the image by switched steps, improving reproduction of motion but reducing sensitivity.

Movement blur

The standard shutter speed (PAL) is set to 1/50th second. A fast-moving subject in front of the camera at this shutter speed will result in a blurred image due to the movement of the subject during the 1/50th of a second exposure. Reducing the time interval of exposure by increasing the electronic shutter speed improves the image definition of moving subjects and is therefore particularly useful when slow motion replay of sporting events is required. But reducing the time interval also reduces the amount of light captured by the CCD and therefore increasing shutter speed requires the aperture to be opened to compensate for the reduction in light.

Shutter speeds

The shutter speed can be altered in discrete steps such as 1/60, 1/125, 1/500, 1/1000 or 1/2000 of a second or, on some cameras, continuously varied in 0.5 Hz steps. Often, when shooting computer displays, black or white horizontal bands appear across the display. This is because the scanning frequencies of most computer displays differ from the 50 Hz frequency of the PAL TV system. Adjusting the shutter speed, particularly with the continuously variable setting, allows the camera exposure interval to match the computer refresh frequency and reduce or even eliminate the horizontal banding.

Pulsed light sources and shutter speed

Fluorescent tubes, HMI discharge lamps and neon signs do not produce a constant light output but give short pulses of light at a frequency dependent on the mains supply (see figure, page 69). Using a 625-line PAL camera under 60 Hz mains fluorescent lighting (e.g. when working away from the camera’s country-of-origin mains standard) will produce severe flicker. Some cameras are fitted with a 1/60th shutter so that the exposure time is one full period of the lighting.

If a high shutter speed is used with HMI/MSR light sources, the duration of the pulsed light may not coincide with the ‘shutter’-open period and a colour drift will be observed to cycle, usually between blue and yellow. It can be eliminated by switching the shutter off. FT sensors have a mechanical shutter which cannot be switched off and therefore the problem will remain.

Shutter pulse

images

Time-lapse controls

Time lapse is a technique where, at specified intervals, the camera is programmed to make a brief exposure. Depending on the type of movement in the shot, the time interval and the length of time the camera is recording, movement which we may not be aware of in normal perceptual time is captured. The classic examples are a flower coming into bloom with petals unfolding, clouds racing across the sky, or car headlights along city streets at night shot from above, comet-tailing in complex stop/start patterns. Time lapse can also be used as an animation technique where objects are repositioned between each brief recording. When the recording is played at normal speed, the objects appear to be in motion. The movement will be smooth if a sufficient number of exposures is used and the amount of movement between each exposure has been carefully planned.

The crucial decisions when planning a time-lapse sequence are to estimate how long the sequence will run at normal speed, how long the real-time event takes to complete the cycle that will be speeded up, and the duration of each discrete ‘shot’. For example, do you shoot every minute, every hour, or once a day? These decisions will be influenced by the flexibility of the time-lapse facility on the camera in use. Some models will allow you to compile time-lapse sequences with shots lasting just an eighth of a second; others will only allow you to fit in three shots on each second of tape.

For example, a speeded-up sequence is required of an open-air market being set up, then filled with shoppers and finally the market traders packing away to end on a deserted street. If you plan to have a normal running time of 5 seconds for setting up, 5 seconds of shopping during the day and 5 seconds of clearing away, you need to time how long the market traders take to open and close their stalls. If this takes 30 minutes in the morning and the same in the evening, and your camera will record 0.25 second frames, the total number of shots will be 15 seconds divided by 0.25 = 60 separate shots. The time lapse will be shot in three separate sequences. Sequence 1 (opening) will require 30 minutes divided by 20 shots, which equals a shot every minute and a half. Sequence 2 can be shot any time during the day when the market is crowded and will use the same ratio of 1 shot every 1.5 minutes for 30 minutes. The end sequence will be a repeat of sequence 1 taken at the end of the day. The light level may well change within and between sequences; this can be compensated for by using auto-iris or, if the change in light is wanted, by setting a compromise exposure that reveals the changing light levels.
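A minimal sketch of the arithmetic in the market example above. The 0.25 second shot duration is the figure assumed in the text and will vary with the time-lapse facility of the camera in use.

    def time_lapse_plan(event_minutes, screen_seconds, shot_seconds=0.25):
        """How many shots are needed and how often to trigger them so that a real event
        lasting 'event_minutes' fills 'screen_seconds' of normal-speed playback."""
        shots_needed = screen_seconds / shot_seconds      # e.g. 5 s / 0.25 s = 20 shots
        interval_minutes = event_minutes / shots_needed   # e.g. 30 min / 20 = 1.5 min
        return shots_needed, interval_minutes

    shots, interval = time_lapse_plan(event_minutes=30, screen_seconds=5)
    print(shots, "shots, one every", interval, "minutes")   # 20.0 shots, one every 1.5 minutes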

Timecode

Timecode enables every recorded frame of video to be numbered. A number representing hours, minutes, seconds and frames is recorded against each frame. There are 25 frames per second in the PAL TV signal, so the frame number runs from 00 to 24 and resets at the end of every second. In a continuous recording, for example, one frame may be numbered 01:12:45:22, with the following frame numbered 01:12:45:23. This allows precise identification of each frame when editing. The camera operator arranges, at the start of a shoot, the method of frame numbering by adjusting the timecode controls, which are usually situated on the side of the camera on the video recorder part of the unit (see figure opposite).
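As a worked example of the numbering, a running PAL frame count can be converted into an hours:minutes:seconds:frames display. This is simply an illustration of the arithmetic, not a function of the camera.

    def frames_to_timecode(frame_count, fps=25):
        """Convert a running PAL frame count into hh:mm:ss:ff (frames run 00-24 at 25 fps)."""
        frames = frame_count % fps
        seconds = (frame_count // fps) % 60
        minutes = (frame_count // (fps * 60)) % 60
        hours = frame_count // (fps * 3600)
        return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

    # one hour, 12 minutes, 45 seconds and 22 frames into a continuous recording:
    print(frames_to_timecode(90000 + 18000 + 1125 + 22))   # 01:12:45:22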

The choice is between numbering each frame with a consecutive number each time the camera records, which is called ‘record run’ timecode, or setting the camera’s internal clock to coincide with the actual time of day so that, whenever a recording takes place, the time at that instant is recorded against the frame. This is usually called ‘time of day’ or ‘free run’ recording. The decision on which type of timecode to record will depend on editing and production requirements (see Timecode and production, page 94).

Historically there have been two methods of recording this identification number:

■  Longitudinal timecode: Longitudinal timecode (LTC) is recorded with a fixed head on a track reserved for timecode. It can be decoded at normal playback speed and at fast forward or rewind but it cannot be read unless the tape is moving as there is no replayed signal to be decoded.

■  Vertical interval timecode: Vertical interval timecode (VITC) numbers are time-compressed to fit the duration of one TV line and recorded as a pseudo video signal on one of the unused lines in the vertical interval. It is recorded as a variation in signal amplitude once per frame as binary digits: 0 equals black and 1 equals peak white. Although they are factory set, some cameras can, if needed, be adjusted to insert VITC on two non-consecutive lines. Unlike longitudinal timecode, VITC is recorded as a pseudo TV signal and can be read in still mode, which is often required when editing. Longitudinal timecode is useful when previewing the tape at speed. For editing purposes, the two timecode recording methods have complemented each other.

Digital timecode

Digital signal processing has allowed timecode to be stored as digital data in the sub-code track of the DVCPro and Digital-S tape formats. Because it is written as data it can be read in still mode as well as during fast forward/rewind. During editing, timecode can therefore be read while shuttling through the tape to find a shot and also when the tape is parked in still mode.

Timecode track (DVCPro)

images

The sub-code area of the DVCPro track is used to record timecode and user-bit data. It can be read in still mode and during high-speed fast-forward and rewind of the tape.

CTL: control track

This is a linear track recorded on the edge of the tape at frame frequency as a reference pulse for the running speed of the replay VTR. It provides data for a frame counter and can be displayed on the camera’s LCD (liquid crystal display). It is important for editing purposes that the recorded cassette has a continuous control track and it is customary to reset to zero at the start of each tape.

When CTL is selected to be displayed, the numbers signifying hours, minutes, seconds and frames are a translation of the reference pulse into a convenient method of displaying tape-elapsed time. Although equivalent, this time is not a read-out of the recorded timecode and if CTL is reset to zero in mid-cassette and the tape rewound, the displayed numbers would count backwards with a minus sign. One of the main purposes of striping a tape for editing purposes is to record a continuous control track (see Editing technology, page 146). To ensure a continuous control track on acquisition, see page 97 for procedure when changing a battery or using a partially recorded cassette. Also be aware that if CTL is selected in mid-cassette and the Reset button is depressed, the control track will reset to zero and will no longer indicate tape-elapsed time.

Timecode and production

Timecode is an essential tool for editing a programme (see Camerawork and editing, page 144). If a shot log has been compiled on acquisition or in post-production review, the timecode identifies which shots are preselected and structures off-line editing. There are two types of timecode available to accommodate the great diversity in programme content and production methods. The cameraman should establish at the start of the shoot which method is required.

■  Record run: Record run only records a frame identification when the camera is recording. The timecode is set to zero at the start of the day’s operation and a continuous record is produced on each tape covering all takes. It is customary practice to record the tape number in place of the hour section on the timecode. For example, the first cassette of the day would start 01.00.00.00 and the second cassette would start 02.00.00.00. Record run is the preferred method of recording timecode on most productions.

■  Free run: In free run, the timecode is set to the actual time of day and, when synchronized, is set to run continuously. Whether the camera is recording or not, the internal clock will continue to operate. When the camera is recording, the actual time of day will be recorded on each frame. This mode of operation is useful in editing when covering day-long events such as conferences or sport. Any significant action can be logged by time as it occurs and can subsequently be quickly found by reference to the time-of-day code on the recording. In free run (time of day), a change in shot will produce a gap in timecode proportional to the amount of time that elapsed between actual recordings. Missing timecode numbers can cause problems with an edit controller when it rolls back from the intended edit point and is unable to find the timecode number it expects there (i.e. the timecode of the frame to cut on, minus the pre-roll time).

Shot logging

An accurate log of each shot (also known as a dope sheet) with details of content and start and finish timecode is invaluable at the editing stage and pre-editing stage. It requires whoever is keeping the record (usually a PA, production assistant) to have visual access to the LCD display on the rear of the camera. As the timecode readout is often situated behind the cameraman’s head, it is often difficult for the PA to read although the Hold button will freeze the LCD readout without stopping the timecode. There are a number of repeat timecode readers which are either attached to the camera in a more accessible position, or fed by a cable from the timecode output socket away from the camera or fed by a radio transmitter which eliminates trailing cables. A less precise technique is sometimes practised when using time of day timecode. This requires the camera and the PA’s stopwatch to be synchronized to precisely the same time of day at the beginning of the shoot. The PA can then refer to her stopwatch to record timecode shot details. It is not frame accurate and requires occasional synchronizing checks between camera and watch, with the watch being adjusted if there has been drift.

Setting timecode

images

To set record run timecode

1  Set DISPLAY switch to TC.

2  Set REAL TIME switch to OFF.

3  Set F-RUN/R-RUN to SET position. The display will stop at its existing value, and the first numeral (representing hours) will start to flash.

4  Press RESET to zero counter if required.

5  Switch F-RUN/SET/R-RUN to R-RUN position.

6  The timecode will increase each time the camera records.
If you wish to use the hour digit to identify each tape (e.g. 1 hour equals first tape, 2 hours equals second tape, etc.), set the hour timecode with the SHIFT and ADVANCE button.

SHIFT: Press to make the hour digit blink.

ADVANCE: Press to increase the value of the blinking digit by one unit to equal the cassette tape in use.

If the F-RUN/R-RUN switch should be accidentally knocked to the SET position, the timecode will not increase during a recording and only the static number will be recorded. Also, if display is switched to CTL and SET is selected, timecode will be displayed and ADVANCE and SHIFT will alter its value but leave the CTL value unaffected. CTL can only be zeroed, otherwise it will continue to increase irrespective of tape changes.

Real time

1  Set DISPLAY switch to TC.

2  Set REAL TIME switch to OFF.

3  Set F-RUN/R-RUN to SET.

4  Press RESET to zero counter.

5  Set the time of day with the SHIFT and ADVANCE buttons until the timecode reads a minute or so ahead of actual time. The numeral that is selected to be altered by the SHIFT button will blink in the SET position:

SHIFT: Press to make the desired digit blink.

ADVANCE: Press to increase the value of the blinking digit by one unit.

6  When real time equals ‘timecode time’, switch to F-RUN and check that timecode counter is increasing in sync with ‘real’ time. If you switch back to SET the display stops, and does not continue until you return to F-RUN (i.e. the clock has been stopped and is no longer in step with the time of day it was set to).

User-bit: If any USER-BIT information is required, then always set up user-bit information first. Wait approximately 20 seconds after camera is turned on.

Timecode lock

So far we have discussed setting the timecode in one camera, but there are many occasions when two or more camcorders are on the same shoot. If each camera simply recorded its own timecode there would be problems in editing when identifying and synchronizing shots to intercut. The basic way of ensuring the timecode is synchronized in all cameras in use is by a cable connected from the TC OUT socket on the ‘master’ camera to the TC IN socket on the ‘slave’ camera. Another cable is then connected from the TC OUT of the first ‘slave’ camera to the TC IN of the second ‘slave’ camera, and so on, although in a multi-camera shoot it is preferable to genlock all the cameras to a central master sync pulse generator. This is essential if, as well as each camera recording its own output, a mixed output selected from all cameras is also recorded.

The TC OUT socket provides a feed of the timecode generated by the camera regardless of what is displayed on the LCD window. A number of cameras can be linked in this way but with the limitation of always maintaining the ‘umbilical’ cord of the interconnecting cables. They must all share the same method of timecode (i.e. free run or record run) with one camera generating the ‘master’ timecode and the other cameras locked to this. The procedure is:

■  Cable between cameras as above.

■  Switch on the cameras and select F-Run (free run) on the ‘slave’ cameras.

■  Select SET on the ‘master’ camera and enter the required timecode information (e.g. zero the display if that is the production requirement). Then switch to record-run and begin recording.

■  In turn start recording on the first ‘slave’ camera, and then the second and so on. Although recording can start in any order on the ‘slave’ cameras, the above sequence routine can identify any faulty (or misconnected!) cables.

■  All the cameras should now display the same timecode. Check by audibly counting down the seconds on the ‘master’ whilst the other cameramen check their individual timecode read-out.

■  During any stop/start recordings, the ‘master’ camera must always run to record before the ‘slave’ cameras. If there are a number of recording sessions, the synchronization of the timecode should periodically be confirmed.

Often it is not practical, and severely restricting, to remain cable-connected, and the cameras will need to select free run after the above synchronization. Some cameras will drift over time, but at least this gives some indication of the time of recording without providing frame-accurate timecode. Alternatively, a method known as ‘jam sync’ provides a ‘rough’ synchronization over time without cables (see opposite).

Jam sync

The set-up procedure is:

■  From the ‘master’ TC OUT socket connect a cable with a BNC connector to the ‘slave’ TC IN.

■  Power up the cameras and select free run on the ‘slave’ camera.

■  On the ‘master’ camera select SET and enter ‘time of day’ some seconds ahead of the actual time of day.

■  When actual time coincides with LCD display switch to free run.

■  If both cameras display the same timecode, disconnect the cable.

■  Because of camera drift, the above synchronization procedure will need to be repeated whenever practical to ensure continuity of timecode accuracy.

Pseudo jam sync

Pseudo jam sync is the least accurate timecode lock, but often pressed into service as a last resort if a BNC connector cable is damaged and will not work. The same time of day in advance of actual time of day is entered into all cameras (with no cable connections), and with one person counting down from their watch, when actual time of the day is reached, all cameras are simultaneously (hopefully) switched to F-RUN. This is obviously not frame accurate and the timecode error between cameras is liable to increase during the shoot.

Reviewing footage

Broadcast camcorders are equipped with a memory, powered by a small internal battery similar to a computer’s memory back-up battery. It will retain some operational values for several days or more. There should be no loss of timecode when changing the external battery, but precautions need to be taken if the cassette is rewound to review recorded footage or the cassette is removed from the camera and then reinserted.

After reviewing earlier shots, the tape must be accurately reset to the point immediately after the last required recorded material. On many cameras, timecode will be set by referring to the last recorded timecode. Reset to the first section of blank tape after the last recorded material, and then use the return or edit search button on the lens or zoom control so that the camera can roll back and then roll forward to park precisely at the end of the last recorded shot. This is an edit in-camera facility. CTL will not be continuous unless you record from this point. There will be a gap in record-run timecode if you over-record material as the timecode continues from the last recorded timecode (even if this is now erased), unless you reset timecode. CTL cannot be reset to follow on continuously but can only be zeroed. Allow 10 seconds after the start of the first recording to ensure CTL stability in post-production.

Recording on a partly recorded cassette

Insert the cassette and replay to the point at which the new recording is to start. Note the timecode at this point before you stop the replay, otherwise, with the tape stopped, the timecode will read the last timecode figure from the previous tape/shot. Select SET and enter a timecode that is a few seconds in advance of the noted timecode so it will be obvious in editing that timecode has been restarted. CTL cannot be adjusted other than zeroed. To ensure edit stability, postproduction requires a 10-second run-up on the next recorded shot before essential action. (See also page 99.)

Battery changes when timecode locked

A battery change on the ‘master’ camera requires all cameras to stop recording if synchronous timecode is an essential production requirement. A battery change on a ‘slave’ camera may affect any camera that is being supplied by its timecode feed. If synchronous timecode is critical, either all cameras use mains adaptors if possible, or arrange that a battery change occurs on a recording break at the same time on all cameras. Check timecode after re-powering up.

Mini DV cameras

DV timecode circuitry is less sophisticated than many broadcast formats and if a cassette is parked on blank tape in mid-reel, the camera assumes that it is a new reel and resets the timecode to 00:00:00. There may not be an edit search/return facility and so after reviewing footage, park the tape on picture of the last recorded material and leave an overlap for the timecode to pick up and ensure continuous timecode. In all formats after rewinding and reviewing shots, inserting a partially used cassette, changing batteries or switching power off, use the edit search/return facility to check at what point the tape is parked. Reset timecode and zero CTL as required.

Menus

Digital signal processing allows data to be easily manipulated and settings memorized. In nearly all digital camera/recorder formats, all the electronic variables on the camera can be stored as digital values in a memory and can be controlled via menu screens displayed in the viewfinder. These values can be recalled as required or saved to a removable storage file. This provides for greater operational flexibility in customizing images compared to analogue working. Menus therefore provide a greater range of control, with the means to memorize a specific range of settings. There are, however, some disadvantages compared to mechanical control in day-to-day camerawork. Selecting a filter wheel position is a simple mechanical operation on many cameras. If the selection is only achievable through a menu (because the filter wheel position needs to be memorized for a set-up card), time is taken finding the required page and then changing the filter wheel setting.

Adjustment

Access to the current settings of the electronic values is by way of menus which are displayed, when required, on the viewfinder screen. These menus are accessed by using the menu switch on the camera. Movement around the menus is by button or toggle switch that identifies which camera variable is currently selected. When the menu system is first accessed, the operation menu pages are usually displayed. A special combination of the menu controls allows access to the user menu which, in turn, provides access to the other menus depending on whether or not they have been unlocked for adjustment. Normally only those variables associated with routine recording (e.g. gain, shutter, etc.) are instantly available. Seldom used items can be deleted from the user menu to leave only those menu pages essential to the required set-up procedure. Menu pages are also available on the video outputs. The values that can be adjusted are grouped under appropriate headings listed in a master menu.

Default setting

With the opportunity to make adjustments that crucially affect the appearance of the image (e.g. gamma, matrix, etc.), it is obviously necessary that the only controls that are adjusted are ones the cameraman is familiar with. As the cliché goes, if it ain’t broke, don’t fix it. If you have the time and are not under recording pressure, each control can be tweaked in turn and its effect on the picture monitored. This may be a valuable learning experience which will help you customize an image should a special requirement occur. There is obviously the need to get the camera back to square one after experimenting. Fortunately there is a safety net of a factory setting or default set of values so that if inadvertently (or not) a parameter is misaligned and the image becomes unusable, the default setting can be selected and the camera is returned to a standard mode of operation.

A typical set of sub-menus would provide adjustment to:

■  Operational values: The items in this set of menus are used to change the camera settings to suit differing shooting conditions under normal camera operations. They would normally include menu pages which can alter viewfinder display, viewfinder marker aids such as safety zone and centre mark, etc., gain, shutter selection, iris, format switching, monitor out, auto-iris, auto-knee, auto set-up, diagnosis.

■  Scene file: These can be programmed to memorize a set of operational values customized for a specific camera set-up and saved to a removable file.

■  Video signal processing: This menu contains items for defining adjustments to the image (e.g. gamma, master black level, contour correction, etc.) and requires the aid of a waveform monitor or other output device to monitor the change in settings.

■  Engineering: The engineering menu provides access to all of the camera setup parameters, with only selected parameters available in the master menu to avoid accidental changes to the settings.

■  Maintenance: This menu is mainly for initial set-up and periodic maintenance, and normally not available via the master menu.

■  Reference file (or system configuration): This file contains factory settings or initial customization of reference settings to meet the requirements of different users. It is the status quo setting for a standard operational condition. This menu is not usually accessible via the master menu and should never be adjusted on location except by qualified service personnel. Never try to adjust camera controls if you are unsure of their effect and if you have no way of returning the camera set-up to a standard operational condition.

Scene files

A scene file is a method of recording the operational settings on a digital camera. In use it is like a floppy disk on a computer and can be removed from the camera with the stored values and then, when required, loaded back into the camera to provide the memorized values. The operational variables on a camera such as filter position, white balance, gain, speed of response of auto-exposure, shutter speed, electronic contrast control, the slope of the transfer characteristic (gamma), and its shape at the lower end (black stretch) or the upper end (highlight handling and compression), and the matrix, all affect the appearance of the image. The same shot can change radically when different settings of several or many of the above variables are reconfigured. If for production reasons these variables have been adjusted differently from their standard settings (e.g. a white balance arranged to deliberately warm up the colour response), it may be necessary, for picture continuity, to replicate the customized appearance over a number of shots recorded at different locations, or on different days. The scene file allows an accurate record to be kept of a specific set of operational instructions.

An additional useful feature of a removable record of a camera set-up occurs when a number of cameras (of the same model) are individually recording the same event and their shots will be edited together (see also Timecode, page 92). Normal multi-camera coverage provides for each camera’s output to be monitored, matched and adjusted before recording or transmission. Individual camcorders, if they are not aligned, could produce very noticeably mismatched pictures when intercut. To avoid this, all cameras can be configured to the same set-up values by transferring a file card and adjusting each camera with the same stored values. A file card can be compiled, for example, that allows an instant set-up when moving between a location and a tungsten-lit studio.

Programming the camera

The flexibility of memorized values has led to the creation of a range of software cards for specific makes of cameras which provide a set ‘look’ instantly. Among the choices available, for example, are sepia, night scenes or the soft image of film. Other cards produce a warm ambience or a colder-feeling atmosphere. Scene files that duplicate the appearance of front-of-lens filters are also available and these electronic ‘gels’ provide a quick way of adding an effect. There are also pre-programmed scene files that help to ‘normalize’ difficult lighting conditions such as shooting under fluorescent lighting or scenes with extreme contrast. Low-contrast images can also be produced with the option to selectively control contours in areas of skin, offering more flattering rendition of close-ups. Fundamentally altering the look of an image at the time of acquisition is often irreversible and unless there is the opportunity to monitor the pictures on a high-grade monitor in good viewing conditions, it may be prudent to leave the more radical visual effects to post-production.

Image stability

Many DV format cameras have an electronic image stabilization facility. This is intended to smooth out unintended camera shake and jitter when the camera is operated hand-held. Usually these devices reduce resolution. Another method of achieving image stability, particularly when camera shake is the result of consistent vibration, is by way of optical image stabilization.

The optical image stabilizer on the Canon XL1 is effected by a vari-angle prism formed from two glass plates separated by a liquid with a high refractive index. A gyro sensor in the camera detects vibration and feeds data to the prism, which reacts by changing shape. This bends the rays of light to keep the image stable when it reaches the CCDs. The CCD image is examined to check for any low-frequency variations that have not been compensated for by the gyro. This data is fed back to the prism to further reduce vibration. Other devices can be ‘tuned’ to a consistent vibration, such as that produced when mounting a camera in a helicopter. Some forms of image stabilization add a slight lag to intended camera movement, giving an unwanted floating effect.

Other equipment uses a variety of techniques to track a moving target, such as an off-shore power boat, from a moving camera and keep the main subject in the centre of frame. The camera can be locked on to a fast-moving subject like a bobsleigh and hold it automatically framed over a designated distance.

A schematic showing the principles of image stabilization

images

Gamma and linear matrix

After the image manipulation discussed on the previous pages, the picture the viewer will finally see depends on the characteristics of their TV set. The cathode ray display tube, however, has certain limitations. The television image is created by a stream of electrons bombarding a phosphor coating on the inside face of the display tube. The rate of change of this beam, and therefore the change in picture brightness, does not rise linearly in step with the changes in signal level that correspond to the brightness variations in the original image.

As shown in graph (a) opposite, the beam current, when plotted against the input voltage, rises in an exponential curve. This means that dark parts of the signal will appear on the tube face much darker than they actually are, and bright parts of the signal will appear much brighter than they should be. The overall aim of the television system is to reproduce accurately the original image and therefore some type of correction needs to be introduced to compensate for the non-linear effect of the cathode ray tube beam. The relationship between the input brightness ratios and the output brightness ratios is termed the gamma of the overall system. To achieve a gamma of 1 (i.e. a linear relationship between the original and the displayed image – a straight line in graph (b)), a correcting signal in the camera must be applied to compensate for the distortion created at the display tube. Uncorrected, the gamma exponent of the TV system caused by the display tube characteristics is about 2.4. Thus the camera’s gamma to compensate for the non-linearity of the TV system is about 0.44/0.45. This brings an overall gamma of approximately 1.1 (2.4 × 0.45) slightly above a linear relationship to compensate for the effect of the ambient light falling on the viewer’s display tube. There is the facility to alter the amount of gamma correction in the camera for production purposes. The application of gamma correction to the signal in the camera also helps to reduce noise in the blacks. Some cameras have a multi matrix facility which allows a user to select a small part of the colour spectrum and adjust its hue and saturation without affecting the rest of the picture.
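A minimal sketch of the figures quoted above, with scene and signal levels normalized to the range 0 to 1. The 0.45 camera exponent and 2.4 display exponent are the approximate values given in the text; the mid-tone value is illustrative.

    def camera_gamma(scene_level, exponent=0.45):
        """Gamma correction applied in the camera."""
        return scene_level ** exponent

    def crt_response(signal_level, exponent=2.4):
        """Non-linear response of the display tube."""
        return signal_level ** exponent

    scene = 0.18                                    # an illustrative mid-tone
    corrected = camera_gamma(scene)                 # about 0.46 after camera gamma correction
    displayed = crt_response(corrected)             # about 0.16 on the display
    print(round(corrected, 2), round(displayed, 2), round(0.45 * 2.4, 2))   # overall gamma about 1.08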

Linear matrix

As detailed on page 12, all hues in the visible spectrum can be matched by a mixture of the three colours red, green and blue. In the ideal spectral characteristics for these colours, the blue response contains a small proportion of red and a small negative proportion of green, and the green response contains negative proportions of both blue and red. It is not optically possible to produce negative light in the camera, but these negative values cannot be ignored if faithful colour reproduction is to be achieved. The linear matrix circuit in the camera compensates for them by electronically generating signals corresponding to the negative spectral response and adding them to the R, G and B video signals. This circuit is placed before the gamma correction so that the compensation does not vary with the amount of gamma correction applied, i.e. it operates at the point where the signals are still ‘linear’ – a gamma of 1.
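In signal terms the linear matrix is a 3 × 3 matrix applied to the linear RGB values, so that each output channel is a weighted mix of all three inputs; this is how the negative contributions are added electronically. A minimal sketch follows, with coefficients invented purely for illustration (each row sums to 1.0 so that white is left unchanged); real cameras use values chosen to match the target colorimetry.

```python
# Minimal sketch of a linear matrix: each output channel is a weighted sum of
# all three camera channels. Coefficients are invented for illustration only.

MATRIX = [
    [ 1.20, -0.15, -0.05],   # R' = 1.20R - 0.15G - 0.05B
    [-0.10,  1.15, -0.05],   # G'
    [-0.02, -0.18,  1.20],   # B'
]

def apply_linear_matrix(r, g, b):
    """Apply a 3 x 3 linear matrix to linear (pre-gamma) RGB values."""
    rgb = (r, g, b)
    return tuple(sum(c * v for c, v in zip(row, rgb)) for row in MATRIX)

print(apply_linear_matrix(0.5, 0.4, 0.3))   # a slightly more saturated mix
print(apply_linear_matrix(1.0, 1.0, 1.0))   # white in, white out
```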

Gamma correction

images

(a) Gamma due to tube characteristic

images

(b) Gamma correction

Matrix

images

Aspect ratios

At the same moment that we perceive the identity of an object within a frame, we are also aware of the spatial relationship between the object and the frame. The frame’s ‘field of forces’ exerts pressure on the objects contained within it, and any adjustment to the composition of a group of visual elements will be made with reference to these pressures. Different placement of the subject within the frame’s ‘field of forces’ can therefore induce a perceptual feeling of equilibrium, of motion or of ambiguity (see figure opposite).

The closed frame compositional technique is structured to keep the attention only on the information that is contained in the shot. The open frame convention allows action to move in and out of the frame and does not disguise the fact that the shot is only a partial viewpoint of a much larger environment.

Frames within frames

The ratio of the longest side of a rectangle to the shortest side is called the aspect ratio of the image. The aspect ratio of the frame and the relationship of the subject to the edge of frame has a considerable impact on the composition of a shot. Historically, film progressed from the Academy aspect ratio of 1.33:1 (a 4 × 3 rectangle) to a mixture of Cinemascope and widescreen ratios. TV inherited the 4:3 screen size and then, with the advent of digital production and reception, took the opportunity to convert to a TV widescreen ratio of 1.78:1 (a 16 × 9 rectangle).
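The ratios quoted above are simply width divided by height, as the short fragment below confirms; the interim 14:9 format mentioned under ‘Widescreen’ below is included for comparison.

```python
# Aspect ratios expressed as width divided by height.
for name, (w, h) in {"Academy / TV 4:3": (4, 3),
                     "widescreen TV 16:9": (16, 9),
                     "interim 14:9": (14, 9)}.items():
    print(f"{name}: {w / h:.2f}:1")
```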

Film and television programmes usually stay with one aspect ratio for the whole production, but the repetition of the same projected shape is often broken up by creating compositions that involve frames within frames. The simplest device is to frame a shot through a doorway or arch, which emphasizes the enclosed view; alternatively, foreground masking can create an irregular ‘new’ frame which gives variety to the constant repetition of the screen shape. The familiar over-the-shoulder two shot is in effect a frame-within-a-frame image: the back of the foreground head is redundant information, there to concentrate attention on the speaker, and the curve of the head into the shoulder gives a more visually attractive shape to the side of the frame.

There are compositional advantages and disadvantages in using either aspect ratio. Widescreen is good at showing relationships between people and location, and sports coverage benefits from the extra width in following live events. Composing closer shots of faces is usually easier in the 4:3 aspect ratio but, as happened in film during the 1950s transition to widescreen, new framing conventions are being developed and old 4:3 compositional conventions that do not work are being abandoned. The shared priority when working in any aspect ratio is knowing under what conditions the audience will view the image.

images

A field of forces can be plotted, showing the positions of rest or balance (the centre, and the midpoint on the diagonal between corner and centre) and the positions of ambiguity (?) where the observer cannot predict the potential motion of the object, so that an element of perceptual unease is created. Whether the object is passively attracted by centre or edge, or actively moves of its own volition, depends on content.

The awareness of motion of a static visual element with relation to the frame is an intrinsic part of perception. It is not an intellectual judgement tacked on to the content of an image based on previous experience, but an integral part of perception.

Widescreen

The world-wide change-over period from mass viewing on a 4:3 analogue set to mass viewing on a 16:9 digital monitor, and therefore mass programme production for 16:9 television, will take many years. The transition period will require a compromise composition (see opposite) and many broadcasters are adopting an interim format of 14:9 to smooth the transition from 4:3 to full 16:9. But the compositional problems do not end there. The back-library of 4:3 programmes and films is enormous and valuable and will continue to be transmitted across a wide range of channels in the future. The complete image can be viewed on a 16:9 screen if black bars are displayed either side of the frame (see (a) opposite). Alternatively, it can be viewed filling the full width of the 16:9 display at the cost of cutting part of the top and bottom of the frame (see (c) opposite) or, at the viewer’s discretion, with a non-linear expansion of picture width that progressively distorts the edges of the frame to fill the screen (see (b) opposite).
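The geometry behind options (a) and (c) is straightforward. The sketch below assumes a 1920 × 1080 (16:9) display purely to give concrete numbers; only the ratios matter.

```python
# Fitting a 4:3 picture onto an assumed 1920 x 1080 (16:9) display.
DISPLAY_W, DISPLAY_H = 1920, 1080

# (a) Pillarbox: show the whole 4:3 picture with black bars either side.
picture_w = DISPLAY_H * 4 // 3                  # 1440 pixels wide
bar_each_side = (DISPLAY_W - picture_w) // 2    # 240 pixels of black per side
print("pillarbox:", picture_w, "px picture,", bar_each_side, "px bars each side")

# (c) Zoom: fill the full width and crop the top and bottom.
lines_needed = DISPLAY_W * 3 // 4               # a full-width 4:3 picture needs 1440 lines
cropped = lines_needed - DISPLAY_H              # 360 lines lost, 180 top and 180 bottom
print("zoom: crop", cropped, "of", lines_needed, "lines, i.e.",
      round(100 * cropped / lines_needed), "% of the picture height")
```

On these assumed figures a quarter of the original picture height disappears in the zoomed option, which is why anything significant near the top or bottom of an archive 4:3 frame is at risk.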

Viewfinder set-up

As many broadcast organizations have adopted the 14:9 aspect ratio as an interim standard, cameramen shooting in 16:9 follow a ‘shoot and protect’ framing policy. The viewfinder is set to display the full 16:9 picture with a graticule superimposed showing the borders of a 14:9 frame and a 4:3 frame. Significant subject matter is kept within the 14:9 border or, if there is a likelihood of the production being transmitted in 4:3, within the smaller 4:3 frame. The area between 16:9 and 14:9 must still be usable for future full digital transmissions and therefore must be kept clear of unwanted subject matter. Feature film productions that were shot full frame in 4:3 but were intended to be masked to widescreen when projected in the cinema can sometimes be seen in a TV transmission with booms, etc., in the top of the frame that would not have been visible in the cinema. ‘Shoot and protect’ attempts to avoid the hazards of multi-aspect viewing by centring most of the essential information. This of course negates the claimed advantages of the widescreen shape, because for the transitional period the full widescreen potential cannot be used. For editing purposes, it is useful to identify within the colour bars the aspect ratio in use. In the early days of widescreen video shooting, some cameramen would frame up a circular object such as a wheel or lens cap to establish in post-production whether the correct aspect ratio had been selected.
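The graticule positions follow from the same width-to-height arithmetic. A minimal sketch, again assuming a 1920 × 1080 raster purely for illustration:

```python
# 'Shoot and protect' areas centred in an assumed 1920 x 1080 (16:9) frame.
FRAME_W, FRAME_H = 1920, 1080

def protected_width(aspect_w, aspect_h, frame_h=FRAME_H):
    """Width in pixels of a centred protected area with the given aspect ratio."""
    return frame_h * aspect_w // aspect_h

for label, (w, h) in {"14:9": (14, 9), "4:3": (4, 3)}.items():
    safe_w = protected_width(w, h)
    margin = (FRAME_W - safe_w) // 2
    print(f"{label} protected area: {safe_w} px wide, {margin} px spare each side")
```

On these figures the strips either side of the 14:9 area are 120 pixels wide: they must be kept clean, but they cannot carry essential action.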

Camera viewfinders of the same size as those used for the 4:3 aspect ratio are often switched to a 16:9 display. This in effect gives a smaller usable picture area if the 14:9 ‘shoot and protect’ centre framing is used, and makes focusing and following distant action more difficult. Also, video cameramen, with their monochrome viewfinders, are probably the only monochrome viewers still watching colour TV pictures. Colour is not only essential for picking out individuals in sports events such as football, where opposing team shirts may look identical in monochrome; in all forms of programme production colour plays a dominant role in composition.

Protect and save

images

Composition problems will continue while 16:9 and 4:3 simultaneous productions are being shot during the analogue/digital changeover. They neither take full advantage of the width of 16:9 nor do they fit comfortably with the old 4:3 shape. After many years of dual format compromise composition, the resultant productions will continue to be transmitted even though 16:9 widescreen will be the universal format. The only safe solution is the ‘protect and save’ advice of putting essential information in the centre of frame but that is a sad limitation on the compositional potential of the widescreen shape.

Viewing 4:3 pictures on a 16:9 display

(a)

images

4:3 aspect ratio picture displayed on 16:9 set with black borders.

(b)

images

Complete 4:3 aspect ratio picture displayed on 16:9 set with progressive rate of expansion towards the vertical edges of the screen. With this system people change shape as they walk across the frame.

(c)

images

4:3 aspect ratio picture displayed on 16:9 set. The 4:3 aspect ratio picture has been ‘zoomed’ in to fill the frame cropping the top and bottom of the frame.

Widescreen composition

The growth of the cinema widescreen format in the 1950s provoked discussion on what changes were required in the standard 4:3 visual framing conventions that had developed in cinema since its beginnings. The initial concern was that the decrease in frame height meant that shots had to be looser and therefore had less dramatic tension. Another problem was that if artistes were staged at either side of the screen, the intervening setting became more prominent. Compositional solutions were found but the same learning curve is being experienced in television as the move is made to widescreen images. If anything, television is more of a ‘talking heads’ medium than cinema but the advent of a large, wider aspect screen has tended to emphasize the improvement in depicting place and setting.

One of the main compositional conventions with 4:3 television framing is the search for ways of tightening up the overall composition. If people are split at either end of the frame, either they are restaged or the camera is repositioned to ‘lose’ the space between them. Cinema widescreen compositions relied less on the previous fashion for tight, diagonal, dynamic groupings in favour of seeing the participants in a setting. Initially, directors lined the actors up across the frame but this was quickly abandoned in favour of masking off portions of the frame with unimportant bland areas in order to emphasize the main subject. Others simply grouped the participants in the centre of the frame and allowed the edges to look after themselves – in effect ‘protect and save’. There were other directors who balanced an off-centre artiste with a small area of colour or highlight on the opposite side of the frame. This type of widescreen composition is destroyed if the whole frame is not seen.

Many directors exploited the compositional potential of the new shape. They made big, bold widescreen designs knowing that they would be seen in the cinema as they were framed. Their adventurous widescreen compositions were later massacred on TV by pan-and-scan or by simply being shown in 4:3 with the sides chopped off. The problem with the video transition to widescreen is the inhibition against using the full potential of the 16:9 shape, because the composition has to be all things to all viewers: it must fit the 14:9 shape but also satisfy the 4:3 viewer. It is difficult to know when the full potential of the widescreen shape can be utilized because, even if the majority of countries switch off analogue transmissions at some time in the first decade of the century, there will probably be billions of TV sets world-wide that are still analogue.

A conventional TV single can cause staging problems in widescreen, as parts of other people tend to intrude at the edge of frame. Headroom tends to be smaller than in 4:3 framing, and there are some problems in editing certain types of GVs (general views): wide shots need to be sufficiently different in their distribution of similar objects to avoid jump cuts in editing, and a good cut needs a change in shot size if it is to be invisible.

Viewing distance

There is a further consideration in the format/composition debate, which concerns the size of the screen. Someone sitting in a front-row cinema seat may have as much as 58° of their field of view taken up by the screen image; this can fall to as little as 9.5° when viewed from the back row. The average television viewer typically sees a picture no wider than 9.2°. Dr Takashi Fujio at the NHK research laboratories carried out research on viewers’ preference for screen size and aspect ratio, and his findings largely formed the justification for the NHK HDTV parameters. His conclusion was that maximum involvement by the viewer was achieved with a 5:3 aspect ratio viewed at a distance of 3 to 4 picture heights. Normal viewing distance (in Japan) was 2 to 2.5 metres, which suggested an ideal screen size of between 1 m × 60 cm and 1.5 m × 90 cm. With bigger room dimensions in the USA and Europe, even larger screen sizes may be desirable. Sitting closer to a smaller screen did not involve the viewer in the action in the same way.

Estimating the compositional effect of space and balance in a widescreen frame is complicated by the argument that framing for cinema widescreen and for television widescreen is different because of the screen size. The accepted thinking is that what works on a large cinema screen may be unacceptable on TV. Cinema screens are very large compared with television sets, but for someone sitting at the back of a cinema the size of the screen in their field of view (e.g. 9.2°) may be the same as for someone watching a 36-inch TV screen at normal domestic viewing distance. It is a mistake to confuse screen size with perceptual size, which is determined by viewing distance.
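The angle a screen occupies in the field of view is 2 × atan(width ÷ (2 × distance)). The screen widths and viewing distances in the sketch below are assumptions chosen to reproduce the order of the figures quoted above; they are not measurements taken from the text.

```python
import math

def viewing_angle_deg(screen_width_m, distance_m):
    """Horizontal angle (degrees) subtended by a screen at a given distance."""
    return math.degrees(2 * math.atan(screen_width_m / (2 * distance_m)))

# Assumed figures: a 10 m wide cinema screen seen from 9 m and from 60 m,
# and a roughly 24-inch 4:3 domestic set (0.48 m wide) seen from 3 m.
print(f"cinema, front row: {viewing_angle_deg(10.0, 9.0):.1f} degrees")
print(f"cinema, back row:  {viewing_angle_deg(10.0, 60.0):.1f} degrees")
print(f"living room:       {viewing_angle_deg(0.48, 3.0):.1f} degrees")
```

On these assumed figures the back row of a large cinema and a modest domestic set subtend almost the same angle, which is the point made above about perceptual size.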

Transfer between film and television

Many television productions are shot on film, and there is a continuing search for a world-wide video standard that would allow transparent transfer of film to video and video to film. There have also been many proposals for a high-definition television format. Since the early 1970s, when NHK (the Japan Broadcasting Corporation) first began research into a high-definition television system, there has been international pressure for agreement on a standard HDTV system. One suggested solution is an HDTV acquisition format of 24-frame, 1080-line, progressively scanned video that would allow high-definition video productions suitable for transfer to film for cinema presentation. It would be the video equivalent of 35 mm film and allow a seamless translation to all other standard definition (SD) video formats.
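As an illustration of what translation from a 24-frame master involves, the sketch below shows two standard conversion routes: the 2:3 pulldown cadence used to reach 60-field interlaced television, and the 4% speed-up commonly used for 25-frame systems. It illustrates the general techniques only, not the specific proposal referred to above.

```python
# Two standard routes from a 24-frame-per-second master.

def pulldown_2_3(frames):
    """Map 24p frames to 60i fields using the 2:3 cadence (A2 B3 C2 D3 ...)."""
    fields = []
    for index, frame in enumerate(frames):
        fields.extend([frame] * (2 if index % 2 == 0 else 3))
    return fields

frames = ["A", "B", "C", "D"]        # four film frames...
print(pulldown_2_3(frames))          # ...become ten interlaced fields (24 fps -> 60 fields/s)

# For 25-frame (50-field) countries the usual route is a 4% speed-up:
print(f"speed-up factor for 25 fps: {25 / 24:.3f}")
```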
