CHAPTER   4

The Video Camera

 

With the development and proliferation of low-cost portable camcorders, video cameras seem to have become omnipresent in American culture. No longer are professional video producers the only ones with access to video production tools; cameras appear to be everywhere.

The main function of the video camera is to translate the optical image captured by the camera’s lens into an electrical signal that can be recorded and transmitted. Although all video cameras perform this basic function, there are significant differences in the ways in which cameras are configured and the markets to which they are targeted.

Camera Equipment Quality and Target Markets

Video equipment makers produce a wide range of equipment that is designed to meet the needs of consumer users, video enthusiasts, and production professionals. (See Figure 4.1.) Video equipment varies greatly in terms of features, quality, and cost.

Consumer and Prosumer Equipment

At the low end of the scale is consumer equipment, which is marketed to home video users for their personal use. Consumer camcorders can be bought for less than five hundred dollars, and the quality of the images that they produce and record is remarkable given their relatively low cost. This type of equipment tends not to be very durable, and many of the basic camera control functions have been automated to eliminate operator errors.

image

Consumer Camcorder

image

Professional Camcorder

FIGURE 4.1 Consumer and Professional Camcorders

Prosumer equipment is a step up from consumer equipment in cost, quality, and feature sets, but this equipment generally is not used in a professional environment. A serviceable prosumer camcorder might cost in the neighborhood of one thousand to three thousand dollars and will provide the operator with somewhat more durable construction and more controls that offer manual as well as automatic settings.

Professional/Broadcast Equipment

At the high end of the scale is equipment that is designed for the professional/broadcast market. This equipment is the most rugged and contains a wider set of controls for the professional operator. Prices for professional cameras and camcorders vary widely, depending on how they are to be used. High-quality camcorders used in electronic news gathering can easily cost between ten thousand and twenty thousand dollars; high-end cameras and camcorders designed for high-definition television production may cost up to one hundred thousand dollars.

It is generally important to match the quality level of production equipment to the situation at hand. Most broadcast outlets expect programs to meet technical standards that are more easily achieved with professional quality equipment than with consumer- or prosumer-level equipment.

Camera Configurations

Camera versus Camcorder

A video camera can be configured either as a video camera with no built-in recording capability or as a camcorder—a video camera with a video recorder that is either built in or attached to the camera. Consumer-level camera equipment is generally available only in the camcorder configuration; professional/broadcast equipment may be configured as either a camera or a camcorder.

Video Camera: Studio Configuration

A video camera designed for studio use may be a fairly large camera mounted on a heavy pedestal or tripod with a set of external controls that allow the camera operator to control principal camera functions such as zooming the lens to change the angle of view and focusing the lens. (See Figure 4.2.) The camera is connected to the video control room with a cable that delivers power to the camera and carries the video signal from the camera to the control room. A large viewfinder will typically be mounted on top of the camera to allow the camera operator to easily see the image that the camera is producing as he or she stands behind the camera and controls it.

image

FIGURE 4.2
Studio Camera

This type of camera may be used in a multicamera studio environment or on a large remote production. For example, telecasts of sporting events typically involve the use of multiple cameras whose signals are all routed back to a central control room, where a director calls the shots and where the program is either transmitted live or recorded for later broadcast.

Convertible Studio/Field Cameras

Convertible cameras are designed to be used either in the studio, alongside several others in a typical multicamera configuration, or in the field as a single-camera portable unit. These cameras are generally smaller and cheaper than their full-size studio counterparts, may be equipped with either an eyepiece or a studio-type viewfinder, and may be operated with conventional AC power or with battery power. (See Figure 4.3.) Some convertible cameras have the capability to be converted into camcorders by removing the back end of the camera and adding a dockable video-recording device.

image

Field Configuration

image

Studio Configuration

FIGURE 4.3   Convertible Camera

One-Piece Camcorders

A camcorder combines the video signal–producing camera component of a video camera with a video recording device. Unlike dockable camcorders, in which the camera head and video recorder are separate components that can be disconnected from each other, in one-piece camcorders the components that produce and record the video signal are integrated into one solid piece of equipment. Like other equipment, camcorders vary widely in price and features depending on the target market for which they have been produced.

Parts of a Video Camera

A video camera system is composed of four principal parts: the camera itself (sometimes called the camera head), lens, camera control unit, and viewfinder.

Camera Head

The camera head contains the optical and electronic components that produce the video signal: the imaging device, the beam-splitting system, and the camera’s internal electronics. (See Figure 4.4.)

IMAGING DEVICE One of the main functions of a video camera is to change the light reflected off the scene in front of the camera into electrical energy—the video signal. This is the principal function of the camera’s imaging device. The imaging device in most modern cameras is a charge-coupled device (CCD), a small silicon chip inside the camera head. (Increasingly, complementary metal oxide semiconductor, or CMOS, chips are used as camera imaging devices as well.) CCDs and CMOS chips are referred to as optical video transducers because they convert light focused onto them by the camera lens into electrical energy. An electronic shutter regulates the length of time the imaging device is exposed to the light.

image

CCD Sensor (½-inch)

image

Prism Block

FIGURE 4.4   CCD Sensor and Prism Block

The surface of the CCD or CMOS chip contains a grid of pixels, or picture elements, arranged in a precise series of horizontal rows, or lines, and vertical columns. (See Figure 4.5.) The camera lens focuses the scene before it on this array of pixels, each of which is responsible for reproducing one tiny part of the picture. Each pixel contains a semiconductor that converts the incoming light into an electrical charge. The strength of the charge is proportional to the brightness of the light hitting the pixel and the amount of exposure time. After the incoming light is converted into an electrical charge, it is transferred and stored in another layer of the chip, and then the information is read out one pixel at a time in a line-by-line sequence in conformity with normal television scanning rates.
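To make the readout sequence concrete, here is a minimal Python sketch that models a tiny sensor as a grid of pixel charges and reads them out one line at a time; the grid size and charge values are invented for illustration and are far smaller than a real chip’s.

# Minimal sketch: reading a tiny "sensor" out pixel by pixel, line by line.
# The 4 x 6 grid and its charge values are invented for illustration only;
# a real CCD has hundreds of lines and roughly 700 pixels per line.

charges = [
    [0.10, 0.12, 0.55, 0.60, 0.15, 0.11],   # line 1 (brighter pixels = larger charge)
    [0.09, 0.50, 0.80, 0.82, 0.48, 0.10],   # line 2
    [0.08, 0.45, 0.78, 0.79, 0.44, 0.09],   # line 3
    [0.10, 0.11, 0.40, 0.42, 0.12, 0.10],   # line 4
]

def read_out(sensor):
    """Yield (line, column, charge) in normal top-to-bottom scanning order."""
    for line_number, line in enumerate(sensor, start=1):
        for column, charge in enumerate(line, start=1):
            yield line_number, column, charge

for line_number, column, charge in read_out(charges):
    print(f"line {line_number}, pixel {column}: charge {charge:.2f}")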

image

FIGURE 4.5
CCD Sensor

CCDs and CMOS chips are popular as imaging devices because they are rugged, cheap, and small and use very little electrical power. They are wafer thin and are manufactured in various sizes: 1/6, 1/4, 1/3, 1/2, or 2/3 inch, measured diagonally. Consumer cameras tend to use the smaller chips (1/3 inch or smaller); professional cameras tend to use the larger chips. In addition, consumer cameras typically use one CCD chip to create the signal, whereas professional cameras use three.

BEAM SPLITTER Before the light that is focused onto the imaging device can be changed into electrical energy, it needs to be broken down into its essential color components. Color video cameras work by breaking down the incoming light into the additive primary colors of light: red, green, and blue. (See Figure 4.6 and CP-3 and CP-4.)

In professional- and prosumer-quality cameras this is done by passing the light through a prism block, a glass prism equipped with filters that assist in the color separation process. Three CCDs are employed to generate the video signal; one CCD is assigned to each of the principal color components: red, green, and blue.

In consumer-quality cameras a single CCD is charged with the responsibility of creating the entire color signal. Light hits the CCD and is sequentially broken down by a stripe filter into its red, green, and blue components. As you may imagine, the quality of the signal that is produced by single-chip systems and the resulting video image are not as good as the signal and image produced by three-chip systems.

image

FIGURE 4.6   Prism Block System

Camera Control Unit and Camera Internal Electronics

Video cameras contain complex electronic circuitry to process, store, and transmit the video signal. Various controls that affect the signal are located in the camera control unit (CCU). These include controls to adjust the size of the lens iris or aperture, controls to adjust the amplification or gain of the video signal, and controls to adjust the white balance and black balance of the video signal. (These are discussed in greater detail in subsequent chapters.) Depending on the type of camera, the CCU controls may be integrated into the camera head, as is the case with field-use-only camcorders, or they may be housed in a remote control unit in the control room, as is the case for cameras used in a multicamera studio or field production environment. (See Figure 4.7.)

image

FIGURE 4.7   Camera Control Unit (CCU)

The Camera Lens

Aside from the camera head itself, the lens is the part of the camera system with the greatest effect on image quality.

ZOOM LENS Color video cameras are equipped with a zoom lens. A zoom lens is a variable focal length lens that allows the camera operator to change the angle of view without changing the lens. A point of comparison here is fixed focal length lenses, which are often used by still photographers and in feature film production. A fixed focal length lens produces only one angle of view. To make the image appear closer or farther away, the camera needs to be physically moved closer to the subject or farther away from it. With a zoom lens, all the camera operator has to do is zoom in or zoom out to change the size of the subject.

STUDIO LENSES AND EFP LENSES Studio cameras and cameras used for multicamera sports and event telecasts are typically equipped with very large, heavy, studio-type zoom lenses. (See Figure 4.8.) These large lenses often have great magnification power that enables the camera operator to get very high-quality close-up shots even from a great distance.

image

Studio Lens

image

EFP Lens

FIGURE 4.8   Studio Lens versus EFP Lens

Lenses that are used on camcorders for electronic field production (EFP) and electronic news gathering (ENG) need to be lighter so that the camera operator can easily carry the camera. And because cameras that are used in EFP and ENG are usually placed fairly close to the action (e.g., an interview), they do not need to have the magnification power of lenses used in large studios or in large-scale event production. Therefore these lenses are significantly smaller than their full-size studio counterparts. (See Figure 4.8.)

Viewfinder

The camera viewfinder is a small video monitor that displays the image the camera is producing. Three types of viewfinder systems are in use: studio viewfinders, eyepiece viewfinders, and LCD viewfinders. (See Figure 4.9.)

image

Studio Viewfinder

image

Eyepiece Viewfinder

image

LCD Viewfinder

FIGURE 4.9   Camera Viewfinder Systems

Studio viewfinders are mounted on top of the camera. They are typically 5 to 7 inches in diagonal. The camera operator stands behind the camera as he or she operates the camera.

Eyepiece viewfinders are used on EFP/ENG cameras and camcorders. Typically, they are mounted on the left side of the camera (to accommodate right-handed camera operators). A small rubber eyepiece is attached to a diopter that is positioned above a very small (usually 1- to 1½-inch) video monitor. The camera operator holds the camcorder on his or her shoulder and presses one eye to the viewfinder. Many eyepiece viewfinders contain a focus control that allows the camera operator to focus the image for his or her vision. This allows camera operators who wear eyeglasses to shoot without using their glasses.

LCD viewfinders are found on an increasing number of consumer and professional cameras and camcorders. Typically, these displays are mounted on the side of the camera and flip out for easy viewing. LCD viewfinder systems allow the camera operator to see the viewfinder image while the camcorder is held at arm’s length, making it possible to record from difficult or unusual angles that might not be possible with a conventional eyepiece viewfinder system.

Viewfinders on professional and prosumer cameras may contain a zebra-stripe exposure indicator. In this system, a series of black-and-white lines appears over the brightest portion of the picture when the maximum video level has been reached. In some camera viewfinders the zebra stripes are set to appear when the correct exposure for skin tones is achieved.

image

FIGURE 4.10   Cathode Ray Tube (CRT) System

Video Monitors

Video monitors reverse the process of the CCD: Their job is to take the electrical video signal and turn it back into a picture, in the form of light energy displayed on a screen. Several different video display systems are now in use.

Cathode-ray tube (CRT) systems are based on analog-type vacuum tube technology. An electron beam scans the inside face of the monitor screen, which is coated with a series of red, green, and blue phosphors, which glow when they are hit by the electrical charge of the electron beam. (See Figure 4.10 and CP-2.)

image

FIGURE 4.11
Plasma Display Panel

Liquid crystal display (LCD) systems are flat-panel screens that are widely used in laptop computers and increasingly as video display panels. They are based on liquid crystal material that controls the passage of light when stimulated with an electrical charge.

Plasma display panel (PDP) systems are similar to LCDs in that they are flat-panel screens, but instead of liquid crystal they use neon gas compressed between two layers of glass embedded with crisscrossing horizontal and vertical electrodes to excite the red, green, and blue phosphors in the screen. (See Figure 4.11.)

How the Video Camera Makes a Picture: Analog and Digital Standards

The process of transducing light energy into electrical energy is accomplished by the camera’s imaging device—the CCD or CMOS chips—as we discussed above. To produce a video signal that can be displayed on any video monitor, camera and video recorders must adhere to a set of technical standards that govern how the video signal is created and displayed. Currently, two sets of standards are in use in the United States: the original NTSC (National Television System Committee) standards adopted for analog television by the Federal Communications Commission in 1941 and the ATSC (Advanced Television System Committee) standards for digital television adopted by the FCC in December 1996. These standards cover a wide range of technical specifications for how the video signal is constructed and transmitted. For the purpose of this discussion the important elements of the standards on which we will focus include the aspect ratio of the image, frame and line rates, and scanning patterns.

NTSC Video: The Analog Standard

The NTSC video standards were developed in the United States in the 1930s and have been in use since over-the-air broadcast television was initiated in the United States in the early 1940s. Anyone reading this book who grew up in the United States has been watching television that was based on the NTSC standard.

ANALOG SIGNAL In the NTSC system the video signal is an analog signal. In cameras an analog signal is one in which the recorded signal varies continuously in proportion to the light that produced it. The video signal can be viewed on a piece of monitoring equipment called a waveform monitor. The waveform that is produced by an analog video camera corresponds to the bright and dark areas of the scene the camera is recording. (See Figure 4.12.)

image

FIGURE 4.12
Camera Display and Waveform

ASPECT RATIO The most noticeable feature of the NTSC standard is the shape of the video screen, also known as its aspect ratio. In NTSC video the screen shape is a standard ratio of 4:3 (width:height). No matter how large or small the screen is, it will retain these standard rectangular proportions, in which the screen is only slightly wider than it is tall. (See Figure 4.13.)

FRAME AND LINE RATES The video image is composed of a number of frames and lines that play out at a specified rate. In the NTSC system cameras produce 30 frames of video information per second, and each one of those frames is composed of 525 lines of information. Furthermore, each of the individual scanning lines is composed of a series of about 700 light-sensitive pixels.

In reality, the precise frame rate for NTSC video has been 29.97 frames per second since the color television standards were developed in the early 1950s. And of the 525 actual lines, only 480 are actively used to transmit picture information. However, this text follows the widely adopted convention of referring to the NTSC standard as 30 frames per second and 525 lines per frame.
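The relationship between these numbers can be checked with a little arithmetic; the short Python sketch below works out the field and line rates implied by the figures quoted above.

# NTSC timing arithmetic based on the rates quoted above.
frames_per_second = 30 / 1.001       # the "29.97" color frame rate
lines_per_frame = 525                # total lines, of which 480 carry picture
fields_per_second = frames_per_second * 2

line_rate = frames_per_second * lines_per_frame   # horizontal scanning rate
print(f"frame rate: {frames_per_second:.3f} frames/second")
print(f"field rate: {fields_per_second:.3f} fields/second")
print(f"line rate:  {line_rate:.1f} lines/second")   # about 15,734 lines/second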

INTERLACED SCANNING In the NTSC system each television frame is constructed through a process called interlaced scanning. When the video signal is read out from the CCD, instead of sending out lines 1–525 sequentially, each frame is broken up into two fields: one composed of the odd-numbered lines, the other of the even-numbered lines. First the odd-numbered lines are read out, and the even-numbered lines follow. At the receiver, the image is constructed field by field and line by line. (See Figure 4.14.)
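As a rough illustration of the field split (not a model of any particular camera’s circuitry), the Python sketch below separates a list of numbered lines into an odd field and an even field and then interleaves them again; only ten lines are used so the output stays readable.

# Minimal sketch of interlaced readout: odd-numbered lines first, then even.
# Only 10 lines are used here for readability; NTSC uses 525.

lines = list(range(1, 11))            # line numbers 1..10 stand in for 1..525

odd_field = lines[0::2]               # lines 1, 3, 5, ...
even_field = lines[1::2]              # lines 2, 4, 6, ...

print("field 1 (odd lines): ", odd_field)
print("field 2 (even lines):", even_field)

# The display rebuilds the frame by interleaving the two fields again.
rebuilt = [None] * len(lines)
rebuilt[0::2] = odd_field
rebuilt[1::2] = even_field
assert rebuilt == lines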

image

FIGURE 4.13
Aspect Ratio

image

The Odd Lines Are Scanned First

Then the Even Lines Are Scanned

image

Starting with Line 1, Each Line Is Scanned Progressively to the Bottom of the Frame

FIGURE 4.14   Interlaced and Progressive Scanning

This somewhat complex system was developed to overcome the technical limitations of early television display and transmission systems. Monitors of the time used CRTs coated with phosphorescent material that would glow when they were scanned with an electron beam.

Unfortunately, if the 525 scanning lines that composed the picture were scanned sequentially from top to bottom, the top of the screen would fade out before the picture was complete. This created a flickering effect in the image. By splitting the frame into two fields, scanning the image one field at a time, and interlacing the lines, the problem of flickering was solved, and the picture maintained its brightness throughout the program.

The interlaced scanning system also solved a signal transmission problem. By breaking each frame into two fields, each with only half of the picture information, the bandwidth that was assigned to each television station was able to accommodate the signal.

Of course, one of the consequences of interlaced scanning is reduced picture quality. Because only half of the lines of each frame are displayed at one time, the picture resolution is significantly poorer than it would be if all of the lines were displayed at once.

ATSC Video: The Digital Standard

In 1996 the Federal Communications Commission adopted a new set of video standards that had been developed by the ATSC. Adoption of these standards was the culmination of over a decade of discussions centered on converting the existing analog television system to one based on a digital standard. ATSC differs from NTSC in many significant ways. One of the most significant differences is that unlike NTSC, in which aspect ratio, frame rate, and line rate all meet a single standard, ATSC is an open standard that is able to accommodate signals with various line and frame rates and two different aspect ratios. (See Table 4.1.)

DIGITAL SIGNAL In the ATSC system the video signal is digital. In digital camera systems light hits the CCD and creates an electrical charge. That electrical charge is then sampled and converted into a binary numerical code consisting of bits, each an off or on (0/1) pulse. This is called quantizing. (See Chapters 5 and 10 for a more detailed discussion of digital audio and video.)

In comparison to an analog signal, which varies continuously in relation to the phenomenon that produced it (e.g., variations in the amount of light hitting the CCD), in a digital system only selected points are sampled. As you may imagine, the higher the sampling rate and the more bits assigned to each sampling point, the more accurate the digital code will be.
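The following Python sketch illustrates the idea of sampling and quantizing with invented numbers: a continuously varying brightness value is sampled at a few points, and each sample is rounded to the nearest of 256 levels, the values an 8-bit code can represent.

import math

# Minimal sketch of sampling and quantizing.
# The "analog" brightness waveform and the 8-bit depth are illustrative choices.

def analog_brightness(t):
    """A continuously varying stand-in for light hitting the sensor (0.0 to 1.0)."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * t)

bits = 8
levels = 2 ** bits                    # 256 possible code values

for n in range(8):                    # take 8 evenly spaced samples
    t = n / 8
    value = analog_brightness(t)
    code = round(value * (levels - 1))     # quantize to the nearest of 256 levels
    print(f"t={t:.3f}  analog={value:.4f}  {bits}-bit code={code:3d}  ({code:08b})")

# A higher sampling rate and more bits per sample would track the analog
# waveform more closely, as the text notes.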

TABLE 4.1 The ATSC Digital Television Scanning Formats

Vertical Lines   Horizontal Pixels   Aspect Ratio    Picture Rate*        HDTV/SDTV
1,080            1,920               16:9            60I 30P 24P          HDTV
  720            1,280               16:9            60P 30P 24P          HDTV
  480              704               16:9 and 4:3    60I 60P 30P 24P      SDTV
  480              640               4:3             60I 60P 30P 24P      SDTV

* “I” means interlaced scan, and “P” means progressive scan.

HDTV = High-definition television.

SDTV = Standard-definition television.

Source: Advanced Television System Committee.

ASPECT RATIO Early in the discussions that led to the development of the ATSC standard, one of the goals was to develop a new set of standards for high-definition television (HDTV)—television with an increased line rate that could deliver substantially improved picture quality. Another issue that the standards makers had to confront was the fact that the feature films of the day were being produced in a variety of wide-screen formats. Fitting those films onto a television screen means cutting off parts of the picture so that they will fit into the 4:3 frame. ATSC settled on a 16:9 aspect ratio for HDTV to better accommodate the transmission of feature films. Because so much program material had already been developed in the standard 4:3 aspect ratio, the ATSC standard can accommodate that as well.
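The trade-off can be put into numbers. The Python sketch below uses a hypothetical 640 × 480 screen to work out how much of a 16:9 picture survives letterboxing versus cropping; the screen size is an arbitrary example.

# Illustrative arithmetic only: fitting a 16:9 picture onto a 4:3 screen.
# The 640 x 480 screen size is an arbitrary example.

screen_w, screen_h = 640, 480          # a 4:3 screen

# Letterboxing: keep the full 16:9 width, add black bars top and bottom.
letterbox_h = screen_w * 9 / 16        # 360 active lines
bar_height = (screen_h - letterbox_h) / 2
print(f"letterbox: picture {screen_w} x {letterbox_h:.0f}, bars {bar_height:.0f} lines each")

# Cropping (pan and scan): keep the full 4:3 height, cut the sides of the 16:9 frame.
source_w = screen_h * 16 / 9           # width the wide-screen source would need
cropped = source_w - screen_w
print(f"cropping: {cropped:.0f} of {source_w:.0f} source columns are cut off "
      f"({cropped / source_w:.0%})")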

FRAME AND LINE RATES Just as the ATSC standard can accommodate various aspect ratios, it is also flexible in its ability to transmit pictures with various numbers of lines. In addition to the 480 (active) line × 704 pixel standard used by NTSC (now called standard-definition television, or SDTV), the ATSC standard contains two HDTV standards with either 1,080 lines and 1,920 pixels per line or 720 lines and 1,280 pixels per line, as well as the 480 line × 640 pixel variant that is equivalent to the VGA standard used to display graphics on computer screens.

The ATSC standard includes the 60 field per second (30 frame per second) interlaced scanning standard on which NTSC is based, and also can accommodate signals based on a standard of 24 or 25 frames per second. Twenty-four frames per second is the standard projection rate for feature films. Twenty-five frames per second is the standard used by European broadcasters and in other countries throughout the world that are not based on NTSC.

PROGRESSIVE AND INTERLACED SCANNING Improvements in video displays have eliminated the technical issues that required interlaced scanning. If you have ever sat in front of a computer monitor, you have viewed an image in which each of the lines is scanned progressively. In progressive scanning, each of the lines in the frame is scanned successively. In the 720-line system, scanning begins with line 1 and continues to line 720; then the process repeats. (See Figure 4.14.)

The ATSC standard accommodates progressive scanning as well as interlaced scanning. For example, one of the popular HDTV formats uses 720 scanning lines, scanned progressively. Another popular HDTV format relies on 1,080 scanning lines, interlaced.

Production equipment has been produced to accommodate some of these standards, and more is in development. Many of the new cameras that are available to professional video producers have selectable frame rates and aspect ratios, allowing the cameras to be used to provide program material in a variety of the digital formats. Even many inexpensive camcorders allow for aspect ratio changes from 4:3 to 16:9, and some are capable of 24-frame-per-second progressive scanning as well.

Camera Operational Controls

Most cameras and camcorders have a standard set of operational controls with which you should be familiar if you are going to be the camera operator. (See Figure 4.15.)

image

FIGURE 4.15
Camera Operational Controls

Power Switch

All cameras and camcorders run on electrical power supplied either by a battery (DC—direct current) or by regular electrical power (AC—alternating current). The power switch may be a simple ON–OFF switch, or it may be a three-position ON–OFF–STANDBY switch. The standby mode is used to conserve power, particularly on battery-operated camcorders. When the camcorder is in the standby mode, the camera electronics are on, but the image sensor and viewfinder are not activated. This not only conserves battery power, but also prolongs the life of the image sensor. When the switch is moved to the ON position, the viewfinder and image sensor are activated, and the camera becomes fully operational.

Color Bars/Camera Switch

The color bars/camera switch allows the camera operator to select the signal that the camera will produce. In the CAMERA position the camera output is the video signal generated by the image sensor. In the COLOR BARS position the camera generates a standard pattern of yellow, cyan, green, magenta, red, and blue bars that are used as a standard color reference by video engineers. (See Figure CP-5 in the color insert.) It is standard practice to record color bars at the beginning of each studio production and on each tape recorded in the field.

Color Correction Filters

Most professional and prosumer cameras and camcorders contain a series of filters that are used to compensate for differences in the color temperature of the light falling on the scene. (See Figure 4.16.) The two most essential settings are 3,200°K (equivalent to the color temperature of tungsten halogen lights used in television studio production) and 5,600°K (equivalent to the color temperature of daylight). On consumer equipment these switches may be labeled INDOOR and OUTDOOR.

Professional cameras are likely as well to have one or more neutral density filters. These filters reduce the amount of light that reaches the image sensors without affecting the color temperature of the light. The neutral density filter is typically used in conjunction with the lens iris when shooting outdoors in very bright conditions to control depth of field. Under very bright outdoor lighting conditions the lens will generally produce an image with great depth of field. To shorten the depth of field, the neutral density filter is engaged, which then requires the lens iris to open up to let in more light. As the iris gets larger, the depth of field gets shorter. This allows the camera operator to exercise a degree of aesthetic control over which portions of the image are in focus and out of focus.

image

FIGURE 4.16
Color Correction Filters

White and Black Balance

Because the color temperature of light varies so much, all color video cameras contain electronic circuitry designed to correct the white balance of the camera. White balance adjusts the relative intensity of the red, green, and blue color channels to allow the camera to produce an accurate white signal in the particular light in which the camera is recording. Once the camera is set to produce white accurately, it can produce all of the other colors in the scene accurately as well (see CP-7).

There are three common white balance modes. Automatic white balance continuously and automatically regulates the camera’s electronic color circuitry even if the color temperature of the light changes while the camera is recording the scene. Preset white balance controls allow the camera operator to manually choose from a variety of preset settings, such as 3,200°K (tungsten halogen) or 5,600°K (daylight), or from a menu of various light sources, such as incandescent, fluorescent, daylight, and overcast lighting conditions. Manual white balance controls require the camera operator to set the white balance for the existing light conditions by focusing the camera on a white card and activating the manual white balance switch. If the lighting conditions change or if the camera changes position, the white balance procedure needs to be repeated.
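As a rough sketch of what a manual white balance does internally, the Python fragment below takes invented red, green, and blue readings from the white card and computes per-channel gains so that the card comes out neutral; the assumption that green is left alone while red and blue are scaled to match it is a common one, not a description of any particular camera’s circuitry.

# Minimal sketch of manual white balance, assuming the green channel is left
# alone and red and blue are scaled to match it.
# The white-card readings below are invented for illustration.

white_card = {"r": 180.0, "g": 200.0, "b": 230.0}   # readings under warm indoor light

gains = {channel: white_card["g"] / value for channel, value in white_card.items()}
print("channel gains:", {c: round(g, 3) for c, g in gains.items()})

def balance(pixel):
    """Apply the stored gains to one (r, g, b) pixel."""
    r, g, b = pixel
    return (r * gains["r"], g * gains["g"], b * gains["b"])

# The white card now reads roughly equal in all three channels.
print("white card after balancing:", [round(v, 1) for v in balance((180.0, 200.0, 230.0))])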

Professional cameras may include a black balance control in addition to white balance. Black balance adjusts the video signal to produce black at a preset point in the video waveform.

Gain Boost/Sensitivity

The function of the gain boost switch is to amplify the video signal. This is most often used in low-light situations to electronically brighten the picture when not enough light is falling on the scene to produce an acceptable image. However, the benefit is not without a cost; increased brightness may come at the expense of picture quality, which may suffer from an increase of visible electronic noise.
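The trade-off can be illustrated numerically. In the Python sketch below, a made-up signal level and noise level are both multiplied by the same gain factor, so the picture gets brighter but the signal-to-noise ratio does not improve.

# Illustrative numbers only: how a gain boost raises signal and noise together.

signal = 0.20      # a dim, underexposed video level (invented)
noise = 0.01       # electronic noise riding on it (invented)

for gain_db in (0, 6, 12, 18):
    factor = 10 ** (gain_db / 20)          # convert decibels to a voltage factor
    boosted_signal = signal * factor
    boosted_noise = noise * factor
    print(f"+{gain_db:2d} dB: signal {boosted_signal:.2f}, noise {boosted_noise:.3f}, "
          f"signal-to-noise ratio unchanged at {boosted_signal / boosted_noise:.0f}:1")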

Electronic Shutter

The electronic shutter controls the amount of time that light passing through the camera lens hits the CCD image sensors. The normal shutter speed for cameras operating on the NTSC standard is one sixtieth of a second, which corresponds to the exposure time for one field of video information. In recording situations in which the camera image tends to flicker or become blurred because of high-speed movement of the subject in front of the camera, shutter speed can be increased to reduce these picture artifacts and improve the sharpness of the image. Shutter speed can also be adjusted in conjunction with the camera iris to increase or decrease depth of field.

Shutter speeds commonly range from 1/60 to 1/10,000 of a second, with incremental settings of 1/100, 1/250, 1/1,000, 1/2,000, 1/4,000, and 1/8,000. In addition, some cameras feature a variable shutter speed, which can be used to eliminate the screen flicker that becomes visible when a computer screen or video monitor appears in a shot.

Camera Supports

Although both consumer- and professional-level camcorders are light enough to be shoulder carried, or hand carried as in the case of many of the smaller consumer models, a variety of camera supports are used to provide stability and control to studio and field camera systems.

Studio Pedestal

The combined weight of a large studio camera equipped with a full-size lens can be quite significant, and to provide adequate support, a camera pedestal may be used. (See Figure 4.17.) In addition to supporting the weight of the camera, viewfinder, and lens, the pedestal has two other advantages: The base of the pedestal is equipped with wheels that make it easy to move the camera on a smooth studio floor, and the height of the camera on the pedestal can be raised or lowered, allowing for high- and low-angle shots.

image

FIGURE 4.17
Studio Pedestal

Tripod

Tripod mounts—or sticks, as they are often called because of the wooden legs on some models designed for use in the field—are widely used as camera supports. Tripods are lighter and more portable than pedestals. (See Figure 4.18.)

Tripods equipped with a wheeled dolly are often used in studio applications. (See Figure 4.19.) The wheels allow the camera and tripod to be moved from one place to another but generally do not provide the kind of fluid movement that can be achieved with a pedestal.

Tripods that are used in field production situations may be equipped with a spreader, a device that works well on a flat surface and holds the tripod legs firmly in place. Tripods usually contain telescoping legs that allow the height of the camera to be adjusted, but in contrast to the studio pedestal, height adjustments cannot be used as a production technique—once the height of the camera is set, the tripod legs are securely fastened.

image

FIGURE 4.18
Tripod

Some camera operators prefer the use of a monopod, a telescoping rod that is attached to the base of the camcorder and is inserted into a special belt pouch. Monopods work best with lightweight cameras and camcorders.

image

FIGURE 4.19
Studio Tripod Dolly

Gyroscopic Camera Support Systems

A number of manufacturers produce camera-mounting systems that incorporate gyroscopes in their design. They are often called by the trade name of the product produced by the best-known manufacturer of these systems: Steadicam®. (See Figure 4.20.) Depending on the size of the camera and the corresponding size of the mounting system, it may be worn by the camera operator, as in the case of large cameras, or hand-held, as in the case of small consumer-level camcorders. In either case, the use of these gyroscopically stabilized systems allows for extremely fluid camera motion, even when the camera operator walks or runs with the camera. The effect is almost as if the camera is floating through the scene.

image

FIGURE 4.20   Gyroscopic Camera Support System (Steadicam®)

Jib

A jib is a fulcrum-mounted camera support system that allows the camera to be raised high above the scene to be recorded. (See Figure 4.21.) The camera is attached to the end of the jib arm, which is raised and lowered by the camera operator. A series of remote controls govern the zoom, pan, tilt, and focus functions. On some systems the length of the jib arm can be extended; on others it is a fixed length. Jibs are widely used in studio and field applications because they provide a relatively simple way to achieve very dramatic high and low camera angles.

Robotic Camera Systems

Studio cameras used for the production of television news may be operated by robotic remote control rather than by individual camera operators. In robotic systems control of the cameras and their supports is automated. A computer system is used to program the individual shots and camera moves for each of the cameras. One technician can operate multiple cameras from the control room instead of committing one camera operator to each camera. For many television stations the use of robotic cameras is extremely cost-effective because the expense of the robotic systems is quickly recouped from savings in crew salaries.

image

FIGURE 4.21
Camera Jib

The Zoom Lens

Most modern video cameras are equipped with a zoom lens. (See Figure 4.22.) The zoom lens is used to change the angle of view of the image the camera produces. By adjusting the zoom control, the operator can zoom in (make an object appear larger and closer) or zoom out (make the object seem to decrease in size or move farther away). (See Figure 4.23.)

Focal Length and Zoom Ratio

Technically speaking, a zoom lens is a lens with a continuously variable focal length. Focal length is defined as the distance from the optical center of the lens to the point where the image is in focus—the surface of the CCD. Lens focal length is measured in millimeters (mm). Because the zoom lens is capable of being adjusted to a variety of focal lengths, these lenses are often described by their zoom ratio: the ratio of the lens at its narrowest (close-up) angle to its widest (long-shot) angle. For example, a zoom lens with a wide-angle focal length of 8 mm and a zoom ratio of 20:1 has an effective focal length of 160 mm when it is zoomed in all the way (160:8 is a ratio of 20:1).
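The arithmetic in that example generalizes directly, as this brief Python sketch shows; the 8 mm focal length and 20:1 ratio are simply the figures from the example above.

# Focal-length arithmetic from the example above.
wide_focal_length_mm = 8.0      # the lens zoomed all the way out
zoom_ratio = 20.0               # a "20:1" zoom lens

telephoto_focal_length_mm = wide_focal_length_mm * zoom_ratio
print(f"zoomed in all the way: {telephoto_focal_length_mm:.0f} mm")   # 160 mm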

image

FIGURE 4.22
Zoom Lens

image

Lens Zoomed Out: Wide Angle of View

image

Lens Zoomed In: Narrow Angle of View

FIGURE 4.23
Focal Length and Zoom Ratio

A zoom lens with a zoom ratio of 20:1 or higher is particularly useful for outdoor shooting. Distant objects can be magnified simply by zooming in on them. In a small studio environment a lens with a zoom ratio between 10:1 and 15:1 may well be sufficient.

Operating the Zoom Lens

There are three principal operational controls for the zoom lens: focus control, zoom control, and aperture control. Each of these plays a major role in controlling the quality of the image that is delivered to the camera’s image sensor. On studio cameras, the focus and zoom controls are mounted on the pan handles, which are attached to the pedestal or tripod. (See Figure 4.24.) This allows the camera operator to change the angle of view and focus the camera while standing behind the camera and looking into the viewfinder. On camcorders that are designed for EFP/ENG and equipped with an eyepiece viewfinder, the camera is held on the camera operator’s shoulder and the camera operator adjusts the zoom and focus controls directly on the lens housing. (See Figure 4.22.)

image

Camera Operator in the Studio

image

FIGURE 4.24
Zoom and Focus Controls on a Studio Camera

FOCUS CONTROL The image is in focus when the important parts of the image are seen in sharp detail, and it is out of focus when it is fuzzy and unclear. The focus on all professional cameras is adjusted manually by the camera operator, either by adjusting the focus ring at the end of the lens on EFP/ENG cameras or by adjusting the remote control on the pan handle of a studio camera. Rotating the focus control clockwise or counterclockwise brings the image into and out of focus. To keep an image in focus throughout the zoom range, zoom in to the tightest shot possible on the subject you are shooting and focus the lens. Then zoom out. The image should stay in focus, and as long as the distance between the camera and the subject doesn’t change, the image will be in focus when the camera zooms back into its close-up shot.

If the lens is unable to focus an image at the widest angle of the zoom (when the lens is zoomed out all the way), the camera’s back focus needs to be adjusted. On some cameras there is a back focus adjustment on the lens housing. On other cameras a technician may need to make this focus adjustment.

Most consumer-grade camcorders are equipped with an automatic focus (autofocus) mechanism that focuses the lens by emitting a beam of infrared (invisible) light or ultrasound (inaudible sound). The beam bounces off the object being recorded and travels back to the camcorder. A servomechanism then automatically adjusts the lens for correct focus.

Autofocus can be helpful, particularly for inexperienced camera operators, because it guarantees sharp focus in shooting situations that may be difficult for the novice to control. For example, autofocus will keep the image in focus in low-light situations in which it may be hard to see the image in the viewfinder, and it will keep the image in focus if you zoom in or zoom out.

ZOOM CONTROL Modern lenses contain a motor-driven zoom lens. The zoom lens control (mounted on the lens on EFP/ENG cameras and on the pan handle of studio cameras) has a zoom-in position at one end and a zoom-out position at the other end. These are often labeled “T,” to indicate the tight, telephoto, zoomed-in position, and “W,” for the wide, zoomed-out position. (See Figure 4.25.)

Some zoom lens controls have one or two preset speeds that allow the operator to set the speed selector switch for a fast or slow zoom. More sophisticated zoom lenses have continuous variable-speed motors hooked up to the lens. The zoom speed is controlled by the amount of pressure that is put on the control—the zoom speed increases with increased pressure and decreases with decreased pressure. This guarantees extremely smooth zooming over the whole zoom range of the lens.

image

FIGURE 4.25
Motor-Driven Zoom Lens

Many professional-quality ENG/EFP lenses also contain a manual zoom control. This disengages the zoom lens motor and allows the camera operator to manually control the speed of the zoom by rotating the zoom ring on the barrel of the lens.

APERTURE CONTROL The aperture, or iris, of the lens regulates the exposure of the image by controlling the amount of light that passes through the lens and hits the CCD image sensors. The iris of the lens works like the iris of the eye. In low-light situations the iris needs to be opened up to allow more light to hit the image sensor. In bright situations, the size of the iris needs to be reduced to decrease the amount of light hitting the sensors.

The size of the aperture or iris opening is described by its f-stop number, a standard calibration of the size of the aperture opening; f-stop numbers usually vary from about 1.4 to 22. Typical f-stop settings on a lens are f-1.4, f-2, f-2.8, f-4, f-5.6, f-8, f-11, f-16, and f-22. Small f-stop numbers correspond to large aperture openings; large f-stop numbers correspond to small aperture openings. (See Figure 4.26.) In addition, most lenses have an iris setting in which the aperture can be completely closed to prevent any light from hitting the image sensors during those times when the camera is not in use.
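The f-stop series is not arbitrary: an f-stop is the focal length divided by the diameter of the aperture opening, and each full stop passes roughly half as much light as the one before it. The Python sketch below works this out for a hypothetical 50 mm focal length.

# F-stop arithmetic. The 50 mm focal length is a hypothetical example;
# the f-stop is focal length divided by the diameter of the aperture opening.

focal_length_mm = 50.0
full_stops = [1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22]

for f_stop in full_stops:
    diameter = focal_length_mm / f_stop              # aperture opening, in mm
    relative_light = (1.4 / f_stop) ** 2             # light relative to f-1.4
    print(f"f-{f_stop:<4}: opening {diameter:5.1f} mm, "
          f"passes {relative_light:.1%} of the light at f-1.4")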

OTHER LENS COMPONENTS EFP/ENG camcorders and convertible cameras typically include a soft rubber lens hood attached to the end of the lens. The lens hood works like the visor on a hat to prevent unwanted light from hitting the lens and causing lens flare. Lens flare is an optical aberration that is caused when light bounces off the elements within a lens. Most often, lens flare is caused by pointing the camera directly at the sun or a studio lighting instrument. (See Figure 4.27.)

image

FIGURE 4.26   F-Stop Numbers and Aperture Size

The lens cap is a cover that is attached to the end of the lens to protect the lens glass from damage when the camera is not in operation and to prevent light from hitting the image sensors when the camera is off. The lens cap should always be used when the camera is not in operation; in field recording situations it should always be used when the camera is being transported from one location to another.

image

FIGURE 4.27   Lens Flare

A macro lever is a device that converts the lens to a macro lens, allowing the camera to take extreme close-up shots of very small objects by moving them very close to the lens. In the normal zoom lens mode most lenses are not capable of focusing on objects that are closer than two or three feet from the end of the lens. When the macro lens function is activated, the camera can focus on objects that are only inches away from the lens and magnify their size so that they fill the video frame.

A variety of lens filters are available for use with studio and field lenses. Filters may be used to change the quality or amount of light passing through the lens, or they may be used to achieve special effects. Fog filters and star filters are popular special effects lens filters.

Aperture Control and Depth of Field

Depth of field refers to the portion of the scene in front of the camera that is in focus. Depth of field can be very long or very short, depending on the aperture setting, the distance between the subject and the camera, and the zoom lens setting. Table 4.2 shows these relationships. In addition, let’s consider two typical situations that illustrate how these variables work together.

TABLE 4.2 Depth of Field Relationships

image

Example 1: Low Light in the Studio

Let’s assume that you have set up a romantic scene in the studio. You are trying to create the illusion that it is evening in a cozy living room. To create the illusion, you have brought down the overall light level on the set, and you have worked hard to reduce the amount of light falling on the walls of the living room. Although you are still within the baselight range for the camera, you are at the lower end of it. As a result, you may need to open the aperture to achieve the correct exposure.

Because this is a romantic scene focused on two individuals, you decide to use a lot of close-up camera shots. To achieve these close-ups, you move your cameras close to the actors and have your camera operators zoom in on your subjects. When they do this, you notice that the background quickly falls out of focus. Why? Because all of the elements have conspired to reduce the depth of field. The low light level caused you to open the camera iris. As you can see in Table 4.2, when you increase the size of the aperture, depth of field decreases. Moving the cameras close to the subjects and zooming in, which increases the focal length of the lens, also contributes to shortening the depth of field.

Example 2: Daylight Outdoors

Contrast the studio scene in Example 1 with the following field production situation. You have been assigned to photograph a parade at noon on a bright, beautiful summer day. To be able to show the magnitude of the event, you position yourself at the top of the reviewers’ grandstand along the parade route. This puts you quite a distance from the parade. When your camera is on a wide shot (lens zoomed out), you notice that everything in front of the camera is in focus as far as you can see. Again, look at Table 4.2. You have great depth of field because there is a lot of light on the scene, causing the aperture size to decrease. You have decreased the focal length of the lens by zooming out, and the camera is a significant distance from the main subject. All of these elements contribute to increase the depth of field.

If you change only one of these elements—the focal length of the lens—you immediately see a dramatic change in the depth of field. When you zoom in to get a close-up of the action, you notice again that the people and objects behind the subject go out of focus, as they did in the studio. By zooming in, you increased the focal length of the lens, and this has resulted in shortening the depth of field. (See Figure 4.28.)
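For readers who want to put numbers on the relationships in Table 4.2, the standard depth-of-field formulas can be coded directly. The Python sketch below uses hypothetical focal lengths, f-stops, distances, and circle-of-confusion size; the point is the direction of the changes, which matches the two examples above.

# A sketch of the standard depth-of-field formulas, with hypothetical numbers.
# Larger aperture (smaller f-number), longer focal length, or a closer subject
# all shorten depth of field, matching Table 4.2 and the two examples above.

def depth_of_field(focal_mm, f_stop, subject_distance_m, confusion_mm=0.02):
    """Return (near, far) limits of acceptable focus, in meters."""
    f = focal_mm / 1000.0                       # work in meters
    c = confusion_mm / 1000.0
    s = subject_distance_m
    hyperfocal = f * f / (f_stop * c) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    far = s * (hyperfocal - f) / (hyperfocal - s) if s < hyperfocal else float("inf")
    return near, far

# Example 1: dim studio scene, iris wide open, lens zoomed in, subject close.
n1, f1 = depth_of_field(focal_mm=80, f_stop=2.0, subject_distance_m=2.0)
print(f"Example 1 (studio): sharp from {n1:.2f} m to {f1:.2f} m")

# Example 2: bright parade, iris stopped down, lens zoomed out, subject far away.
n2, f2 = depth_of_field(focal_mm=10, f_stop=11, subject_distance_m=30.0)
print(f"Example 2 (parade): sharp from {n2:.2f} m to {f2:.2f} m")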

image

Long Depth of Field: Lens Zoomed Out

image

Short Depth of Field: Lens Zoomed In

FIGURE 4.28
Long and Short Depth of Field

Camera Movement

In addition to zooming the lens to change the angle of view on the subject, there are several other types of camera movement that are commonly used in video production.

Panning and Tilting

When the camera is mounted on a tripod or pedestal, the camera can smoothly be panned or tilted. A pan is a horizontal movement of the camera head only. The word pan is short for panorama, and the purpose of the pan is to reveal a scene with a sweeping horizontal motion of the camera head. A tilt is a vertical (up-and-down) movement of the camera head.

Professional tripod and pedestal heads contain controls to assist panning and tilting. Pan and tilt drag controls introduce varying degrees of resistance to the head so that the camera operator can pan and tilt smoothly.

Cameras mounted on tripods shoot from a fixed height. The camera can be raised or lowered by manually extending or retracting the tripod legs. Needless to say, this cannot be done while the camera is shooting the picture.

High-quality studio pedestals, on the other hand, often have the ability to raise or lower the camera smoothly while the camera is photographing the scene. This allows the camera operator and video director to introduce dynamic motion effects into the program.

Dolly, Truck, and Arc

Movement of the camera and its support unit (either a pedestal or tripod with a three-wheel dolly attached to the legs) produces the movement of dolly, truck, and arc. A dolly in or out is the movement of the entire camera toward or away from the scene. A truck left or right is the horizontal movement of the camera and its support in front of the scene. An arc is a semicircular movement of the camera and its support around the scene.

Of course, to achieve these effects, you must have not only a wheeled pedestal or tripod, but also a smooth surface to move the camera across. This is an advantage that studios provide over field shooting.

When dynamic trucking or dollying shots are needed in the field, a set of rigid tracks can be constructed for a camera dolly to roll on. This practice gave rise to the term tracking shots.

Hand-Held Camera Movement

Effective camera movement can be achieved with a hand-held camera, but it takes practice to master the technique. Novice camera operators tend to move the camera too quickly. This applies to panning and tilting as well as to zooming. Remember that one of the essential qualities of a successful production is the quality of control. Camera movement, like lighting and other production elements, needs to be controlled to be effective.

Because the ground needs to be smooth for tripod dollies to be effective and track systems take a considerable time to construct, increasingly, when smooth camera movement is called for in field production, a gyroscopic camera mount system like the Steadicam® system described earlier in this chapter will be used. With it the camera seems to float effortlessly through the scene as the camera operator walks or even runs along with the action.

Operating the Video Camera

In the Studio

The camera operator’s call to action in the studio often begins with the director saying, “Cameras to headphones” over the studio talkback system. This is the camera operator’s cue to go to his or her camera, uncap the camera, put on the headsets, and listen to the director’s instructions.

Before the program begins, all of the cameras will need to be white-balanced. This requires focusing the camera on a white card that is positioned on the set in the studio light. Once the camera is white-balanced, the camera operator can refer to the director’s calls and the camera shot sheet—a list of each of the shots for each camera—for direction as to which part of the scene to focus on.

Whether the show is fully scripted or not, the camera operator’s duty is to be alert, to pay attention to what is happening on the set, and to have a usable shot ready for the director at all times. Camera operators should make sure that the zoom lens focus is set by zooming in to the tightest shot possible on the subject, focusing the lens, and then zooming out. Assuming that the distance between the subject and camera does not change, the shot should hold its focus as the camera zooms in and out.

At the end of the production the camera operator should replace the lens cap on the camera and return the camera to its normal position in the studio. The pan, tilt, and drag controls on the pedestal or tripod head should be securely locked, and the camera cable should be neatly coiled.

In the Field

In multicamera field productions the field camera operator functions much as he or she would in the studio environment, listening for headset instructions from the director about shot selection and framing. In single-camera field productions the camera operator often does not have access to a headset. Rather, the program may be recorded shot by shot, with the director giving instructions for the setup of each shot before it is recorded.

When shooting with a portable camcorder, you will need to pay attention to a number of additional details:

1. Remember to bring enough batteries because most productions shot with a single camcorder in the field rely on battery power.

2. Remember to bring enough videotape.

3. White balance the camera whenever you move to a different location or if the lighting conditions change in the location you are shooting in.

4. Monitor audio levels closely when you are recording. Most camcorders will display audio levels in the viewfinder.

5. To stop or start recording, simply push the record button (on the lens housing or at the back of the camcorder) and release it. Make sure to record five to ten seconds of extra material at the head and tail of each shot. As we will discuss in the chapter on editing, this will help to make the editing process easier and will help to protect the head or tail of your shot from being accidentally trimmed off in the camcorder.

6. Label your field tapes clearly, preferably before you put a new tape into the camcorder. When you remove a tape, move the record safety switch on the cassette to the position that will prevent you from accidentally rerecording onto the tape.

7. When you have finished shooting, make sure that all equipment is stowed safely in its travel bags.

Video Connectors

When working in the studio or in the field, you may need to connect several pieces of video equipment together—for example, to display the camera output on a monitor or to transfer the video signal from one VCR to another. To work efficiently, you should know the differences between the principal types of video connectors that are now in use. (See Figure 4.29.)

Much professional video equipment uses the BNC bayonet connector for video inputs and outputs. This is a twist lock connector that snaps securely into place when it is properly connected.

image

FIGURE 4.29
Video Connectors

Another type of connector that is used for video is the RCA/phono connector. This type is most often found on consumer-quality digital video and VHS machines. It is the same type of connector that is often used to connect audio inputs and outputs on home stereo systems.

The cable television connection to your home receiver is made with a cable that ends in an F-connector. This cable connector has a small threaded sleeve, and a thin copper wire protrudes from the center of the cable.

Consumer and prosumer camcorders with S-Video inputs and outputs use a special four-pin connector that is used to make direct S-Video connections (separate luminance and chrominance signals) between camcorders, VCRs, and video monitors that are equipped with similar connectors.

Digital video (DV) equipment may use a six-pin digital video connector known as FireWire, or IEEE 1394, to move digital video (as well as audio, time code, and machine control commands) from one DV VCR to another or from a DV camcorder or VCR directly into a computer equipped with a FireWire input. On Sony equipment this connector is called iLink.
