4
The Video Camera

With the development and proliferation of low-cost portable camcorders and video-capable smartphones, video seems to have become omnipresent in American culture. No longer are professional video producers the only ones with access to video production tools; cameras appear to be everywhere.

The main function of the video camera is to translate the optical image captured by the camera’s lens into an electrical signal that can be recorded and transmitted. Although all video cameras perform this basic function, there are significant differences in the ways in which cameras are configured and the markets to which they are targeted.

Camera Equipment Quality and Target Markets

Video equipment makers produce a wide range of equipment that is designed to meet the needs of consumer users, video enthusiasts, and production professionals. (See Figure 4.1.) Video equipment varies greatly in terms of features, quality, and cost.

■ Consumer Equipment

At the low end of the scale is consumer equipment, which is marketed to home video users for their personal use. Although consumer camcorders and smartphones can be bought for only several hundred dollars, the quality of the images that they produce and record is remarkable given their relatively low cost. This type of equipment tends to be less durable than professional equipment, and many of the basic camera control functions have been automated to eliminate operator errors.

■ Professional/Broadcast Equipment

At the high end of the scale is professional/broadcast equipment. This equipment is the most rugged and contains a wider set of controls for the professional operator. Prices for professional cameras and camcorders vary widely, depending on how they are to be used. A serviceable, inexpensive professional camcorder might cost in the neighborhood of one thousand to three thousand dollars and will provide the operator with a camera with durable quality and controls that feature manual as well as automatic settings. High-quality camcorders used in electronic news gathering can easily cost between ten thousand and twenty thousand dollars; high-end cameras and camcorders designed for high definition television production may cost up to one hundred thousand dollars.

It is generally important to match the quality level of production equipment to the situation at hand. Most broadcast outlets expect programs to meet technical standards that are more easily achieved with professional-quality equipment than with consumer-level equipment.

Camera Configurations

■ Camera versus Camcorder

A video camera can be configured either as a video camera with no built-in recording capability or as a camcorder—a video camera with a video recorder that is built in to the camera. Consumer-level camera equipment is generally available only in the camcorder configuration; professional/broadcast equipment may be configured as either a camera or a camcorder.

Figure 4.1

■ Video Camera: Studio Configuration

A video camera designed for studio use may be a fairly large camera mounted on a heavy pedestal or tripod with a set of external controls that allow the camera operator to control principal camera functions such as zooming the lens to change the angle of view and focusing the lens. (See Figure 4.2.) The camera is connected to the video control room with a cable that delivers power to the camera and carries the video signal from the camera to the control room. A large viewfinder will typically be mounted on top of the camera to allow the camera operator to easily see the image that the camera is producing as he or she stands behind the camera and controls it.

This type of camera may be used in a multicamera studio environment or on a large remote production. For example, telecasts of sporting events typically involve the use of multiple cameras whose signals are all routed back to a central control room, where a director calls the shots and where the program is either transmitted live or recorded for later broadcast.

■ Convertible Studio/Field Cameras

Convertible cameras are designed to be used either in the studio or in the field, whether grouped with several other cameras in a typical multicamera configuration or operated as a single-camera portable unit. These cameras are generally smaller and cheaper than their full-size studio counterparts, may be equipped with either an eyepiece or a studio-type viewfinder, and may be operated with conventional AC power or with battery power. (See Figure 4.3.) Some convertible cameras have the capability to be converted into camcorders by removing the back end of the camera and adding a dockable video recording device, or they may be attached directly to a hard disk or memory card recorder.

■ One-Piece Camcorders

A camcorder combines the video signal–producing camera component of a video camera with a video recording device. In one-piece camcorders the components that produce and record the video signal are integrated into one solid piece of equipment. Like other equipment, camcorders vary widely in price and features depending on the target market for which they have been produced.

■ DSLR and Mirrorless Cameras, Smartphones, and Computer Tablets

Another video camera option is the DSLR (digital single lens reflex) camera. Originally designed for still photography, most modern DSLRs are now also capable of recording a digital video signal. Like DSLRs, compact mirrorless cameras feature interchangeable lenses and large image sensors, but in a much smaller camera body. In addition, most smartphones and computer tablets contain built-in cameras with the ability to record still images or moving video images. (See Figure 4.4.)

■ Action Cameras

Action cameras are small digital video cameras capable of recording still and moving video images. The most recognizable brand is GoPro. (See Figure 4.5.) They were invented by a California surfer who wanted a camera that could capture dynamic images of surfing from the perspective of the surfer—or the surfboard! These cameras have become the favorite of producers for point-of-view action photography. The cameras are small, rugged, and often waterproof. Depending on the camera model, they are capable of recording in a variety of video formats, including high definition video and, in some cases, 4K video.

Figure 4.3

Parts of a Video Camera

A video camera system is composed of four principal parts: the camera itself (sometimes called the camera head), lens, camera control unit, and viewfinder.

■ Camera Head

The camera head contains the optical and electronic components that produce the video signal. These components include the imaging device, the beam-splitting system, and the camera’s internal electronics. (See Figure 4.6.)

Figure 4.6

IMAGING DEVICE One of the main functions of a video camera is to change the light reflected off the scene in front of the camera into electrical energy—the video signal. This is the principal function of the camera’s imaging device. The imaging device in modern cameras is either a charge coupled device (CCD), a small silicon chip inside the camera head, or a complementary metal oxide semiconductor (CMOS) chip. CCD and CMOS chips are referred to as optical video transducers because they convert light focused onto them by the camera lens into electrical energy. An electronic shutter regulates the length of time the imaging device is exposed to the light.

The surface of the CCD or CMOS chip contains a grid of pixels, or picture elements, arranged in a precise series of horizontal rows, or lines, and vertical columns. (See Figure 4.7.) The camera lens focuses the scene before it on this array of pixels, each of which is responsible for reproducing one tiny part of the picture. Each pixel contains a semiconductor that converts the incoming light into an electrical charge. The strength of the charge is proportional to the brightness of the light hitting the pixel and the amount of exposure time. After the incoming light is converted into an electrical charge the information is read out one pixel at a time in a line-by-line sequence in conformity with normal television scanning rates.
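
To make the line-by-line readout concrete, here is a minimal Python sketch (not any real camera’s firmware): a tiny hypothetical pixel grid stands in for the sensor, each pixel’s charge is proportional to the light striking it and the exposure time, and the charges are read out in scanning order.

```python
import numpy as np

# A tiny 4 x 6 grid stands in for the millions of pixels on a real CCD or CMOS chip.
light = np.random.rand(4, 6)     # relative brightness striking each pixel (0 to 1)
exposure_time = 1 / 60           # seconds the electronic shutter exposes the sensor

# Each pixel's charge is proportional to the brightness and the exposure time.
charge = light * exposure_time

# Read the charges out one pixel at a time, line by line, top to bottom.
video_signal = []
for line in charge:              # one horizontal row of pixels
    for pixel_charge in line:
        video_signal.append(pixel_charge)

print(len(video_signal))         # 24 samples, in scanning order
```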

CCDs and CMOS chips are popular as imaging devices because they are rugged, cheap, and small and use very little electrical power. They are wafer thin and are manufactured in various sizes: 1/6, 1/4, 1/3, 1/2, or 2/3 inch, measured diagonally. Consumer cameras tend to use the smaller chips (1/3 inch or smaller); professional cameras tend to use the larger chips. In addition, consumer cameras typically use one CCD or CMOS chip to create the signal, whereas professional cameras use three.

DSLR cameras typically use one large CMOS sensor to create the image. The highest quality images are generally produced by cameras with full frame (24mm by 36mm) sensors. The image sensor in these cameras is the same size as one frame of 35mm film.

BEAM SPLITTER Before the light that is focused onto the imaging device can be changed into electrical energy, it needs to be broken down into its essential color components. Color video cameras work by breaking down the incoming light into the additive primary colors of light: red, green, and blue. (See Figure 4.8.) (Don’t confuse these with the subtractive primaries of red, blue, and yellow, which are concerned with the mixing of pigments in paint, ink, and dyes.)

Figure 4.8

In professional-quality cameras incoming light passes through a prism block, a glass prism equipped with filters that assist in the color separation process. Three image sensors are employed to generate the RGB signal; one sensor is assigned to each of the principal color components: red, green, and blue.

In consumer-quality cameras a single sensor is charged with the responsibility of creating the entire color signal. Light hits the sensor and is sequentially broken down by a stripe filter into its red, green, and blue components. On some cameras equipped with a single CMOS sensor the color processing may be done on the sensor itself. The sensor is composed of millions of pixels, each of which is capable of converting the light that hits it into digital data that includes the principal components of the image (brightness, color, detail, etc.). Generally the quality of the signal that is produced by single-chip systems and the resulting video image are not as good as the signal and image produced by three-chip systems.

■ Camera Control Unit and Camera Internal Electronics

Video cameras contain complex electronic circuitry to process, store, and transmit the video signal. Various controls that affect the signal are located in the camera control unit (CCU). These include controls to adjust the size of the lens iris or aperture, controls to adjust the amplification or gain of the video signal, and controls to adjust the white balance of the video signal. (These are discussed in greater detail further on in the text.) Depending on the type of camera, the CCU controls may be integrated into the camera head, as is the case with field-use-only camcorders, or they may be housed in a remote control unit in the control room, as is the case for cameras used in a multicamera studio or field production environment. (See Figure 4.9.)

■ The Camera Lens

Aside from the camera head itself, the camera lens is the component of the camera system with the greatest effect on image quality.

ZOOM LENS Most color video cameras are equipped with a zoom lens. A zoom lens is a variable focal length lens that allows the camera operator to change the angle of view without changing the lens. A point of comparison here is fixed focal length lenses, which are often used by still photographers and in feature film production and are found on many DSLR and mirrorless cameras.

FIXED FOCAL LENGTH LENSES A fixed focal length lens, also called a prime lens, produces only one angle of view. To make the image appear closer or farther away, the camera needs to be physically moved closer to the subject or farther away from it or the lens needs to be replaced with a lens with a different focal length. For example, you can get a close shot by fitting the camera with a telephoto lens, which will magnify the image without moving the camera. On the other hand, if the camera is equipped with a zoom lens, all the camera operator has to do is zoom in or zoom out to change the size of the subject.

While fixed focal length lenses are limited in terms of their ability to vary the angle of view (they can’t zoom), they have several advantages over zoom lenses: the image quality they produce is generally better; they typically have larger apertures, which allow recording at lower light levels; and they are smaller and lighter than zoom lenses. (See Figure 4.4.)

STUDIO LENSES AND EFP LENSES Studio cameras and cameras used for multicamera sports and event telecasts are typically equipped with very large, heavy, studio-type zoom lenses. (See Figure 4.10.) These large lenses often have great magnification power that enables the camera operator to get very high-quality close-up shots even from a great distance.

Lenses that are used on camcorders for electronic field production (EFP) and electronic news gathering (ENG) need to be lighter so that the camera operator can easily carry the camera. And because cameras that are used in EFP and ENG are usually placed fairly close to the action (e.g., an interview), they do not need to have the magnification power of lenses used in large studios or in large-scale event production. Therefore these lenses are significantly smaller than their full-size studio counterparts. (See Figure 4.10.)

■ Viewfinder

The camera viewfinder is a small video monitor that displays the image the camera is producing. Three types of viewfinder systems are in use: studio viewfinders, eyepiece viewfinders, and LCD viewfinders. (See Figure 4.11.)

Figure 4.10
Figure 4.11

Studio viewfinders are mounted on top of the camera. They are typically five to seven inches in diagonal. The camera operator stands behind the camera as he or she operates the camera.

Eyepiece viewfinders are used on EFP/ENG cameras and camcorders. Typically, they are mounted on the left side of the camera (to accommodate right-handed camera operators). A small rubber eyepiece is attached to a diopter that is positioned above a very small (usually 1 to 1-1/2 inches) video monitor. The camera operator holds the camcorder on his or her shoulder and presses one eye to the viewfinder. Many eyepiece viewfinders contain a focus control that allows the camera operator to focus the image for his or her vision. This allows camera operators who wear eyeglasses to shoot without using their glasses.

LCD viewfinders are found on an increasing number of consumer and professional cameras and camcorders, smartphones, and DSLRs. Typically on camcorders, these displays are mounted on the side of the camera and flip out for easy viewing. LCD viewfinder systems allow the camera operator to see the viewfinder image while the camcorder is held at arm’s length, making it possible to record from difficult or unusual angles that might not be possible with a conventional eyepiece viewfinder system. Unfortunately, LCDs have a tendency to wash out in bright light, making it very difficult to see the image they are reproducing.

Viewfinders on professional cameras may contain a zebra-stripe exposure indicator. In this system, a series of black-and-white lines appears over the brightest portion of the picture when the maximum video level has been reached. In some camera viewfinders the zebra stripes are set to appear when the correct exposure for skin tones is achieved.

■ Video Monitors

Video monitors reverse the process of the camera image sensor: Their job is to take the electrical video signal and turn it back into a picture, in the form of light energy displayed on a screen. Today almost all of the old analog type cathode ray tube (CRT) screens have been replaced by LCD flat panel displays. (See Figure 4.12.)

Liquid crystal display (LCD) screens are widely used in laptop computers, smartphones, and computer tablets as well as video display panels. The omnipresence of LCD screens makes it possible to watch television almost anywhere, anytime. In the words of an advertising campaign for a leading cell phone company: “Any screen is a TV.” And all of those screens are based on LCD technology. Light Emitting Diode (LED) displays are LCDs that use light-emitting diodes to generate the backlight that illuminates the picture, instead of the fluorescent sources used in conventional LCDs.

Plasma display panel (PDP) systems are similar to LCDs in that they are flat panel screens. They use neon gas compressed between two layers of glass embedded with crisscrossing horizontal and vertical electrodes to excite the red, green, and blue phosphors in the screen. Although plasma displays produce an extremely high-quality picture, they were discontinued around 2015 in favor of LCD screens, which are thinner, lighter, more energy efficient, and cheaper to manufacture.

Figure 4.12

Yet another type of screen is based on OLED (Organic Light Emitting Diode) technology. These screens produce a higher quality image than traditional LEDs, and are also thinner and lighter.

Video Standards: Standard Definition and High Definition Television

The process of transducing light energy into electrical energy is accomplished by the camera’s imaging device—the CCD or CMOS chips—as we discussed earlier. To produce a video signal that can be displayed on any video monitor, camera and video recorders must adhere to a set of technical standards that govern how the video signal is created and displayed.

The original NTSC (National Television System Committee) standards were adopted for standard definition analog television (SDTV) by the Federal Communications Commission in 1941. They were replaced by the ATSC (Advanced Television System Committee) standards for digital television, including high definition television (HDTV) adopted by the FCC in December 1996.

These standards cover a wide range of technical specifications for how the video signal is constructed and transmitted. For the purpose of this discussion the important elements of the standards on which we will focus include the aspect ratio of the image, frame and line rates, scanning patterns, and whether the signal is analog or digital.

■ Standard Definition Television

Standard definition television has the following characteristics:

ANALOG SIGNAL In cameras an analog signal is one in which the recorded signal varies continuously in proportion to the light that produced it. This is different from the digital signal which characterizes HDTV. The video signal can be viewed on a piece of monitoring equipment called a waveform monitor. The waveform corresponds to the bright and dark areas of the scene the camera is recording. (See Figure 4.13.)

4:3 ASPECT RATIO The shape of the image is a standard ratio of 4:3 (width:height). No matter how large or small the screen is, it will retain these standard rectangular proportions, in which the screen is only slightly wider than it is tall. (See Figure 4.14.)

FRAME AND LINE RATE In the standard definition system television cameras produce 30 frames of video information per second, and each one of those frames is composed of 525 lines of information (but only 480 of the lines are used for actual picture information).

SCANNING PATTERN—INTERLACED Each television frame is constructed through a process called interlaced scanning. Instead of scanning lines 1–525 sequentially, each frame is broken up into two fields: one is composed of the odd-numbered lines, the other of the even-numbered lines. (See Figure 4.15.)
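
A short illustrative sketch of the idea, using a toy eight-line frame rather than the full 525 lines: the odd-numbered lines form one field and the even-numbered lines form the other.

```python
# Toy frame: line numbers 1 through 8 stand in for the 525 lines of a real frame.
frame_lines = list(range(1, 9))

# Interlaced scanning splits each frame into two fields.
odd_field = frame_lines[0::2]    # lines 1, 3, 5, 7 -- scanned first
even_field = frame_lines[1::2]   # lines 2, 4, 6, 8 -- scanned second

print(odd_field)     # [1, 3, 5, 7]
print(even_field)    # [2, 4, 6, 8]
```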

Although the standard definition format is no longer used for professional program production, some video cameras still have the capability to produce a standard definition picture by selecting that scanning standard in the camera’s menu. And you can still see examples of programs and commercials produced in standard definition when you watch TV—just look for the 4:3 aspect ratio.

■ High Definition Television

Unlike NTSC, in which aspect ratio, frame rate, and line rate all meet a single standard, ATSC is an open standard that is able to accommodate signals with various line and frame rates and two different aspect ratios. (See Table 4.1.) High Definition Television (HDTV) became the standard for television production after the adoption of the ATSC standards in 1996. HDTV has the following characteristics:

DIGITAL SIGNAL In the ATSC system the video signal is digital. In digital camera systems light hits the image sensor and creates an electrical charge. That electrical charge is then sampled and converted into a binary numerical code consisting of bits, each an off/on (0/1) pulse. This conversion is called quantizing. (See Chapters 5 and 10 for a more detailed discussion of digital audio and video.)
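
As a rough sketch of sampling and quantizing (the voltage values and 8-bit depth below are illustrative assumptions, not an ATSC specification):

```python
import numpy as np

# Hypothetical analog samples: voltages from 0.0 (black) to 1.0 (peak white).
samples = np.array([0.0, 0.25, 0.5, 0.73, 1.0])

# Quantizing: map each continuous sample to the nearest of 256 discrete levels (8 bits).
levels = 2 ** 8
codes = np.round(samples * (levels - 1)).astype(int)

# Each code is then expressed as bits -- off/on (0/1) pulses -- for recording or transmission.
bits = [format(int(code), "08b") for code in codes]
print(list(zip(codes.tolist(), bits)))   # e.g. the 0.73 sample becomes 186 -> '10111010'
```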

Table 4.1
The ATSC digital television scanning formats

Vertical Lines   Horizontal Pixels   Aspect Ratio   Picture Rate*     HDTV/SDTV
1,080            1,920               16:9           60I 30P 24P       HDTV
720              1,280               16:9           60P 30P 24P       HDTV
480              704                 16:9 and 4:3   60I 60P 30P 24P   SDTV
480              640                 4:3            60I 60P 30P 24P   SDTV

Notes: *“I” means interlaced scan, and “P” means progressive scan

HDTV = High definition television

SDTV = Standard definition television

16:9 ASPECT RATIO One of the issues that the standards makers had to confront was the fact that all of the feature films of the day were produced in a variety of widescreen formats. Fitting those films onto a television screen meant cutting off parts of the picture so that they would fit into the 4:3 frame. ATSC settled on a 16:9 aspect ratio for HDTV to better accommodate the transmission of feature films.
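
A back-of-the-envelope check of the trade-off (illustrative only): cropping a 16:9 image to fill a 4:3 screen of the same height discards a quarter of the picture width.

```python
# Cropping a 16:9 image to a 4:3 frame of the same height keeps only
# (4/3) / (16/9) = 3/4 of the width; the remaining 25% is cut off.
widescreen = 16 / 9
standard = 4 / 3
kept = standard / widescreen
print(f"{kept:.0%} of the 16:9 width fits a 4:3 screen")   # 75%
```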

FRAME AND LINE RATES There are two HDTV formats based on different frame and line rates:

1080 lines per frame x 1920 pixels per line (60i, 60p, 30p, 24p)
720 lines per frame x 1280 pixels per line (60p, 30p, 24p)

The scanning rate may be 24, 30, or 60 frames per second. Thirty frames per second is the traditional video scanning rate. Sixty frames per second yields a higher quality image and has become a standard for HDTV production. Twenty-four frames per second is the standard projection rate for feature films. Video producers who want to achieve a “film look” in their productions often choose to shoot at 24 frames per second.

A term that is used to describe the arrangement of the lines and pixels that form the video image is raster. There are essentially two raster dimensions for HD video images: 1080 x 1920, and 720 x 1280—where the first number represents the number of lines and the second number the number of pixels per line.

PROGRESSIVE OR INTERLACED SCANNING The HDTV standards include progressive scanning as well as interlaced scanning. In progressive scanning, each of the lines in the frame is scanned successively. For example, one of the popular HDTV formats uses 720 scanning lines, scanned progressively: scanning begins with line 1 and continues to line 720; then the process repeats. (See Figure 4.14.)

Another popular HDTV format (interlaced) relies on 1080 scanning lines. And yet another relies on 1080 lines scanned progressively. These scanning patterns are referred to as 720p, 1080i, and 1080p.

You may notice while looking at Table 4.1 that there is no designation for 1080p (1080 lines scanned progressively at 60 frames per second) even though there are cameras that are capable of producing such an image. The reason for the absence of this format from the chart is that the 1080p format contains too much information to be transmitted through the bandwidth allocated to broadcast television stations. Programs may be produced in this format, but they will have to be converted to one of the other formats for transmission.

Many of the new cameras that are available to professional video producers have selectable frame rates and aspect ratios, allowing the cameras to be used to provide program material in a variety of the digital formats. Even many inexpensive camcorders allow for aspect ratio changes from 4:3 to 16:9, and many are capable of 24-frames-per-second progressive scanning as well. And many smartphones contain flexible recording settings. For example, the iPhone 7 is capable of high definition recording in 720p at 30 frames per second and in 1080p at 30 or 60 frames per second. In addition it can record 4K video (see next) at 30 frames per second.

■ 4K and 8K Digital Video

Beyond HDTV, two new UHD (Ultra High Definition) video formats are now in use:

4K cameras produce an image with 2160 lines by 3840 pixels per horizontal line (raster = 2160 x 3840).

8K cameras produce even more resolution with a scanning matrix of 4,320 lines by 7,680 pixels per line (raster = 4320 x 7680). This results in about 16 times more resolution than the best HDTV systems.
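
The resolution figures can be checked with simple pixel counts (a quick illustrative calculation):

```python
# Total pixels per frame for each raster mentioned above.
hd_1080 = 1080 * 1920      # 2,073,600 pixels
uhd_4k = 2160 * 3840       # 8,294,400 pixels
uhd_8k = 4320 * 7680       # 33,177,600 pixels

print(uhd_4k / hd_1080)    # 4.0  -> 4K carries four times the pixels of 1080-line HDTV
print(uhd_8k / hd_1080)    # 16.0 -> 8K carries sixteen times the pixels of 1080-line HDTV
```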

Currently the principal application for 4K and 8K production is for large-screen Digital Cinema projection. (Most movie theaters in the U.S. now project their images from digital files, not from film.) As well, several U.S. satellite and Internet TV services provide 4K signals. Some TV producers work in the 4K format and then downconvert their finished programs to one of the conventional HDTV formats for distribution. (See Table 4.1.)

■ Monitoring the Color Video Signal

Today all professional video cameras produce a color video signal. The two principal components of the color video signal are luminance and chrominance. Luminance refers to the brightness information in the signal. Chrominance is the color information and includes two components: hue and saturation. Hue refers to the color itself: red, green, blue, and so on. Saturation refers to the amount or intensity of the color. For example, a very light pink and a very vivid or deep red both have the same hue (red) but differ in terms of how saturated they are.
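
To see how the same hue can carry different luminance and saturation values, here is an illustrative sketch using Python’s standard colorsys module and the Rec. 709 luminance weighting; the two RGB colors are made-up examples, not reference values.

```python
import colorsys

# Two made-up colors with the same hue (red) but different saturation. Values are R, G, B (0 to 1).
vivid_red = (0.9, 0.1, 0.1)
light_pink = (0.9, 0.6, 0.6)

def luminance(r, g, b):
    # Rec. 709 weighting: green contributes most to perceived brightness.
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

for name, (r, g, b) in [("vivid red", vivid_red), ("light pink", light_pink)]:
    hue, sat, _ = colorsys.rgb_to_hsv(r, g, b)
    print(f"{name}: luminance={luminance(r, g, b):.2f}  hue={hue:.2f}  saturation={sat:.2f}")
# Both report hue 0.00 (red); the pink is far less saturated and noticeably brighter.
```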

Two different types of monitoring equipment are used to view the brightness and color components of the video signal. The waveform monitor displays the relative brightness of the image being produced, represented as the overall level of the video signal. The vectorscope shows which colors are present in the signal and how much of each one is present. (See Figure 4.13.) It is used to help set the white balance of the camera, and in multicamera productions it is used to help match the color quality of the cameras so that the color values they reproduce are the same for each camera shooting the scene.

Of course in addition to monitoring the technical aspects of the signal with the waveform monitor and vectorscope, the quality of the video image can be assessed by looking at it on a high-quality picture monitor.

Camera Operational Controls

Most cameras and camcorders have a standard set of operational controls with which you should be familiar if you are going to be the camera operator. (See Figure 4.16.)

■ Power Switch

All cameras and camcorders run on electrical power supplied either by a battery (DC—direct current) or by regular electrical power (AC—alternating current). The power switch may be a simple ON–OFF switch, or it may be a three position ON–OFF–STANDBY switch. The standby mode is used to conserve power, particularly on battery-operated camcorders. When the camcorder is in the standby mode, the camera electronics are on, but the image sensor and viewfinder are not activated. This not only conserves battery power, but also prolongs the life of the image sensor. When the switch is moved to the ON position, the viewfinder and image sensor are activated, and the camera becomes fully operational.

■ Color Bars/Camera Switch

The color bars/camera switch allows the camera operator to select the signal that the camera will produce. In the CAMERA position the camera output is the video signal generated by the image sensor. In the COLOR BARS position the camera generates a standard pattern of yellow, cyan, green, magenta, red, and blue bars that are used as a standard color reference by video engineers. (See Figure 4.17.) It is standard practice to record color bars at the beginning of each studio production and at the beginning of each recording made in the field.

■ Color Correction Filters

Most professional cameras and camcorders contain a series of filters that are used to compensate for differences in the color temperature of the light falling on the scene. These filters may be controlled in a menu setting within the camera, or by a series of switches or filter wheel on the exterior of the camera. (See Figure 4.18.) The two most essential settings are 3,200°K (equivalent to the color temperature of tungsten halogen, LED, or fluorescent lights used in television studio production) and 5,600°K (equivalent to the color temperature of daylight). On consumer equipment, these switches may be labeled INDOOR and OUTDOOR.

Professional cameras are likely as well to have one or more neutral density filters. These filters reduce the amount of light that reaches the image sensors without affecting the color temperature of the light. The neutral density filter is typically used in conjunction with the lens iris when shooting outdoors in very bright conditions to control depth of field. Under very bright outdoor lighting conditions the lens will generally produce an image with great depth of field. To shorten the depth of field, the neutral density filter is engaged, which then requires the lens iris to open up to let in more light. As the iris gets larger, the depth of field gets shorter. This allows the camera operator to exercise a degree of aesthetic control over the portions of the image that are in focus and out of focus.

■ White and Black Balance

Because the color temperature of light varies so much, all color video cameras contain electronic circuitry designed to correct the white balance of the camera. White balance adjusts the relative intensity of the red, green, and blue color channels to allow the camera to produce an accurate white signal in the particular light in which the camera is recording. Once the camera is set to produce white accurately, it can produce all of the other colors in the scene accurately as well (see Figure 4.19).

There are three common white balance modes. Automatic white balance continuously and automatically regulates the camera’s electronic color circuitry even if the color temperature of the light changes while the camera is recording the scene. Preset white balance controls allow the camera operator to manually choose from a variety of preset settings, such as 3,200°K (tungsten halogen) or 5,600°K (daylight), or from a menu of various light sources, such as incandescent, fluorescent, daylight, and overcast lighting conditions. Manual white balance controls require the camera operator to set the white balance for the existing light conditions by focusing the camera on a white card and activating the manual white balance switch. If the lighting conditions change or if the camera changes position, the white balance procedure needs to be repeated.
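
The idea behind manual white balance can be sketched in a few lines; the white-card readings and the green-channel reference below are assumptions for illustration, not any particular camera’s circuitry.

```python
import numpy as np

# Hypothetical average R, G, B reading of a white card shot under warm tungsten light:
# red reads high and blue reads low, so the card would record with an orange tint.
white_card = np.array([0.80, 0.60, 0.40])

# Derive per-channel gains that force the card to read as neutral,
# using the green channel as the reference.
gains = white_card[1] / white_card
print(gains)                          # [0.75 1.   1.5 ]

# The same gains, applied to every pixel in the scene, correct the color balance.
pixel = np.array([0.50, 0.40, 0.25])
print(np.clip(pixel * gains, 0, 1))   # [0.375 0.4   0.375] -- now nearly neutral
```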

Correctly setting the white balance control is extremely important to maintaining accurate color values. If the white balance controls are incorrectly set the picture may appear to be too red or too blue, depending on the error that was made in selecting the white balance setting. (See Figure 4.19.)

Professional cameras may include a black balance control in addition to white balance. Black balance adjusts the video signal to produce the color black at a preset point in the video waveform.

■ Gain Boost/Sensitivity

The function of the gain boost switch is to amplify the video signal. This is most often used in low-light situations to electronically brighten the picture when not enough light is falling on the scene to produce an acceptable image. However, the benefit is not without a cost; increased brightness may come at the expense of picture quality, which may suffer from an increase of visible electronic noise, also called snow.
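
A small sketch of why gain trades brightness for noise (the signal and noise levels are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dim pixel: a weak signal with a little sensor noise riding on it.
signal = 0.10
noisy_pixel = signal + rng.normal(0, 0.01)

# A +12 dB gain boost multiplies the voltage by about 4 (20 * log10(4) is roughly 12 dB).
gain = 4
boosted = noisy_pixel * gain

# The picture gets brighter, but the noise is amplified by the same factor,
# which is why heavily gained-up footage looks "snowy".
print(round(boosted, 3))
```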

■ Electronic Shutter

The electronic shutter controls the amount of time that light passing through the camera lens hits the camera image sensors. The normal shutter speed for cameras operating on the 30 frames/second or 60 frames/second video standards is one sixtieth of a second, which corresponds to the exposure time for one field or frame of video information, depending on the recording frame rate that has been selected. (Video recorded at 30i yields 60 fields per second: 30 frames per second × 2 fields per frame = 60 fields per second.)

Figure 4.19

In recording situations in which the camera image tends to flicker or become blurred because of high-speed movement of the subject in front of the camera, shutter speed can be increased to reduce these picture artifacts and improve the sharpness of the image. Shutter speed can also be adjusted in conjunction with the camera iris to increase or decrease depth of field.

Shutter speeds commonly range from 1/60 to 1/10,000 of a second, with incremental settings of 1/100, 1/250, 1/1,000, 1/2,000, 1/4,000, and 1/8,000. In addition, some cameras feature a variable shutter speed, which can be used to eliminate the screen flicker that becomes visible when a computer screen or video monitor appears in a shot.

Camera Supports

Although both consumer- and professional-level camcorders are light enough to be carried on one’s shoulder with the use of a built-in shoulder mount, or hand carried as in the case of many of the smaller consumer models, a variety of camera supports are used to provide stability and control to studio and field camera systems.

■ Studio Pedestal

The combined weight of a large studio camera equipped with a full-size lens can be quite significant, and to provide adequate support, a camera pedestal may be used. (See Figure 4.20.) In addition to supporting the weight of the camera, viewfinder, and lens, the pedestal has two other advantages: the base of the pedestal is equipped with wheels that make it easy to move the camera on a smooth studio floor, and the height of the camera on the pedestal can be raised or lowered, allowing for high- and low-angle shots.

■ Tripod

Tripod mounts—or sticks, as they are often called because of the wooden legs on some models designed for use in the field—are widely used as camera supports. Tripods are lighter and more portable than pedestals. (See Figure 4.21.)

Tripods equipped with a wheeled dolly are often used in studio applications. (See Figure 4.22.) The wheels allow the camera and tripod to be moved from one place to another but generally do not provide the kind of fluid movement that can be achieved with a pedestal.

Tripods that are used in field production situations may be equipped with a spreader, a device that works well on a flat surface and holds the tripod legs firmly in place. Tripods usually contain telescoping legs that allow the height of the camera to be adjusted, but in contrast to the studio pedestal, height adjustments cannot be used as a production technique—once the height of the camera is set, the tripod legs are securely fastened.

The tripod may also be equipped with a quick-release plate which allows the camera operator to quickly mount or detach the camera from the tripod.

Some camera operators prefer the use of a monopod, a telescoping rod that is attached to the base of the camcorder and is inserted into a special belt pouch. Monopods work best with lightweight cameras and camcorders.

■ Gyroscopic Camera Support Systems

A number of manufacturers produce camera-mounting systems that incorporate gyroscopes in their design. They are often called by the trade name of the product produced by the best-known manufacturer of these systems: Steadicam®. (See Figure 4.23.) Depending on the size of the camera and the corresponding size of the mounting system, it may be worn by the camera operator, as in the case of large cameras, or hand-held, as in the case of smartphones or small camcorders. In either case, the use of these gyroscopically stabilized systems allows for extremely fluid camera motion, even when the camera operator walks or runs with the camera. The effect is almost as if the camera is floating through the scene.

Figure 4.23

■ Jib

A jib is a fulcrum-mounted camera support system that allows the camera to be raised high above the scene to be recorded. (See Figure 4.24.) The camera is attached to the end of the jib arm, which is raised and lowered by the camera operator. A series of remote controls govern the zoom, pan, tilt, and focus functions. On some systems the length of the jib arm can be extended; on others it is a fixed length. Jibs are widely used in studio and field applications because they provide a relatively simple way to achieve very dramatic high and low camera angles.

■ Robotic Camera Systems

Studio cameras used for the production of television news may be operated by robotic remote control rather than by individual camera operators. In robotic systems control of the cameras and their supports is automated. A computer system is used to program the individual shots and camera moves for each of the cameras. One technician can operate multiple cameras from the control room instead of committing one camera operator to each camera. For many television stations, the use of robotic cameras is extremely cost-effective because the expense of the robotic systems is quickly recouped from savings in crew salaries.

■ Drones

Video production drones (see Figure 4.25) are small, remote-controlled, unmanned aerial vehicles with a small camera mounted on them. Drones provide an inexpensive alternative to using helicopters and light planes for aerial photography. They are highly maneuverable, relatively inexpensive, and easy to operate. Because they operate in U.S. airspace, they are now regulated by the Federal Aviation Administration (FAA), so before you go out and buy a drone you should check the FAA guidelines for drone use in your area.

The Zoom Lens

Most modern video cameras are equipped with a zoom lens. (See Figure 4.26.) The zoom lens is used to change the angle of view of the image the camera produces. By adjusting the zoom control, the operator can zoom in (make an object appear larger and closer) or zoom out (make the object seem to decrease in size or move farther away). (See Figure 4.27.)

Professional-quality cameras use optical zoom lenses in which image magnification is achieved by the action of the internal optical elements of the lens. Consumer-quality cameras may also be equipped with a digital zoom feature, which electronically enlarges the image. However, image quality may suffer with too much digital enhancement.

■ Focal Length and Zoom Ratio

Technically speaking, a zoom lens is a lens with a continuously variable focal length. Focal length is defined as the distance from the optical center of the lens to the point where the image is in focus—the surface of the image sensor. Lens focal length is measured in millimeters (mm). Because the zoom lens is capable of being adjusted to a variety of focal lengths, these lenses are often described by their zoom ratio: the ratio of the lens at its narrowest (close-up) angle to the lens at its widest (long-shot) angle. For example, a zoom lens with a wide-angle focal length of 8 mm and a zoom ratio of 20:1 has an effective focal length of 160 mm when it is zoomed in all the way (160:8 is a ratio of 20:1).
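
Since the zoom ratio is simply the longest focal length divided by the shortest, the arithmetic in the example can be captured in a tiny helper (illustrative only):

```python
def telephoto_focal_length(wide_mm: float, zoom_ratio: float) -> float:
    """Longest focal length of a zoom lens, given its widest focal length and zoom ratio."""
    return wide_mm * zoom_ratio

print(telephoto_focal_length(8, 20))   # 160.0 mm -- the 20:1 example above
print(telephoto_focal_length(8, 10))   # 80.0 mm  -- a more modest 10:1 studio lens
```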

A zoom lens with a zoom ratio of 20:1 or higher is particularly useful for outdoor shooting. Distant objects can be magnified simply by zooming in on them. In a small studio environment a lens with a zoom ratio between 10:1 and 15:1 may well be sufficient.

■ Operating the Zoom Lens

There are three principal operational controls for the zoom lens: focus control, zoom control, and aperture control. Each of these plays a major role in controlling the quality of the image that is delivered to the camera’s image sensor. On studio cameras, the focus and zoom controls are mounted on the pan handles, which are attached to the pedestal or tripod. (See Figure 4.28.) This allows the camera operator to change the angle of view and focus the camera while standing behind the camera and looking into the viewfinder. On camcorders that are designed for EFP/ENG and equipped with an eyepiece viewfinder, the camera is held on the camera operator’s shoulder and the camera operator adjusts the zoom and focus controls directly on the lens housing. (See Figure 4.28.)

FOCUS CONTROL The image is in focus when the important parts of the image are seen in sharp detail, and it is out of focus when it is fuzzy and unclear. The focus on all professional cameras is adjusted manually by the camera operator, either by adjusting the focus ring at the end of the lens on EFP/ENG cameras or by adjusting the remote control on the pan handle of a studio camera. Rotating the focus control clockwise or counterclockwise brings the image into and out of focus. To keep an image in focus throughout the zoom range, zoom in to the tightest shot possible on the subject you are shooting and focus the lens. Then zoom out. The image should stay in focus, and as long as the distance between the camera and the subject doesn’t change, the image will be in focus when the camera zooms back into its close-up shot.

If the lens is unable to focus an image at the widest angle of the zoom (when the lens is zoomed out all the way), the camera’s back focus needs to be adjusted. On some cameras there is a back focus adjustment on the lens housing. On other cameras a technician may need to make this focus adjustment.

Figure 4.28

Most consumer-grade camcorders and smartphones are equipped with an automatic focus (autofocus) mechanism that focuses the lens by emitting a beam of infrared (invisible) light or ultrasound (inaudible sound). The beam bounces off the object being recorded and travels back to the camcorder, which automatically adjusts the lens for correct focus.

Autofocus can be helpful, particularly for inexperienced camera operators, because it guarantees sharp focus in shooting situations that may be difficult for the novice to control. For example, autofocus will keep the image in focus in low-light situations in which it may be hard to see the image in the viewfinder, and it will keep the image in focus if you zoom in or zoom out.

ZOOM CONTROL Modern lenses contain a motor-driven zoom lens. The zoom lens control (mounted on the lens on EFP/ENG cameras and on the pan handle of studio cameras) has a zoom-in position at one end and a zoom-out position at the other end. These are often labeled “T,” to indicate the tight, telephoto, zoomed-in position, and “W,” for the wide, zoomed out position. (See Figure 4.29.)

Some zoom lens controls have one or two preset speeds that allow the operator to set the speed selector switch for a fast or slow zoom. More sophisticated zoom lenses have continuous variable-speed motors hooked up to the lens. The zoom speed is controlled by the amount of pressure that is put on the control—the zoom speed increases with increased pressure and decreases with decreased pressure. This guarantees extremely smooth zooming over the whole zoom range of the lens.

Many professional-quality ENG/EFP lenses also contain a manual zoom control. This disengages the zoom lens motor and allows the camera operator to manually control the speed of the zoom by rotating the zoom ring on the barrel of the lens.

APERTURE CONTROL The aperture, or iris, of the lens regulates the exposure of the image by controlling the amount of light that passes through the lens and hits the image sensors. The iris of the lens works like the iris of the eye. In low-light situations the iris needs to be opened up to allow more light to hit the image sensor. In bright situations, the size of the iris needs to be reduced to decrease the amount of light hitting the sensors. This can be accomplished manually by turning the aperture ring, or automatically by using the automatic aperture controls.

The size of the aperture, or iris, opening is described by its f-stop number, a standard calibration of the size of the opening; f-stop numbers usually range from about 1.4 to 22. Typical f-stop settings on a lens are f-1.4, f-2, f-2.8, f-4, f-5.6, f-8, f-16, and f-22. Small f-stop numbers correspond to large aperture openings; large f-stop numbers correspond to small aperture openings. (See Figure 4.30.) In addition, most lenses have an iris setting in which the aperture can be completely closed to prevent any light from hitting the image sensors during those times when the camera is not in use.
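
The f-stop scale follows from the definition f-number = focal length / aperture diameter, so the aperture area (and the light admitted) falls as the f-number rises. A quick illustrative check, assuming a hypothetical 50 mm lens:

```python
import math

focal_length_mm = 50   # hypothetical lens

for f_number in [1.4, 2, 2.8, 4, 5.6, 8, 16, 22]:
    diameter = focal_length_mm / f_number    # aperture diameter in mm
    area = math.pi * (diameter / 2) ** 2     # proportional to the light admitted
    print(f"f/{f_number}: diameter {diameter:.1f} mm, area {area:.0f} mm^2")

# Doubling the f-number (say, f/2.8 to f/5.6) quarters the area -- two stops less light.
```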

OTHER LENS COMPONENTS EFP/ENG camcorders and convertible cameras typically include a soft rubber lens hood attached to the end of the lens. The lens hood works like the visor on a hat to prevent unwanted light from hitting the lens and causing lens flare. Lens flare is an optical aberration that is caused when light bounces off the elements within a lens. Most often, lens flare is caused by pointing the camera directly at the sun or a studio lighting instrument. (See Figure 4.31.)

The lens cap is a cover that is attached to the end of the lens to protect the lens glass from damage when the camera is not in operation and to prevent light from hitting the image sensors when the camera is off. The lens cap should always be used when the camera is not in operation; in field recording situations, it should always be used when the camera is being transported from one location to another.

A macro lever is a device that converts the lens to a macro lens, allowing the camera to take extreme close-up shots of very small objects by moving them very close to the lens. In the normal zoom lens mode, most lenses are not capable of focusing on objects that are closer than two or three feet from the end of the lens. When the macro lens function is activated, the camera can focus on objects that are only inches away from the lens and magnify their size so that they fill the video frame.

A variety of lens filters are available for use with studio and field lenses. Filters may be used to change the quality or amount of light passing through the lens, or they may be used to achieve special effects. Fog filters and star filters are popular special effects lens filters.

Aperture Control and Depth of Field

Depth of field refers to the portion of the scene that is in focus in front of the camera. Depth of field can be very long or very short, depending on the amount of available light, the aperture setting, the distance between the subject and camera, and the zoom lens setting. Table 4.2 shows these relationships. In addition, let’s consider two typical situations that illustrate how these variables work together.

■ Example 1: Low Light in the Studio

Let’s assume that you have set up a romantic scene in the studio. You are trying to create the illusion that it is evening in a cozy living room. To create the illusion, you have brought down the overall light level on the set, and you have worked hard to reduce the amount of light falling on the walls of the living room. Although you are still within the baselight range for the camera, you are at the lower end of it. As a result, you may need to open the aperture to achieve the correct exposure.

Because this is a romantic scene focused on two individuals, you decide to use a lot of close-up camera shots. To achieve these close-ups, you move your cameras close to the actors and have your camera operators zoom in on your subjects. When they do this, you notice that the background quickly falls out of focus. Why? Because all of the elements have conspired to reduce the depth of field. The low-light level caused you to open the camera iris. As you can see in Table 4.2, when you increase the size of the aperture, depth of field decreases. Moving the cameras close to the subjects and zooming in, which increases the focal length of the lens, also contributes to shortening the depth of field. This is an effective technique if you want to achieve selective focus: drawing attention to a part of the shot that is in focus while letting the rest of the shot go out of focus.

■ Example 2: Daylight Outdoors

Contrast the studio scene in Example 1 with the following field production situation. You have been assigned to photograph a parade at noon on a bright, beautiful summer day. To be able to show the magnitude of the event, you position yourself at the top of the reviewers’ grandstand along the parade route. This puts you quite a distance from the parade. When your camera is on a wide shot (lens zoomed out), you notice that everything in front of the camera is in focus as far as you can see. Again, look at Table 4.2. You have great depth of field because there is a lot of light on the scene, causing the aperture size to decrease. You have decreased the focal length of the lens by zooming out, and the camera is a significant distance from the main subject. All of these elements contribute to increase the depth of field.

If you change only one of these elements—the focal length of the lens—you immediately see a dramatic change in the depth of field. When you zoom in to get a close-up of the action, you notice again that the people and objects behind the subject go out of focus, as they did in the studio. By zooming in, you increased the focal length of the lens, and this has resulted in shortening the depth of field. (See Figure 4.32.)
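
For readers who want numbers to go with these two examples, here is a rough sketch using the common hyperfocal-distance approximation; the circle-of-confusion value, focal lengths, f-numbers, and distances are all assumptions chosen to mimic the scenarios above, not measurements from any real lens.

```python
def depth_of_field(focal_mm, f_number, subject_m, coc_mm=0.02):
    """Approximate near and far limits of acceptable focus (hyperfocal approximation).
    coc_mm is an assumed circle of confusion for a small video sensor."""
    hyperfocal_m = (focal_mm ** 2) / (f_number * coc_mm) / 1000   # mm -> m
    near = hyperfocal_m * subject_m / (hyperfocal_m + subject_m)
    far = (hyperfocal_m * subject_m / (hyperfocal_m - subject_m)
           if subject_m < hyperfocal_m else float("inf"))
    return near, far

# Example 1: dim studio -- zoomed-in (long) focal length, wide aperture, close subject.
print(depth_of_field(focal_mm=80, f_number=2.0, subject_m=2))    # ~1.98 m to ~2.03 m: very shallow

# Example 2: bright exterior -- wide (short) focal length, small aperture, distant subject.
print(depth_of_field(focal_mm=10, f_number=11, subject_m=30))    # ~0.45 m to infinity: everything sharp
```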

Camera Movement

In addition to zooming the lens to change the angle of view on the subject, there are several other types of camera movement that are commonly used in video production.

■ Panning and Tilting

When the camera is mounted on a tripod or pedestal, the camera can smoothly be panned or tilted. A pan is a horizontal movement of the camera head only. The word pan is short for panorama, and the purpose of the pan is to reveal a scene with a sweeping horizontal motion of the camera head. A tilt is a vertical (up-and-down) movement of the camera head.

Figure 4.32

Professional tripod and pedestal heads contain controls that assist with panning and tilting. Pan and tilt drag controls introduce varying degrees of resistance to the head so that the camera operator can pan and tilt smoothly. Tripods may be equipped with a fluid head to allow for smooth movement of the camera head (panning and tilting), or with a less expensive friction head that doesn’t provide as much control.

Cameras mounted on tripods shoot from a fixed height. The camera can be raised or lowered by manually extending or retracting the tripod legs. Needless to say, this cannot be done while the camera is shooting the picture.

High-quality studio pedestals, on the other hand, often have the ability to raise or lower the camera smoothly while the camera is photographing the scene. This allows the camera operator and video director to introduce dynamic motion effects into the program.

■ Dolly, Truck, and Arc

Movement of the camera and its support unit (either a pedestal or tripod with a three-wheel dolly attached to the legs) produces the movement of dolly, truck, and arc. A dolly in or out is the movement of the entire camera toward or away from the scene. A truck left or right is the horizontal movement of the camera and its support in front of the scene. An arc is a semicircular movement of the camera and its support around the scene.

Of course, to achieve these effects, you must have not only a wheeled pedestal or tripod, but also a smooth surface to move the camera across. This is an advantage that studios provide over field shooting.

When dynamic trucking or dollying shots are needed in the field, a set of rigid tracks can be constructed for a camera dolly to roll on. This practice gave rise to the term tracking shots.

■ Hand-Held Camera Movement

Effective camera movement can be achieved with a hand-held camera, but it takes practice to master the technique. Novice camera operators tend to move the camera too quickly. This applies to panning and tilting as well as to zooming. Remember that one of the essential qualities of a successful production is the quality of control. Camera movement, like lighting and other production elements, needs to be controlled to be effective.

Because the ground needs to be smooth for tripod dollies to be effective and track systems take a considerable time to construct, increasingly, when smooth camera movement is called for in field production, a gyroscopic camera mount system like the Steadicam® system described earlier in this chapter will be used. With it the camera seems to float effortlessly through the scene as the camera operator walks or even runs along with the action.

Operating the Video Camera

■ In the Studio

The camera operator’s call to action in the studio often begins with the director saying, “Cameras to headphones” over the studio talkback system. This is the camera operator’s cue to go to his or her camera, uncap the camera, put on the headsets, and listen to the director’s instructions.

Before the program begins, all of the cameras will need to be white balanced. This requires focusing the camera on a white card that is positioned on the set in the studio light. Once the camera is white balanced, the camera operator can refer to the director’s calls and the camera shot sheet—a list of each of the shots for each camera—for direction as to which part of the scene to focus on.

Whether the show is fully scripted or not, the camera operator’s duty is to be alert, to pay attention to what is happening on the set, and to have a usable shot ready for the director at all times. Camera operators should make sure that the zoom lens focus is set by zooming in to the tightest shot possible on the subject, focusing the lens, and then zooming out. Assuming that the distance between the subject and camera does not change, the shot should hold its focus as the camera zooms in and out.

At the end of the production the camera operator should replace the lens cap on the camera and return the camera to its normal position in the studio. The pedestal or tripod head tilt and drag controls should be securely locked, and the camera cable should be neatly coiled.

■ In the Field

In multicamera field productions the field camera operator functions much as he or she would in the studio environment, listening for headset instructions from the director about shot selection and framing. In single-camera field productions the camera operator often does not have access to a headset. Rather, the program may be recorded shot by shot, with the director giving instructions for the setup of each shot before it is recorded.

When shooting with a portable camcorder, you will need to pay attention to a number of additional details:

  1. Remember to bring enough batteries because most productions shot with a single camcorder in the field rely on battery power. An AC adapter can be used to power the camcorder from an electrical wall outlet.
  2. Remember to bring enough storage media (SD cards, memory sticks, etc.).
  3. White balance the camera whenever you move to a different location or if the lighting conditions change in the location you are shooting in.
  4. Monitor audio levels closely when you are recording. Most camcorders will display audio levels in the viewfinder. Also remember that you will always get a higher quality audio recording by using an external microphone rather than the built-in camera microphone.
  5. To stop or start recording, simply push the record button—also called the VR trigger—(on the lens housing or at the back of the camcorder) and release it. Make sure to record five to ten seconds of extra material at the head and tail of each shot. As we will discuss in the chapter on editing, this will help to make the editing process easier.
  6. Label your media cards clearly, preferably before you put a new card into the camcorder. When you remove a card, move the write-protect switch or record safety lock on the card to the position that will prevent you from accidentally rerecording onto the card.
  7. When you have finished shooting, make sure that all equipment is stowed safely in its travel bags.

Video Connectors

When working in the studio or in the field, you may need to connect several pieces of video equipment together—for example, to display the camera output on a monitor. To work efficiently, you should know the differences between the principal types of video connectors that are now in use. (See Figure 4.33.) The process of connecting audio and video inputs and outputs is called patching.

Figure 4.33

The following connectors are widely used with professional digital video equipment:

  • BNC bayonet connector. Most professional digital video equipment uses this connector for video inputs and outputs. This is a twist-lock connector that snaps securely into place when it is properly connected.
  • Thunderbolt and HDMI connectors. These connectors are used to connect various video, smartphone and/or computer components. HDMI is an abbreviation for High Definition Multimedia Interface.
  • Ethernet connector. This connector is used to connect various pieces of production and distribution equipment or to connect video recorders and servers to the Internet.

A number of connectors are still in use with “legacy” equipment (older equipment that has not yet been replaced by modern digital equipment) and on consumer-level audio and video equipment. These include the following:

  • RCA/phono connector. This is the same type of connector that is often used to connect audio inputs and outputs on home stereo systems.
  • F-connector. The cable television connection to your home receiver may be made with a cable that ends in an F-connector. This cable connector has a small threaded sleeve, and a thin copper wire protrudes from the center of the cable.
  • S-Video connector. A four-pin connector that is used to connect standard definition camcorders, video recorders, and video monitors that are equipped with similar connectors.
  • Firewire. An older connector used to connect digital devices.