Camerawork and editing

It is part of broadcasting folklore that the best place to start learning about camerawork is in the edit booth. Here, the shots provided by the cameraman have to be previewed, selected and then knitted together by the editor into a coherent structure to explain the story and fit the designated running time of the item in the programme. Clear storytelling, running time and structure are the key points of editing, and a cameraman who simply provides an endless number of unrelated shots will pose problems for the editor. A cameraman returning from a difficult shoot may have a different version of the edit process. A vital shot may be missing, but then the editor was not there to see the difficulties encountered by the news cameraman. And what about all the wonderful material at the end of the second cassette that was never used? With one hour to transmission there was no time to view it, let alone cut it, claims the editor.

In some areas of news and magazine coverage this perennial exchange is being eliminated by the gradual introduction of portable field editing. It is no longer a case of handing over material for someone else ‘to sort out’. Now the cameraman is the editor, or the editor is the cameraman. This brings under ‘one hat’ the priorities of camerawork and the priorities of editing. The cameraman can keep his favourite shot if he can convince himself, as the editor, that the shot is pertinent and works in the final cut.

Selection and structure

Editing is selecting and coordinating one shot with the next to construct a sequence of shots which form a coherent and logical narrative. There are a number of standard editing conventions and techniques that can be employed to achieve a flow of images that guide the viewer through a visual journey. A programme’s aim may be to provide a set of factual arguments that allows the viewer to decide on the competing points of view; it may be dramatic entertainment utilizing editing technique to prompt the viewer to experience a series of highs and lows on the journey from conflict to resolution; or a news item’s intention may be to accurately report an event for the audience’s information or curiosity.

The manipulation of sound and picture can only be achieved electronically, and an editor who aims to fully exploit the potential of television must master the basic technology of the medium. To this knowledge of technique and technology must be added the essential requirement of a supply of appropriate video and audio material. As we have seen in the section on camerawork, the cameraman, director or journalist needs to shoot with editing in mind. Unless the necessary shots are available, an editor cannot cut a cohesive and structured story. A random collection of shots is not a story, and although an editor may be able to salvage a usable item from a series of ‘snapshots’, editing is subject to the well-known computing adage: ‘garbage in, garbage out’.

Camerawork priorities and editing priorities


A topical news story (for example an item on rail safety) is required for the early evening bulletin on the day it becomes news. The camera crew and reporter have a very limited time to shoot the material and then get it back to base for editing (e.g. a total of less than 6 hours). The editor with the reporter is also constrained for time in the afternoon (e.g. 3 hours) before the item is transmitted. Time management is important for both groups.

The shooting order depends on availability of material, such as interviews, driving time between locations, and access to locations. Then the cassettes must be returned to base.

In the example story, the interviews (1 and 2) are shot first because the essential participants are available, and also because their comments may provide developments to the story.

Visual support for the comments and the reporter’s eventual voice-over are then shot (3–7) at two locations before recording the reporter’s ‘piece to camera’ for the ending (8) (see page 147).

The editor and reporter rearrange the material ensuring there are no visual jumps and the shots follow the points being made in the interviews, and the ‘piece to camera’. The archive material of a rail crash (9) is found and inserted to support the main thrust of the item which is rail safety.

The audio is very often just as complex as the reordering of the visuals and needs careful planning to provide the right edit points and durations.

Sound and vision must be provided that will allow the editor to cut the item to its required time slot in the running order, within the time he/she has available to edit the material.

Editing technology

Video tape editing started with a primitive mechanical cut-and-join system on 2-inch tape. As tape formats changed, more sophisticated electronic methods were devised, but the proliferation of recording formats (including the introduction of disk) means that nearly all formats have their own designated editing systems, or material must be dubbed to the preferred editing format. In news programmes, and in many other types of programming, there is also the need to use library material which may have originated on a superseded and/or obsolete format. The lack of the simple standardization of film (which is either 35 mm or 16 mm, with the complication of changing aspect ratios) has meant that a number of different video editing systems have been devised, mostly using the same basic editing methods but operating on a variety of recording formats. Added to the variety of contemporary formats and the back catalogues of material are the conversion and compression problems posed by the transition from analogue to digital television. An edit suite or field edit system is therefore defined by its analogue or digital format and by whether shot selection is achieved in a linear or non-linear form.

Selective copying

Recorded video material from the camera almost always requires rearrangement and selection before it can be transmitted. Selective copying from this material onto a new recording is the basis of the video editing craft. Selecting the required shots and finding ways to cut them together unobtrusively into a coherent, logical narrative progression takes time. Using a linear editing technique (i.e. tape-to-tape transfer) and repeatedly re-recording the analogue material exposes the signal to possible distortions and generation losses. Some digital VTR formats greatly reduce these distortions. An alternative to this system is to store all the recorded shots on disk or integrated circuits and to make up an edit list detailing shot order and source origin (e.g. cassette number, etc.), which can then be used to instruct VTR machines to dub across the required material automatically, or to instruct storage devices to play out shots in the prescribed edit list order.
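The edit-list approach can be sketched in a few lines of Python (a hypothetical illustration: the field names and the `playout` function are invented for this example, and real edit decision lists carry more detail, such as transition types):

```python
# Hypothetical edit decision list (EDL): each entry names a source cassette
# and the in/out timecodes of a shot; a playout device (or dubbing VTR) is
# then instructed to deliver the shots in the prescribed order.

def playout(edl):
    """Return the shots in playout order as (cassette, in, out) tuples."""
    return [(e["cassette"], e["in"], e["out"]) for e in edl]

edl = [
    {"cassette": "01", "in": "01:00:10:00", "out": "01:00:14:12"},
    {"cassette": "02", "in": "02:03:05:00", "out": "02:03:09:05"},
    {"cassette": "01", "in": "01:00:20:10", "out": "01:00:25:00"},
]

for shot in playout(edl):
    print(shot)
```

On disk storage no re-recording takes place at all: the list simply tells the store which frames to read, and in which order.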

On tape, an edit is performed by dubbing the new shot from the originating tape onto the out-point of the last shot on the master tape. Simple non-linear disk systems may need to shuffle their recorded data in order to achieve the required frame-to-frame sequence, whereas no re-recording is required in random access editing: simply an instruction to read frames in a new order from the storage device.

Off-line editing allows editing decisions to be made using low-cost equipment to produce an edit decision list, or a rough cut, which can then be conformed or referred to in a high-quality on-line suite. A high-quality/high-cost edit suite is not required for such decision making, although very few off-line edit facilities allow settings for DVEs, colour correctors or keyers. Low-cost off-line editing allows a range of story structure and edit alternatives to be tried out before tying up a high-cost on-line edit suite to produce the final master tape.

Insert and assemble editing

There are two types of linear editing:

■  Insert editing records new video and audio over existing recorded material (often black and colour burst) on a ‘striped’ tape. Striped tape is prepared (often referred to as blacking up a tape) before the editing session by recording a continuous control track and timecode along its complete length. This is similar to the need to format a disk before its use in a computer. This pre-recording also ensures that the tape tension is reasonably stable across the length of the tape. During the editing session, only new video and audio are inserted onto the striped tape, leaving the existing control track and timecode undisturbed. This minimizes the chance of any discontinuity in the edited result. It ensures that it is possible to come ‘out’ of an edit cleanly, and return to the recorded material without any visual disturbance. This is the most common method of video tape editing and is the preferred alternative to assemble editing.

■  Assemble editing is a method of editing onto blank (unstriped) tape in a linear fashion. The control track, timecode, video and audio are all recorded simultaneously and joined to the end of the previously recorded material. This can lead to discontinuities in the recorded timecode and especially with the control track if the master tape is recorded on more than one VTR.

Limitation of analogue signal

The analogue signal can suffer degradation during processing through the signal chain, particularly in multi-generation editing where impairment to the signal is cumulative. This loss of quality over succeeding generations places a limit to the amount of process work that can be achieved in analogue linear editing (e.g. multi-pass build-ups of special effects). This limitation can be reduced by coding the video signal into a 4:2:2 digital form (see page 28).

Compression and editing

As we discussed in Compression (page 26), passing on only the difference between one picture and the next means that at any instant in time, an image can only be reconstructed by reference to a previous ‘complete’ picture. Editing such compressed pictures can only occur on a complete frame.

Provided the digital signal is uncompressed, there is no limit to how many generations of the original can be ‘re-recorded’, as each new digital generation is a clone rather than a copy of the original material. Imperfections introduced in the editing chain are not accentuated, except where different compression systems are applied to the signal on its journey from acquisition to the viewer. Nearly all digital signals are compressed from the point of acquisition. Care must be taken when editing compressed video to make certain that the edit point of an incoming shot is a complete frame, and does not rely (during compression decoding) on information from a preceding frame (see page 29).
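The ‘complete frame’ restriction can be illustrated with a toy model of a long-GOP compressed stream (the 12-frame GOP length here is an assumption for the example, not a property of any particular format):

```python
# Toy model of inter-frame compression: only 'I' frames are complete
# pictures; the frames that follow in each GOP depend on earlier frames.
# A clean edit in-point must therefore be snapped back to a GOP boundary.

GOP = 12  # assumed GOP length: one complete (I) frame every 12 frames

def clean_in_point(requested_frame, gop=GOP):
    """Snap a requested edit in-point back to the nearest preceding I frame."""
    return (requested_frame // gop) * gop

print(clean_in_point(100))  # frame 96, the I frame starting that GOP
```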


The technical requirements for an edit

■  Enter the replay timecode in and out points into the edit controller.

■  Enter the record tape timecode in-point.

■  Preview the edit.

■  Check sync stability for the pre-roll time when using time-of-day timecode.

■  Make the edit.

■  Check the edit is technically correct in sound and vision and the edit occurs at the required place.

■  When two shots are cut together, check that there is continuity across the cut and the transition (to an innocent eye) is invisible.
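The arithmetic behind the steps above is three-point editing: the replay in/out points fix the shot’s duration, and the record out-point follows from it. A sketch, assuming 25 fps (PAL) and hh:mm:ss:ff timecode:

```python
FPS = 25  # assumed PAL frame rate

def tc_to_frames(tc):
    """Convert hh:mm:ss:ff timecode to a frame count."""
    h, m, s, f = map(int, tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def frames_to_tc(n):
    """Convert a frame count back to hh:mm:ss:ff timecode."""
    s, f = divmod(n, FPS)
    h, rem = divmod(s, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

def record_out(replay_in, replay_out, record_in):
    """The record out-point implied by the replay in/out and record in-point."""
    duration = tc_to_frames(replay_out) - tc_to_frames(replay_in)
    return frames_to_tc(tc_to_frames(record_in) + duration)

print(record_out("01:00:10:00", "01:00:14:12", "10:00:30:00"))
# 10:00:34:12 -- a 4 s 12-frame shot laid down from 10:00:30:00
```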

Timecode and editing

Editing requires each wanted shot to be located quickly, on the appropriate tape (linear editing) or by code or description on a disk (non-linear editing). Identification of shots becomes of paramount importance, especially in high-speed turnaround operations such as news. Many non-news productions use a log (dope sheet) which records details of shots and their start/finish timecodes. News editing usually has to rely on previewing, or pre-selection by a journalist if there is time. Video editing can only be achieved with precision if there is a method of uniquely identifying each frame. Usually at the point of origination in the camera (see page 92), a timecode number identifying hour, minute, second and frame is recorded on the tape against every frame of video. This number can be used when the material is edited, or a new series of numbers can be generated and added before editing. A common standard is the SMPTE/EBU timecode, an 80-bit code defined to contain sufficient information for most video editing tasks.

There are two types of timecode: record run and free run.

Record run

Record run only records a frame identification when the camera is recording. The timecode is set to zero at the start of the day’s operation, and a continuous record is produced on each tape covering all takes. It is customary practice to record the tape number in place of the hour section on the timecode. For example, the first cassette of the day would start 01.00.00.00, and the second cassette would start 02.00.00.00. Record run is the preferred method of recording timecode on most productions.
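The tape-number convention means a single record-run timecode identifies both the cassette and the position on it. A small sketch (the `locate` function is invented for this illustration):

```python
def locate(tc):
    """Split a record-run timecode into (cassette number, position on tape),
    using the convention that the hour field carries the tape number."""
    hh, mm, ss, ff = tc.split(":")
    return int(hh), f"00:{mm}:{ss}:{ff}"

print(locate("02:00:05:10"))  # (2, '00:00:05:10') -- cassette 2, 5 s 10 frames in
```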

Free run

In free run, the timecode is set to the actual time of day, and when synchronized, is set to run continuously. Whether the camera is recording or not, the internal clock will continue to operate. When the camera is recording, the actual time of day will be recorded on each frame. In free run (time-of-day), a change in shot will produce a gap in timecode proportional to the amount of time that elapsed between actual recordings. These missing timecode numbers can cause problems with the edit controller when it rolls back from an intended edit point, and is unable to find the timecode number it expects there (i.e. the timecode of the frame to cut on, minus the pre-roll time).
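The pre-roll problem can be modelled directly: the controller rolls back from the edit point and expects the timecode it lands on to exist on tape. A sketch assuming 25 fps and a five-second pre-roll (the recorded ranges are invented example data):

```python
FPS = 25  # assumed PAL frame rate

def tc_to_frames(tc):
    h, m, s, f = map(int, tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def preroll_ok(recorded, edit_in, preroll_secs=5):
    """True if the frame at (edit point minus pre-roll) was actually recorded.

    recorded: list of (first_tc, last_tc) free-run ranges on the tape;
    the gaps between them are the missing time-of-day numbers."""
    wanted = tc_to_frames(edit_in) - preroll_secs * FPS
    return any(tc_to_frames(a) <= wanted <= tc_to_frames(b) for a, b in recorded)

recorded = [("10:00:00:00", "10:01:00:00"), ("10:05:00:00", "10:06:00:00")]
print(preroll_ok(recorded, "10:00:30:00"))  # True: pre-roll stays inside a shot
print(preroll_ok(recorded, "10:05:03:00"))  # False: pre-roll lands in a gap
```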


Code word: Every frame contains an 80-bit code word which contains ‘time bits’ (8 decimal numbers) recording hours, minutes, seconds, frames and other digital synchronizing information. All this is updated every frame but there is room for additional ‘user bit’ information.

User-bit: User-bits allow up to eight characters (digits 0–9 and letters A–F) to be programmed into the code word, which is recorded every frame. Unlike the ‘time bits’, the user-bits remain unchanged until re-programmed. They can be used to identify production, cameraman, etc.

Problems with DV format timecode

DV recording format cameras are intended for the non-broadcast consumer market and their timecode facility is often less complex than that of standard broadcast formats. When recording a new shot, the timecode circuit picks up the code from the previous shot. If the previous shot has been previewed and the tape then parked on blank tape (a prudent practice to avoid over-recording existing material), the timecode circuit will read the blank tape and assume it is at the start of a new reel. The code will revert to 00.00.00.00. This can cause problems in editing as there will be more than one shot with the same timecode. Standard broadcast format cameras have an edit search button to butt the next recording hard up against the existing recording. This provides a seamless cut between the two shots, but DV format cameras may have a break in timecode if either a gap is left between two shots or the cassette has been removed and replaced in the camera.
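The consequence, two shots sharing one timecode, is easy to detect in a log of shot start codes. A sketch (the reel data is invented for the example):

```python
def duplicate_codes(start_codes):
    """Return any timecode that begins more than one shot on the same reel."""
    seen, dupes = set(), []
    for tc in start_codes:
        if tc in seen and tc not in dupes:
            dupes.append(tc)
        seen.add(tc)
    return dupes

# A DV reel where the code reverted to zero after the tape was parked on
# blank tape between recordings:
reel = ["00:00:00:00", "00:00:45:10", "00:00:00:00", "00:00:30:00"]
print(duplicate_codes(reel))  # ['00:00:00:00']
```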

Continuity editing

As well as the technological requirements needed to edit together two shots, there are subtle but more important basic editing conventions to be satisfied if the viewer is to remain unaware of shot transition. It would be visually distracting if the audience’s attention was continually interrupted by every change of shot.

Moving images in film or television are created by the repetition of individual static frames. It is human perception that combines the separate images into a simulation of movement. One reason this succeeds is that the adjacent images in a shot are very similar. If the shot is changed and new information appears within the frame (e.g. what was an image of a face is now an aeroplane), the eye/brain takes a little time to understand the new image. The greater the visual discrepancy between the two shots, the more likely it is that the viewer will consciously notice the change of shot.

A basic editing technique is to find ways of reducing the visual mismatch between two adjacent images. In general, a change of shot will be unobtrusive:

■  if the individual shots (when intercutting between people) are matched in size, have the same amount of headroom, have the same amount of looking space if in semi-profile, if the lens angle is similar (i.e. internal perspective is similar) and if the lens height is the same;

■  if the intercut pictures are colour matched (e.g. skin tones, background brightness, etc.) and if in succeeding shots the same subject has a consistent colour (e.g. grass in a stadium);

■  if there is continuity in action (e.g. body posture, attitude) and the flow of movement in the frame is carried over into the succeeding shot;

■  if there is a significant change in shot size or camera angle when intercutting on the same subject or if there is a significant change in content;

■  if there is continuity in lighting, in sound, props and setting and continuity in performance or presentation.

As we have already identified, the basis of all invisible technique employed in programme production and specifically in continuity editing is to ensure that:

■  shots are structured to allow the audience to understand the space, time and logic of the action; each shot follows the line of action, maintaining consistent screen direction so that the geography of the action is completely intelligible;

■  unobtrusive camera movement and shot change directs the audience to the content of the production rather than the mechanics of production;

■  continuity editing creates the illusion that distinct, separate shots (possibly recorded out of sequence and at different times) form part of a continuous event being witnessed by the audience.

Cutaways and cut-ins

(a) Antiques expert talking about the detail in a piece of pottery

(b) Listener reacting to expert

(c) Close-up of the vase

(d) Close-up of base of vase showing manufacturer’s mark

A cutaway literally means to cut away from the main subject (a) either as a reaction to the event, e.g. cutting to a listener reacting to what a speaker is saying (b), or to support the point being made.

A cut-in usually means to go tighter on an aspect of the main subject. In the above example, the antiques expert talking in mid-shot (a) about the detail in a piece of pottery would require a cut-in close shot of the vase (c) to support the comment, and an even closer shot (d) to see a manufacturer’s mark on the base of the piece for the item to make sense to the viewer. Without these supporting shots, the item relies on what is being said to the viewer (similar to a ‘radio’ description) rather than what the viewer can see for themselves.

Perennial techniques

The skills and craft employed by the film editor to stitch together a sequence of separate shots persuade the audience that they are watching a continuous event. The action flows from shot to shot and appears natural and obvious. Obviously the craft of editing covers a wide range of genres, up to and including the sophisticated creative decisions required to cut feature films. However, the gap between different editing techniques is not as wide as it would first appear.

Rearranging time and space

When two shots are cut together the audience attempts to make a connection between them. For example, a man on a station platform boards a train. A wide shot shows a train pulling out of a station. The audience makes the connection that the man is in the train. A cut to a close shot of the seated man follows, and it is assumed that he is travelling on the train. We see a wide shot of a train crossing the Forth Bridge, and the audience assumes that the man is travelling in Scotland. Adding a few more shots would allow a shot of the man leaving the train at his destination with the audience experiencing no violent discontinuity in the depiction of time or space. And yet a journey that may take two hours is collapsed to 30 seconds of screen time, and a variety of shots of trains and a man at different locations have been strung together in a manner that convinces the audience they have followed the same train and man throughout a journey.

Basic editing principles

This way of arranging shots is fundamental to editing. Space and time are rearranged in the most efficient way to present the information that the viewer requires to follow the argument presented. The transition between shots must not violate the audience’s sense of continuity between the actions presented. This can be achieved by:

■  Continuity of action: action is carried over from one shot to another without an apparent break in speed or direction of movement (see figures opposite).

■  Screen direction: each shot maintains the same direction of movement of the principal subject (see the cut from (a) to (b)).

■  Eyeline match: the eyeline of someone looking out of frame should be in the direction the audience believes the subject of the observation is situated. If they look out of frame with their eyeline levelled at their own height, the implication is that they are looking at something at that height.

There is a need to cement the spatial relationship between shots. Eyeline matches are decided by camera position (see Crossing the line, page 134), and there is very little that can be done at the editing stage to correct shooting mismatches, except flipping the frame to reverse the eyeline, which alters the symmetry of the face and other left/right continuity elements in the composition, such as hair partings.

Screen direction


In a medium shot, for example (a), someone is wrapping a small cheese and placing it on the work surface. A cut to a closer shot of the table (b) shows the cheese just before it is laid on the table. Provided the cheese’s position relative to the table and the speed of its movement are similar in both shots, and there is continuity in the table surface, lighting, hand position, etc., the cut will not be obtrusive. A cut on movement is often the preferred edit convention. A close shot that crosses the line (c) will not cut.

Matching shots

Matching visual design between shots

The cut between two shots can be made invisible if the incoming shot has one or more compositional elements similar to the preceding shot. The relationship between the two shots may rest on similar shape, similar position of the dominant subject in the frame, colours, lighting, setting, overall composition, etc. Any aspects of visual design present in both shots (e.g. matching tone, colour or background) will help smooth the transition from one shot to the next. There is, however, a critical point in matching shots (e.g. cutting together the same size shot of the same individual) where the jump between almost identical images becomes noticeable. Narrative motivation for changing the shot (e.g. ‘What happens next? What is this person doing?’) will also smooth the transition.

Matching temporal relationships between shots

The position of a shot in relation to other shots (preceding or following) will control the viewers’ understanding of its time relationship to surrounding shots. Usually a factual event is cut in a linear time line unless indicators are built in to signal flashbacks, or, very rarely, flash-forwards. The viewer assumes the order of depicted events is linked to the passing of time. The duration of an event can be considerably shortened by editing, to a fraction of its actual running time, if the viewers’ concept of time passing is not violated. The standard formula for compressing space and time is to allow the main subject to leave frame, or to provide appropriate cutaways to shorten the actual time taken to complete the activity. While the subject is out of shot, the viewer will accept that a greater distance has been travelled than is realistically possible.

Matching spatial relationships between shots

Editing creates spatial relationships between subjects which need never exist in reality. A common example is a shot of an interviewee responding to an out-of-frame question followed by a cut to the questioner listening and nodding attentively. This response, filmed possibly after the interviewee has left the location, is recorded for editing purposes in order to shorten an answer or to allow a change of shot size on the guest. The two shots are normally accepted by the viewer as being directly connected in time and the attentive ‘nodding’ is perceived as a genuine response to what the guest is saying. Cause and effect patterns occur continuously in editing.

Any two subjects or events can be linked by a cut if there is an apparent graphic continuity between shots framing them, and if there is an absence of an establishing shot showing their physical relationship.

Disguising the join between two shots can be achieved by:

■  matching by visual design (i.e. shot composition);

■  matching spatial relationships;

■  matching rhythm relationships;

■  matching temporal relationships;

■  cutting on action;

■  cutting on dialogue or sound.

Basic visual transitions include:

■  A cut, the simplest switch between shots. One image is instantaneously replaced by another image.

■  Dissolve or mix (also known as a cross-fade) allows the incoming shot to emerge from the outgoing shot until it replaces it on screen. Sometimes both images are held on screen (a half-mix) before completing the transition. The time taken for the dissolve to make the transition from one image to the next can vary depending on content and the dramatic point the dissolve is making. The proportion of each image present at any point in the mix can be varied, with one image being held as a dominant image for most of the dissolve.

■  Fade is similar to a mix except only one image is involved and either appears from a blank/black screen (fade-in) or dissolves into a blank/black screen (fade-out). A fade-in is often used to begin a sequence whilst a fade-out marks a natural end to a sequence.

■  Superimposition is when one image (often text, such as the name of the speaker in shot) is superimposed on top of another image. Name-super text is usually faded-in or wiped-in, held so that it can be read, and then faded-out, cut-out or wiped-out.

■  Wipes and pattern wipes provide an edge that moves across the screen between the outgoing image and the incoming image. The edge may be soft or bordered (a soft-wipe, or a border-wipe) to add to the transition visually.

■  Split screen is when two different images are held on screen separated by a hard or soft edge wipe.

■  Digital video effect (DVE): When a picture is digitized the image is formed from millions of separate parts called pixels. These pixels can be endlessly rearranged to produce a variety of random and mathematically defined transitions such as geometric wipes, spins, tumbles, squeezes and squashes, and transitions that simulate, for example, the page of a book being turned to introduce the next image.

■  Colour synthesizers: A method of producing coloured captions and other effects from a monochrome source. The synthesizers rely on an adjustable preset video level to operate a switch, and usually two or three levels can be separated and used to operate colour generators and produce different colours. The switching signal is usually derived from a caption generator.

■  Chroma key: A method of combining two images to achieve the appearance of a single image. This technique requires a switch to be inserted in the signal chain which will electronically select the appropriate image. Blue is commonly chosen as the colour to be used as the separation key but other colours can be employed.
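The chroma-key ‘switch’ can be illustrated per pixel (a toy model working on RGB triples; real keyers operate on the video signal and handle soft edges, which this sketch ignores):

```python
KEY = (0, 0, 255)  # pure blue, the assumed separation colour

def is_key(pixel, tolerance=60):
    """True if an RGB pixel is close enough to the key colour."""
    return all(abs(c - k) <= tolerance for c, k in zip(pixel, KEY))

def chroma_key(foreground, background):
    """Select the background pixel wherever the foreground shows key colour."""
    return [bg if is_key(fg) else fg for fg, bg in zip(foreground, background)]

fg = [(200, 50, 40), (10, 5, 250)]  # a subject pixel, then a blue-screen pixel
bg = [(0, 128, 0), (0, 128, 0)]     # the substitute background scene
print(chroma_key(fg, bg))  # [(200, 50, 40), (0, 128, 0)]
```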

Rhythm

The editor needs to consider two types of rhythm when cutting shots together: the rhythm created by the rate of shot change, and the internal rhythm of the depicted action. Each shot will have a measurable time on screen. The rate at which shots are cut creates a rhythm which affects the viewer’s response to the sequence. For example, in a feature film action sequence, a common way of increasing the excitement and pace of the action is to increase the cutting rate by decreasing the duration of each shot on screen as the action approaches a climax. The rhythms introduced by editing are in addition to the other rhythms created by artiste movement, camera movement, and the rhythm of sound. The editor can therefore adjust shot duration and shot rate independently of the need to match continuity of action between shots; this controls an acceleration or deceleration in the pace of the item. By controlling the editing rhythm, the editor controls the amount of time the viewer has to grasp and understand the selected shots. Many productions exploit this fact in order to create an atmosphere of mystery and confusion by ambiguous framing and rapid cutting which deliberately undermines the viewer’s attempt to make sense of the images they are shown.
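The climax-building device mentioned above, shortening each successive shot, can be sketched numerically (the starting duration and the 0.8 ratio are arbitrary choices for the illustration):

```python
def shot_durations(n_shots, first=4.0, ratio=0.8):
    """Durations in seconds, each shot held `ratio` times as long as the last."""
    d, out = first, []
    for _ in range(n_shots):
        out.append(round(d, 2))
        d *= ratio
    return out

print(shot_durations(5))  # [4.0, 3.2, 2.56, 2.05, 1.64] -- the cutting rate rises
```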

Another editing consideration is maintaining the rhythm of action carried over into succeeding shots. Most people have a strong sense of rhythm as expressed in running, marching, dancing, etc. If this rhythm is destroyed, as, for example, cutting together a number of shots of a marching band so that their step becomes irregular, viewers will sense the discrepancies, and the sequence will appear disjointed and awkward. When cutting from a shot of a person running, for example (see figure opposite), care must be taken that the person’s foot hits the ground with the same rhythm as in the preceding shot, and that it is the appropriate foot (e.g. after a left foot comes a right foot). Sustaining rhythms of action may well override the need for a narrative ‘ideal’ cut at an earlier or later point.

Alternatives to ‘invisible technique’

Alternative editing techniques, as used for example in pop promotions, employ hundreds of cuts, disrupted continuity, ambiguous imagery, etc., to deliberately tease the eye and avoid clear visual communication. The aim is often to recreate the ‘rave’ experience of a club or concert. The production intention is to be interpretative rather than informative (see Alternative styles, page 130). There are innovations and variations on basic technique, but the majority of television programme productions use the standard editing conventions to keep the viewer’s attention on the content of the programme rather than its method of production. Standard conventions are a response to the need to provide a variety of ways of presenting visual information, coupled with the need for transitions to be unobtrusive from shot to shot. Expertly used, they are invisible, yet provide the narrative with pace, excitement and variety.

Cutting to rhythm


In the example illustrated, the editor will need to solve several basic editing problems when cutting together a sequence of shots following a cross-country runner in a race:

■  The running time of the item will be of a much shorter duration than the actual event, and if shot on one camera, there will be gaps in the coverage. Ways have to be found to cut together a compressed account of the race.

■  The editor must ensure that standard visual conventions of continuity of action and screen direction are carried over from one shot to the next without an apparent break in speed or direction of movement so that the viewer understands what is happening (e.g. no instant reversal of direction across the screen).

■  The editor will have to maintain the continuity of arm and leg movement between shots, avoiding any jump in the rhythm of movement.

■  Possibly the rate of cutting may be increased towards the end of the race to inject pace and tension.

■  Added to all of the above will be the need to match the continuity of background so that the designated runner appears to have run through a ‘logical’ landscape.
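
The accelerating cutting rate mentioned above can be sketched as a simple progression of shot durations. This is only an illustration; the starting length, ratio and number of shots are assumptions, not values from the text:

```python
# Sketch: shot durations that shorten towards the end of a sequence
# to inject pace. The starting length, ratio and shot count are
# illustrative assumptions.

def accelerating_durations(first=4.0, ratio=0.8, shots=8):
    """Each shot is `ratio` times the length of the one before."""
    return [round(first * ratio**n, 2) for n in range(shots)]

durations = accelerating_durations()
print(durations)       # shot lengths in seconds, shortest last
print(sum(durations))  # total running time of the sequence
```

Each shot runs 80 per cent of the length of the one before, so the cutting rate rises steadily towards the finish of the race.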

Cutting to music (see also page 139)

It helps when cutting to music to have an appreciation of musical form, but the minimum skill an editor needs to develop is a feel for pace and tempo, and an understanding of bar structure. Most popular music is built around a 16- or 32-bar structure, and the cutting rate and the timing of each cut will relate to this bar structure. Listen to a wide range of music and identify the beat and the changes in phrase and melody. A shot change on the beat (or on the ‘off’ beat) will lift the tempo of the item. Shot changes out of sync with the tempo, mood or structure of the music will neither flow nor help the images to meld together.
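
As a rough sketch of the arithmetic involved, the downbeat of each bar can be converted into a cutting point expressed as a timecode. The tempo, time signature and frame rate below are illustrative assumptions (a constant-tempo 4/4 track at 25 fps):

```python
# Sketch: deriving candidate cut points from a music track's tempo.
# Assumes a constant tempo and 4/4 time; the BPM and frame-rate
# values are illustrative, not from the original text.

def cut_points(bpm=120, beats_per_bar=4, bars=16, fps=25):
    """Return (bar, timecode) pairs for the downbeat of each bar."""
    seconds_per_beat = 60.0 / bpm
    points = []
    for bar in range(bars):
        t = bar * beats_per_bar * seconds_per_beat  # start of this bar
        frames = round(t * fps)
        mm, ss = divmod(frames // fps, 60)
        ff = frames % fps
        points.append((bar + 1, f"{mm:02d}:{ss:02d}:{ff:02d}"))
    return points

for bar, tc in cut_points(bars=4):
    print(bar, tc)
```

Cutting on these bar lines, or on the beats between them, keeps shot changes in sync with the structure of the music; a track with tempo changes would need its beats marked by ear instead.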

Types of edit

There are a number of standard editing techniques that are used across a wide range of programme making. These include:

■  Intercutting can be applied to locations or people. The technique of intercutting between different actions happening simultaneously at different locations was discovered as early as 1906 and is used to inject pace and tension into a story. Intercutting on faces in the same location presents the viewer with changing viewpoints on action and reaction.

■  Analytical editing breaks a space down into separate framings. The classic sequence begins with a long shot to show relationships and the ‘geography’ of the setting followed by closer shots to show detail, and to focus on important action.

■  Contiguity editing follows action through different frames of changing locations. The classic pattern of shots in a ‘western’ chase sequence is where one group of horsemen ride through the frame past a distinctive tree to be followed later, in the same framing, by the pursuers riding through shot past the same distinctive tree. The tree acts as a ‘signpost’ for the audience to establish location, and as a marker of the duration of elapsed time between the pursued and the pursuer.

■  Point-of-view editing is a variant of contiguity editing which establishes the relationship between different spaces. Someone on-screen looks out of one side of the frame; the following shot reveals what they are looking at. The same convention can be applied to anyone moving and looking out of frame, followed by their moving point-of-view shot.

Previewing

The constraints of cutting a story to a specific running time, and having it ready for a broadcast transmission deadline, are a constant pressure on the television editor. Often there is simply not enough time to preview all the material in ‘real’ time. Usually material is shuttled through at fast-forward speed, stopping only to check vital interview content. The editor has to develop a visual memory of the content of a shot and its position in the reel. One of the major contributions an editor can make is the ability to remember a shot that solves a particular visual dilemma. If two crucial shots will not cut together because of continuity problems, is there a suitable ‘buffer’ shot that could be used? The ability to identify and remember the location of a specific shot, even when spooling and shuttling, is a skill that has to be learnt in order to speed up editing.

Solving continuity problems is one reason why the location production unit need to provide additional material to help in the edit. It is a developed professional skill to find the happy medium between too much material that cannot be previewed in the editing time available, and too little material that gives the edit no flexibility if structure, running time, or story development changes between shooting and editing the material.

Cutting on movement

A change of shot requires a measurable time for the audience to adjust to the incoming shot. If the shot is part of a series showing an event or action, the viewer will be able to follow the flow of action across the cut if the editor has selected an appropriate point to cut on movement. This moves the viewer into the next part of the action without them consciously realizing a cut has occurred: cutting in the middle of an action disguises the edit point.

Cutting on movement is the bedrock of editing. It is the preferred option in cutting, compared to most other editing methods, provided the sequence has been shot to include action edit points. When breaking down a sequence of shots depicting a continuous action, there are usually five questions faced by the editor:

■  What is visually interesting?

■  What part of a shot is necessary to advance the ‘story’ of the topic?

■  How long can the sequence last?

■  Has the activity been adequately covered on camera?

■  Is there a sufficient variety of shots to serve the above requirements?

Cutting on exits and entrances

One of the basic tenets of standard editing technique is that each shot follows the line of action to maintain consistent screen direction, so that the geography of the action is completely intelligible. Cutting on exits and entrances into frame is a standard way of reducing the amount of screen time taken to traverse distance. The usual convention is to make the cut when the subject has nearly left the frame. It is natural for the viewer, if the subject is disappearing out of the side of the frame, to wish to be shown where they are going. If the cut comes after they have left the frame, the viewer is left with an empty frame; either their interest switches to whatever remains in the frame or they feel frustrated because the subject of their interest has gone. Conversely, the incoming frame can have the subject just appearing, but the match on action has to be good, otherwise there will be an obtrusive jump in walking cadence or some other posture mismatch. Allowing the subject to clear the frame in the outgoing shot and not be present in the incoming shot is usually the lazy way of avoiding continuity mismatches. An empty frame at the end of a shot is already ‘stale’ to the viewer. If it is necessary, because the shots provided offer no possibility of matching the action across the cut, try to use the empty frame of the incoming shot (which is new to the viewer) before the action begins, to avoid continuity problems. This convention can be applied to any movement across a cut. In general, choose an empty frame on the incoming shot rather than the outgoing shot, unless there is the need for a ‘visual’ full stop to end a sequence, e.g. a fade down or a mix across to a new scene.

Frequently cameras are intercut to follow action. It is important to know when to follow the action and when to hold a static frame and let the action leave the frame. For example, a camera framed on an MCU of someone sitting could pan with them when they stood up but this might spoil the cut to a wider shot. A camera tightening into MCU may prevent a planned cut to the same size shot from another camera.

Fact or fiction

The editing techniques used for cutting fiction and factual material are almost the same. When switching on a television programme mid-way, it is sometimes impossible to assess from the editing alone if the programme is fact or fiction. Documentary makers use story telling techniques learned by audiences from a lifetime of watching drama. Usually, the indicator of what genre the production falls into is gained from the participants. Even the most realistic acting appears stilted or stylized when placed alongside people talking in their own environment. Another visual convention is to allow ‘factual’ presenters to address the lens and the viewer directly, whereas actors and the ‘public’ are usually instructed not to look at camera.

■  Communication and holding attention: The primary aim of editing is to provide the right structure and selection of shots to communicate to the audience the programme maker’s motives for making the programme, and, second, to hold the audience’s attention so that they listen and keep watching.

■  Communication with the audience: Good editing technique structures the material and identifies the main ‘teaching’ points the audience should understand. A crucial role of the editor is to be audience ‘number one’. The editor comes fresh to the material and must understand the story if the audience is to understand the story. The editor needs to be objective and bring a dispassionate eye to the material. The director/reporter may have been very close to the story for hours/days/weeks; the audience comes to it new and may not pick up the relevance of the setting or set-up if it is spelt out rapidly in the opening sentence. It is surprising how often, with professional communicators, what is obvious to them about the background detail of a story is unknown to their potential audience, or its importance unappreciated. Beware of the ‘I think that is so obvious we needn’t mention it’ statement. As an editor, if you do not understand the relevance of the material, say so. You will not be alone.

■  Holding their attention: The edited package needs to hold the audience’s attention by its method of presentation (e.g. method of storytelling – what happens next, camera technique, editing technique, etc.). Pace and brevity (e.g. no redundant footage) are often the key factors in raising the viewer’s involvement in the item. Be aware that visuals can fight voice-over narration. Arresting images capture the attention first. The viewer would probably prefer to ‘see it’ rather than ‘hear it’. A successful visual demonstration is always more convincing than a verbal argument – as every successful salesman knows.

■  Selection: Editing, in a literal sense, is the activity of selecting from all the available material and choosing what is relevant. Film and video editing requires the additional consideration that selected shots spliced together must meet the requirements of the standard conventions of continuity editing.

Reconstruction

An early documentary, John Grierson’s ‘Drifters’ (1929), included sequences which were shot on the beach with a ‘mock-up’ of a boat with fresh fish bought from a shop. Grierson described documentary as ‘the creative treatment of reality’ and suggested that documentary style should be ‘in a fashion which strikes the imagination and makes observation a little richer’.

Creative treatment of reality

A contemporary television documentary followed a woman who had repeatedly failed her driving test. One sequence depicted her waking her husband at night for him to test her knowledge of driving. A shot of a bedroom clock suggested it was 2.15 am. Was the clock set and lit for this ‘night time’ shot? Was the production unit in the couple’s bedroom at night? Was this fact or fiction?

Wildlife programmes

A close-up crabbing shot of geese in a wildlife programme was obtained by patiently training the geese, from a young age, to fly alongside a car. After training, the shot was easily obtained. Within the context of the film, it appeared as if the geese had been filmed in their natural habitat. Was this shot fact or reconstruction?

What the editor requires

Providing cutting points

Most interviews will be edited down to a fraction of their original running time. Be aware of the need for alternative shots to allow for this to happen.

Brief the presenter/journalist to start talking on cue (i.e. approximately 5 or 6 seconds after the start of recording) and to precede the interview with the interviewee’s name and status to ‘ident’ the interview. Also ask them not to speak over the end of an answer or allow the interviewee to speak over the start of a question. If necessary, change framing during questions but not answers, unless the camera movement is smooth and usable.

Cutaways

Make sure your 2-shots, cutaways, reverse questions, noddies, etc., follow the interview immediately. Not only does this save the editor time in shuttling but light conditions may change and render the shots unusable. Match the interviewee and interviewer shot size. If the interview is long, provide cutaways on a separate tape. Listen to the content of the interview, which may suggest suitable cutaways.

Think about sound as well as pictures

If the interview is exterior, check for wind noise and shield the microphone as much as possible by using a windshield or using the presenter’s/interviewee’s body as a windbreak. Check background noise, particularly any continuous sound such as running water which may cause sound jumps in editing. Record a ‘wild track’ or ‘buzz track’ of atmosphere or continuous sound after the interview to assist the editor.

Backgrounds

If an interview is being conducted against a changing background (e.g. a tennis match or a busy shopping arcade), reverse questions must be shot because a 2-shot taken after the interview will not match. If an interview or piece to camera is staged in front of a building or object that is being discussed (e.g. a crash scene) and is an essential part of the item, make certain that cutaways of the building or object are sufficiently different in angle and size to avoid jump cuts.

Eyeline and frame

Alternate the direction the interviewees are looking in the frame so that the editor can create the feeling of a dialogue between consecutive speakers and to avoid jump cuts between subjects who are looking in the same direction.

Thinking about editing when shooting

A news item about a politician’s visit may eventually be given a running order time of less than 25 seconds. In reality the visit may have taken 2 or 3 hours. The news coverage needs to include the essential points of the visit but also to provide the editor with maximum flexibility in the final cut. The shooting ratio must be high enough to provide adequate material, but not so high that the editor cannot preview and cut it in the time available.

The sound will be voice-over explaining the purpose of the visit, with actuality sound of interview comments (9) and (10), finishing with an interview with the politician (12). The politician’s speech (8) will be précised in the voice-over (see page 167).

The cameraman can help the editor by providing buffer shots to allow the politician to be seen in the various locations (1), (3), (5), (8) and (12) with no jump cuts (see page 167).

The cameraman can help the editor to condense time and place by:

■  Shot (2): remembering to provide a shot of the child the politician was talking to after the politician has left. This avoids a jump between (1) and (3).

■  Shot (4): providing a shot of the TV screen the politician is standing in front of in (5). This allows the introduction of a new location without a jump. The transition can also be smoothed by the content of the voice-over.

■  Shot (6): a long shot from the back of the hall allows the politician to move to a new location, and the cutaway audience shots (7) and (11) allow the section to fit the duration of the voice-over, avoiding any lip-sync clash.

■  The cameraman has provided different eyelines for two conflicting comments about the politician. The interviewee (9) is staged looking left to right. The interviewee shot (10) is staged looking right to left which makes a natural cut between them as if they were in conversation but holding different viewpoints. They are matched in size and headroom.

■  The final interview shot (12) is a selected key point comment by the politician following on the voice-over summing-up the visit.
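
One way to think about fitting such coverage into the running order is as a shot list whose durations must sum to the allotted time. A minimal sketch, with shot numbers following the politician-visit example but all durations invented for illustration:

```python
# Sketch: checking that a planned shot list fits a running order time.
# Shot numbers follow the politician-visit example above; every
# duration is an illustrative assumption, not taken from the text.

shots = [
    (1, "politician greets child", 3.0),
    (2, "cutaway: child", 1.5),
    (3, "politician leaves school", 2.0),
    (4, "cutaway: TV screen", 1.5),
    (5, "politician at exhibition", 2.5),
    (6, "LS from back of hall", 2.0),
    (7, "cutaway: audience", 1.5),
    (8, "speech, precised in voice-over", 3.0),
    (9, "interviewee, looking left to right", 2.5),
    (10, "interviewee, looking right to left", 2.5),
    (11, "cutaway: audience", 1.0),
    (12, "politician interview key point", 2.0),
]

total = sum(duration for _, _, duration in shots)
assert total <= 25.0, f"over running order time: {total}s"
print(f"{len(shots)} shots, {total}s total")
```

If the list runs over, the editor trims or drops cutaways first; the check above simply makes the arithmetic explicit.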

Telling a story

The story telling of factual items is probably better served by the presentation of detail than by broad generalizations. Which details are chosen to explain a topic is crucial, both for explanation and for engagement. Many issues dealt with by factual programmes are abstract in nature and, at first thought, have little or no obvious visual representation. Images to illustrate topics such as inflation can be difficult to find when searching for precise representations of the diminishing value of money. The camera must provide an image of something, and whatever it may be, that something will be invested by the viewer with significance.

That significance may not match the main thrust of the item and may lead the viewer away from the topic. Significant detail requires careful observation at location, and a clear idea of the shape of the item when it is being shot. The editor then has to find ways of cutting together a series of shots so the transitions are seamless, and the images logically advance the story. Remember that the viewer will not necessarily have the same impression or meaning from an image that you have invested in it. A shot of a doctor on an emergency call having difficulty in parking, chosen, for example, to illustrate the problems of traffic congestion, may be seen by some viewers as simply an illustration of bad driving.

A linking theme

Because the story is told over time, there is a need for a central motif or thread which is easily followed and guides the viewer through the item. A report, for example, on traffic congestion may have a car driver on a journey through rush hour traffic. Each point about the causes of traffic congestion can be illustrated and picked up as they occur, such as out-of-town shoppers, the school run, commuters, traffic black spots, road layout, etc. The frustrations of the journey throughout the topic will naturally link the ‘teaching points’, and the viewer can easily identify and speculate about the story’s outcome.

Time

With the above example, as the story progresses over time, the attitude of the driver will probably change. He/she may display bad temper, irritation with other road users, etc. There will be a difference over time and without time there is no story. Finding ways of registering change over time is one of the key activities of editing. Choosing shots that register the temperament of the driver by using small observational details (providing the cameraman has shot the detail) reveals the story to the viewer. The main topic of the item is traffic congestion and its wear and tear on everyday life. It can be effectively revealed by focusing on one drive through a narrated journey rather than generalizations by a presenter standing alongside a traffic queue.

Structuring a sequence

The chosen structure of a section or sequence will usually have a beginning, a development, and a conclusion. Editing patterns and the narrative context do not necessarily lay the events of a story out in simple chronological order. For example, there can be a ‘tease’ sequence which seeks to engage the audience’s attention with a question or a mystery. It may be some time into the material before the solution is revealed, and the audience’s curiosity is satisfied. Whatever the shape of the structure it usually contains one or more of the following methods of sequence construction:

■  A narrative sequence is a record of an event such as a child’s first day at school, an Olympic athlete training in the early morning, etc. Narrative sequences tell a strong story and are used to engage the audience’s interest.

■  A descriptive sequence simply sets the atmosphere or provides background information. For example, an item featuring the retirement of a watchmaker may have an introductory sequence of shots featuring the watches and clocks in his workshop before the participant is introduced or interviewed. Essentially, a descriptive sequence is a scene setter, an overture to the main point of the story, although sometimes it may be used as an interlude to break up the texture of the story, or act as a transitional visual bridge to a new topic.

■  An explanatory sequence is, as the name implies, a sequence which explains the context of the story, facts about the participants or event, or an idea. Abstract concepts like a rise in unemployment usually need a verbal explanatory section backed by ‘visual wallpaper’ – images which are not specific or important in themselves, but are needed to accompany the important narration. Explanatory sequences are likely to lose the viewer’s interest, and need to be supported by narrative and description. Explanatory exposition is often essential when winding up an item in order to draw conclusions or make explicit the relevance of the events depicted.

The shape of a sequence

The tempo and shape of a sequence, and of the number of sequences that may make up a longer item, will depend on how these methods of structuring are cut and arranged. Whether shooting news or documentaries, the transmitted item will be shaped by the editor to connect a sequence of shots visually, by voice-over, atmosphere or music, or by a combination of any of them. Essentially, the cameraman or director must organize the shooting of separate shots with some structure in mind. Any activity must be filmed to provide a sufficient variety of shots that can be cut together following standard editing conventions (e.g. avoiding jump cuts, not crossing the line, etc.), with enough variety of shot to allow some flexibility in editing. Just as no shot can be considered in isolation, every sequence must be considered in the context of the overall aims of the production.

Descriptive shots, narrative shots, explanatory shots

Shot 1 (descriptive) sets the atmosphere for the item – early morning, countryside location.

Shot 2 is still mainly descriptive but introduces the subject of the narration.

Shot 3 (narrative) goes closer to the subject who remains mysterious because he is in silhouette – the story begins in voice-over.

Shot 4 (narrative) the subject’s face is revealed but not his objective.

Shot 5 (narrative) the subject is now running in city streets – where is he running to?

Shot 6 (narrative) the subject is followed inside an office building running down a corridor.

Shot 7 (narrative) the subject enters, now in office clothes.

Shot 8 (explanatory) the subject explains, in an interview, his ambition to run in the Olympics, etc., and a voice-over rounds off the story.

Editing an interview

The interview is an essential element of news and magazine reporting. It provides factual testimony from an active participant, similar to a witness’s court statement; that is, direct evidence of their own understanding, not rumour or hearsay. Interviewees can speak about what they feel, what they think and what they know from their own experience. An interviewee can introduce opinion, belief and emotion into the report, as opposed to the reporter, who traditionally sticks to the facts. An interviewee therefore brings colour and emotion to an objective assessment of an event and captures the audience’s attention.

How long should a shot be held?

The simple answer to this question is as long as the viewer needs to extract the required information, or before the action depicted requires a wider or closer framing to satisfy the viewer’s curiosity, or a different shot (e.g. someone exiting the frame) to follow the action. The on-screen length also depends on many more subtle considerations than the specific content of the shot.

As discussed above, the rhythm of the editing produced by rate of shot change, and the shaping of the rate of shot change to produce an appropriate shape to a sequence, will have a bearing on how long a shot is held on screen. Rhythm relies on variation of shot length, but should not be arbitrarily imposed simply to add interest. As always with editing, there is a balance to be struck between clear communication, and the need to hold the viewer’s interest with visual variety. The aim is to clarify and emphasize the topic, and not confuse the viewer with shots that are snatched off the screen before they are visually understood.

The critical factor controlling on-screen duration is often the shot size. A long shot may have a great deal more information than a close shot. Also, a long shot is often used to introduce a new location or to set the ‘geography’ of the action. These features will be new to the audience, and therefore they will take longer to understand and absorb the information. Shifting visual information produced by moving shots will also need longer screen time.

A closer shot will usually yield its content fairly quickly, particularly if the content has been seen before (e.g. a well-known ‘screen’ face). There are other psychological aspects of perception which also have a bearing on how quickly an audience can recognize images which are flashed on to a screen. These factors are exploited in those commercials which have a very high cutting rate, but are not part of standard news/magazine editing technique.

Content and pace

Although news/magazine editing is always paring an item down to its essential shots, due consideration should always be given to the subject of the item. For example, a news item about the funeral of a victim of a civil disaster or crime has to have pauses and ‘quiet’ on-screen time to reflect the feelings and emotion of the event. Just as there is a need for changes of pace and rhythm in editing a piece to give it a particular overall shape, so a news bulletin or magazine running order will have an overall requirement for changes of tempo between hard and soft items to provide balance and variety.

Cutting an interview

A standard interview convention is to establish who the interviewee is by superimposing their name and possibly some other identification (e.g. farmer, market street trader, etc.) in text across an MCU of them. The interview is often cut using a combination of basic shots such as:

■  an MS, MCU or CU of the interviewee;

■  a matched shot of the interviewer asking questions or reacting to the answers (usually shot after the interview has ended);

■  a 2-shot which establishes location and relationship between the participants or an over-the-shoulder 2-shot looking from interviewee to interviewer;

■  the interviewee is often staged so that their background is relevant to their comments.

The interview can follow straightforward intercutting between question and answer of the participants, but more usually, after a few words from the interviewee establishing their presence, a series of cutaways are used to illustrate the points the interviewee is making. A basic interview technique requires the appropriate basic shots:

■  matched shots in size and lens angle (see Camerawork section, pages 110–23);

■  over-the-shoulder (o/s) shots;

■  intercutting on question and answer;

■  cutaways to referred items in the interview;

■  ‘noddies’ and reaction shots (NB reaction shots should be reactions, that is, a response to the main subject);

■  cutaways to avoid jump cuts when shortening answers.
