2

Landscape Photography Looks So Easy

Making a great landscape photograph seems simple. After all, a camera is just like your eye, right? Your eye has a lens that forms an image on your retina, which is how you see. A camera has a lens that forms an image on a digital sensor, and that becomes the photograph. How hard can it be? And yet I suspect that nearly all photographers have had this experience: You go hiking. You see something that moves you profoundly. You point the camera in exactly the same direction you were looking when you had that emotional experience and press the shutter release. Later you download the image to your computer so you can see it full screen in all its glory, confident that you just made the best image of your career, and your reaction is, “Oh. I didn’t think it would look like that.” Why did the photo fail?

An anecdote here will give us a clue. A cataract is a defect of the eye that clouds the lens so severely that no clear image reaches the retina. At the beginning of the 20th century, eye surgeons developed a way to correct congenital cataracts. Adults who had been blind from birth could suddenly “see.” Or could they? They had a clear, sharp image falling on their retinas, but they were still functionally blind. They mistook shadows for solid objects; they couldn’t recognize common objects when seen from unusual angles. They had trouble recognizing faces. Learning to read was often intensely difficult. Many gave up and resumed life as a blind person soon after the surgery. In one case, the patient could almost immediately recognize by sight some objects he had learned by touch while he was blind, but recognizing other objects was much more difficult. By itself, an image of an object formed on the retina is inherently ambiguous, since it could be a large object some distance away or a similarly shaped object that is much closer. An image can move across the retina because the object is in motion, the viewer is in motion, or both. The brain does a tremendous amount of processing on that flat, ambiguous image to create the perception of a stable, three-dimensional, emotionally meaningful world.

Clearly, the formation of an image on the retina is just the very beginning of seeing. In a similar way, making an evocative photograph involves a lot more than pointing the camera toward the subject and snapping the shutter—it must be an insightful, deliberate act. The photograph must provide all the visual clues necessary for the image to have impact.


FIGURE 2-2 Sunrise at Tukuhnikivats Arch, Behind the Rocks Wilderness Study Area, near Moab, Utah. I first became aware of this arch when I saw Tom Till’s photograph of it. Till’s photograph made the arch look massive, and I was surprised I’d never heard of it. I was still more surprised when I finally hiked to the arch and discovered that the opening through the arch was so small I had to duck to walk through it. Canon EOS 5D Mark III, Canon EF 16-35mm f/2.8L II USM at 20mm, three-frame bracket set, two-stop bracket interval at ISO 100, images merged using Lightroom Classic’s Photo Merge>HDR utility.

So let’s go back to the initial problem: You go for a hike, see something beautiful, snap the shutter, and the picture is a disappointment. Somehow the experience of viewing the image on your monitor or in a print is not the same as viewing the real thing.

As I analyze it, there are seven ways in which viewing a print differs from viewing reality. Let’s take each in turn.

Depth Perception

We use many clues to figure out where things are in relation to other objects, both in a print and in reality. We use relative size: things look bigger when close than when far away. We use overlap: if an object overlaps and partially obscures another object, the first object must be closer. We use the convergence of parallel lines: this is clearly seen, for example, in the way railroad tracks appear to converge in the distance. We use the pattern of light and shade: for example, we can only distinguish a sphere from a flat circle by the way sidelight reveals the sphere’s three-dimensional form. We use atmospheric perspective: distant objects appear bluer, hazier, and a bit less sharp than closer ones. All of these depth clues operate both in the real world and in a photograph, but two crucial clues do not: binocular vision and motion parallax. Binocular vision simply means that we have two eyes, which see nearby objects from slightly different angles. The image formed by an object on our right retina is therefore slightly different from the image formed on the left. Our visual system fuses those two images and gives us the perception of depth. Motion parallax refers to the way the relative position of two objects changes when we move our heads. For example, two objects that overlapped may no longer overlap when we move and see the scene from a different angle.

Both binocular vision and the lack of motion parallax tell us instantly that a photographic print is flat. If you want to create the illusion of depth, which is usually desirable in a landscape photograph, you must work hard to maximize the remaining depth clues. As Harvard neurobiologist Margaret Livingstone wrote in her book, Vision and Art: The Biology of Seeing, “Artists must look at a three-dimensional scene with their two-dimensional retinas and then generate a two-dimensional painting that appears three-dimensional to viewers who look at it with their two-dimensional retinas.”

Once you’ve had that initial emotional reaction to a scene and you decide to take a photograph, you must slow down and consciously construct an image that will appear to have depth. You can’t assume that the viewer will see depth in your print just because you saw depth while taking the picture. This may seem obvious. “Of course prints are flat!” But it’s all too easy to forget this fact in the excitement of shooting the photo. I’ll talk more about creating a sense of depth in chapter 6.


FIGURE 2-3 Parry primrose below Wetterhorn Peak at sunrise, Uncompahgre Wilderness, Colorado. This image shows many flowers of the same species, which the viewer assumes must be the same size. Since the size of the blooms in the image varies, the viewer assumes the larger ones must be closer, which enhances the sense of depth. Ebony SW-45 field camera, Fujichrome film. Lens and exposure unrecorded.

Limited Dynamic Range

Our eyes can see a range of brightness, from brightest highlights to darkest shadows, that corresponds to about 13 to 14 f-stops. Early DSLRs could record perhaps six stops. More recent DSLRs can register nine stops or more. However, you can only get a range of about five and a half stops from any kind of print, whether inkjet or traditional wet darkroom. As I mentioned in the introduction, one of the fundamental problems in landscape photography is learning how to compress the very broad range of tones we observe in the real world into the much narrower range of tones we can reproduce in a print. You can see rich, colorful detail in both the shadowed flowers at your feet and the glowing clouds at sunset, but your sensor probably can’t. If you don’t take the limited dynamic range of your sensor into account, you may find that your highlights have washed out and your shadows have gone black. We’ll tackle this problem from a variety of directions in chapters 7 and 8.
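
To get a feel for how severe that compression is, remember that each f-stop represents a doubling of the light. Translating those stop counts into contrast ratios (a rough back-of-the-envelope conversion, not a property of any particular camera or paper) gives:

\[
2^{13} \approx 8{,}000\!:\!1 \ \text{(the eye)}, \qquad
2^{9} = 512\!:\!1 \ \text{(a recent sensor)}, \qquad
2^{5.5} \approx 45\!:\!1 \ \text{(a print)}
\]

The print can hold only a small slice of the brightness range the eye takes in, which is why those tones have to be compressed so deliberately.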


FIGURE 2-4 Turks Head and the Green River at sunset, Island in the Sky district, Canyonlands National Park, Utah. My eyes easily saw detail everywhere in this scene, from the intensely bright sky near the setting sun to the deeply shaded canyon walls. My sensor, however, could not straddle such a large brightness difference, so I shot a five-frame bracket set with a two-stop bracket interval at each camera position. I then merged each bracketed set and stitched the resulting HDR files using Lightroom Classic’s Photo Merge>HDR Panorama utility. Canon EOS 5D Mark III, Canon EF 16-35mm f/2.8L III USM at 24mm.

Limited Sensory Input

You can’t smell the flowers in a photograph. When you’re standing there in the field, all your senses are working, not just your vision. You can hear the birds singing and feel the warmth of the sun and the cool freshness of the wind on your face. You can hardly ignore the ache in your legs after hiking for hours to that scenic overlook. The viewer of your print can only use their vision to take in the scene, which means the visual content must be strong enough to convey the emotion you felt while taking the picture with no help from the viewer’s other senses. When you are standing there composing the photograph, you need to consciously block out all the non-visual sensations and ask yourself if the image you see through your viewfinder can create the effect you desire all by itself.

Brightness Constancy

Your visual system has a property called brightness constancy. Brightness constancy is the ability of your brain to see objects as having the same brightness regardless of the level of ambient illumination, so long as the ratio of brightness values in the scene is constant. For example, your eyes see snow as a bright white, or something pretty close to white, regardless of the brightness of the ambient light. Snow looks just as white at midday as it does at dusk.


FIGURE 2-5 The Maroon Bells from Maroon Lake in winter, Maroon Bells-Snowmass Wilderness, Colorado. Zone VI 4×5 field camera, Fujichrome film. Lens and exposure unrecorded.

You never see gray snow in the real world, but it’s quite easy to see gray snow in a photograph. Your eye easily compensates for the varying brightness of the light falling on snow when viewing the real thing, but it will not make the same correction when viewing a print of snow if the snow is underexposed. The reason? Your visual system calibrates itself to the average illumination in the room where you’re viewing the print. For your eye to perceive the snow in the print as being white instead of gray, about four times as much light must be reflected off the snow as off the midtone wood paneling of the wall where the print is hung. In other words, your visual system will perform corrections on the real scene that it will not perform when viewing a print. That gives us a clue how to expose snow and other white subjects: take the lightest tone and make it white. This is another topic we’ll revisit in much greater depth in chapters 7 and 8.
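
To connect that four-to-one ratio to exposure terms, here is a rough calculation; the 18 percent figure is simply the conventional reflectance assumed for a photographic midtone, used as an assumption for illustration:

\[
\log_2 4 = 2 \ \text{stops}, \qquad 4 \times 18\% = 72\% \ \text{reflectance}
\]

In other words, the snow needs to sit roughly two stops above a midtone, near the top of what a print can reproduce, which is the arithmetic behind the common rule of thumb of exposing snow much lighter than a straight meter reading would suggest.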

Color Constancy

As I mentioned in chapter 1, our visual system has a property called color constancy. This property means that our eyes do not see color the same way that a camera’s sensor does. Both sensors and our eyes are sensitive to light with a wavelength between 400 and 700 nanometers. Sensors see color in a perfectly straightforward way: light of 700 nanometers is recorded as red, while light of 400 nanometers is recorded as violet. Light of other wavelengths is recorded as the colors in between. Our visual system does not see color this way. If it did, the colors of objects would appear to shift every time the color of the illumination shifted, from yellowish tungsten light to greenish fluorescent light to red sunrise light to white noon daylight. We tend to accept the overall color of the scene’s illumination as white, regardless of the actual color of light, so we see a white waterfall as white regardless of the color of the illumination.

Color constancy is something to keep in mind when you are photographing a subject that is in the shade on a clear day. The light illuminating those shadows comes from the blue sky. It will give everything in your image a strong bluish cast. Photograph yellow flowers or aspen leaves in the shade on a clear day, and what looked like vivid colors to your eyes in the real world will look much less vivid in your print. Your visual system will color-correct an object in the real world if everything in your field of view is lit by blue light, but it won’t color-correct a photograph of that object because your visual system is already calibrated to the color of the light in the room in which you are viewing the print. If you’re shooting close-ups, you can of course change the white balance setting on your camera to shade or cloudy or do something similar with greater control in your editing software. If you’re shooting grand landscapes, however, where the background peaks or towers are lit by warm sunrise or sunset light and the foreground flowers are in bluish shade, changing the overall white balance may give the sunlit portion of the scene an odd color cast. Blue skies, for example, may shift toward a grayish green. You can, of course, apply a color correction to the shadowed flowers in your editing software, but making the shadowed foreground too warm in tone can also look unnatural.


FIGURE 2-6 Mt. Owen and Ruby Peak in late September, near Kebler Pass, Colorado. Notice how the aspen groves bathed in direct sunlight are much more vivid than the foreground groves, which are lit by blue light from the mostly clear sky. The foreground grove looked more vivid to my eyes than it did to my film. Zone VI 4×5 field camera, Fujichrome film. Lens and exposure unrecorded.

Color constancy can also catch you off guard if you shoot a close-up of purple and blue flowers lit by direct sunrise light. Since everything in your field of view is lit by warm light, your visual system tends to ignore the light’s true color and see the flowers as if they were lit by relatively white light. Purplish-blue columbine can be rendered with such a strong magenta cast that you may wonder if you’ve just discovered a new species.

As a general rule, I set my white balance to daylight so my camera records the actual wavelengths present in the scene. I then choose subjects and lighting conditions keeping color constancy in mind. I will, on occasion, slightly warm up intimate landscapes shot in the shade on a clear day because my eyes saw the color as warmer than my sensor recorded it. In that situation, everything in my field of view is lit by blue light, so my visual system tends to see the light as white rather than blue. However, when shooting grand landscapes lit by various colors of light, such as golden sunset light on the sunlit background and blue sky light on the shadowed foreground, I accept the colors recorded by the camera.

Clutter

We think that we see the world by taking it all in with one big gulp. Indeed, our peripheral vision has an enormous field of view: about 180 degrees left to right and about 130 degrees top to bottom. But that’s not actually how we examine the world. We only see clearly in an extremely limited angle of view because the region of the retina where the receptors are small enough and packed densely enough to see sharply is very small. This region of the retina is called the fovea. Foveal vision has an angle of view of only 1 or 2 degrees. That’s roughly equivalent to a 1000mm to 2000mm telephoto lens. When viewing the world, our eyes fixate on a point of interest for about 300 milliseconds or so, then jump to the next point of interest. These jumps, called saccades, are very fast—perhaps 25 to 45 milliseconds—and no real perception occurs during the movement. Our eyes dart around constantly, pausing briefly at regions of interest and skipping everything else.
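
If you want to check that telephoto comparison, the standard angle-of-view formula for a rectilinear lens relates focal length f, sensor width w, and angle of view θ; assuming the 36mm width of a full-frame sensor purely for illustration:

\[
f = \frac{w}{2\tan(\theta/2)}, \qquad
\frac{36\ \text{mm}}{2\tan 1^\circ} \approx 1030\ \text{mm}, \qquad
\frac{36\ \text{mm}}{2\tan 0.5^\circ} \approx 2060\ \text{mm}
\]

so an angle of view of 1 to 2 degrees really does correspond to roughly a 1000mm to 2000mm lens.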


FIGURE 2-7 Wildflowers along the Blue Lakes trail below the lower Blue Lake, Mt. Sneffels Wilderness, San Juan Mountains, Colorado. My initial impression when I came across this scene was that the whole hillside was covered with wildflowers. Close examination, however, showed that most of the hillside was actually covered with green foliage, with large gaps between each of the blossoms, as shown in the image on the left. After carefully examining the hillside, I chose the densest group of flowers and made the image on the right, which does a much better job of conveying my initial impression of a hillside awash in colorful blooms. Left image: Canon EOS 5D Mark IV, Canon EF 16-35mm f/2.8L III USM at 18mm, 1/40th, f/16, ISO 200. Right image: Canon EOS 5D Mark IV, Canon EF 16-35mm f/2.8L III USM at 33mm, 1/25th, f/22, ISO 200.

Cameras have no such ability. Suppose, for example, that you’re standing at the edge of a field of wildflowers and there’s a magnificent mountain, bathed in sunset light, rising above you. Unless you consciously train yourself to do otherwise, your eyes will jump from flower to flower, skipping over all the greenery in between, and will then jump all the way up to the mountaintop. You may not realize that there are really only a few flowers at your feet and that boring gray talus fills the middle third of your picture. When you look at a print of the same scene, your visual system does not perform a similar decluttering. We still examine the print using saccadic eye movements, but the effect is different. One reason for this may be physiological: when viewing the real world, our eyes have to swing through a large arc to go from the flowers at our feet to the mountain high above. When viewing a typical print from a typical viewing distance, our eyes travel through a much smaller arc, allowing us to observe every detail. Another reason for this may be cultural. When we view a print, it is typically framed and hung on a wall. It is being presented to us as something worthy of close inspection, so we tend to look at it more carefully. Regardless of the reason, it is certainly true that our visual system will skip over clutter in the real world that it will not skip over in a photograph.
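
A rough bit of geometry illustrates the difference between the two arcs; the print size and viewing distance here are assumptions chosen purely for illustration. A print 16 inches tall viewed from 24 inches away subtends a vertical angle of only

\[
2\arctan\!\left(\frac{8\ \text{in}}{24\ \text{in}}\right) \approx 37^\circ,
\]

whereas sweeping your gaze from wildflowers a stride in front of your boots up to a summit high above the horizon can easily cover 90 degrees or more. Everything in the print falls within one small, comfortable arc, so nothing gets skipped.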

Focus

Our eyes focus and refocus so rapidly as we scan a scene that we are rarely conscious of the process. As a result, everything we see normally looks sharp. A carelessly snapped photograph, however, may not have sufficient depth of field to create a convincing illusion of reality. Our eyes cannot correct blurry areas of a photograph and make them look sharp.

Clearly, creating an evocative landscape photograph is not as easy as it first appears. As we’ve all experienced, capturing what you see is easy—just put the camera to your eye and press the shutter release. Capturing what you feel, however, is harder. Hardest of all is capturing what you feel in such a clear and compelling manner that your image causes the viewer to experience the same emotion you felt when you took the photograph. That’s when a photograph becomes art.
