Limits of Perception

In the UK, if I had known how many clear nights there would be in the year, I would have taken up fishing.

The chance of success from a night’s imaging improves with a little planning. Before committing to hours of exposure and precious clear skies, it pays to consider a few preliminaries, the most basic of which is frame size. The combination of telescope and camera should give the object the right emphasis within the frame. There are simple calculations that give the field of view in arc minutes, which you can compare with the object’s size listed in a planetarium program. I have two refractors and two field flatteners, which in combination give four different fields of view (FOV). High-quality imaging takes time, and the next thing is to check whether there is sufficient opportunity to deliver the required imaging time. There are several considerations: the object’s declination, the season, the object’s brightness over the background illumination, the sky quality and, in part, the resolution of the optical / imaging system.

Magnitude

A number of terms loosely describe brightness in many texts, namely, luminosity, flux and magnitude. Luminosity relates to the total light energy output from a star; flux is a surface intensity, which, like an incident light reading in photography, falls off with distance. The brightness or magnitude of a star is its apparent intensity from an observed position. The magnitude of a star or galaxy in relation to the sky background and the sensitivity of the sensor are the key factors that affect the required exposure. Most planetarium programs indicate the magnitude information for any given galaxy and most stars using a simple scale. This will be its “apparent” magnitude.

Apparent Visual Magnitude

Simply put, this is the brightness of a star as it appears to an observer on Earth. Just as with light measurements in photography, astronomical magnitudes are a logarithmic measure, which provides a convenient numerical index. Astronomical magnitudes employ a scale on which an increase of one unit decreases the intensity by about 2.5×, and five units by 2.5⁵, or 100×. At one time, the magnitude scale definition assigned Polaris a magnitude of +2.0, until the discovery that it was actually a variable star! The brightest star (apart from our own Sun) is Sirius at −1.47, and the faintest object observable by the Hubble Space Telescope is about +31, or about 2.4 × 10¹³ dimmer.

A mathematical simplification arises from using logarithmic figures: adding the logarithms of two values a and b is identical to the log of (a × b). This is the principle behind a slide rule (for younger readers, as seen in the movie Apollo 13 when they calculate the emergency re-entry). In astronomy, any pair of objects with the same difference in magnitude has the same brightness ratio. Similarly, if the magnitude limit for visual observation is magnitude 4 and a telescope boosts that by a factor, expressed in magnitude terms, of say 5, the new magnitude limit is 9. A visually large object, such as a galaxy, will not appear as intense as a star of the same magnitude, as the same light output is spread over a larger field of view.
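This magnitude arithmetic is easy to verify numerically. The following minimal Python sketch (the function name is mine, for illustration) converts a magnitude difference into a brightness ratio:

```python
def brightness_ratio(delta_mag):
    """Intensity ratio corresponding to a magnitude difference.

    One magnitude step is a factor of 100**(1/5), about 2.512,
    so five magnitudes is exactly 100x.
    """
    return 100 ** (delta_mag / 5)

# Five magnitudes is a 100x brightness ratio
print(round(brightness_ratio(5)))  # 100

# Sirius (-1.47) compared with a magnitude +4 naked-eye limit
print(round(brightness_ratio(4 - (-1.47))))
```

Because the scale is logarithmic, magnitude differences simply add, while the underlying intensity ratios multiply.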

The table in fig.1 sets out the apparent magnitude scale with example objects and the number of stars that reach each magnitude. It also indicates the limitations imposed by the sensitivity of the human eye under typical light pollution, as well as the exponential growth in the number of stars at fainter magnitudes. Further down the table, the practical benefit of using a telescope for visual use can be seen, and that improves further still when a modest exposure onto a CCD sensor replaces the human eye. At the bottom of the table, the limit imposed by light pollution is removed by space-borne telescopes, whose sensors can see to the limits of their electronic noise.

The Advantage of Telescopes

A telescope has a light-gathering advantage over the eye, easily imagined if we think of all the light pouring into the front of a telescope compared to that of the human iris. The advantage, for a typical human eye with a pupil size of 6 mm, in units of magnitude is:

magnitude gain = 2.5 × log₁₀ (D / 6)² = 5 × log₁₀ (D / 6)   (D in mm)

In the conditions that allow one to see magnitude 5 stars, a 6-inch (15 cm) telescope will pick out magnitude 12 stars, and with an exposure of less than 1 hour, imaged with a cooled CCD, stars 250x fainter still, at magnitude 18 in typical suburban light pollution.
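The light-grasp advantage is straightforward to compute. A short Python sketch of the same calculation, assuming the 6-mm pupil used in the text (the function name is mine):

```python
import math

def magnitude_gain(aperture_mm, pupil_mm=6.0):
    """Light-grasp advantage of an aperture over the eye, in magnitudes.

    Collecting area scales with diameter squared, and 2.5 * log10 of an
    intensity ratio converts it to magnitudes:
    gain = 2.5 * log10((D/d)**2) = 5 * log10(D/d)
    """
    return 5 * math.log10(aperture_mm / pupil_mm)

# A 6-inch (150 mm) telescope gains about 7 magnitudes over a 6-mm pupil,
# turning a magnitude 5 sky limit into roughly magnitude 12.
print(round(magnitude_gain(150)))  # 7
```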

visibility | apparent magnitude | # objects brighter | example / notes
human eye, urban sky | −1 | 1 | Sirius (−1.5)
 | 0 | 4 | Vega
 | 1 | 15 | Saturn (1.5)
 | 2 | 50 | Jupiter (−2.9 to −1.6)
 | 3 | <200 | Andromeda Galaxy (3.4)
human eye, dark sky | 4 | 500 | Orion Nebula (M42)
 | 5 | 1,600 | Uranus (5.5–6.0)
 | 6 | 4,800 | Eagle Nebula (M16)
binoculars, 50-mm aperture | 7 | 14,000 | Bode’s Nebula (M81)
 | 8 | 42,000 | Crab Nebula (M1)
 | 9 | 121,000 | M43 Nebula in Orion
typical visual, 8-cm aperture | 10 | 340,000 | NGC4244 Galaxy
 | 11 | – | Little Dumbbell (M76)
typical visual, 15-cm aperture | 12 | – | beyond Messier Catalog
 | 13 | – | Quasar 3C 273
typical visual, 30-cm aperture | 14 | – | Galaxy PGC 21789 nr. Pollux
 | 15 | 20,000,000 | IC 4617 Galaxy nr. M13
10-cm refractor, CCD, 10 × 30 seconds, suburban sky | 16 | – | faint star in image with simple stacking, about 20,000 times more sensitive than by eye alone
10-cm refractor, CCD, 10 × 300 seconds, suburban sky | 18 | – | faint star in image with simple stacking, about 1,000,000 times more sensitive than by eye alone
suburban sky | 20 | – | typical background magnitude in suburbs
Hubble Space Telescope | 31 | – | galaxies 13.3 billion light-years distant

fig.1 This table highlights the limits of perception for the aided and unaided eye over a range of conditions and indicates the number of objects within that range. The advantage of CCD imaging over an exposure of 5–50 minutes is overwhelming. For Earth-based imaging, the general sky background and noise, indicated by the shading, will eventually obscure faint signals, from about magnitude 18 in suburban areas. The Hubble Space Telescope operates outside our atmosphere and air pollution, and at its limit can detect magnitude 31 objects. Its sensitivity is approximately 150,000 times better than an amateur setup. It is important to note that magnitude, when applied to a large object, such as a nebula or galaxy, applies to the total amount of light being emitted. Two galaxies of the same magnitude but different sizes will have different intensities and will require a different exposure to have equal pixel values.

Absolute Magnitude

This is a measure of an object’s intrinsic electromagnetic brightness and, when evaluated in the visual wavelength range, is termed absolute visual magnitude. Photographers are aware that the intensity of a light source reduces with distance; for instance, the light intensity from a flashgun obeys the inverse-square law (for each doubling of distance, the light reduces by 4×). The same is true of cosmic light sources. Absolute magnitude is defined as the apparent magnitude an object would have if measured from a fixed distance of 10 parsecs. (Since meteors and asteroids are very dim compared to the nuclear furnace in a star, they use magnitude definitions set at 100 km and 1 AU distance respectively.) Absolute magnitude is of most interest to scientists, especially in the computation of an object’s distance. For astrophotographers, the apparent magnitude from Earth is more useful, and for amateur supernova hunting, significant changes in a star’s magnitude, compared to the star’s standard photometry, indicate a possible discovery.

Optics

Advertising and consumer pressure tempt us to over-indulge in telescope purchases for astrophotography. There are many optical and physical properties that distinguish a “good” telescope from a “bad” one and, just as with any other pursuit, knowing what is important is the key to making the correct purchasing decision. In the case of resolution, the certainty of optical performance, backed by physical equations, is beguiling for an engineer, and I frequently have to remind myself that these figures are only reached under perfect atmospheric conditions (which I have yet to encounter). The needs of the visual observer and astrophotographer are different too, since the human eye has a higher resolution than a conventional sensor (though with less sensitivity). Expensive apochromatic refractors focus all wavelengths of light at the same point, a quality valued by visual users or those imaging with a color camera. It has less significance if separately focused exposures are taken through narrowband or individual red, green or blue filters and combined during image processing.

Astrophotography has similarities to any other kind of photography; the final image quality has many factors and the overall performance is a combination of all the degradations in the imaging chain. It is easy to misinterpret the image and blame the optics for any defects. Long before digital cameras were popular, the premium optics, from companies such as Leica and Carl Zeiss, had more resolution than could be recorded on fine grain film. If a lens and a film independently have a resolution of 200 line pairs per millimeter (lp/mm), the system resolution is closer to 140 lp/mm. At the advent of digital photography, it was not uncommon to find self-proclaimed experts conducting a lens test using a digital body with a sensor resolution of just 50 lp/mm, half that of a typical monochrome film! It was amusing and annoying at the same time. The sensor plays a pivotal role in the final resolution achievable in astrophotography (just how much we discuss later on).

Resolution?

In photography, many amateurs and not a few professionals confuse resolution and sharpness. They are not completely unrelated, but in an image they convey very different visual attributes. In simple terms, resolution is the ability to discern two close objects as separate entities. Photographic resolution tests often use alternating black and white lines in various sizes and orientations; astronomers use, not surprisingly, points of light. The common lp/mm resolution measure used in photography does not relate well to celestial object separations defined by angles. For that reason astronomers quote angular resolution, measured in arc seconds or radians.


fig.2 This shows a simulated diffraction-limited star image and a profile of its intensity. The measure FWHM, which can be an angular measure or a dimension on an image or sensor, is read at the point where the image intensity is 50% of its peak value.

Post-exposure image manipulation cannot restore lost image resolution, but image sharpness can be increased later on using photo software. (Mild sharpening of an image may improve the actual perceived resolution of some coarser image details but often at the same time bludgeons delicate detail.) Image sharpness has no agreed measure but is our perception of contrast between adjacent light and dark areas, especially in the transition area.

The following example illustrates the difference between resolution and sharpness: on my journey into work, there is a string of electricity pylons and power lines across the horizon. In the distance, I can clearly see the lines slung between the pylon arms, and each line appears “sharp” in the clear morning air. As I draw closer, my eyes resolve these lines as pairs of electrical conductors, several inches apart; that is resolution.

This is also a useful analogy for astrophotography: in many images the stars appear randomly sprinkled with plenty of space between them and, like the power lines on the horizon, we do not necessarily require high resolution to see them, only contrast. Continuing the analogy, we do not require high resolution to appreciate the clouds behind the pylons; in the same sense, images of nebulae and galaxies have indistinct object boundaries, and many popular nebulae span a wide angle, requiring neither high optical magnification nor resolution. In these cases, not only are the seeing and optical resolution often better than the resolution of the sensor, but the long exposure times required for dim deep-sky objects often over-expose foreground stars, which bloat from light scatter along the optical path, destroying optical resolution. On the other hand, a high angular resolution is required to distinguish individual stars in globular clusters and double stars.

Many optical equations use radians for angular measure. Radians have the property that, for very small angles, the sine or tangent of an angle is the same as the angle expressed in radians. This provides a handy way to simplify formulae for practical use. There are 2π radians in 360 degrees (about 206,265 arc seconds per radian) and, as most of us are more familiar with degrees, nanometers and millimeters than with radians and meters, you will encounter the number 206 in varying powers of 10 in the more convenient equations that convert angular resolution into arc seconds.

Resolution, Diffraction Limits and FWHM

Although a star is a finite object, it is so distant that it should focus to an infinitesimally small spot in an image. Due to diffraction, and even with perfect optics, it appears as a diffuse blob with pale circular bands. The brightest part of the blob is at its center, and a measure of its blobbiness is the diameter at which its intensity is half its peak value (fig.2). This defines the Full Width Half Maximum, or FWHM for short, which often appears as an information call-out in image capture and focusing programs. (Most focus algorithms assume the optimum focus occurs when a star’s FWHM, or a related measure, the Half Flux Diameter, is at a minimum.) The minimum FWHM (in radians) of a point image is dependent upon the wavelength λ and aperture D by the equation:

 

FWHM ≈ 1.02 × λ / D   (in radians, with λ and D in the same units)

 

The same physics of light diffraction limits our ability to distinguish neighboring stars and is similarly dependent upon the aperture and wavelength. The theoretical resolution determined by Lord Rayleigh, referred to as the Rayleigh Criterion, is shown below, for resolving two close objects, in radians, through a circular aperture D and wavelength λ:

 

θ = 1.22 × λ / D   (in radians, with λ and D in the same units)

 

Conveniently, both the FWHM and Rayleigh Criterion have very similar values and can be treated as one and the same in calculations.

Either equation can be made more convenient by expressing the angular resolution in arc seconds:

 

θ (arc seconds) = 206,265 × 1.22 × λ / D ≈ 138 / D   (for green light, λ = 550 nm, and D in mm)

 

The interesting feature of these equations is that resolution improves with aperture and, significantly, is independent of the focal length or magnification. The simulated image sequence in fig.3 shows two equal-intensity stars at different degrees of separation. The point at which the two blobs are distinguished occurs when the peaks are at the Rayleigh Criterion distance, or approximately one FWHM, apart. These equations apply to a simple refractor; telescopes with central obstructions have more diffraction for the same aperture.
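The arc-second form of the equation is easy to evaluate. This Python sketch computes the Rayleigh Criterion for green light; the 206,265 constant converts radians to arc seconds (the function name is mine):

```python
ARCSEC_PER_RAD = 206265  # arc seconds in one radian

def rayleigh_arcsec(aperture_mm, wavelength_nm=550):
    """Rayleigh criterion (1.22 * lambda / D) converted to arc seconds."""
    theta_rad = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    return theta_rad * ARCSEC_PER_RAD

# For green light this reduces to the rule of thumb ~138 / D(mm)
print(round(rayleigh_arcsec(100), 2))  # a 100-mm aperture resolves ~1.38 arcsec
print(round(rayleigh_arcsec(132), 2))  # a 132-mm refractor resolves ~1.05 arcsec
```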

Astronomical Seeing

Astronomical seeing is an empirical measure of the optical stability of our atmosphere. Turbulence causes rapid (~10–30 ms), localized changes in air density and parallel beams are deviated through refraction. Astronomers look through about 20 miles of atmosphere (looking straight up) and double that, closer to the horizon. Turbulence makes stars shimmer or blur when viewed through a telescope. At any one time the light beams pass through adjacent small air pockets (a few centimeters across) with different refractive indices. This occurs mostly in the denser air near the ground or from tiny convection currents within the telescope tube. At high magnifications, typical with planetary imaging, the individual video frames jump about the screen, some are badly blurred and others remarkably sharp. When the light path has a consistent refractive index, a frame is sharp and when it is variable, blurred. During longer exposures, the photons from these sharp, blurred and displaced images accumulate onto the sensor, creating a smeared star image. Seeing affects angular resolution and it is measured in the same units (arc seconds). Astronomical forecasts of seeing conditions around the Earth are available from websites, some examples of which are shown in figs. 4 and 5 opposite. Others include “metcheck” and various applications for mobile devices such as Scope Nights. Humidity and pressure readouts from portable devices also help to predict atmospheric transparency and mist.


fig.3 These three simulated illustrations show diffraction-limited images from two identical stars at different angular separations. The profiles underneath show the separate and combined image intensities. The image on the left has the two stars separated by half the FWHM distance and although it is oblong, the two stars are not distinguishable. The middle image, at the Rayleigh Criterion separation, shows a clear distinction and exhibits a small dip in central intensity. This separation is just a little larger than the FWHM width of a single star. The right-hand image shows clear separation at a distance of 1.5 FWHMs apart.

For a prime site, a seeing condition of 0.5 arc seconds is possible but values in the range of 1.5–3.5 are more typical for most of us. More often than not, the prevailing seeing conditions will limit the resolution for any given telescope. The table in fig.6 shows the theoretical limits of visible light resolution for several common amateur telescope sizes in relation to typical seeing conditions. It is quite sobering to realize the limitation imposed by typical seeing conditions through the atmosphere is equivalent to a telescope with an aperture in the region of 3 inches (75 mm).

Seeing conditions are particularly sensitive to perturbations in the dense atmosphere closest to Earth, and generally improve with altitude and proximity to large expanses of water (due to the moderating effect on thermal generation). The mountain observatories in Hawaii and the Canary Islands are good examples of prime locations. Seeing conditions also change with the season and the amount of daytime heating. The local site has an immediate bearing too; it is better to image in a cool open field than over an expanse of concrete that has received a day’s sunshine. Astronomers choose remote sites not to be anti-social; they just need to find high altitude, clear skies, low light pollution and low air turbulence. Knowing and predicting the prevailing conditions is a key part of our day-to-day activity. In some countries especially, each opportunity is a precious one.


fig.4 This screen capture is of a typical clear sky chart for a site in North America. It is available from www.cleardarksky.com


fig.5 Another forecast site, this time from First Light Optics

Other Atmospheric Effects

We become increasingly aware of light pollution as soon as we take up astronomy. As we have seen earlier, light pollution masks the faint stars and nebulae. This light is scattered back from atmospheric aerosols, dust and water vapor and places a (typically) orange-yellow fog over proceedings. A full moon, too, has a surprisingly strong effect on light pollution and puts a damper on things. After it has been raining, the air is often much cleaner and the effects of light pollution are slightly reduced due to better atmospheric transparency. Atmospheric transparency can be forecast and is included in the readouts in figs. 4 and 5, along with dew point, moon phase, humidity and wind.

Why then do so many sources recommend buying the largest affordable aperture and that “aperture is king”? Larger apertures technically have the potential for better resolution, but above all, capture more light. For visual use, the extra aperture is the difference between seeing a dim galaxy or not. For imagers, the extra light intensity delivers an opportunity for shorter exposure times or more light captured over a fixed period, which reaps benefits in sleep deprivation and lower image noise.

Not all telescope designs are equal; there are subtle differences between the optical performance of the various telescope architectures, and the more complex designs have additional losses in transmission, reflection and diffraction at each optical boundary. For any given aperture, refractors are the simplest optically and have the highest image contrast, followed by Newtonian reflectors and then folded designs which, for the moment, we will collectively call Schmidt-Cassegrains, or SCTs. On top of the limitations of the optical path, vibration, flexure, focus shifts, tracking accuracy and atmospheric effects all contribute to the blurring of the eventual star image. If it were easy, it would not be nearly as rewarding or half as much fun!


fig.6 The chart above indicates the diffraction-limited resolution for visible light, in arc seconds, for any given aperture in relation to the limits imposed by typical seeing conditions.

Imaging Resolution

The term sensor describes the light sensitive device that resides in a camera. Digital sensors are quite complex and they have their own chapter. For now, a single light-sensitive element on the sensor, or photosite, corresponds to a pixel in the image. It converts photons into an electrical signal. This signal is amplified, sampled and stored as a digital value. An imaging sensor has a grid of photosites of fixed pitch, typically in the range of 4–7 microns. The photosites simply accumulate electrons, triggered by incident photons and largely irrespective of wavelength. To make a color “pixel” requires a combination of exposures taken through red, green and blue filters. This can either be achieved from separate exposures taken through a large filter placed in front of the entire sensor or a single exposure through a color filter mosaic fixed over the sensor (Bayer array). Astrophotographers use both approaches, each with benefits and drawbacks and these are discussed later on.

The pitch of the photosite grid has a bearing upon image resolution. Up to now, the discussion has revolved around angular resolution. To consider the physical relationship between angular and linear resolution on an imaging sensor, we need to take account of the focal length fL of the optics.

The angle subtended by 1 pixel (arc seconds per pixel) is given by the following simplified equation from basic trigonometry (fL in mm):

 

resolution (arc seconds per pixel) = 206.265 × pixel size (µm) / fL (mm)

 

Classical (Nyquist) sampling theory might suggest two pixels are required to resolve a pair of stars, but experts settle on a number closer to 3.3 adjacent pixels to guarantee the resolution of two points. (Stars do not always align themselves conveniently with the sensor grid, so we must consider all orientations; the pixel spacing on the diagonal is 40% larger than along the grid axis.) The angular resolution of a CCD is therefore 3.3× its arc seconds/pixel value and changes with the focal length of the optics.
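Putting the image-scale equation and the 3.3-pixel sampling factor together, a small Python sketch (the function names are mine, for illustration):

```python
def arcsec_per_pixel(pixel_um, focal_length_mm):
    """Image scale: the angle subtended by one photosite.

    206.265 converts the small-angle ratio (microns over millimeters)
    into arc seconds.
    """
    return 206.265 * pixel_um / focal_length_mm

def ccd_resolution(pixel_um, focal_length_mm, factor=3.3):
    """Sensor angular resolution, assuming ~3.3 pixels to separate two stars."""
    return factor * arcsec_per_pixel(pixel_um, focal_length_mm)

# 5.4-micron pixels behind 924 mm of focal length (the example used later)
print(round(arcsec_per_pixel(5.4, 924), 2))  # ~1.21 arcsec/pixel
print(round(ccd_resolution(5.4, 924), 1))    # ~4.0 arcsec
```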

It is interesting to compare this to the diffraction limit and calculate the equivalent pixel pitch for the same resolution:

 

pixel size (µm) = 1.22 × λ (nm) × fL / (3.3 × 1,000 × D)   (fL and D in mm)

 

This simplifies, assuming green light to:

 

pixel size (µm) ≈ fL / (5 × D) = f-ratio / 5

 

In the case of my refractor, it has a measured focal length of 924 mm and an aperture of 132 mm and I use it with a sensor with a pixel pitch of 5.4 microns. The telescope has a diffraction-limited resolution (x) of approximately 1.0 arc second, but the sensor’s resolution (y) is 4.0 arc seconds. For the CCD to match the diffraction-limited performance of the optics, it would require a smaller pitch of 1.4 microns. That might look quite damning but there is another consideration, the effect of astronomical seeing: The CCD resolution of 4.0 arc seconds is only marginally worse than typical seeing conditions (z) of say 3.0 arc seconds in a suburban setting. The system resolution is a combination of all the above and defined by its quadratic sum:

 

system resolution = √(x² + y² + z²)

 

The system resolution is a combination of the individual values and is always worse than the weakest link in the imaging chain on its own. This resolution is further degraded by star-tracking errors, which can be significant during unguided exposures. (Guided exposures in a well-adjusted system typically have less than one arc second of error.)

To sum up, in this typical setup, the telescope’s optical diffraction has little influence on the final resolution and more surprisingly, the CCD is the weakest link. The seeing and CCD resolution are similar though, and while sensors with a finer pitch can be used (albeit with other issues), in astrophotography the most difficult thing to change is one’s environment. All these factors are weighed up in the balance between resolution, image requirements, field of view, signal strength, cost and portability. The conventional wisdom is to have a CCD whose arc seconds/pixel value is about 1/3rd of the limiting conditions; either the seeing condition or the diffraction limit (normally on smaller scopes).
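The quadratic-sum calculation for this example system can be sketched as follows, using the diffraction, sensor and seeing values from the text:

```python
import math

def system_resolution(*components_arcsec):
    """Combine independent blur contributions (all in arc seconds).

    The quadratic sum is always larger (i.e. worse) than the biggest
    single contributor, but dominated by it.
    """
    return math.sqrt(sum(c * c for c in components_arcsec))

# diffraction ~1.0", sensor sampling ~4.0", seeing ~3.0"
print(round(system_resolution(1.0, 4.0, 3.0), 1))  # ~5.1 arc seconds
```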

Dynamic Range

In the early days of digital imaging, many wedding photographers preferred the results from color negative film as it captured a larger brightness range than a digital camera. In effect, what they were saying was that film could distinguish a higher ratio of light levels between highlights and shadows. Comparing analog and digital systems, however, is not easy; there is more at play here than just the difference in light levels; there is tonal resolution too, and this is where film and digital sensors are very different.

Astrophotography is particularly demanding on the dynamic range of a sensor. In any one image, there may be bright stars, very dim clouds of ionized gas and somewhere in the middle, brighter regions in the core of a galaxy or nebula. The Orion Nebula is one such subject that exceeds the abilities of most sensors. Extensive manipulation is required to boost the dim clouds, maintain good contrast in the mid range and at the same time emphasize the star color without brightening them into white blobs. For this to be a success, the original image data needs to not only record the bright stars without clipping but also to capture the dim elements with sufficient tonal resolution so that they both can withstand subsequent image manipulation without degradation.

In one respect, the dynamic range of a sensor is a simple measure of the ratio of the largest signal to the smallest, expressed either as a ratio or in decibels (dB), calculated by:

dB = 20 × log₁₀ (light ratio)

In photography, dynamic range is related to bit depth, which is measured by the number of binary digits output from the sensor’s analog-to-digital converter (ADC). A 16-bit ADC has 2¹⁶ voltage levels, over 65,000:1. In practice, this is not the whole story: firstly, many sensors require less than one electron to change the output value, and then there is image noise. The other fly in the ointment is that sensors are linear devices and we are accustomed to working in logarithmic units; in a digital image, there are fewer signal levels per magnitude at the dark end than at the bright end of the exposure scale.
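The decibel formula is easily evaluated; for instance, for the full span of a 16-bit ADC (a sketch of the arithmetic only, not a noise model):

```python
import math

def dynamic_range_db(light_ratio):
    """Dynamic range expressed in decibels: 20 * log10(ratio)."""
    return 20 * math.log10(light_ratio)

# A 16-bit ADC spans 2**16 = 65,536 levels
print(round(dynamic_range_db(2 ** 16), 1))  # ~96.3 dB
```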

Full Well Bit Depth and Noise

Just as with system resolution, there is more to dynamic range than a single measure. First there is bit depth: consumer cameras record JPEG files, which are made with three 8-bit values, one each for red, green and blue. That is just 256 levels per color channel, even though the sensor may have more detailed information, typically 16–32× better. Color fidelity in astrophotography is not a primary concern, but low color noise and fine tonal resolution in the shadow areas are essential qualities in the original downloaded file. Image capture programs obtain the maximum bit depth possible from the sensor, similar to the RAW file formats found on advanced digital cameras. For example, the Kodak KAF8300M monochrome sensor has a 16-bit readout and one might assume that it has 65,000 output levels. There is a snag: this sensor only requires 25,000 electrons to saturate (max out) a photosite. In reality, it has 25,000 states, equivalent to just over 14-bit. This number of electrons is known as the Full Well Capacity and varies between sensor models. This is not the end of the story; random electrons introduced during the sensor readout process further reduce the effective dynamic range. This concept is a little difficult to appreciate, but noise reduces a sensor’s effective dynamic range and sets a minimum exposure requirement.

The previous formula sets the effective dynamic range. In the case of the KAF8300M sensor, the read noise is typically 7 electrons, yielding a dynamic range of 3,570:1 or about 12-bit. The 16-bit ADC in the sensor has sufficient resolution above the signal resolution to ensure that it does not introduce sampling noise, which is of more concern in high-quality audio reproduction.

We also need to go back and consider the linear nature of imaging sensors: starting at a data level of 1, successive doublings of light intensity (think aperture stops on a camera lens) produce pixel values in the sequence 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 and so on. At the dark end, there are no intermediate values and the tonal resolution is 1 stop. At the other extreme, there are 256 values between 256 and 512, giving a tonal resolution of 1/256 stop. Fortunately, in conventional photography the human eye can discriminate smaller density changes in print highlights than it can in shadow areas, by a factor of about 5×, and thankfully the highlights are the key to any photograph.

The challenge of astrophotography is that much of the interesting information is not only invisible to the visual observer but has to be boosted out of the shadows through intense image manipulation. The astrophotographer needs all the tonal resolution they can muster in the shadow regions. The numbers tell a worrying story too: How can smooth images be stretched out of a signal with only 3,570 data levels? The answer is that it is highly unlikely from a single image exposure. We are jumping ahead of ourselves a little but it is good to know that the astrophotographer has two ways to significantly improve the dynamic range of an image. We will discuss again in more detail later on but for now, here is a preview.


fig.7 This enlarged section of a single CCD sensor bias frame shows the read noise from the sensor. It has a standard deviation of 22, that is 68% of all values are in the range of ±22. Each electron is 2.6 units, so the read noise is 8.5 electrons.


fig.8 This enlarged section of the average of 50 CCD sensor bias frames shows the read noise from the sensor. It has much less randomness and has a standard deviation of 4, a little higher than the predicted value of 3.1 due to other artefacts.

Improving Dynamic Range

The wonderful thing about noise is that it is mostly random. If you flip a coin enough times, the number of heads or tails will be about the same. The same is true with astrophotography. If an image pixel wants to be 1,001.67 units, successive exposures from the sensor will be either 1,001 or 1,002 in the main or occasionally numbers further out, due to noise. Assuming an even noise distribution, the values of 1,002 or higher will occur twice as often as the value 1,001 or lower in subsequent exposures. If many (aligned) images have their pixel values averaged, the math can achieve an intermediate level, close to the true noiseless value.
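This averaging behavior is easy to demonstrate with a simulated pixel; the numbers below (the true level and the noise figure) are illustrative only:

```python
import random

random.seed(42)  # for a repeatable run
true_level = 1001.67  # hypothetical noiseless pixel value
read_noise = 8.5      # electrons of random noise per exposure

# Each simulated exposure digitizes the true level plus Gaussian read noise
exposures = [round(random.gauss(true_level, read_noise)) for _ in range(200)]

# Individual frames can only take integer values near the true level...
print(min(exposures), max(exposures))

# ...but their average recovers an intermediate level close to the truth
average = sum(exposures) / len(exposures)
print(round(average, 2))
```

The more frames are averaged, the closer the mean converges on the true non-integer value.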

With an increasing number of averaged exposures of the same subject, the random noise in the image is reduced and the Signal to Noise Ratio (SNR) is improved by:

 

SNR improvement = √N, where N is the number of averaged frames

 

For example, if 10 samples are taken with the KAF8300M sensor, the read noise is reduced to about 2.2 electrons and the dynamic range is closer to 25,000 / 2.2 = 11,363:1 (equivalent to a bit depth between 13 and 14-bit).
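The stacking arithmetic for the KAF8300M example can be sketched as (function names are mine):

```python
import math

def stacked_read_noise(read_noise_e, n_frames):
    """Random read noise falls with the square root of the frame count."""
    return read_noise_e / math.sqrt(n_frames)

def dynamic_range(full_well_e, noise_e):
    """Return dynamic range as a ratio and as equivalent bits."""
    ratio = full_well_e / noise_e
    return ratio, math.log2(ratio)

# 25,000 e- full well, 7 e- read noise, a stack of 10 frames
noise10 = stacked_read_noise(7, 10)
ratio, bits = dynamic_range(25000, noise10)
print(round(noise10, 1))  # about 2.2 electrons
print(round(ratio))       # roughly 11,300:1
print(round(bits, 1))     # between 13 and 14 bits
```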

It is standard practice to combine (integrate or stack) multiple images to improve the image SNR: to boost shadow tonal resolution and reduce noise, to enhance faint nebulosity and extend galaxy perimeters, as well as to remove one-off events such as cosmic rays and plane trails.

The second trick is to cheat! For those of you who are familiar with advanced digital photography, you will likely know that a subject with a high dynamic range can be captured by combining long and short exposures of the same scene that have been individually optimized for their shadow and highlight regions. Photographers can use a fancy Photoshop plug-in to combine them but I’m not a fan of its unnatural smudgy ethereal look. In astrophotography, however, it is considerably easier to disguise the boundaries since there are fewer bright mid-tones in an image.

In practice, a simple combination of optimized exposure sequences for bright stars, bright nebulosity and dim nebulosity will improve the dynamic range of an image. The grouped images are aligned and averaged and then the (three) stacked images are aligned and selectively combined, for instance using Photoshop. Here, each image is assigned to a layer and each is blended using a mask generated from inverted image data. The result is a photo-fit of bright stars, bright nebulosity and dim nebulosity with a dynamic range many times greater than the imaging sensor. The masks are tuned to ensure smooth transitions between the image data in the three layers. This trick is not always required but there are a few objects, the Orion Nebula being one, where it is a helpful technique to capture the entirety of its huge brightness range with finesse. Using a combination of exposure lengths, the dynamic range is extended by the ratio of the longest and shortest time, as much as 100x or about 5 magnitudes. The same is true for imaging the enormous Andromeda Galaxy (M31), although less obvious. The core of the galaxy saturates a CCD sensor in under 2 minutes but the faint outer margins require 10 minutes or longer to bring out the details. (The processing details for M31 appear in one of the first light assignment chapters.)
