Sensors and Exposure

Understanding how sensors work and their real-world limitations is key to achieving high-quality images.

Sensors, exposure and calibration are inextricably linked; it is impossible to explain one of these without referencing the others. Electronic sensors are the enabler for modern astrophotography and without them it would be a very different hobby. Putting the fancy optics and mounts to one side for a moment, it is a full understanding of the sensor and how it works (or not) that shapes every imaging session. We know that astrophotographers take many exposures, but two key questions remain: how many, and for how long? Unlike conventional photography, the answer is not a simple meter reading. Each session has a unique combination of conditions, object, optics, sensor and filtering, and each requires a unique exposure plan. A list of instructions without any explanation is not that useful; it is more valuable to discuss exposure after we understand how sensors work, the nature of light and how to make up for our system's deficiencies. Some of that involves the process of calibration, which we touch upon here but which also has its own chapter later on. The discussion will get a little technical but it is essential for a better understanding of what we are doing and why.

Sensor Noise

Both CMOS and CCD sensors convert photons into an electrical charge on the individual photosites and then use complicated electronics to convert the accumulated electrical charge into a digital value that can be read by a computer. Each part of the process is imperfect and each imperfection affects our image quality. The conversion process and some of these imperfections are shown in fig.1. Looking at this, it is a wonder that sensors work at all. With care, however, we can control these imperfections to acceptable levels. Working systematically from input to output, we have incident light in the form of light pollution and the light from a distant object passing through the telescope optics. The light fall-off from the optics and the dust on optical surfaces will shade some pixels more than others. The photons that strike the sensor are converted and accumulated as electrons at each photosite. It is not a 1:1 conversion; it is dependent upon the absorption of the photons and their ability to generate free electrons. (The conversion rate is referred to as the Quantum Efficiency.) During the exposure, electrons are also being randomly generated thermally; double the time, double the effect. Since this occurs without light, astronomers call it dark current. These electrons are accumulated along with those triggered by the incident photons. The average dark current is also dependent on the sensor temperature and approximately doubles for each 7°C rise. (By the way, you will often see electrons and charge discussed interchangeably in texts. There is no mystery here; an electron has a tiny mass of 9 × 10⁻³¹ kg and is mostly characterized by its charge of 1.6 × 10⁻¹⁹ coulombs.)

When the overall charge is read by an amplifier, there is no way to tell whether the charge is due to dark current, light pollution or the light from a star. The story is not over; each pixel amplifier may have a slightly different gain and it will also introduce a little noise. For simplicity, we have gathered all the noise mechanisms within the electronic circuits and given them the label "read noise". Finally, the amplifier's output voltage is converted into a digital value that can be read by a computer. (The gain of the system is calculated from the number of electrons required to increase the digital count by one.) The process that converts the voltage to a digital value has to round up or down to the nearest integer. This small error in the conversion is called quantization noise, and it becomes noticeable when a faint signal in a low-level image area undergoes extreme stretching to increase its contrast. As luck would have it, the techniques we use to generally minimize noise also reduce quantization noise.


fig.1 This simplified schematic shows the principal signals and sources of error in a sensor and its associated electronics at the pixel level. Understanding how to minimize their effects is key to successful astrophotography. A deliberate omission in this diagram is the effect of the random nature of photons striking the pixel. This gives rise to shot noise and is discussed at length in the main text.

 

The random (shot) noise level is defined as a statistical range around the average signal value, within which 68% of the signal values occur. This range is known as one Standard Deviation, or 1 SD. All signals, whether from a deep sky object or general sky glow, have a noise level (1 SD) that happens to be equal to the square root of the mean signal level. With this rule, we can easily calculate the signal to noise ratio for any signal level.

Mathematically speaking, if photons strike a sensor at an average rate of 100 per second, then in one second:

$$\mathrm{SNR} = \frac{\mathrm{signal}}{\mathrm{noise}} = \frac{100}{\sqrt{100}} = 10$$

In 100 seconds (or the average of ten 10-second exposures):

$$\mathrm{SNR} = \frac{10{,}000}{\sqrt{10{,}000}} = 100$$
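These figures are easy to confirm numerically. The following minimal sketch (assuming NumPy; the trial counts are illustrative) simulates both cases with a Poisson photon stream and measures the scatter directly:

```python
import numpy as np

rng = np.random.default_rng(42)
rate = 100                                   # mean photons per second

# 100,000 trials each of a 1-second and a 100-second exposure
one_second = rng.poisson(rate, size=100_000)
hundred_seconds = rng.poisson(rate * 100, size=100_000)

for label, samples in (("1 s", one_second), ("100 s", hundred_seconds)):
    mean, sd = samples.mean(), samples.std()
    print(f"{label}: mean={mean:.0f}  1 SD={sd:.1f}  "
          f"sqrt(mean)={np.sqrt(mean):.1f}  SNR={mean / sd:.1f}")
# Expect an SNR of ~10 for one second and ~100 for a hundred seconds.
```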

Light and Shot Noise

Over many years, scientists argued whether light was a wave or a particle. Einstein's great insight was to realize it is both. In our context, it is helpful to think of light as a stream of particles. The more particles, or photons, per second, the brighter the light. We see light as a continuous entity but in fact, the photons that strike our eyes or a sensor are like raindrops on the ground. Whether it is raining softly or hard, the raindrops land at random intervals and it is impossible to predict precisely when the next raindrop will fall, or where. All we can reliably determine is the average rate. The same applies equally to light, whether arising from light pollution or from the target star. Any exposure of a uniformly lit subject, with a perfect sensor that introduces no noise of its own, will have a range of different pixel values, distributed around a mean level. This unavoidable randomness has no obvious work-around. The randomness in the pixel values is given the term shot noise. If you pause to think about it, this is quite a blow; even a perfect sensor will still give you a noisy image! Shot noise is not restricted to incident light; it also applies to several noise mechanisms in the sensor and sensor electronics, mostly generated by thermal events.

Signals, Noise and Calibration

So what is noise? At its simplest level, noise is the unwanted information that we receive in addition to the important information, or signal. In astrophotography, noise originates from several electronic sources and from light itself. For our purposes, the signals in astrophotography are the photons from the deep sky object that are turned into electrical charge in the sensor photosites. Practically, astrophotography concerns itself with all sources of signal error. These are broadly categorized into random and constant (or consistent) errors. So long as we can define the consistent errors in an image, they are easy to deal with. Random errors are more troublesome: image processing inevitably involves extreme stretching of the image tones to reveal faint details. The process of stretching exaggerates the differences between neighboring pixel values and even a small amount of randomness in the original image appears objectionably blotchy after image processing. The random noise from separate light or thermal sources cannot be simply added, but their powers can. If a system has three distinct noise sources with signal levels A, B and C, the overall noise is defined by:

$$\mathrm{noise} = \sqrt{A^2 + B^2 + C^2}$$
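A trivial sketch of this quadrature rule (the function name is illustrative):

```python
import math

def combine_noise(*sources):
    """Combine independent random noise sources in quadrature."""
    return math.sqrt(sum(s**2 for s in sources))

# Three independent noise sources of 3, 4 and 12 units:
print(combine_noise(3.0, 4.0, 12.0))   # 13.0, not the linear sum of 19.0
```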

Dealing with unwanted errors involves just two processes, calibration and exposure. Calibration deals with consistent errors and exposure is the key to reducing random errors. For now, calibration is a process which measures the mean or consistent errors in a signal and removes their effect. These errors are corrected by subtracting an offset and adjusting the gain. Since no two pixels on a sensor are precisely the same, the process applies an offset and gain adjustment to each individual pixel. The gain adjustment not only corrects for tiny inconsistencies between the quantum efficiency and amplifier gain of individual pixels but usefully corrects for light fall-off at the corners of an image due to the optical system, as well as dark spots created by the shade of a dust particle on an optical surface. This takes care of quite a few of the inaccuracies called out in fig.1. Briefly, the calibration process starts by measuring your system and then, during the processing stage, applies corrections to each individual exposure. These calibrations are given the names of the exposure types that measure them: darks, biases and flats. Unfortunately, these very names give the impression that they remove all the problems associated with dark noise, read noise and non-uniform gain. They do not. So to repeat, calibration only removes the constant (or mean) errors in a system and does nothing to fix the random ones. Calibration leaves behind the random noise. To establish these calibration values we need to find the mean offset error and gain adjustment for each pixel and apply them to each image.
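In code, the per-pixel arithmetic reduces to a subtraction and a division. This is only a sketch of the idea, assuming NumPy arrays for the master calibration frames (real stacking tools wrap the same arithmetic with considerably more care):

```python
import numpy as np

def calibrate(light, master_dark, master_flat):
    """Remove the mean per-pixel offset and gain errors from a light frame.

    master_dark: average of many dark frames at the same exposure time and
                 temperature (includes the mean bias offset).
    master_flat: average of many flat-field frames, dark-subtracted.
    """
    flat_norm = master_flat / master_flat.mean()   # unity-gain flat field
    return (light - master_dark) / flat_norm       # offset, then gain, per pixel
```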

Exposure and Random Error

Although random noise or shot noise is a fact of physics and cannot be eliminated, it is possible to reduce its effect on the signal. The key is locked within the statistics of random events. As photons hit an array of photosites their randomness, that is, the difference between the number of incident photons at each photosite and the average value, increases over time, as does the total number of impacts. Statistics come to the rescue at this point. Although the randomness increases with the number of impacts, the randomness increases at a slower rate than the total count. So, assuming a system with a perfect sensor, a long exposure will always have a better signal to noise ratio than a short one. Since the electrons can only accumulate on a sensor photosite during an exposure, the pixel values from adding two separate 10-second exposures together are equivalent to the value of a single 20-second exposure.

The practical outcome is that if an image with random noise is accumulated over a long time, either as a result of one long exposure or the accumulation of many short exposures, the random noise level increases less than the general signal level and the all-important signal to noise ratio improves. If you stand back and think about it, this fits in with our general experience of normal daylight photography: photographers do not worry about shot noise since it is dwarfed by the stream of tens of millions of photons per second striking the camera sensor, which requires a fast shutter speed to prevent over-exposure.

There is an upper limit though, imposed by the ability of each photosite to store charge. Beyond this point, the photosite is said to be saturated and there is no further signal increase with further exposure. The same is true of adding signals together mathematically using 16-bit (65,536 levels) file formats. Clearly, if light pollution dominates the sky and fills up the photosites, this leaves less room for image photons and so reduces the effective dynamic range of the sensor that can be put to good use on your deep sky image.

Exposure Bookends

The practical upshot of this revelation is to add multiple exposures that, individually, do not saturate important areas of the image. Stars often saturate with long exposures and if star color is of great importance, shorter exposures will be necessary to ensure they do not become white blobs. The combining of the images (stacking) is done by the image processing software using 32-bit arithmetic, which allows 65,536 16-bit exposures to be added without issue. At the same time, each doubling of the exposure count adds a further bit of dynamic range, due to the averaging effect on the signal noise, and equally reduces the quantization noise in the final image. If the exposures are taken through separate filters (e.g. LRGB) the image processing software (after calibrating the images and aligning them) combines the separate images to produce four stacks, one for each filter. This is done on a pixel by pixel basis. A summed stack has a similar quantization noise level to a single exposure, but when the averaging process divides the summed signal back down to the level of a single exposure, the quantization noise is divided down too. In general, the random noise improvement from combining N exposures is determined by the following equation:

$$\mathrm{SNR}_N = \sqrt{N} \times \mathrm{SNR}_1$$
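A small simulation (assuming NumPy; the flux values are illustrative) confirms the square-root improvement from averaging exposures:

```python
import numpy as np

rng = np.random.default_rng(1)
signal, sky = 50.0, 200.0            # electrons per exposure (illustrative)

def stacked_snr(n):
    # average n Poisson exposures and measure the noise on the mean
    frames = rng.poisson(signal + sky, size=(n, 50_000))
    return signal / frames.mean(axis=0).std()

snr_1 = stacked_snr(1)
for n in (1, 4, 16, 64):
    print(f"N={n:>2}: measured SNR = {stacked_snr(n):5.1f}, "
          f"sqrt(N) prediction = {snr_1 * np.sqrt(n):5.1f}")
```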

So, the long exposure bookend is set to an individual exposure time that does not quite saturate the important parts of the image, for example, the core of a galaxy.

The other bookend has yet to be determined; how low can we go? Surely we can take hundreds of short exposures and add (or average) them together. The answer is yes and no. With a perfect sensor, you could do just that. Even a real sensor with only shot noise would be game. So how do we determine the minimum exposure? Well, in any given time, we have a choice of duration and number of exposures. The key question is, what happens if we take many short exposures rather than a few long ones? For one thing, with multiple exposures it is not a crisis if a few are thrown away for any number of singular events (guiding issue, cosmic-ray strike, satellite or aircraft trail etc.). To answer this question more precisely we need to understand read noise in more detail.

Read Noise and Minimum Exposure

The catchall "read noise" within a sensor does not behave like shot noise. Its degree of randomness is mostly independent of time or temperature and it sets a noise floor on every exposure. Read noise is a key parameter of sensor performance and it is present in every exposure, however brief. Again, it is made up of a mean and a random value. The mean value is deliberately introduced by the amplifier bias current and is removed by the calibration process. The random element, since it is not dependent on time (unlike shot noise), is more obvious on very short exposures. Read noise is going to be part of the decision-making process for determining the short exposure bookend. To see how, we need to define the overall pixel noise of an image. In simple terms, the overall signal to noise ratio is defined by the following equation, where t is the exposure time in seconds, R is the read noise in electrons, N is the number of exposures and the sky and object flux are expressed in electrons/second:

$$\mathrm{SNR} = \frac{\mathrm{object} \cdot t \cdot \sqrt{N}}{\sqrt{\mathrm{sky} \cdot t + R^2}}$$

This equation is a simplification that assumes the general sky signal is stronger than the object signal and that calibration has removed the mean dark current. It can be rearranged and simplified further.
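Transcribed directly into code (a sketch; the function and parameter names are my own):

```python
import math

def pixel_snr(obj_flux, sky_flux, t, n, read_noise):
    """Simplified per-pixel SNR for n stacked exposures of t seconds.

    obj_flux, sky_flux in e-/second; read_noise in electrons.
    Assumes the sky dominates the noise and darks are calibrated out.
    """
    return (obj_flux * t * math.sqrt(n)
            / math.sqrt(sky_flux * t + read_noise**2))
```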

Assuming that the read noise adds a further q% to the overall noise, it is possible to calculate an optimum exposure t_opt, which sets a quality ratio of shot noise from the sky exposure to the read noise for a single pixel:

$$t_{\mathrm{opt}} = \frac{R^2}{\left((1+q)^2 - 1\right) \cdot \mathrm{sky\ flux}}$$

Empirically, several leading astrophotographers have determined q to be 5%. The sky flux in electrons/second can be calculated by subtracting an average dark frame value (in ADU) from the sky exposure (ADU measured in a blank bit of sky), using exposures of the same duration and temperature. The gain is published for most sensors in electrons/ADU:

$$\mathrm{sky\ flux} = \frac{\left(\mathrm{sky_{ADU}} - \mathrm{dark_{ADU}}\right) \times \mathrm{gain}}{t}$$
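These two relationships are small enough to keep as helper functions (a sketch; the names are my own and q defaults to the 5% figure above):

```python
def sky_flux(sky_adu, dark_adu, gain, t):
    """Sky flux in e-/second from ADU measurements; gain in e-/ADU."""
    return (sky_adu - dark_adu) * gain / t

def t_opt(read_noise, sky, q=0.05):
    """Exposure (s) beyond which read noise adds no more than q overall."""
    return read_noise**2 / (((1 + q)**2 - 1) * sky)
```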

Interestingly, a 5% increase in overall noise mathematically corresponds to the sky noise being about 3x larger than the read noise. A 2% increase would require sky noise 5x larger than the read noise, due to the way we combine noise. At first glance, the math does not look right, but recall that we cannot simply add random noise. For instance, using our earlier equation for combining noise sources, if the read noise = 1 and the sky noise is 3x larger at 3, the overall noise is:

$$\mathrm{noise} = \sqrt{3^2 + 1^2} = \sqrt{10} \approx 3.16$$

The above equation suggests the total noise is just made up of sky noise and read noise. This simplification may work in highly light-polluted areas but in more rural locations the sky and object signals are more evenly balanced. If we account for the shot noise from the subject, a minimum exposure is estimated by halving the optimum exposure t_opt for the sky noise alone, assuming our prior 5% contribution and the following simplified formula:

$$t_{\mathrm{min}} = \frac{t_{\mathrm{opt}}}{2} = \frac{R^2}{2\left((1+q)^2 - 1\right) \cdot \mathrm{sky\ flux}}$$
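Extending the earlier sketch:

```python
def t_min(read_noise, sky, q=0.05):
    """Lower exposure bookend: half of t_opt for the sky noise alone."""
    return t_opt(read_noise, sky, q) / 2
```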

The exposure t_min marks the lower exposure bookend, and something similar is assumed by some image acquisition programs that suggest exposure times. The recipe for success, then, is to increase the exposure or number of exposures to reduce the effect of random noise on the signal. It is important to note that all these equations are based on single pixels. Clearly, if the pixels are small, less signal falls onto them individually and read noise is more evident. It might also be evident that the calibration process, which identifies the constant errors, also requires the average of many exposures to converge on a mean value for dark noise, read noise and pixel gain.

Between the Bookends

In summary, each exposure should be long enough so that the read noise does not dominate the shot noise from the incident light but short enough so that the important parts of the image do not saturate. (Just to make life difficult, for some subjects, the maximum exposure prior to clipping can be less than the noise-limited exposure.) At the same time we know that combining more exposures reduces the overall noise level. So, the key question is, how do we best use our available imaging time? Lots of short exposures or just a few long ones? To answer that question, let us look at a real example:

Using real data from a single exposure of the Bubble Nebula (fig.2), fig.3 shows the predicted effective pixel signal to noise ratio of the combined exposures over a 4-hour period. It assumes that it takes about 16 seconds to download, change the filter and for the guider to settle between individual exposures. At one extreme, many short exposures are penalized by the changeover time and the relative contribution of the read noise. (With no read noise, two summed 5-minute exposures would have the same noise as one 10-minute exposure.) As the exposures lengthen, the signal to noise ratio rapidly improves but quite abruptly reaches a point where longer exposures have no meaningful benefit. At the same time, with a long exposure scheme, a few ruined frames have a big impact on image quality. In this example, the optimum position is around the "knee" of the curve and is about 6–8 minutes.

The sensor used in fig.2 and fig.3 is the Sony ICX694. It has a gain of 0.32 electrons/ADU and a read noise of 6 electrons. The blank sky measures +562 units over a 300-second exposure (0.6 electrons/second). My normal 300-second test exposure happened to be a good guess: assuming 5% in the t_min formula above suggests a minimum sub-exposure time of 300 seconds. If I measure some of the faint nebulosity around the bubble, it has a value of +1,092 units. Using the equation for t_opt with the overall light signal level, t_opt = 302 seconds. The graph bears out the empirical 5% rule and the equations are directionally correct. In this particular case it illustrates a certain degree of beginner's luck, as I sampled just the right level of faint nebulosity.
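Plugging these measured values into the earlier helper functions reproduces the quoted figures:

```python
gain, read_noise, t = 0.32, 6.0, 300        # e-/ADU, electrons, seconds
sky = sky_flux(562, 0, gain, t)             # +562 ADU, already dark-subtracted
print(f"sky flux = {sky:.2f} e-/s")         # ~0.60 e-/s
print(f"t_min = {t_min(read_noise, sky):.0f} s")      # ~293 s, about 300 s
light = sky_flux(1092, 0, gain, t)          # faint nebulosity sample
print(f"t_opt = {t_opt(read_noise, light):.0f} s")    # ~302 s
```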

So, the theoretical answer to the question is choose the Goldilocks option; not too long, not too short, but just right.

Practical Considerations

Theory is all well and good but sometimes reality forces us to compromise. I can think of three common scenarios:


1) For an image of a star field, the prime consideration is to keep good star color. As it is a star field, the subsequent processing will only require a gentle boost for the paler stars and noise should not be a problem on the bright points of light. The exposure should be set so that all but the brightest stars avoid clipping and have a peak value on the right-hand side of the image histogram, between 30,000 and 60,000. This will likely require exposures that are less than the minimum exposure t_min. Since the objects (the stars) are bright, it may not require as many exposures as, say, a dim nebula. An image of a globular cluster requires many short exposures to ensure the brightest stars do not bloat but the faint stars can still be resolved.

2) For a dim galaxy or nebula, in which bright stars are almost certainly rendered as white dots, the important parts of the image are the faint details. In this case the exposure should be set to somewhere between t_min and t_opt. This image will require a significant boost to show the fine detail and it is important to combine as many exposures as possible to improve the noise in the faint details.


fig.2 This single 300-second exposure around the Bubble Nebula has areas of dim nebulosity and patches in which the sky pollution can be sampled. To obtain the signal generated by the light, I subtracted the average signal level of a 300-second dark frame from the sample.


fig.3 This graph uses the sampled values from fig.2 to calculate the total pixel SNR for a number of exposure options up to but not exceeding 4 hours. It accounts for the sensor's read noise and the delay between exposures (download, dither and guider settle). It is clear that many short exposures degrade the overall SNR but in this case, beyond about 6 minutes duration, longer exposures offer no clear benefit and may actually cause highlight clipping.


fig.4 Showing how the various signals and noise combine in a pixel is quite a hard concept to get across in a graph. The salmon-colored blocks are unwanted signals, either as a result of light pollution or sensor errors. They can be averaged over many exposures and effectively subtracted from the final image during the calibration process. The short bars represent the variation in the levels caused by random noise. Random noise can never be eliminated but, by increasing the exposure or number of combined exposures, its value in relation to the main signal can be reduced. It is important to realize that every pixel will have a slightly different value of signal, mean noise and noise level.

3) In some cases it is important to show faint details and yet retain star color too. There are two principal options, named after card games: cheat and patience. To cheat, the image is combined from two separate exposure schemes, one optimized for bright stars and the other for the faint details. The alternative is to simply have the patience to image over a longer period of time with short exposures.

Location, Exposure and Filters

While we were concentrating on read noise, we should not forget that the shot noise from sky pollution is ruining our images. In scenarios 2) and 3) above, sky pollution and the associated shot noise take a considerable toll. Not only does it rob the sensor of dynamic range, forcing us to use shorter exposures, but it also affects the accuracy of our exposure assessment. The t_min equation assumes sky pollution is at about the same intensity as the faint object details (and the shot noise is similar). In many cases light pollution can exceed the all-important object intensity.

If they are about equal, the noise will always be about 41% worse than the subject shot noise alone. If the shot noise from sky pollution is double that of the subject, the combined noise represents a massive 123% increase. You would need 5x more exposure to reach the same noise level as an image without light pollution. No matter how many exposures you take, the noise performance is always going to be compromised by the overwhelming shot noise from sky pollution. The only answer is to find a better location or to use filtration.
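Both figures fall straight out of the quadrature rule (a sketch; the argument is the ratio of sky shot noise to subject shot noise):

```python
import math

def noise_increase(sky_to_subject_noise_ratio):
    """Combined noise relative to the subject shot noise alone."""
    return math.sqrt(1 + sky_to_subject_noise_ratio**2) - 1

print(f"{noise_increase(1):.1%}")   # 41.4% when the two are about equal
print(f"{noise_increase(2):.1%}")   # 123.6% when the sky noise is double
```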

You often come across city-bound astrophotographers specializing in wide-field narrowband imaging. There is a good reason for this. They typically use a short scope, operating at f/5 or better, with narrowband filters optimized for a nebula's ionized gas emission wavelengths (Hα, SII, OIII and so on). These filters have an extremely narrow pass-band, less than 10 nm, and effectively block the sodium and mercury vapor light-pollution wavelengths. Read and thermal noise dominate these images. Long exposures are normal practice and a fast aperture helps to keep the exposure time as short as possible. In addition to narrowband filters, and with a growing awareness of sky shot noise, there is an increasing use of light-pollution filters in monochrome as well as one-shot color imaging.

The effectiveness of light pollution filters varies with design, the subject and the degree of light pollution. Increasingly, with the move to high-pressure sodium and LED lighting, light pollution is spreading across a wider bandwidth, which makes it more difficult to eliminate through filtration. The familiar low-pressure sodium lamp is virtually monochromatic and outputs 90% of its energy at 590 nm. Most light pollution filters are very effective at blocking this wavelength. High-pressure sodium lamps have output peaks at 557, 590 and 623 nm with a broad output spectrum that spreads beyond 700 nm. Mercury vapor lamps add two more distinct blue and green wavelengths at 425 and 543 nm and make things more difficult to filter out. It is possible, though; for instance, the IDAS P2 filter blocks these wavelengths and more. They are not perfect, however. Most transmit the essential OIII and Hα wavelengths but some designs attenuate SII or block significant swathes of spectrum that affect galaxy and star intensity at the same time. In my semi-rural location, I increasingly use a light pollution filter in lieu of a plain luminance filter when imaging nebulae, or when using a consumer color digital camera.

Object SNR and Binning

At first, object SNR is quite a contrary subject to comprehend. This chapter has concentrated firmly on optimizing pixel SNR. In doing so, it tries to increase the signal level to the point of clipping and minimize the signal from light pollution and its associated shot noise. The unavoidable signal shot noise and read noise are reduced by averaging multiple exposures. Long exposure times also accumulate dark current and its associated shot noise. To shorten the exposure time it helps to capture more light, and the only way to do that is to increase the aperture diameter. Changing the f/ratio without changing the aperture diameter does not capture more light. In other words, the object SNR is the same for a 100 mm f/4 and a 100 mm f/8 telescope. If we put the same sensor on the back of these two telescopes, they will have different pixel SNR but the same overall object SNR, defined only by the stream of photons through the aperture for the exposure time.

Similarly, when we look at the pixel level, we should be mindful that a sensor's noise characteristics depend on its pixel size. When comparing sensors, read noise, well depth and dark noise are more meaningful if normalized per square micron or millimeter. If two sensors have the same per-pixel read noise, dark noise and well depth, but one has pixels that are twice as big (four times the area) as the other, the sensor with the smaller pixels has:

 

4x the effective well capacity for a given area

4x the effective dark current for a given area

2x the effective read noise for a given area

 

Since the total signal is the same for a given area, although the well capacity has increased, the smaller pixels have higher levels of sensor noise. In this case bigger pixels improve image quality. If we do not need the spatial resolution that our megapixel CCD offers, is there a way to “create” bigger pixels and reduce the effect of sensor noise? A common proposal is binning.

Binning and Pixel SNR

Binning is a loose term used to describe combining several adjacent pixels and averaging their values. It usually implies combining a small group of pixels, 2x2 or 3x3 pixels wide. It can occur within a CCD sensor or be applied after the image has been captured. So far we have only briefly discussed binning in relation to achieving the optimum resolution for the optics or the lower resolution demands of the color channels in an LRGB sequence. As far as sensor noise and exposure performance are concerned, it is a little more complex. If we assume 2x2 binning, in the case of the computer software summing the four pixels together, each of the pixels has signal and noise and the familiar √N equation applies. That is, the SNR is improved by √4, or 2.

When binning is applied within the sensor, the charge within the four pixels is accumulated in the sensor's serial register before being read by the amplifier. It is the amplifier that principally adds the read noise, so the read noise is only applied once and the pixel signal to read noise ratio improves by a factor of 4. It is easy to be carried away by this apparent improvement; we must keep in mind that it relates to sensor noise and not to image noise. Image noise arising from the shot noise from the object and background sky flux will still be at the same level relative to one another, irrespective of the pixel size. (Although the binned exposure quadruples the pixel signal and only doubles its shot noise, there is a corresponding reduction in spatial resolution, so the image SNR does not see the same benefit.)
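A quick comparison of the two binning modes, using an illustrative signal level and read noise:

```python
import math

signal, read_noise = 100.0, 6.0     # electrons per pixel, electrons per read

unbinned = signal / read_noise
software = (4 * signal) / (math.sqrt(4) * read_noise)   # four reads in quadrature
hardware = (4 * signal) / read_noise                    # one read of summed charge

print(f"unbinned: {unbinned:.1f}")        # 16.7
print(f"software 2x2: {software:.1f}")    # 33.3 (sqrt(4) = 2x gain)
print(f"hardware 2x2: {hardware:.1f}")    # 66.7 (the full 4x gain)
```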

One of the often-cited advantages of binning is its ability to reduce the exposure time. If the signal is strong and almost fills the well capacity of a single pixel, binning may create issues, since the accumulated charge may exceed the capacity of the serial register. Some high-performance CCDs have a serial register with a full well capacity twice that of the individual photosites and many use a lower gain during binned capture. (The QSI683 CCD typically uses a high gain of 0.5 e-/ADU in 1×1 binning mode and lowers it to 1.1 e-/ADU in binned capture modes.)

Significantly, in the case of a CMOS sensor, the read noise is associated with each pixel photodiode and there is no advantage to binning within the sensor. A number of pixels can still be combined in the computer, however, with a √N advantage. You cannot bin a Bayer image either.

In the case of a strong signal, image clipping is avoided by reducing the exposure time, but at the same time this reduces the signal level with respect to the sensor noise, potentially putting us back to square one. Binning is, however, a useful technique to improve the quality of weak signals, not only for color exposures but also when used for expediency during framing, focusing and plate solving.
