Image Calibration and Stacking

Two strategies that go hand-in-hand to remove mean errors and reduce the noise level in the final image.

If a deep sky image was like a conventional photograph and did not require extensive image processing to enhance its details, calibration would be unnecessary. Unfortunately this is not the case and calibration is required to keep images at their best, even with extensive manipulation. The calibration process measures the consistent errors in an image and removes their effect. These errors are corrected by subtracting an offset and adjusting the gain for each image exposure. No two sensor pixels are precisely the same and the process of calibration applies unique corrections to each pixel in each image. Thankfully the image processing applications automate the calibration adjustment process and it is just left to the astrophotographer to provide the calibration data.

Calibration Overview

The calibration process starts by measuring your system and then, during the processing stage, applies a set of corrections to each individual image file. These calibrations are named after the exposure types that measure them: bias, darks and flats. Unfortunately, these very names give the impression that they remove all the problems associated with bias or read noise, dark noise and non-uniform gain. They do not; calibration only removes the constant (average or mean) error in a system and does nothing to fix the random errors. The process to calculate the mean offset error and gain adjustment for each pixel uses the same method employed to reduce random noise, that is, averaging many exposures – the more the better. It takes a while to establish a good set of calibration values but, once defined, these only require updating if the camera ages or the optical system changes in some way.

Calibration, the Naming of Parts

Fig.1 shows the elements of a single image exposure. The bias, dark current and general light pollution all add to the image make-up, each with an unknown mean and random value. On top of this is a slight variation in system gain for each pixel, caused by sensor and optical effects. If we take a sneak peek at fig.3, which shows the calibration process for each image frame, we can see that the calculations are a little involved. The complication is driven by the equation used to normalize the gain of each image. To understand why, we have to circle back to what we can directly and indirectly measure through the bias, dark and flat frame calibration exposures. Taking the exposure values from fig.1, fig.2 shows the make-up of the calibration exposures in a little more detail. The actual image exposure is called a “light frame” for consistency.

fig122-1.jpg

fig.1 This shows the make-up of an image pixel value, from the bias, dark current, light pollution and target image, and the associated noise in each case. The trick is to identify each element and remove it.

fig122-2.jpg

fig.2 This relates to fig.1 and shows the constituents of the three types of calibration exposure: bias, dark and flat. The two right-hand columns show the image pixel before and after calibration.

fig122-3.jpg

fig.3 Thankfully, the calibration process is mostly automated by the image processing programs. This diagram shows the sequence of events to create the master dark, bias and flat files. Each light frame is calibrated separately and then the calibrated frames are statistically combined during the stacking (alignment and integration) process to reduce the image noise. This calibrated and stacked image will still carry the mean level of light pollution, which is removed by image levels and gradient tools.

Bias Frames

The signal bias is present in every sensor image, irrespective of its temperature, exposure time or light level. It is easy enough to exclude all light sources from the image but even so, dark current accumulates with time and temperature. For that reason, bias is measured by taking a zero time or very brief exposure in the dark. In our example, each bias frame has a mean bias level (120) and noise (10). Taking 100 bias frames and averaging them reduces the image noise to around 1 electron, to form a master bias file (fig.4). If you typically acquire images at 1×1 binning for luminance frames and use 2×2 binning for RGB frames, you will need to derive two sets of bias frames at 1×1 and 2×2 binning levels. (If you use a modern photographic camera, it is likely that it performs its own approximate bias subtraction but it still requires a master bias frame for flat computations.)
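A quick numerical sketch shows why averaging so many bias frames pays off: the random read noise falls as the square root of the number of frames. The sensor size here is invented for the demonstration; the bias level (120 electrons) and read noise (10 electrons) follow the example above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sensor model from the text: mean bias level 120 e-,
# read noise 10 e- RMS, on an invented 100x100-pixel sensor.
BIAS_LEVEL, READ_NOISE, N_FRAMES = 120.0, 10.0, 100

# Each bias frame is the fixed offset plus fresh random read noise.
bias_frames = BIAS_LEVEL + rng.normal(0.0, READ_NOISE,
                                      size=(N_FRAMES, 100, 100))

# Averaging N frames reduces the random noise by sqrt(N):
# 10 e- / sqrt(100) = ~1 e- remaining in the master bias.
master_bias = bias_frames.mean(axis=0)

print(round(master_bias.mean(), 1))   # close to 120
print(round(master_bias.std(), 1))    # close to 1.0
```

The same square-root law is why 50 frames give roughly a sevenfold noise reduction, as noted in the fig.4 caption.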

Dark Frames

The purpose of a dark frame is to generate the same amount of dark current as is present in the light frame. For this to be accurate, the exposure has to be taken on the same sensor and have:

no light

same sensor temperature as the light frame

same exposure duration as the light frame

same binning level as the light frame

Each dark frame exposure captures the dark current, bias and associated random noise. The random noise is reduced by combining many dark frame exposures to form an average dark file. (This process is called integration by some applications. In its simplest form it is a computed average value, but it can also be a median or a more advanced statistical evaluation of the corresponding pixel values.) The dark current is isolated by subtracting the master bias pixel values from the average dark pixel values. The result is saved as the master dark file (fig.4). Take note that some applications, like PixInsight, do not subtract the master bias at this stage.
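The master-dark arithmetic can be sketched with the illustrative values from fig.2 (bias 120 e-, dark current 40 e-, combined noise of roughly 11 e-); the frame count and sensor size are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (50, 100, 100)   # 50 hypothetical dark frames

# Each dark frame carries bias (120 e-), dark current (40 e-) and
# random noise (~11 e- RMS), per the fig.2 example values.
dark_frames = 120.0 + 40.0 + rng.normal(0.0, 11.0, size=shape)

average_dark = dark_frames.mean(axis=0)

# Assume a master bias prepared earlier (ideal value used here).
master_bias = np.full((100, 100), 120.0)

# Subtracting the master bias isolates the dark current.
master_dark = average_dark - master_bias

print(round(master_dark.mean(), 1))   # close to 40
```

As the text notes, some programs (PixInsight among them) keep the bias in the master dark and account for it later; this sketch follows the subtraction described here.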

In practice, several sets of dark frames are taken for different combinations of exposure time and temperature, and the software calculates intermediate values by scaling for temperature and exposure duration.

Simple Calibration

The dark frame consists of dark current and bias, as does the light frame or image exposure. At its simplest level, a calibration process subtracts the average dark frame exposure from every light frame exposure, leaving behind the signal, image noise and the general light pollution. In fig.2, we can see that a dark frame consists of bias (120), dark current (40) and its associated random noise (11). If many dark frames are averaged together, the noise level is reduced. If the average value (160) is subtracted from each light frame exposure, it leaves behind the light-induced signal (650) and its shot noise (28). This still has the effect of light pollution (and its shot noise) embedded in each frame. Light pollution, as the sky background, is removed for aesthetic reasons during image manipulation.
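The single-pixel arithmetic of this simple calibration, using the illustrative fig.2 values (all in electrons):

```python
# Worked arithmetic for simple calibration, fig.2 example values.
bias         = 120   # mean bias offset
dark_current = 40    # mean dark current for this exposure
signal       = 650   # light-induced signal (target + pollution)

light_pixel  = bias + dark_current + signal   # raw pixel: 810
average_dark = bias + dark_current            # master value: 160
calibrated   = light_pixel - average_dark     # the signal remains

print(calibrated)   # 650
```

The shot noise (28) and the light-pollution component survive this subtraction; only the mean offsets are removed.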

Full Calibration with Flat Frames

That might be the end of the story, but if we wish to normalize the exposure gain for every pixel, we need to do more work. This gain adjustment not only corrects for tiny inconsistencies between each pixel’s quantum efficiency and amplifier gain but usefully corrects for light fall-off at the corners of an image due to the optical system, as well as dark spots created in the shade of dust particles on the optical surfaces. The calculation works out a correction factor for each pixel, derived from an exposure of a uniformly lit subject, or flat frame. These flat frame exposures capture enough light to reliably measure the intensity, typically at a level around 50% of the maximum pixel value. As before, there is shot noise in this exposure too and, as before, many flat frames (typically 50+) are averaged to establish mean pixel values. These exposures are brief (a few seconds) and of a brightly lit diffuse subject. This not only speeds the whole process up but also reduces dark current to negligible levels, sufficient for it to be ignored. In fig.2, a flat frame typically comprises just the bias, its noise and about 10,000 electrons of signal. As with bias and dark frames, the flat frame exposures must have the same binning level as the light frames to which they are applied. Some programs (like Nebulosity) provide options to blur the master flat file to lower its noise level or to even out the different sensitivities of adjacent color pixels in a photographic camera image or one-shot color CCD. As you can see in fig.3, flat frames are calculated from flat, bias and dark exposures. Each flat frame is individually calibrated and then averaged using powerful statistical methods that equalize fluxes between exposures.

Master Calibration Files

We are ready to take another look at fig.3. In the first two lines it establishes the master bias and master dark files. In the third line, we subtract the master bias from the averaged flat frames to establish the master flat file. The master flat file now shows the variation in gain and light fall-off across the whole image, without any noise or offsets. If we take an average of all its pixel values, it provides an average master flat pixel value. The gain normalization for each pixel in every image requires a correction factor of (average master flat pixel value / master flat pixel value). This is applied to each light frame, after the offsets from dark current and bias have been removed (in our simple calibration example above).
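Putting the whole sequence together, here is a minimal sketch of the per-pixel calibration described above. The function name and the tiny 2×2 "sensor" are invented for illustration; real programs apply the same arithmetic across the full frame.

```python
import numpy as np

def calibrate(light, master_bias, master_dark, master_flat):
    """Calibrate one light frame: subtract the offsets, then
    normalise each pixel's gain by (average master flat value /
    master flat value). A sketch of the method in the text, not
    any particular program's exact implementation."""
    correction = master_flat.mean() / master_flat
    return (light - master_bias - master_dark) * correction

# Tiny synthetic example: a 2x2 sensor where one corner receives
# only 80% of the light (vignetting), using fig.2-style values.
flat  = np.array([[1.0, 1.0], [1.0, 0.8]]) * 10000.0
bias  = np.full((2, 2), 120.0)
dark  = np.full((2, 2), 40.0)
light = bias + dark + np.array([[650.0, 650.0], [650.0, 520.0]])

cal = calibrate(light, bias, dark, flat)
print(np.round(cal, 1))   # all four pixels equalised to 617.5
```

Note that the mean-flat normalization preserves the overall exposure level (to within a constant factor) while flattening the field; the vignetted corner is lifted to match its neighbors.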

fig122-4.jpg

fig.4 From left to right, these three images show the master bias, dark (300-second exposure) and flat for a refractor system. To show the information in each, their contrast has been greatly exaggerated. In the bias exposure, fine vertical lines can be seen, corresponding to some clocking issues on the sensor (fixed later on by a firmware upgrade). Remember, this image is the average of about 50 separate bias exposures and the read noise is 1/7th of what it would be normally. This sensor has a very good dark current characteristic and has a fine smattering of pixels with a little more thermal current than others. There are no hot pixels to speak of. The image of the flat field has been stretched to show the light fall-off towards the corners and a few dust shadows (judging from their size, on the sensor cover slip) near the edges. As long as these dust spots do not change, the effect of dust in the system will be effectively removed during the calibration process.

calibrated light = (light frame − master bias − master dark) × (average master flat pixel value / master flat pixel value)

We now have a set of fully calibrated image (light) frames. Each will comprise an image made up of the photons from the exposure, corrected for dust spots and light fall-off but will still have all the random noise.

Wrinkles

Life is never quite that simple. There are always a few things going on to complicate matters. In the case of the dark frames, there may be an imperfect match between the dark frame and image exposure time and sensor temperature. This is often the case when using regular photographic cameras without temperature control, through accident, or when your CCD cooling is insufficient on a particularly hot night. Murphy’s law applies too: you may have standardized on 2-, 5- and 10-minute dark frame exposures only to use 3 and 6 minutes on the night. In this case all is not lost, as the process of generating dark current is proportional to temperature and time. To some extent a small adjustment to the master dark value can be achieved by scaling the available master dark files. This is done by the image processing software, providing that the sensor temperature was recorded in the file download. This adjustment is more effective with astronomical CCD cameras than digital SLRs, since many DSLRs internally process their raw files and confuse this calculation. The other gremlin is caused by cosmic ray hits. These ionizing particles create tiny white squiggles on the image. The longer the exposure, the more likely you will get a hit. Since their effect is always to lighten the image, they can be statistically removed during the averaging or integration process by rejecting “bright” outliers more aggressively than “dark” ones with an asymmetrical probability (SD) mask. PixInsight has the ability to set individual rejection levels for high and low values and to scale dark frame subtraction by optimizing the image noise.

Generating master flat files is not without problems either, since each optical system needs to be characterized. If you are using separate filters, this requires a master flat file for each combination of telescope, sensor, filter and field-flattener. If dust settles or moves over a period of time, the calibrated light frames will not only show dark areas where the dust has now settled but also light areas at its prior location. The size and intensity of dust shadows on the image change with their distance from the sensor. Dust that is close to the sensor has a smaller but more prominent effect; further from the sensor it is larger and less well-defined.

In my system I keep my filters scrupulously clean and make sure that I keep lens caps on my sensor and filter wheel when not in use. I still have master flat files for each filter, since the light fall-off through the complicated coatings changes with incident angle. Another wrinkle arises if your particular camera has a high dark current. In this case you need to change the calibration routine for your flat files and not only subtract the master bias but also a master dark file (set at the flat frame exposure time). These are sometimes named flat-darks. There may also be a lower limit on the flat frame exposure time. Those sensors that use a physical shutter, driven by small solenoids, are primitive and slow compared to those in digital SLRs. Exposures less than 1 second should be avoided, as the slow shutter movement will produce a noticeable light fall-off pattern across the sensor.

fig122-5.jpg

fig.5 This screen grab from Maxim DL shows the calibration files for a set of biases, darks and flats at different binning levels. The flats, for each of the filters, are further down the list. The “Replace with Master” option will average all your calibration frames and compute them into master frames for future use. Maxim DL uses the sensor, telescope, binning and sensor temperature information in the FITS header to ensure the right files are processed together. This is one reason why it pays to set up Maxim DL properly before imaging. (The actual image calibration is performed during the stacking process.)

Lastly and most importantly, the outcome of the calibration process is a set of calibrated light frames, each of which still has light pollution, shot noise and read noise. The next step in the process is called stacking (a combination of registration and integration) that aligns and combines these calibrated light frames in a way that reduces the noise level. After stacking, the image file still includes the average level of light pollution. This may be even, or vary across the frame. The background light level is removed as one of the first steps of image manipulation, to achieve the right aesthetic appearance.

In Practice

Acquiring calibration images is a fairly straightforward process that does not require the telescope to be mounted. Bias and dark frame images just require the sensor, fitted with a lens cap (that blocks infrared), and a large-capacity disk drive to accept all the image files. It does not pay to be frugal with the number of exposures that you take; most texts recommend a minimum of 20 exposures of each but I recommend 50 or more dark and flat frames and 100 or more bias frames. For dark frames, this may take several days to complete and, if you are using a normal photographic camera, an external power supply adaptor is a great help. If the dark frames require cool conditions, to simulate the temperatures at night, place the camera in a fridge and pass the cables through the door seal. In this way, when the images are combined to form master files, the dark current is about right and the read noise level is acceptably low.

fig122-6.jpg

fig.6 As fig.5, only this time in Nebulosity 3. It is easier to see here what is going on; it shows the options for when a full set of calibration images is not available. Here it performs the averaging and then applies the results to the image files.

fig122-7.jpg

fig.7 This 6-inch electroluminescent flat panel is a convenient way of creating a uniformly lit target for taking flat frames. The neutral density gels are in ND2, 3 and 4 strengths, to regulate the output and achieve convenient sub-10-second exposures.

Flat frames are the most technically challenging to take. You need a set for each optical configuration and that includes the telescope, field-flattener, filters and sensor (and angle). If the configuration is repeatable (including the ability to prevent further dust) the flat frames can be reused. This is not always the case. Some telescope designs are open to the elements and it is particularly difficult to keep dust out. It is feasible to keep filters and field flatteners clean but some sensors have a shutter. These moving parts can deposit dust on the sensor within the sealed sensor housing. My original CCD had this issue and I had to redo my light frames several times and have the CCD professionally cleaned when the dust became too severe.

A flat frame requires an image of a uniformly lit target. Some use an artificial light source, or light box, in front of the telescope lens; others cover the telescope with a diffuser and point it at the sky during twilight or dusk (sky-flats). A scour of the Internet provides a number of projects to make custom-built diffuser light panels for a particular scope aperture. These normally involve sheets of foam core (a strong and light card-and-polystyrene sandwich board) and the now-common white LEDs. White LEDs typically have a peak intensity at 450 nm but the intensity drops off sharply at longer wavelengths and is 10% or lower at the Hydrogen alpha and Sulfur II wavelengths. Another alternative is an electroluminescent flat panel. These occupy little space and provide a uniformly lit surface. I use an A2-size panel hung on a wall, or a 6-inch electroluminescent panel (fig.7). Its light output reduces at the longer wavelengths but is still usable. These panels are too bright for luminance images, just right for RGB exposures and not really bright enough for narrowband red filters. It is not feasible to dim an electroluminescent panel electrically, so I place neutral density gel lighting filters over the panel to reduce the light output. If narrowband red exposures are too long, a third alternative is a diffuse tungsten halogen lamp behind a diffuser.

fig122-8.jpg

fig. 8These two screen grabs from PixInsight show their rather detailed approach to combining and calibrating images. On the left, there are specialist combination methods, with and without normalization and rejection criteria for creating master calibration files. On the right, these master files are used to calibrate three exposure files. Image registration is carried out in another dialog.

Fig.4 shows some typical master calibration files from an ICX694 camera, coupled to a refractor with a field-flattener. The contrast of each image has been increased to show the detailed appearance, otherwise they would show up simply as black, black and grey. The master bias shows vertical lines, indicating the small differences between alternate lines on the sensor. The master dark does not have any hot pixels (pure white) but does have “warm” ones. In the master flat, the light fall-off of the field-flattener is obvious. It is not symmetrical and I suspect that my flat frame target or the sensor has a slight fall-off. The flat frames show a few small dust spots. Their size increases with the dust’s distance from the sensor, according to the following equation, where p is the sensor pitch, f is the focal ratio and d is the diameter of the spot in pixels:

distance from sensor ≈ d × p × f
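Rearranged this way, the relation estimates how far a dust mote sits from the sensor. A short sketch, with illustrative numbers (the 4.54-micron pitch matches the ICX694 mentioned above; the shadow size and focal ratio are invented):

```python
def dust_distance_mm(d_pixels, pitch_um, focal_ratio):
    """Approximate distance of a dust mote from the sensor:
    distance = d x p x f, where d is the shadow diameter in pixels,
    p the pixel pitch (converted here from microns to mm) and f the
    focal ratio. A geometric approximation for small dust."""
    return d_pixels * (pitch_um / 1000.0) * focal_ratio

# Example: a 60-pixel shadow on a 4.54-micron-pitch sensor at f/6
# places the dust roughly 1.6 mm from the sensor surface, which is
# consistent with dust on a cover slip.
print(round(dust_distance_mm(60, 4.54, 6), 1))   # 1.6
```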
fig122-9.jpg

fig.9 From the top, these enlarged images of M81 (Bode’s Galaxy) are of an un-calibrated light frame, a calibrated light frame and a stack of aligned and calibrated light frames. They show the effect of calibration and stacking on noise and detail. (The screen levels are deliberately set to high contrast to show the faint detail in the spiral arms and the background noise. This setting makes the core of the galaxy appear white but in reality the peak pixel value is less than 50,000.) These images were taken with an 8-megapixel Kodak KAF8300 sensor, which has considerably higher dark current per pixel than the Sony sensor used for fig.4. The dust spots in the un-calibrated frame are removed during the calibration process.

(The dust spots take on the shape of the aperture; if the dust spots on the flat frames are rings rather than spots, it indicates the telescope has a central obstruction.)

Image Calibration

Figs.5 and 6 show the main calibration windows for Maxim DL and Nebulosity. They do the same thing but differ slightly in approach. Maxim uses the FITS file header to intelligently combine images and calibration files that relate to one another. In Nebulosity, you have to match up the files manually by selecting individual files or folders for the bias, dark and flat frames. It has the ability to process and mix’n’match different sets of these, depending on how you captured your images. You can also see the flat frame blurring option, in this case set to 2×2 mean, where each pixel is replaced by the mean of 4 pixels. (This is not quite the same as 2×2 binning, which reduces the number of pixels.) Other flat frame calibration options include normalization to the mean pixel value in the image, particularly useful if your flat frames are taken in natural light.

After this intense math, what is the result? Fig.9 shows three images: a single light frame, a calibrated light frame and, as a precursor to the next chapter, a stack of 20 aligned light frames. The dust spots and hot pixels disappear in the calibrated frame and the stacked frame has less noise, especially noticeable in the galaxy spiral. These images are screen-stretched but not processed. After processing, the differences will be exaggerated by the tonal manipulation and be very obvious.

Processing Options

So far we have assumed that the master files are produced by simply averaging the pixel values. There are a few further “averaging” options that help in special cases; median, sigma clip, SD mask and bad pixel mapping.

The median value of a data set is the middle value of the ordered data set. (The median of 1, 2, 3, 5 and 12 is 3.) It can be useful for suppressing random special causes, such as cosmic ray hits, at the expense of a little more general noise. Sigma clip establishes, for each pixel, which frames have values within a specified number of standard deviations of the mean; it rejects the others and averages what is left. (For image frames, this is a great way to lose a Boeing 747.) Alternatively, some programs allow an asymmetrical sigma clip, say 4 low and 3 high. This rejects positive excursions more readily than negative ones and eliminates cosmic ray hits, which are more likely to occur during long exposures. The SD mask does a similar task and is particularly effective with small sample sets. It uses a computation that either averages the pixel values or uses their median value, depending on whether there is just general Gaussian noise or some special event. In this way, the noise level is optimized. There are many other variations on a theme and PixInsight has most of them in its armory.
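A minimal sigma-clip sketch. The frame count, noise levels and the size of the simulated cosmic ray hit are invented for the demonstration; real implementations usually iterate the rejection, which this one-pass version does not.

```python
import numpy as np

def sigma_clip_average(stack, low=4.0, high=3.0):
    """Asymmetric sigma-clipped mean over a stack of frames (axis 0).
    Values more than `high` standard deviations above the pixel mean,
    or `low` below it, are excluded before averaging. A one-pass
    sketch of the rejection idea, not any program's exact code."""
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    keep = (stack <= mean + high * std) & (stack >= mean - low * std)
    # Average only the surviving values at each pixel position.
    return np.where(keep, stack, 0).sum(axis=0) / keep.sum(axis=0)

# Twenty frames of a uniform 100 e- field, with one cosmic ray hit.
rng = np.random.default_rng(7)
stack = 100.0 + rng.normal(0, 3, size=(20, 8, 8))
stack[4, 2, 2] += 500.0          # bright squiggle in frame 4

result = sigma_clip_average(stack)
print(round(float(result[2, 2])))   # back near 100, hit rejected
```

A plain average of that pixel would have been pulled up by 25 electrons; the clipped average is essentially unaffected.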

In addition to combining techniques, there are other options that optimize light frame calibration: ideally, the light frames and dark frames are exposed with similar sensor temperatures and durations. This is not always the case, especially with images from cameras that are not temperature regulated. Since the dark current increases linearly with exposure time, the dark frame subtraction can be scaled to compensate for the difference in exposure time. To some extent the same applies to differences between the light frame and dark frame temperatures. Most CCD cameras are linear and some astrophotographers take a single set of long dark frame exposures and use the compensation option in the calibration program to scale them to match the conditions of the light frame.
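The exposure-time scaling can be sketched in a couple of lines; the exposure durations and dark current level here are invented.

```python
import numpy as np

def scale_master_dark(master_dark, dark_exposure_s, light_exposure_s):
    """Scale a master dark to a different exposure time, relying on
    the (roughly) linear growth of dark current with time. A sketch;
    real programs can also account for temperature and optimize the
    scaling factor against the image noise."""
    return master_dark * (light_exposure_s / dark_exposure_s)

# A 600-second master dark reused for a 180-second light frame:
master_dark_600 = np.full((4, 4), 40.0)      # 40 e- of dark current
scaled = scale_master_dark(master_dark_600, 600, 180)
print(scaled[0, 0])   # 12.0 e-, proportional to exposure
```

Note that this only makes sense on a bias-subtracted master dark; the bias offset does not scale with exposure time.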

Bad Pixel Mapping

Bad pixel mapping is not an averaging technique per se, but is used to substitute hot pixels in an image with an average of the surrounding pixels. It is an alternative to calibrating light frames by dark frame subtraction. Maxim DL and Nebulosity provide the option to identify hot pixels (and rows) in a master dark frame. The detection threshold is defined by the user, with a numeric value or slider. The positions of these pixels are stored in a hot or bad-pixel map. In practice, one might have separate hot pixel maps for 1,200-, 600-, 300- and 150-second exposures.

During the light frame calibration setup, the user selects the “remove bad pixels” option. Instead of subtracting the master dark frame from each light frame, the image pixels that occur at the hot pixel positions are substituted by the average of their neighbors. (Bad pixel mapping can only be applied to camera RAW files or monochrome image frames.)
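A toy version of the map-and-repair sequence. The threshold, array sizes and pixel values are invented; edge pixels are skipped for brevity, where a real program would handle them.

```python
import numpy as np

def map_bad_pixels(master_dark, threshold):
    """Return coordinates of pixels whose dark current exceeds a
    user-set threshold: a minimal bad-pixel map."""
    return np.argwhere(master_dark > threshold)

def repair(image, bad_pixels):
    """Replace each mapped pixel with the mean of its 3x3 neighbours
    (the pixel itself excluded). Edge pixels are skipped here."""
    fixed = image.astype(float).copy()
    for y, x in bad_pixels:
        if 0 < y < image.shape[0] - 1 and 0 < x < image.shape[1] - 1:
            patch = image[y - 1:y + 2, x - 1:x + 2].astype(float)
            fixed[y, x] = (patch.sum() - image[y, x]) / 8.0
    return fixed

master_dark = np.full((5, 5), 5.0)
master_dark[2, 2] = 900.0                    # one hot pixel
light = np.full((5, 5), 100.0)
light[2, 2] = 950.0                          # hot in the image too

repaired = repair(light, map_bad_pixels(master_dark, 100))
print(repaired[2, 2])   # 100.0, the neighbours' average
```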

PixInsight has its own version, CosmeticCorrection, that will identify hot and cold pixels and substitute them with the average value of neighboring pixels in the image. Cold pixels are not usually a sensor defect but are the result of over correction of bias and dark noise. If this becomes a problem, it may help to disable the auto scaling dark subtraction option during image calibration.

Stacking Image Frames

Image stacking is essentially two activities: registration and integration. Although a few images can be registered and blended in Photoshop, astrophotography demands advanced tools for accurate registration and statistical combination. These tools include the widely used (and free) DeepSkyStacker as well as CCDStack, Maxim DL and PixInsight. Both registration and combining (some programs call this integration) are carried out on calibrated light frames. In Maxim DL, the image calibration is done behind the scenes, using individual calibration files (or master calibration files) identified in the calibration dialog; as the image files are added into the stacking process they are calibrated. Importantly, the stacking process also offers quality checks on roundness, contrast and star size prior to alignment (registration) and combining. PixInsight implements these as separate processes, again less automated than some programs but offering greater control over the final result. (PixInsight has a powerful scripting capability that sequences these calibration commands; fig.10 shows a batch process to create master calibration files and stacked images, similar to Maxim DL.) Image assessment in PixInsight is carried out with a utility script called SubframeSelector or visually, using the Blink tool.

fig122-10.jpg

fig.10 PixInsight does require some perseverance at first. The process of calibration and stacking can be done in many individual stages or using a script, which automates master calibration file generation and calibrates and registers light frames. It also allows for automatic cosmetic defect removal (hot and cold pixels) and can integrate the registered images too, for a quick image evaluation. The dedicated integration tool provides additional options for optimized pixel rejection and averaging.

fig122-11.jpg

fig.11 Maxim DL can automatically reject frames based on certain parameters. Rejected images are not combined. It is also possible to override these selections and manually evaluate the images.

Checking the individual quality of calibrated image frames is an important step before combining images and should not be overlooked. Even though localized transient events such as airplanes and cosmic ray hits can be removed by statistical methods, the best-controlled system has the occasional glitch that affects the entire image; a gust of wind, a passing cloud or, as occurred to me last week, an inquisitive cat. These frames should be discarded. Maxim DL allows manual intervention and also has the option to eliminate frames automatically based on parameters such as roundness, contrast and star size.

Registration

At its simplest level, registration is just an alignment of calibrated image frames on orthogonal axes. This approach may work on a series of images taken at the same time in a continuous sequence, but many real-world effects make it insufficiently accurate for quality work, including:

images before and after a meridian flip

images from different sessions or camera angle

images from different sources

focal length differences between red, green and blue

mosaics

images taken at different binning settings

In increasing complexity, registration routines either translate, translate and rotate, or translate, rotate and scale images… and even distort! The registration data is either stored as a separate file or as new images with aligned positions. The process of aligning stars can be manual, by clicking on the same star or star pair in each image; computed automatically using mathematical modelling; or done by plate solving each image and using the solved image center, scale and rotation to align the frames. Mathematical models need to carefully calculate star centroids and also use several manipulations to distinguish stars from galaxies, noise or cosmic ray hits.
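The centroid computation that underpins star matching can be sketched as an intensity-weighted mean. The star patch here is synthetic and assumed to be background-subtracted; real code must also reject non-stellar detections, as noted above.

```python
import numpy as np

def star_centroid(patch):
    """Intensity-weighted centroid of a small star image patch: the
    sub-pixel position on which registration models are built.
    Assumes the patch is background-subtracted."""
    ys, xs = np.indices(patch.shape)
    total = patch.sum()
    return (float((ys * patch).sum() / total),
            float((xs * patch).sum() / total))

# A faint star straddling two pixels, brightest at (row 2, col 3):
patch = np.zeros((5, 6))
patch[2, 3] = 100.0
patch[2, 4] = 50.0     # light spills into the neighbouring column

cy, cx = star_centroid(patch)
print(round(cy, 2), round(cx, 2))   # a sub-pixel x position
```

Because the star's light spreads over several pixels, the centroid lands between pixel centers, which is exactly what allows sub-pixel registration accuracy.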

Imprecise alignment within an exposure group blurs star outlines or worse, between color groups, creates colored fringes around bright stars. The better registration programs can adjust for deliberate dither between image frames and binning, recognize an inverted image after a meridian flip and still align the images. Once these aligned frames are combined there will be an untidy border of partial overlap. To maximize the usable image area it is important to check the alignment after a meridian flip or between sessions. During the acquisition phase, some programs use plate solving to compare the original image and the latest one and calculate the telescope move to align the image centers. This is often labelled as a sync option in the acquisition or planetarium program. With the centers aligned, compare the two images side by side and check the camera angle is the same.

One of the more nifty features of the registration process is its ability to align differently scaled frames of the same image, for instance 1×1 binned luminance frames with 2×2 binned red, green and blue frames. In the registration setup, the master frame is identified and the remaining frames are aligned to it. (In some programs, such as Nebulosity, the 2×2 binned frames require re-sampling by 2× before registration for this to work.)

fig122-12.jpg

fig.12 Nebulosity (top) and Maxim DL have very different alignment mechanisms. Nebulosity allows for manual star selection (identified by the small red circle) and Maxim DL can use PinPoint plate solving to establish the position, scale and rotation of each image in a few seconds.

Drizzle

In the case of under-sampled images, the registration process in PixInsight additionally has the option to enhance resolution from dithered images by using the drizzle algorithm. This algorithm works by projecting image pixels onto a higher resolution grid of output pixels. When it is followed by a special application of the ImageIntegration and DrizzleIntegration tools, the effective resolution can be almost doubled. It relies upon the image of a star being slightly blurred and randomly lying across two or more pixels, even though, if it were perfectly centered on a pixel, the surrounding pixels would not see it.
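A heavily simplified sketch of the idea, using nearest-neighbour deposits only. Real drizzle implementations, including PixInsight's, also shrink the input pixel footprint ("pixfrac") and weight each deposit by overlap area; all the sizes and offsets here are invented.

```python
import numpy as np

def drizzle_nearest(frames, offsets, scale=2):
    """Toy drizzle: each dithered frame, with a known sub-pixel
    offset (dy, dx), deposits its pixels onto a grid `scale` times
    finer; overlapping deposits are averaged by weight."""
    h, w = frames[0].shape
    out = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(out)
    for frame, (dy, dx) in zip(frames, offsets):
        for y in range(h):
            for x in range(w):
                oy = int(round((y + dy) * scale)) % (h * scale)
                ox = int(round((x + dx) * scale)) % (w * scale)
                out[oy, ox] += frame[y, x]
                weight[oy, ox] += 1.0
    return np.divide(out, weight, out=out, where=weight > 0)

# Four frames dithered by half-pixel steps fill the finer 2x grid.
frames = [np.ones((4, 4)) for _ in range(4)]
offsets = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
result = drizzle_nearest(frames, offsets)
print(result.shape)   # (8, 8), every output pixel populated
```

The random dither is what populates the intermediate output pixels; without it, half the finer grid would stay empty, which is why drizzle only helps with dithered, under-sampled data.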

Combining Image Frames (Integration)

By now we are aware of the various methods of averaging calibration frames using various statistical rejection techniques. The critical stage of image stacking adds a few more. Unlike calibration frames, image frames are taken in real-world conditions and may vary in intensity and noise for a number of reasons. Statistical integration generally rejects outlier pixel values and works well with images that share the same mean values. It becomes increasingly difficult if the images themselves are different. This may occur due to a change of sky condition, exposure time or sensor temperature.

To compensate for this, the stacking processes have a number of normalization options. Depending on the nature of the issue, these generally either offset each pixel value by a fixed amount (add or subtract) and/or scale the pixel values (multiply or divide). These algorithms extract statistical information from each calibrated image exposure to work out the necessary adjustment. Having made these adjustments, the pixel rejection algorithms work more effectively to reject random hot pixels, cosmic rays and other transient issues. PixInsight employs two normalization phases. The first phase applies a temporary normalization to the image frames to determine which pixels to reject. The second phase normalizes the remaining pixels prior to integration. The two normalization criteria are usually the same (for bias or dark calibration frames) or subtly different (for flat frames and image frames).
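The combination of additive normalization and sigma-based rejection can be condensed into a short sketch. This is a simplified single-pass stand-in for the two-phase scheme described above, not PixInsight's actual implementation:

```python
import numpy as np

def integrate(frames, sigma=2.5):
    """Additively normalize each calibrated frame to the median of the
    first (reference) frame, sigma-clip per-pixel outliers, then
    average the surviving values."""
    stack = np.stack(frames).astype(np.float64)
    # Normalization: match every frame's median level to the reference
    ref = np.median(stack[0])
    stack += (ref - np.median(stack, axis=(1, 2)))[:, None, None]
    # Rejection: discard values more than sigma stddevs from the
    # per-pixel mean (epsilon keeps identical values from rejecting)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    keep = np.abs(stack - mean) <= sigma * std + 1e-12
    return np.where(keep, stack, 0).sum(axis=0) / keep.sum(axis=0)

# Eight identical frames, one carrying a simulated cosmic-ray hit
frames = [np.full((3, 3), 100.0) for _ in range(8)]
frames[0][1, 1] = 5000.0
clean = integrate(frames)
print(clean[1, 1])  # 100.0 - the cosmic ray is rejected
```

Production tools iterate the clipping, use more robust scale estimators and weight frames by noise, but the structure - normalize, reject, combine - is the same.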

The outcome of the integration process can be a single RGB image, or a set of monochrome images representing luminance and the individual color channels. Some programs also produce a combined RGB image, with or without color balance weighting on each channel. PixInsight additionally provides two further images to show the rejected pixels, a handy means to ensure that the pixel rejection criteria were set correctly. The image files are ideally 32-bit, recalling that although a camera may only have a 25,000 electron well capacity, once many frames have been calibrated and averaged, the tonal resolution increases beyond the 65,536 levels that a 16-bit file can hold.

As an example, consider 16 image files that have a faint dust cloud occupying a 10 ADU range. Individually, each file will have a range of pixel values due to the image noise but ultimately can have no more than 10 ADU steps of real signal data. If the images are simply averaged and stored back again as a 16-bit value, the individual pixel values will have less noise but the tonal signal resolution will still occupy 10 steps. If, however, the averaging is done in a 32-bit space, the noise levels of the individual pixels average to intermediate values over the 10 ADU range and provide the potential for finer tonal resolution: up to 10 × 16 = 160 distinct values. Since 16 = 2⁴, this is four extra bits of tonal resolution, equivalent to a 20-bit image. During extreme image stretching, the benefit becomes apparent. Since this capability is principally required in astrophotography, the dedicated image processing programs are fully compatible with 32-bit and sometimes 64-bit processing. In comparison, all of Photoshop's tools work on 8-bit files, and each successive version supports progressively more of them for 16-bit and 32-bit images.
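This bit-depth gain is easy to demonstrate numerically. The sketch below (illustrative values, not real sensor data) quantizes 16 noisy frames to whole ADUs, then averages them in floating point; the individual frames can only hold whole-ADU steps, but the average resolves 1/16-ADU intermediate values:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = 100.25  # a "true" level that falls between whole ADU steps

# 16 frames: add read noise, then quantize to whole ADUs as the
# camera's ADC would
frames = np.round(signal + rng.normal(0.0, 2.0, size=16))

avg = frames.mean()  # computed in float (32/64-bit) space
print(np.unique(frames))  # individual frames: whole ADU values only
print(avg)                # average: a fractional, 1/16-ADU value
```

Stored back into a 16-bit integer file, that fractional value would be rounded away, which is why the stacked master should be kept in a 32-bit format.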

Image integration is a crucial stage in image processing and in this second edition it has its own in-depth chapter, centered on the exhaustive set of tools in PixInsight.

One last point: These stacked files are a key step in the processing sequence and should be stored for future reference, before any manipulations are carried out.

Mixing Programs

During the research for this book and numerous experiments I quickly realized that not all programs can read each other's files. The FITS file data format has many settings: it can be saved in 8-, 16- or 32-bit form as a signed or unsigned integer, or alternatively as a 32- or 64-bit IEEE 754 floating point value. For instance, at the time of writing, PixInsight's calibration routines have trouble with the 32-bit FITS files output by Maxim DL, whereas they have no difficulty with 16-bit files or PixInsight's own 32-bit float FITS files. Nebulosity also has issues with reading some PixInsight files, and Nebulosity and Maxim DL are not always on the same page either.
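These variants are encoded in the FITS header's BITPIX keyword, with unsigned integers stored as signed values plus a BZERO offset. A small lookup makes it easy to see what a given file actually contains (the helper function is illustrative, not part of any library):

```python
# BITPIX values defined by the FITS standard. Unsigned integer data is
# stored as signed integers with a BZERO offset (BZERO=32768 for
# 16-bit unsigned).
BITPIX_FORMATS = {
    8:   "8-bit unsigned integer",
    16:  "16-bit signed integer",
    32:  "32-bit signed integer",
    64:  "64-bit signed integer",
    -32: "32-bit IEEE 754 float",
    -64: "64-bit IEEE 754 float",
}

def describe(bitpix, bzero=0):
    """Describe a FITS HDU's pixel format from its header values."""
    if bitpix == 16 and bzero == 32768:
        return "16-bit unsigned integer (signed storage + BZERO offset)"
    return BITPIX_FORMATS[bitpix]

print(describe(-32))             # 32-bit IEEE 754 float
print(describe(16, bzero=32768)) # the common unsigned-camera case
```

Checking BITPIX (and BZERO) in a problem file is often the quickest way to diagnose why one program refuses another's output.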

Capture programs such as Maxim DL and Sequence Generator Pro usefully label each file with essential information about the exposure conditions and equipment. These "tags" enable programs such as Maxim DL and PixInsight to segregate files by exposure length, binning and filter type. In some programs, the process of creating master files strips this important data out and it is necessary to add keywords to the FITS header with an editing program to ensure it is used correctly in subsequent operations. Alternatively, one can store these master files in unambiguous folders for later retrieval. Over time these nuances will be ironed out and effective interoperability between programs will echo what has occurred with JPEG and TIFF formats in the mainstream photographic world. If only the same could be said of camera RAW files!
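Re-adding stripped keywords can be scripted with astropy's FITS module rather than a manual header editor. A minimal sketch, assuming astropy is installed; the keyword names shown follow common capture-program conventions, but you should check what your own software expects:

```python
import os
import tempfile

import numpy as np
from astropy.io import fits

# Build a tiny master-dark-style file and tag it so downstream tools
# can segregate it by exposure, binning and filter
hdu = fits.PrimaryHDU(np.zeros((4, 4), dtype=np.float32))
hdu.header["EXPTIME"] = (300.0, "Exposure time (s)")
hdu.header["XBINNING"] = (1, "Binning factor, X axis")
hdu.header["FILTER"] = ("Lum", "Filter name")

path = os.path.join(tempfile.mkdtemp(), "master_dark.fits")
hdu.writeto(path, overwrite=True)

# Confirm the tags survive a round trip through the file
restored = fits.getheader(path)
print(restored["EXPTIME"])  # 300.0
```

The same few lines, pointed at an existing master file opened with `fits.open`, restore whatever metadata the stacking program discarded.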
