Linear Image Processing

It is easy to forget that sensor data is inherently linear and that some image processing algorithms, to be effective, depend on unadulterated sensor data.

The title for this chapter is a reminder that certain image processing tools are better suited to image files before they are stretched. Indeed, the typical workflow shown in fig.1 can equally be called “basic processing”. As mentioned earlier, stretching causes a linear image to become non-linear and certain specialist image manipulation tools are optimized for linear images. These generally fix problems in the original files such as background gradients, color balance and convolution (blurring). If these issues are not fixed early on, subsequent image stretching exaggerates these problems and they become increasingly difficult to remove. General image processing programs like Photoshop work implicitly on mildly stretched (non-linear) images. An in-camera JPEG file is already stretched and similarly a RAW file converted and tagged with Adobe RGB assumes a gamma stretch of 2.2. A linear image in Photoshop would have a gamma setting of 1. For that reason I prefer dedicated astronomy imaging programs for the linear processing leading up to image stretching.
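
As a minimal illustration of the difference (a sketch in Python, assuming pixel values normalized to the 0–1 range), a gamma 2.2 encode and its inverse look like this:

import numpy as np

linear = np.linspace(0.0, 1.0, 5)      # linear sensor values, normalized 0-1
encoded = linear ** (1.0 / 2.2)        # approximate gamma 2.2 encode (non-linear, e.g. Adobe RGB)
restored = encoded ** 2.2              # the inverse transform returns the data to gamma 1 (linear)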

This distinction is not widely publicized and I was blissfully unaware of it until I researched image processing for this book; I now know why some of my early attempts failed or produced unexpected results. My early images had bloated white stars, complex color crossovers and excessive background noise. The key is not to stretch the image too early in the process (to reveal faint deep sky details), even though it is very tempting. Tools like Digital Development Processing (DDP) in Maxim and Nebulosity give an instant result. With experience, this early gratification gives way to less heavy-handed processing. Therein lies the problem: it is impossible to judge image adjustments without a stretched image. Luckily, astro imaging programs have a screen stretch function (Photoshop can use a temporary adjustment layer) that gives the impression of the final result without affecting the image data.

Color Management and Image Output

Color management is a science in its own right. In simple terms, color management defines what a certain pixel RGB value looks like. A color profile defines the color range (gamut) of an RGB image and its color temperature (the color of white). Some device profiles go further and define a transfer function too, akin to a set of curves in RGB that help programs convert color rendition accurately between physical devices. Significantly, several astro imaging programs do not have explicit color management or default to a common consumer color profile (sRGB) designed for portable electronics and web use. Seasoned photographers know that Adobe RGB (1998) has a wider color gamut and is preferred as an editing space. Both PixInsight and Photoshop have robust color management options that define color image appearance and convert between profiles if necessary.

fig123_1.jpg

fig.1 An outline of a typical linear processing workflow assuming separate Red, Green, Blue and Luminance exposures. Careful adjustment of background and color, at this stage in particular, pays dividends later on.

Image Output

When it comes to image output, the target media determines the format and color profile. TIFF and JPEG image files are both common formats. For web use, 8-bit sRGB tagged JPEG files are preferred. Take care though; JPEG files are compressed and a medium or low-quality setting may shrink an image to a small file size, but it will not render dark skies with a smooth gradation.

For printing, 16-bit Adobe RGB TIFF files maintain the full quality of the original. Printing is another science in its own right and uses Cyan, Magenta, Yellow and blacK inks (CMYK) to selectively absorb light, whereas a monitor emits red, green and blue light. For that reason, the color gamut of a print will not match that of a monitor. The issue is that we edit on an RGB monitor to judge the final output; when it comes to printing, the result may be quite different. Some desktop printers add red, green and blue inks to extend the color gamut of the output, but standard book publishing uses CMYK and photo materials just use CMY. The translation of natural tones between RGB and CMY(K) is normally uneventful. Bright garish colors on a monitor, similar to those that many prefer in narrowband images, are likely to be muted in the final print. To anticipate these changes, check if your imaging program has a soft-proof option to display an on-screen facsimile of the final print. If it does, enable it and select your printer profile as the destination. Some photographers edit in this mode if the output is for print use only. The colors most at risk of being muted are saturated primary colors.

Monitor Calibration

Up to this point, the image processing has been mechanical and does not require an accurate visualization of the image on screen. Before starting any image manipulation, it is essential to calibrate the computer monitor to ensure that it is accurately representing color and tone. The best way of doing this is to use one of the many monitor calibrators: Macbeth, X-Rite, Datacolor and Pantone have similar models. These rest on the LCD screen and measure the brightness, tonality and color of the monitor. The outcome is a monitor color ICC/ICM profile that ensures a reasonable correlation to a standard and consistency between computers.

fig123_2.jpg

fig.2 An unusual feature of PixInsight is the ability to drag an operation from the image’s processing history onto another file. In this case, the luminance image in the background was cropped and the same cropping is being applied to one of the color channels. (These files were previously aligned with each other during the calibration phase.)

The starting point for linear processing is a stacked set of registered and calibrated image files. This may be a single file in the case of one-shot color cameras (for instance a typical digital SLR or an astronomical CCD camera fitted with a Bayer array). Alternatively it may require separate image stacks for red, green and blue, as well as luminance and narrowband images. The image processing of one-shot color and separate files broadly follows the same path but may deviate in a few places.

Getting Started

The worked example is a challenging arrangement of the loose cluster M52 and the Bubble Nebula, containing bright stars and dim nebulosity. Our set of stacked images, or in the case of one-shot color, a single stack, is the result of the selection, registration and integration of calibrated image files, or “lights”. These stacks represent a critical stage in image processing and I recommend you archive them for later reference. The registration process will likely have created a ragged border as a result of deliberate or accidental misalignments between individual light frames. Several imaging processes examine the contents of the image to determine their correction parameters and assume the image(s) only contain good data. The extent of the good data is easily seen by applying an extreme screen stretch or temporary curve adjustment layer to highlight the small differences at the frame edges. The cropping tool snips these off.

fig123_3.jpg

fig.3 One of the powerful features of PI is its DynamicBackgroundExtraction tool that eliminates gradient backgrounds. In this case I have deliberately chosen points close to nebulosity or bright stars and the background mapping in the bottom left corner shows a complex patchwork. This is a sign that something is wrong.

Image Cropping

In PI, the DynamicCrop tool can apply precisely the same crop parameters to previously registered files and crop each image so that the outcome is a fully aligned set of individual images. This is done by determining the crop settings on one image file and then applying the same settings to the others, either by dragging an instance onto an open file, or by dragging the DynamicCrop entry in an image’s history onto another image. In the case of Maxim DL, the outcome of the stacking process creates aligned files and these are similarly cropped using the crop tool. Check that the crop trims off all the waste on the most affected image, then open the other images one at a time and crop them without altering the crop tool settings. Nebulosity 3 only allows a single file to be loaded at a time; its crop tool resets each time and is a simple dialog based on the number of pixels to discard from each edge, so write these four numbers down and re-use them on the other images. Thankfully, in Photoshop, if the images are all aligned and in layers, an image crop affects all the layers equally.

fig123_4.jpg

fig.4 As fig.3 but the offending points have been deleted and additional points near the corners more accurately map slight gradients in the image. (The contrast of the background map is greatly exaggerated to show the effect.)

Background Gradient Removal

Even though the images may have had the background adjusted for light fall-off in the calibration phase, there will still be light gradients across the image, mostly as a result of the actual sky background and light pollution near the horizon. It is important to remove these background gradients, not only in the luminance channel but also in the individual red, green and blue color information, and it is especially important to do so before image stretching, color calibration or background neutralization. There are a number of powerful tools in PI, Maxim DL (and Photoshop) to sample the background levels and either subtract or divide the image with a correction, to compensate for light gradients or vignetting respectively (figs.3, 4). The more powerful tools automatically sample parts of the image that do not contain stars or bright nebulosity, or ask the user to make that selection for them. The latter is often the better choice in the case of densely packed star fields as it allows one to avoid areas of pale nebulosity or the proximity of bright stars. In PI and Maxim DL there are two choices for background equalization: automatic and dynamic. It is important to note that these tools do not neutralize the background or correct color; they merely even out the background level. The color calibration happens afterwards. A good set of samples selects points in the background (avoiding stars, nebulosity or galaxies) over the entire image, including areas near the corners and edges of the field. If you prefer to use Photoshop, there are ways of extracting background information and creating complex curves, but a more convenient and reliable solution is the highly regarded GradientXTerminator plug-in, by Russell Croman, which costs about $50.
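
The arithmetic behind these tools is straightforward, even if their background modeling is sophisticated. A rough sketch in Python, using a heavily blurred copy as a stand-in for the fitted background model the real tools build from sample points:

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.normal(0.1, 0.01, (512, 512))          # stand-in for a calibrated, linear luminance stack
image += np.linspace(0.0, 0.05, 512)[None, :]      # synthetic left-to-right sky gradient

background = gaussian_filter(image, sigma=100)      # crude background model (real tools fit sampled points)
flattened = image - background + np.median(background)   # subtract for additive gradients (sky glow)
# for vignetting, divide instead: image / background * np.median(background)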

fig123_5.jpg

fig.5 Maxim DL has a simpler interface for flattening backgrounds. This is an automatic tool but it does allow manual placement of background samples. Unlike PI, there is no indication if the placement is good or bad and it is reliant on the user magnifying the view and ensuring each sample misses stars and nebula.

In practice, to identify the best sample areas of the background, apply a severe screen stretch to show the faintest detail in your stacked images. If the gradient removal program has a preview function of the correction, take a close look. It should be very smooth with gentle gradients. If it is complex and uneven, that indicates that some of the background samples are contaminated by faint nebulosity or light scatter from bright objects. In this case, look for where the anomalies lie and check the samples in this region and delete or move them.

Image processing is a journey. We make discoveries along the way that change the way we do things. In the beginning, most of us (myself included) will process a full color image (the combination of RGB and L) with global adjustments. The result is a compromise of bloated white stars, weak color, background color casts and limited sharpness. Depending on circumstances, the first revelation is to distinguish between luminance and color processing, and the second, between stars, bright details and the background.

After each image stack has been cropped and their gradients removed, the luminance and RGB images follow similar but distinct workflows before coming together for non-linear processing. In essence, the luminance image records the brightness, resolution and detail and the RGB data is used to supply color information. Similarly the processing places a different emphasis on the two types of data. In one case, the processing emphasizes detail without amplifying noise and in the other, maximizes color saturation and differentiation at the structure level but not at a pixel level.

Luminance Processing

Where this is most apparent is in the processing of the image detail. Increasing detail requires two things to fool the brain: an increase in local contrast and an overall brightness level that brings the details out of the shadows so that our eyes can detect them. It is really important to realize that increasing the luminance of an image affects color saturation: no matter how colorful the image is, as the luminance increases, the final image color becomes less saturated and ultimately white. Good star color is an indicator of careful luminance processing.

Not everyone exposes through a filter wheel and so there may be no conveniently available luminance data per se. If you use a one-shot color camera or conventional digital camera, or have joined the increasing number of astrophotographers that use R, G & B images binned 1×1 (in lieu of the conventional L binned 1×1 with RGB binned 2×2), the answer is to create a synthetic luminance image from the RGB information. Providing the images are registered, this is a simple average of the three separate stacks (if the noise level is about the same) or an average in proportion to their signal-to-noise ratios, which yields the cleanest signal. PixelMath in PI can apply a formula of the form (R+G+B)/3; alternatively, the standard ImageIntegration tool used for stacking can combine the three stacks with a simple average.
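
A sketch of both options in Python (assuming r, g and b are registered stacks as NumPy arrays; the weights, if used, would come from each stack's measured signal-to-noise ratio):

import numpy as np

def synthetic_luminance(r, g, b, weights=None):
    """Combine registered R, G and B stacks into a synthetic luminance."""
    if weights is None:
        return (r + g + b) / 3.0                    # simple mean, i.e. (R+G+B)/3
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                 # normalize so the output stays in range
    return w[0] * r + w[1] * g + w[2] * b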

fig123_6.jpg

fig.6 This luminance mask on a small preview is protecting the background (red) but leaves the bright nebulosity and stars unprotected for modification. The final setting is applied to the entire image.

Different strategies are used to improve the resolution of small singular objects (stars), resolve details in mid tones (galaxies and bright nebula), enhance the details of dim nebula and at the same time suppress noise in the background. Some imaging tools automatically distinguish between these entities; others need a little help in the form of a mask. Masks are incredibly useful and vary slightly depending upon their application. Understanding how to generate them is an essential skill to improve your imagery.

Star Mask and Masking

A star mask is a special image file that is used selectively to protect parts of the image from being affected by the imaging tool. It is either binary or, more usefully, has varying degrees of protection with smooth boundaries between different protected and unprotected areas to disguise its application. (The best image edits are those that are undetectable.) Photoshop masks vary from white (transparent) through to black (obscure); other programs resort to the retro Rubylith look from the lithography industry. The general luminance mask is generated from a clone of the image itself. At its most basic, it uses a copy of the image, normally with an applied curve and blurred to create a distinct and smooth mask. The more sophisticated algorithms analyze the image spatially, as we do instinctively, as well as the general luminance to just select stars (a star mask). This is particularly useful when stars overlap bright galaxies and nebula. A further option is to combine masks. In PixInsight, the two masks are mathematically combined, using a tool called PixelMath. Photoshop can combine two masks by the “apply image” function.
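
A minimal sketch of the basic luminance mask described above (clone, curve, blur), plus the PixelMath-style arithmetic for combining two masks; the thresholds here are placeholders that would be tuned per image:

import numpy as np
from scipy.ndimage import gaussian_filter

def luminance_mask(lum, low=0.05, high=0.5, blur_sigma=3.0):
    """Clone of the image, given a simple 'curve' (rescale and clip) and a blur for soft edges."""
    m = np.clip((lum - low) / (high - low), 0.0, 1.0)
    return gaussian_filter(m, sigma=blur_sigma)

# combining masks, PixelMath-style: e.g. keep the luminance mask but punch holes for the stars
# combined = lum_mask * (1.0 - star_mask)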

Mask generation, often by default, protects the dark areas of an image and allows the brighter areas to be manipulated. Inverting the mask does the opposite. Inverted masks are commonly used to protect bright stars during the image-stretching process. If the extreme stretch necessary to show faint detail is equally applied to bright stars, they become whiter, brighter and grow in size.

As an aside, in practice a star mask is just one way to selectively apply imaging tools. Those using Photoshop have another option that removes the stars altogether from an image using repeated application of the dust and scratches filter. This tool fills in the voids with an average of local pixel values to create a starless version of the original image. (In short, create a duplicate layer of the original image and apply the dust and scratches filter several times to remove the stars. Save this starless image for later use. Now change the blend mode to “difference” to create a star-only image for separate processing.) Once the star and background layers are fully processed, they are combined as layers using the “screen” blend mode.
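
The blend-mode arithmetic behind that Photoshop recipe is simple; a sketch with images normalized to 0–1 (the function names are mine, not Photoshop's):

import numpy as np

def difference_blend(original, starless):
    """'Difference' blend: what remains is a stars-only image."""
    return np.abs(original - starless)

def screen_blend(starless, stars):
    """'Screen' blend: recombine the separately processed star and background layers."""
    return 1.0 - (1.0 - starless) * (1.0 - stars)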

fig123_7.jpg

fig.7 Maxim has simpler controls for deconvolution. It is particularly difficult to get a good result if there are very bright stars in the image. These acquire dark halos and some users simply resort to retouching them later in Photoshop. By reducing the PSF radius by 50% and using fewer iterations, the dark halos disappear, but the larger stars can easily bloat.

fig123_8.jpg

fig.8 After some careful adjustments, a star mask to protect the brightest stars from dark rings, a luminance mask to protect the deep background and using a PSF from suitable stars in the image, the deconvolution worked its magic. The PSF was generated by sampling medium brightness stars in the luminance image using the DynamicPSF tool. The result is very sensitive to the deringing dark control.

The need for masks occurs many times during image manipulation, and now is an ideal time to generate a mask, both to save for later and to use in the next step, deconvolution. The various applications all have slightly different tool adjustments but since they show a visual representation of the mask, their various effects soon become intuitive. In PixInsight, the essential mask parameters are the scale, structure growth and smoothness. These determine what the tool identifies as a star, how much of the star core’s surroundings is included and the smoothness of the transition. A mask should extend slightly beyond the boundaries of the star and fade off gently. It is not uncommon to discover during processing that a star mask’s parameters need adjustment, and it often requires several attempts to achieve Goldilocks perfection.

Deconvolution

If there is one tool that begs the question “why do I exist?”, this is it. I wonder how many have abandoned it after a few hours of frustration. Everyone would agree that deconvolution is particularly difficult to apply in practice. We are just going to touch on the process here since it has its own in-depth chapter later on. Deconvolution is extremely sensitive to small changes in its settings and at the same time these are dependent upon the individual image. What it is meant to do is slightly sharpen image details and compensate for the mild blurring caused by the optical system and the seeing conditions, characterized by the Point Spread Function (PSF). The PSF is a mathematical model of the blurring of a focused point light source. The final image is the convolution of the scene with the PSF. Deconvolution, as the name implies, is an inverse transformation that compensates for the effect. The tool is designed to work with well- or over-sampled images; that is, stars spanning several pixels. It is also designed to work on linear images, before they have been stretched.
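
The astro tools use regularized algorithms with deringing support, but the underlying idea can be sketched with an off-the-shelf Richardson-Lucy deconvolution from scikit-image; the Gaussian PSF here is a toy stand-in for one measured from stars:

import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

def gaussian_psf(size=15, fwhm=3.0):
    """Toy Gaussian PSF; in practice it is measured from stars (e.g. PI's DynamicPSF)."""
    sigma = fwhm / 2.355
    y, x = np.mgrid[:size, :size] - size // 2
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return psf / psf.sum()

rng = np.random.default_rng(1)
sharp = np.full((128, 128), 0.01)                   # faint sky pedestal keeps the iteration stable
sharp[rng.integers(0, 128, 30), rng.integers(0, 128, 30)] = 1.0   # synthetic point sources

psf = gaussian_psf()
blurred = fftconvolve(sharp, psf, mode="same")      # convolution: what optics and seeing do to the scene
restored = richardson_lucy(blurred, psf, 20)        # deconvolution: an iterative estimate of the original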

When it is used effectively, dim stars become more distinct, brighter stars appear more tightly focused and details in bright galaxies and nebula are improved. When the settings are wrong, the background becomes “curdled” and ugly black halos appear around bright stars. To prevent these issues, strong deconvolution is best applied only where needed. In PixInsight, this is achieved by protecting the background and bright stars with masks. In the case of the background, a luminance mask excludes the dark tones. This particular mask is created by stretching and slightly clipping the shadows of a duplicate image and applying a small blur. This makes the background obscure and the stars and nebula see-through. To stop black halos around the brightest stars, a special star mask that obscures these plus their immediate surroundings is created with the StarMask tool. This mask is selected as “local support” within the deringing section of the PixInsight deconvolution tool. This is our first application of a star mask, with holes in the mask for the stars and feathering off at the edges. A typical set of PixInsight settings is shown in fig.8. Maxim DL’s implementation (fig.7) is similar, but essentially it is applied to the entire image. If there are unpleasant artefacts, try using a smaller PSF radius or fewer iterations to tame the problem. Additionally, if a PixInsight deconvolution increases noise levels (curdles), enable and increase “Wavelet Regularization” to just counter the effect. Less is often more! After an hour of experimentation, the final result does leave a smile on your face and a sense that you have overcome physics!

fig123_9.jpg

fig.9 The MultiscaleMedianTransform tool has multiple uses and it can be used to selectively reduce noise (blur) at selected image scales as well as lower contrast. In this example it is reducing noise at a scale of 1 and 2 pixels. In this image, the noise reduction is only being applied to the background and a luminance mask is protecting the brighter areas. The mask has been made visible in this image to confirm it is protecting the right areas.

fig123_10.jpg

fig.10 The same tool, set to reduce noise at scales from 1 to 8 pixels and increase the bias (detail) at the 2- and 4-pixel levels. A structure can be de-emphasized by making the bias level for that scale less than 0. Noise parameters can also be set for each scale. The mouse-overs explain the individual controls. Thankfully this tool has a real-time preview (the blue donut) that enables quick evaluation. The brown coloring of the image tabs indicates that a mask has been applied to the image and its preview.

fig123_11.jpg

fig.11 A quick look at the combined RGB image, either direct from the camera or after combining the separate files, reveals the extent of light pollution. The green and blue filters used in front of the CCD exclude the main 589-nm light pollution wavelength emitted from low-pressure sodium lamps. Even so, in this case, the light pollution level is still problematic (though I have seen worse) and the three files need some preliminary adjustment before using the more precise background and general color calibrations tools. Help is at hand with PI’s LinearFit tool that does the brute work of equalizing the channels.

fig123_12.jpg

fig.12 The image in fig.11 is constructed from separate red, green and blue files. Here, these have been resurrected and the LinearFit tool applied to the green and blue channels, using the red file as a reference and with the tool set to default parameters. These three files are then combined once again with equal weight using the ChannelCombination tool to show the difference. This image is now ready for a fine tweak to neutralize the background and calibrate the overall color balance.

Luminance Sharpening and Noise Reduction

The luminance information provides the bite to an image. In our retina the color-insensitive rods do the same; they are more numerous and more sensitive. In the pursuit of better clarity, deconvolution can only go so far and to emphasize further detail, we need different sharpening tools. Sharpening is most effective when applied to the luminance information; if we were to sharpen the RGB color information it would decrease color saturation and increase unsightly color (chrominance) noise. Sharpening the luminance information sharpens the final image. At the same time, sharpening emphasizes noise and so the two processes go head to head. Many of the tools, which are based on analyzing the image in a spatial sense, can equally sharpen or blur by emphasizing or de-emphasizing a particular scale. In its armory, Maxim has Kernel and Fast Fourier Transform (FFT) filters. They have several modes, two of which are low pass and high pass. When set to “low pass”, these filters soften the image; “high pass” does the opposite. The Kernel filters are applied globally but, more usefully, the FFT filter implementation has a further control that restricts its application to a pixel value range. In practice, if you were sharpening an image, this pixel range would exclude the background values; conversely, noise reduction would exclude the luminance values of nebulosity and stars.

Standing back from the problem, our brain detects noise on a pixel scale. It compares the pixels in close proximity to each other and, if they are not part of a recognized pattern, rejects the information as noise. Simply put, it is the differences that call attention to themselves. Effective sharpening works at a macro level. The earliest sharpening algorithms are a digital version of the analog process of unsharp masking: in the darkroom, a low contrast, slightly blurred positive image is precisely registered and sandwiched with the original negative, and the combination is used to make the print. This print is brimming with micro detail, at a scale determined by the degree of blurring. Unsharp masking has two problems if applied without great care: it leaves tell-tale halos around objects, and the brightness and distribution of tones are altered and can easily clip. In the early days of digital photography it was common to see prints where sharpening had been applied universally; it is more effective when only applied where needed.
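
Digital unsharp masking follows the same logic as the darkroom technique: blur a copy, take the difference and add it back as extra local contrast. A minimal sketch (the clipping step is exactly the risk mentioned above):

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=2.0, amount=0.8):
    blurred = gaussian_filter(img, sigma=radius)    # the "mask": a blurred copy of the image
    sharpened = img + amount * (img - blurred)      # boost detail at the scale set by the radius
    return np.clip(sharpened, 0.0, 1.0)             # clipping and halos are the tell-tale failure modes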

The latest algorithms identify differences between groups of pixels at different scales (termed “structures”) and increase their apparent contrast without increasing general pixel-to-pixel differences. This concept is increasingly used for sharpening photographic images and I suspect it is the concept behind the Nik® Photoshop plug-ins used for enhancing structures at various scales. The most advanced programs use something called wavelet transforms to identify structures. This technology is used in the communications industry and, like Fourier transforms, converts a spatially sampled data set into a frequency-based one. In effect it is an electronic filter. In an audio context, a wavelet filter can be thought of as a graphic equalizer. When the same principles are applied to an image, it can boost or suppress details at a certain spatial scale. PixInsight and Maxim both have wavelet-based filter tools. Wavelet technology forms the basis of many PixInsight tools to enable selective manipulations and can uniquely reduce noise and sharpen structures at the same time. Photoshop users are not left out; the High Pass Filter tool performs a broadly similar task. When it is applied to a duplicate image, with its blend mode set to overlay or soft light, it enhances structures at a scale determined by the tool’s radius setting.
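
Real wavelet tools are more refined, but the principle can be sketched with a crude multiscale decomposition: split the image into detail layers at successive scales, re-weight each layer (the “bias”), then rebuild. The layer weights below are illustrative only:

import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_enhance(img, biases=(0.0, 0.2, 0.1, 0.0)):
    """Split into ~1, 2, 4, 8 pixel detail layers; bias > 0 sharpens, bias < 0 softens that scale."""
    layers, residual = [], img.astype(float)
    for i in range(len(biases)):
        smooth = gaussian_filter(residual, sigma=2.0 ** i)
        layers.append(residual - smooth)            # detail at this scale
        residual = smooth                           # what is left for the coarser scales
    out = residual
    for layer, bias in zip(layers, biases):
        out = out + (1.0 + bias) * layer
    return np.clip(out, 0.0, 1.0)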

Unfortunately all sharpening tools create problems of one kind or another. Different image structures will work best with certain algorithms but all create other artefacts as part of their process. The goal is to find the best combination and settings that reduce these to acceptable levels. As with deconvolution, an excellent way to direct the action to the right areas is to use a star mask to protect the background and / or very bright stars.

RGB Processing

RGB Combination

RGB or color processing deviates from luminance processing in two ways: color calibration and sharpening. After running a gradient removal tool on the individual filtered image files, you are ready to combine the images into a single color image. Maxim, Nebulosity and PI all have similar tools that combine separate monochrome files into a color image. They each prompt for the three files for the RGB channels. Some also have the ability to combine the luminance at this stage, but resist the temptation for a little longer. Most offer controls on channel weighting too, which can be useful to achieve a particular predetermined color balance. Some imagers photograph a white surface in diffuse sunlight and determine the weightings that give equal R, G and B values. PI has the ChannelCombination tool and Maxim has its Combine Color tool, which is able to perform a number of alternative combinations from monochrome files.

fig123_13.jpg

fig.13 Maxim DL’s channel combine tool can either assemble RGB or LRGB files. It provides further options for scaling and luminosity weighting during LRGB combination. In this example, I am using a predetermined RGB weighting and checked the background auto equalize to achieve a good color balance in the background.

Coarse Color Calibration

If you have low light pollution in your area, all may go well, but it is more likely the first attempt will look something like fig.11. Here, heavy light pollution dominates the red and green channels, even though the filters exclude the main low-pressure sodium lamp emission. PixInsight and other programs do have tools to remove color casts, although these may struggle in an extreme case and leave behind a residual orange or red bias. One tool in PI that can assist, by balancing the channels, is LinearFit. It balances the channel histograms and is perhaps best thought of as determining a compatible set of end points and gains for each of the channels. (Note, the images are still linear.) To do this manually would require some dexterity, alternating between changing the shadow point in a traditional levels tool and the channel gain. The more convenient LinearFit tool operates on separate files rather than the channels within the RGB file. So, back up to before the ChannelCombination action, or, if the image is from a one-shot color camera, use the PI ChannelExtraction tool to separate the RGB image into its constituent channel files. Select a reference image (normally the channel with the best signal) and apply the tool to the other two files, thereby equalizing them. The result is remarkably neutral and any residual color issues are easily handled with PI’s BackgroundNeutralization and ColorCalibration tools or their equivalents in Nebulosity and Maxim DL.
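
The effect of LinearFit can be approximated by a straight-line least-squares fit of each channel to the reference, applied as a gain and offset; a hedged sketch (PI's own implementation is more robust than this):

import numpy as np

def linear_fit(target, reference):
    """Least-squares fit so that a*target + b approximates the reference; return the rescaled target."""
    a, b = np.polyfit(target.ravel(), reference.ravel(), 1)
    return a * target + b

# with the red stack as reference (as in fig.12):
# g_matched = linear_fit(g, r)
# b_matched = linear_fit(b, r)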

fig123_14.jpg

fig.14 In PI, the BackgroundNeutralization tool is simple and effective. In this case, the reference background is a preview, taken from a small area of background and unaffected by nebulosity. If one chose the entire image as a reference, the tool might compensate for the Ha emissions and turn the background green. The upper limit is carefully selected so as to exclude pixels associated with dim stars.

Neutralize the Background

Color calibration occurs in two distinct steps, the first of which is to neutralize the background. The second step is then to change the gain of the separate color channels so that a pleasing result is obtained. In this particular case we are color calibrating for aesthetics rather than for scientific purposes.

It makes life considerably easier if the background is neutral before image stretching. Any slight color bias will be amplified by the extreme tonal manipulation during stretching. The sequence of gradient removal and background neutralization gives the best chance of a convincing deep sky background. It only takes very small adjustments and is best done at a high bit depth (16-bit is a minimum). The mainstream astro imaging programs have similar tools that analyze the darkest tones in the RGB image and adjust their level setting so that red, green and blue occur in equal measure. The best tools do this from a selected portion of the image that contains an area of uncontaminated background. There is often a setting that discriminates between background levels and faint nebulosity. In the ongoing example for this chapter, a small area away from the nebulosity and bright stars is selected as a preview. The tool’s upper limit level is set to just above the background value and the scale option selected. In this case the BackgroundNeutralization tool made a very small change to the image following the LinearFit application. Maxim DL and Nebulosity have simpler tools for removing background color. Nebulosity initiates with automatic values that are easily overridden and has convenient sliders for altering shadow values. Although it shows the histograms for the RGB channels, these would benefit from a zoom function to show precise shadow values for the three channels.
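
Conceptually the tool just equalizes the per-channel level in a patch of clean background; a minimal sketch (rgb is an H×W×3 array, patch a slice of star-free sky, both linear and normalized 0–1):

import numpy as np

def neutralize_background(rgb, patch):
    """Offset each channel so the background patch has equal median R, G and B."""
    bg = np.median(patch.reshape(-1, 3), axis=0)    # per-channel background level from the patch
    target = bg.mean()                              # common neutral level to shift towards
    return np.clip(rgb - bg + target, 0.0, 1.0)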

fig123_15.jpg

fig.15 In Maxim DL, there are two key parameters that set the background neutralization: the threshold of what counts as “background” and a smoothing function that softens the transition between the background and surrounding objects. The threshold can be entered manually (after evaluating a range of background values over the entire image) or with the mouse, using the sampling cursor of the information tool.

fig123_16.jpg

fig.16 Not to be left out, Nebulosity has a similar control for neutralizing the background. The screen stretch in the top right corner is deliberately set to show faint details and the tool has manual sliders for each channel. The effect is shown directly on the image and in the histograms. When the tool is opened, it is loaded with its own assessment.

Neutral Color Balance

The right color balance is a matter of some debate. There are two schools of thought on color calibration: to achieve accurate color by calibrating on stars of known spectral class, or to maximize the visual impact of an image by selecting a color balance that allows for the greatest diversity in color. My images are for visual impact and I choose the second way to calibrate my color. In PI, I essentially choose two regions of the image with which to calibrate color; one is a portion of background (for instance the preview used for the background neutralization task) and the second is an area that contains some bright objects. These can be a number of unsaturated stars or perhaps a large bright object such as a galaxy. The idea is that these objects contain a range of colors that average to a neutral color balance. In the example opposite, I chose an area containing many stars in the region of the loose cluster for the white balance and a second preview for background levels.
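
The underlying arithmetic is again simple: after subtracting the background reference, scale each channel so the white-reference region averages to neutral. A hedged sketch (PI's ColorCalibration is considerably more sophisticated, with structure detection and range rejection):

import numpy as np

def color_calibrate(rgb, white_patch, bg_patch):
    """Per-channel gains that make the white-reference region neutral above the background."""
    white = np.mean(white_patch.reshape(-1, 3), axis=0)
    bg = np.mean(bg_patch.reshape(-1, 3), axis=0)
    gains = (white - bg).mean() / (white - bg)      # push the reference towards equal R, G, B
    return np.clip((rgb - bg) * gains + bg, 0.0, 1.0)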

fig123_17.jpg

fig.17 Again in PixInsight, the ColorCalibration tool makes small adjustments to the image. Two previews are used to sample a group of stars whose overall light level will be neutralized and a reference background level. In this case it is using stars as light references and the tool is set to detect structures. The two monochrome images at the bottom confirm what the tool is using for computing the white point and background references within the preview selections. This image just needs a little noise reduction and then it is ready for the world of non-linear image processing.

A third alternative is to determine the RGB weighting independently by imaging a neutral reference (for instance a Kodak grey card) in daylight and noting the relative signal levels of the three channels. The weighting factors are calculated to ensure that all three channels have similar values. With a strong signal (around half the well depth), an approximate weighting for each channel is the reciprocal of the ratio of its average pixel value to that of the brightest of the three channels.
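
As a worked example with hypothetical numbers: if the grey card averages 32,000 ADU in red, 26,000 in green and 21,000 in blue, red is the brightest channel and the weights are approximately R = 1.00, G = 32,000/26,000 ≈ 1.23 and B = 32,000/21,000 ≈ 1.52. In code:

import numpy as np

adu = np.array([32000.0, 26000.0, 21000.0])   # hypothetical grey-card averages for R, G, B
weights = adu.max() / adu                     # reciprocal of each channel's ratio to the brightest
# weights -> approximately [1.00, 1.23, 1.52]; apply to G and B during channel combination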

Noise Reduction on Color Images

One of the most unpleasant forms of noise is color noise or, to use the correct nomenclature, chrominance noise. If the screen-stretched RGB image looks noisy, it can help to apply a little noise reduction at the pixel level to smooth things out before image stretching. In this particular image, there is an obvious bias pattern noise present that the calibration process has not entirely removed. (It was later traced to a fault in the camera firmware and was fixed with a software update.) A small amount of noise reduction was traded off against definition to disguise the issue. Depending on the quality of the data, some RGB images may not require noise reduction at this stage; it is entirely a case of experience and judgement.

fig123_18.jpg

fig.18 The latest noise reduction tool in PixInsight is TGVDenoise. It can be used on linear and non-linear images. The default settings are a good starting point for a stretched (non-linear) image but are too aggressive for this linear image. By backing off the strength by an order of magnitude and increasing the edge protection (the slider is counter-intuitive) the result is much better. As with most tools, it is quickly evaluated on a small preview before applying it to the full image. This is not the last time noise reduction is applied. Next time it will be to the stretched LRGB image. The non-linear stretching changes the noise mode and requires different noise reduction parameters. Following the noise reduction, the green pixels in the background are removed with the simple SCNR tool, set to green and with default settings.

Removing Green Pixels

Astrophotographers have a thing about green. As a color it is rarely seen in deep sky objects and, as such, it should not appear in images either. The eye is very sensitive to green and strong green pixels should be removed from the image and replaced with a neutral pixel color. Photoshop users have the Color Range tool to select a family of green pixels, followed by a curves adjustment to reduce the green content. PixInsight users have the Subtractive Chromatic Noise Reduction or SCNR tool. This is very simple to use and, depending on the selected color, identifies noise of that color and neutralizes it.
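
The default “average neutral” protection in SCNR can be sketched as clamping green towards the mean of red and blue; a hedged approximation in Python:

import numpy as np

def scnr_green(rgb, amount=1.0):
    """Average-neutral SCNR sketch: limit green to the mean of red and blue."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g_clamped = np.minimum(g, 0.5 * (r + b))
    out = rgb.copy()
    out[..., 1] = (1.0 - amount) * g + amount * g_clamped
    return out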
