Non-Linear Image Processing

These are the rules: There are no rules, but if it looks right, it probably is.

We have reached a critical point; our images are ready for initiation into the non-linear world. Their backgrounds are flat and neutral, and the images have been deconvolved, color balanced and separated into distinct luminance and color channels. The next step is to apply permanent tonal manipulations to reveal the details that lie within. Up until now many of the adjustments have been determined more by process than judgement, but from this point onwards, individual interpretation takes over to determine the look and feel of the final image.

fig124_1.jpg

fig.1 The HistogramTransformation tool in PixInsight is the mainstay stretching function. The real-time preview and the ability to zoom into the histogram allow accurate settings for end and mid points. It is not a good idea to clip pixels. The tool reports the number and percentage of pixels that will be clipped with the histogram setting. When the settings are right, apply the tool by dragging the blue triangle to the main image.

To recap, the starting point for non-linear processing is a set of fully calibrated, registered images with even backgrounds. The luminance image is carefully sharpened, emphasizing detail, reducing star size and minimizing background noise. The color image has a neutral background and an overall color balance that gives maximum latitude for color saturation on all channels. At this point many astrophotographers choose to abandon specialist imaging programs and adopt conventional Photoshop, GIMP and other photo editing suites. Since there are already many resources that describe non-linear processing in Photoshop, it makes sense here to continue the workflow in PixInsight, with its unique method of working, highlighting substitute methods where appropriate. Other alternative workflows and unique processing techniques are explored at the end of the chapter and also occur in the practical sections.

During the linear processing phase a temporary stretch function is often applied to an image to indicate an effect, rather than to permanently alter the image. This is about to change with the HistogramTransformation and CurvesTransformation tools in PixInsight.

Stretching

Histogram Stretching

Presently we have at least two image files, a monochrome luminance file and an RGB color file, and we stretch these non-linearly and independently before combining them. Non-linear stretching involves some form of tonal distortion; changing the range of a file via an endpoint adjustment still results in a linear file. In PixInsight, stretching is achieved most conveniently with the HistogramTransformation tool (fig.1). In essence, the midtones slider is dragged towards the extreme left-hand side, creating a large increase in shadow contrast. It is important not to change the right-hand highlight slider; this ensures the transformation does not clip bright pixels. Drag the left-hand slider, which controls the black point, very carefully towards the left-hand foot of the image histogram, stopping just before the readout shows clipped pixels. (The count and percentage of clipped pixels are shown in the tool dialog box.) Putting aside clipped pixels, this manipulation is a purely subjective adjustment, judged from the screen appearance. For that reason it is essential to disable the screen transfer function (STF) to assess the effect accurately. The HistogramTransformation tool is one of those with a real-time preview to assess the adjustment. It is also easier to obtain the desired effect by making the image stretch in two passes, with the first making about 80% of the total adjustment; then reset the histogram tool and perform a second, lesser stretch with greater finesse and finer control. To place the sliders precisely, home in on the shadow foot of the histogram with the tool's zoom buttons. Repeat this exercise for both the color image and the luminance image, noting that they may need different levels of adjustment to obtain the desired result. Like many other users, you may find the screen transfer function automatically provides a very good approximation to the required image stretch. A simple trick is to copy its settings across to the HistogramTransformation tool by dragging the blue triangle from the STF dialog across to the bottom of the histogram dialog box. Voilà!
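
For those who like to see the arithmetic, the sketch below mimics a basic histogram stretch on a normalized (0–1) image: a black-point clip followed by a midtones transfer function of the kind used by HistogramTransformation and the STF. It is an illustration only; the black-point percentile and midtones value are assumptions, not recommended settings.

```python
import numpy as np

def midtones_transfer(x, m):
    """Rational midtones transfer function; m is the midtones balance (0-1).
    Values of m below 0.5 brighten the image (a non-linear stretch)."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def histogram_stretch(img, black_point, midtones):
    """Clip the black point, rescale to 0-1, then apply the midtones curve."""
    clipped = np.clip((img - black_point) / (1.0 - black_point), 0.0, 1.0)
    return midtones_transfer(clipped, midtones)

# Example: place the black point just below the histogram foot and report
# how many pixels would be clipped, as the tool dialog does.
img = np.random.gamma(2.0, 0.002, (512, 512))   # stand-in for a linear image
black = np.percentile(img, 0.05)                # illustrative choice only
print(f"{100.0 * np.mean(img <= black):.2f}% of pixels clipped")
stretched = histogram_stretch(img, black, midtones=0.05)
```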

This is just one way to perform a non-linear stretch in one application. If you prefer, similar results can be achieved using repeated curve adjustments in Photoshop, Nebulosity, Astroart or Maxim DL. Maxim DL also has a histogram remapping tool, which you may find useful. In any case, one thing is essential: the image must have 16-bit or preferably 32-bit depth to survive extreme stretching without tone separation (posterization). PI defaults to 32-bit floating point images during the stacking and integration process. If an image is exported to another application, it may require conversion to 16-bit or 32-bit integer format for compatibility.

When I started astrophotography I tried doing everything on my existing Mac OS X platform. My early images were processed in Nebulosity and I used the Curves tool, DDP and the Levels / Power Stretch to tease the detail out of my first images. To achieve the necessary stretch, the Bezier Curves and the Power Stretch tools were repeatedly applied to build up the result. Without masks to protect the background and stars, many of my early images featured big white stars and noisy backgrounds, which then required extensive work to make them acceptable.

Digital Development Process

No discussion on stretching can ignore this tool. DDP presents an interesting dilemma. It is a combined sharpening and stretching tool originally designed to mimic the appearance and sensitivity of photographic film. When applied to a linear image, it gives an instant result, and for many, the allure of this initially agreeable image dissuades the astronomer from further experimentation. There are versions of this tool in Nebulosity, Maxim and PixInsight, to name a few. It is very useful for quickly evaluating an image's potential. The results, though, are very sensitive to the background threshold and sharpening settings and often lack refinement, including excessive background noise and star bloat. The sharpening algorithms that sometimes accompany this tool are also prone to produce black halos around bright stars. With a little patience and experimentation, superior results are obtained by more sensitive methods. It is interesting to note that PixInsight has relegated its implementation of the tool to the obsolete section.
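
The classic DDP curve (after Okano) is essentially a hyperbolic stretch of the form y = x / (x + a), where a plays the role of the background threshold, usually paired with some form of unsharp masking. The sketch below is a rough approximation of that idea for a normalized image; the parameter values and the simple Gaussian unsharp mask are illustrative assumptions, not the exact algorithm of any particular program.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ddp_like(img, a=0.01, sharpen_sigma=1.5, sharpen_amount=0.5):
    """Rough digital-development-style stretch: a hyperbolic tone curve plus
    a mild unsharp mask. 'a' behaves like the background threshold setting."""
    base = img / (img + a)                                  # hyperbolic stretch
    blurred = gaussian_filter(base, sharpen_sigma)
    sharpened = base + sharpen_amount * (base - blurred)    # unsharp mask
    return np.clip(sharpened, 0.0, 1.0)
```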

fig124_2.jpg

fig.2 The basic building blocks of the non-linear workflow for an LRGB image. It follows on from the workflow in the previous chapter on Linear Processing and assumes separate luminance and color exposures. This general process applies equally to many applications, with a little variation.

fig124_3.jpg

fig.3 Maxim DL also has an option to use the screen stretch settings to set the endpoints on a logarithmic or non-linear gamma curve. This has produced a good initial stretch to the entire image, in this instance, on the luminance file.

Masked Stretching

Some images do not take kindly to a global image stretch; those with faint nebulosity and bright stars are particularly troublesome. The same manipulation that boosts the faint nebulosity emphasizes the diffuse bright star glow and increases their size. Any form of image stretching will enlarge bright stars and their diffuse halos to some extent. It is an aesthetic choice, but most agree that excessive star size detracts from an image. Some star growth is perfectly acceptable but if left unchecked, it alters the visual balance between stars and the intended deep sky object. Some workflows tackle star size later on in the imaging process with a transform tool, for example the MorphologicalTransformation tool in PI or, one at a time, using the Spherize filter in Photoshop. An alternative method preferred by some is to stretch a starless image, either by masking the stars before applying the stretching tool or by removing them altogether. These advanced techniques take some practice and early attempts will likely produce strange artefacts around bright stars.

Not all applications have the ability to selectively stretch an image. Maxim DL 5 (fig.3) and Nebulosity presently only support global application and would, for instance, require an external manipulation to remove stars from the image prior to stretching. PixInsight and Adobe processes support selective application with masks. The latest version of PI introduces the new MaskedStretch tool. In our example image, it produces artefacts on bright stars regardless of setting; it is more effective, however, when used on a partially stretched luminance file. In this case, apply a medium-strength HistogramTransformation curve to the luminance channel and then follow with a masked stretch. Altering the balance between the two stretches gives a range of effects. Applied to our ongoing example, this gives better definition in faint details and less star bloat than the same luminance file simply processed using the HistogramTransformation tool in combination with a star mask. The results are subtly different in other ways too: I prefer the balance between the nebulosity and the star field in the luminance file with masked stretching. Initial trials show an improvement in the nebulosity definition at the expense of a little noise.
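
Conceptually, a masked stretch blends a stretched and an unstretched version of the image, weighted by a mask, so that protected areas (typically bright stars) receive less of the stretch. The sketch below shows that blend in its simplest form; PixInsight's MaskedStretch is an iterative refinement of the same idea, so treat this only as an illustration of the principle.

```python
import numpy as np

def masked_blend(original, stretched, mask):
    """Blend a stretched image with the original through a 0-1 mask.
    Where the mask is 1 (e.g. faint nebulosity) the full stretch applies;
    where it is 0 (e.g. bright star cores) the original is preserved."""
    mask = np.clip(mask, 0.0, 1.0)
    return mask * stretched + (1.0 - mask) * original
```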

fig124_4.jpg

fig.4 Maxim DL, in common with other astro imaging programs, has a digital development process tool. This stretches the image, with options for smoothing and sharpening. In the case of Nebulosity 3 and Maxim DL 5, there is no mask support.

fig124_5.jpg

fig.5 A combination of a medium HistogramTransformation and a MaskedStretch produces a pleasing balance of detail in the nebulosity and with less star bloat. In the two previews above, the result of a standard stretch is on the left and the masked stretch on the right.

Star Reduction and Removal

Even a simple star field requires stretching to make the fainter stars visible. It is likely that even with masked stretching, bright stars require taming. There are several tools that identify and shrink singular bright objects: in PI this is the MorphologicalTransformation tool. Used in conjunction with a star mask and set to erode, it shrinks stars by a set number of pixels. Star removal is effected by applying the MultiscaleMedianTransform tool, with the first 4–5 layers disabled, through the same star mask. (Star processing is covered in detail later on in its own chapter.)
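
Grayscale erosion is the operation behind this kind of star reduction: each pixel is replaced by the minimum of its neighborhood, which nibbles away at small bright features. The hedged sketch below applies it selectively through a star mask; the 3x3 structuring element and the mask blend are illustrative assumptions rather than the exact behavior of the MorphologicalTransformation tool.

```python
import numpy as np
from scipy.ndimage import grey_erosion

def shrink_stars(img, star_mask, size=3, iterations=1):
    """Erode the image (replace each pixel with the local minimum) and keep
    the result only where the star mask is bright."""
    eroded = img.copy()
    for _ in range(iterations):
        eroded = grey_erosion(eroded, size=(size, size))
    mask = np.clip(star_mask, 0.0, 1.0)
    return mask * eroded + (1.0 - mask) * img
```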

Star Removal

One extreme approach is to remove the stars from the image, stretch what remains and then put the stars back. This is particularly useful with images that are exposed through narrowband filters. The extreme stretches involved to tease details out of SII and OIII wavelengths create ugly stars at the same time, often with unusual colors. In fact, some practitioners permanently remove stars from their narrowband files and, in addition to the many hours of narrow band exposures, they expose a few RGB files specifically for the star content. The RGB files are processed to optimize star shape and color and the stars are extracted from the file and added to the rainbow colors of the processed narrow band image.

There are several ways to temporarily erode stars until they disappear. In Photoshop this is the Dust & Scratches filter, applied globally. Its original purpose is to fill in small anomalies with an average of the surrounding pixels. Luckily for us it treats stars as "dust" and, in simplistic terms, if used iteratively with different radii, diminishes star size until they disappear. The process is a little more involved than that and some process workflows have dozens of individual steps. This is a prime candidate for a Photoshop action and indeed, J-P Metsävainio at www.astroanarchy.blogspot.de has a sophisticated Photoshop plugin to do just that (donationware). Another Photoshop method is to try the Minimum filter. After creating a star mask using the Color Range dropper on progressively dimmer stars, expand the selection by one pixel and feather by two. To finish off, apply the Minimum filter with its radius set to 1 pixel.

In PI, the MorphologicalTransformation tool mentioned earlier has a lesser effect than the Dust & Scratches filter, even after repeated application. A third alternative is a low-cost utility called Straton (fig.7), available from the website www.zipproth.com. This is very effective and convenient to use. With the stars removed, a star-only image is created from the difference between the original image and the starless one. With two images, stretching and enhancement are easily tailored to the subject matter in each case, with less need for finessed masking. Once completed, the two images are added back together. Combining images is not that difficult: to add or subtract images, simply select the linear dodge or subtract blending modes between Photoshop layers containing the star and starless images. Alternatively, image combination is easily achieved with the PixelMath tool in PI or Maxim DL, either to combine the two images or to generate their difference. In this case a simple equation adds or subtracts pixel values to create a third image.
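
The arithmetic behind this split-and-recombine workflow is plain image subtraction and addition, which is exactly what the text describes doing with PixelMath or Photoshop blending modes. A minimal numpy sketch, assuming normalized images:

```python
import numpy as np

def split_stars(original, starless):
    """The star-only image is the difference between the original and the
    starless version (clipped so it stays non-negative)."""
    return np.clip(original - starless, 0.0, 1.0)

def recombine(processed_starless, processed_stars):
    """After stretching each part to taste, add them back together."""
    return np.clip(processed_starless + processed_stars, 0.0, 1.0)
```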

Enhancing Structures

Some tools for enhancing structures work better with non-linear images. This is a good time to improve their definition and, if needed, reduce background noise at the same time. In PI the tool of choice is the HighDynamicRangeMultiscaleTransform tool, or HDRMT for short. (After a mouthful like that, the acronym is not so bad after all.) Like many other PI tools, the HDRMT tool is able to operate at different imaging scales. The user has complete control over the enhancement or suppression of image structures of a particular size. This may be a unique treatment of the spiral arms of a galaxy, emphasizing detail in a dust lane or increasing the contrast of large swathes of nebulosity. Enhancement also increases noise and the HDRMT tool is optimally deployed in association with a luminance mask to protect sensitive areas. It is a good time to recall that it is the luminance file that provides the definition in an image, and to go easy when improving the definition of the RGB file. Since each scale has its own unique combination of noise reduction, emphasis and iterations, there are a great many possible combinations. It takes multiple quick trials to achieve the right result. Some users apply HDRMT after the luminance channel has been combined with the RGB information, others before. (If it is only applied to the lightness information in the LRGB file, the effect is similar.) Once established, the tool settings are excellent starting points for further images from the same camera and only require minor adjustment. This is one of those times when a saved project that records tool settings (or a large notebook) is particularly useful the second time around with a new image from the same setup.
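
HDRMT itself is a multiscale wavelet operation, but its visible effect, compressing large-scale brightness variations while retaining small-scale detail, can be roughly approximated in a few lines. The sketch below splits the luminance into a large-scale component and a detail residual, compresses only the former and recombines; the Gaussian split and the compression exponent are assumptions for illustration, not the actual algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hdr_compress_like(lum, large_sigma=64, gamma=0.5):
    """Compress large-scale structure, keep fine detail.
    'large_sigma' sets the structure size treated as 'large'."""
    large = gaussian_filter(lum, large_sigma)                 # large-scale component
    detail = lum - large                                      # everything smaller
    compressed = np.power(np.clip(large, 0.0, 1.0), gamma)    # flatten big bright areas
    return np.clip(compressed + detail, 0.0, 1.0)
```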

fig124_6.jpg

fig.6 Stretching causes stars to increase in apparent size. All images have to be stretched to some extent to show dim structures and stars. In the example above, an application of the MorphologicalTransformation tool shrinks the star size. The example shows the star field before and after. The highlighted brown preview tabs indicate a mask is in use; in this case, a simple star mask.

fig124_7.jpg

fig.7 The Straton star removal utility has a simple user interface with few adjustments. In practice, the application identifies what it believes are stars and when the cursor is placed over the image, it highlights this with a green coloring in the magnified pixel view. If the program believes it is a nebula, the bright pixels are white or grey in the pixel view. The default nebula detection setting considered the brighter parts of the bubble nebula a star. This was fixed with a small adjustment. Straton can currently accept up to 16-bit FITS and TIFF files.

fig124_8.jpg

fig.8 The HDRMultiscaleTransform tool changes the dynamic range of an image and creates what appears to be a flat image with lots of local detail. When used in combination with a subtle adjustment curve, the glowing blob of a galaxy or nebula is changed into a delicate structural form. This detailed but flat image is easily enhanced with a locally-applied contrast increase.

fig124_9.jpg

fig.9 Enhancing structures and sharpening both increase local contrast. In the case of the new MultiscaleLinearTransform tool, it makes a distinct difference in this example. As the name implies, it can work at different imaging structure scales and has the option to increase (or decrease) the emphasis of a particular image scale and apply selective noise reduction at that scale too. This is a multipurpose tool and there are many further options including deringing and linear masking to improve noise reduction when this tool is applied to linear images.

Increasing Saturation

When an RGB image is combined with a luminance file to make an LRGB combination, it will likely reduce the color saturation at the same time. Boosting color saturation is an ongoing activity at various stages during non-linear processing and another case of little and often. As the luminance of an image increases, the differences between the R,G & B values diminish and the saturation decreases as a result. Once color information is lost, lowering the brightness of say a white star only turns it grey. For that reason it is essential to maintain good color saturation throughout your workflow. It is easy to be caught out though; it is not just the stretching tools that increase star brightness, it is a by-product of any tool that sharpens or alters local contrast. The brightened areas have lower saturation as a result of contrast enhancement. (Noel Carboni sells a suite of Photoshop actions designed for astrophotography. One of these actions boosts star color by using the color information on unsaturated star perimeters to color the bleached core.)

In practice, boost the color saturation of the RGB image a little before combining it with the luminance information. Although there is a saturation tool in PI, the CurvesTransformation tool, with the saturation option enabled, provides more control over the end result. Increasing color saturation also increases chrominance noise and this tool is best used together with a luminance mask that protects the background. When used in this manner, the tool's graph is a plot of input versus output saturation. Placing the cursor on the image indicates the level of saturation at that point on the curve. It is equally important to ensure that the individual red, green and blue channels do not clip (as well as the overall luminance value). If they do, the image loses color differentiation and has an unnatural "blotchy" appearance. If in doubt, click on the image to reveal the RGB values at that point. In the case of the CurvesTransformation tool, it additionally indicates that point's position on the tool's graph.
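
A saturation curve of this kind can be pictured as a transfer function applied to the saturation channel alone, blended in through a luminance mask so the dark background is left untouched. The sketch below uses an HSV round-trip and a simple power curve as the "gentle curve"; both are illustrative stand-ins for the CurvesTransformation saturation option, not its implementation.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def boost_saturation(rgb, lum_mask, gamma=0.8):
    """Raise saturation with a power curve (gamma < 1 boosts), applied only
    where the luminance mask is bright, to avoid amplifying chrominance noise."""
    hsv = rgb_to_hsv(np.clip(rgb, 0.0, 1.0))
    boosted = hsv.copy()
    boosted[..., 1] = np.power(hsv[..., 1], gamma)   # saturation transfer curve
    mask = np.clip(lum_mask, 0.0, 1.0)[..., None]
    return hsv_to_rgb(mask * boosted + (1.0 - mask) * hsv)
```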

fig124_10.jpg

fig.10 It is a good idea to boost the saturation of the color image a little prior to combination with the luminance channel. Here, the RGB and Luminance previews are at the top and a CurvesTransformation, set to saturation, is applied to the preview at the bottom, using a gentle curve.

Photoshop Techniques for Increasing Color Saturation

Photoshop is at home with color images and has many interesting methods to improve color saturation. They make use of the color blending mode and layers to enhance image color. Many of these techniques follow the simple principle of blending a slightly blurred, boosted color image with itself, using the color blending mode. The blurring reduces the risk of introducing color noise into the underlying image. The differences between the methods lie in how the boosted color image is achieved.

One subtle solution is to use a property of the soft light blending mode. When two identical images are combined with a soft light blending mode, the image takes on a higher apparent contrast, similar to the effect of applying a traditional S-curve. At the same time as the contrast change, the colors intensify. If the result of this intensified color is gently blurred and combined with another copy of the image using the color blend mode, the result is an image with more intense colors but the same tonal distribution. In practice, duplicate the image twice, change the top layer blending mode to soft light and merge with the middle layer. Slightly blur this new colorful and contrasty image to deter color noise and change its blending mode to color. Merge this with the background image to give a gentle boost to color saturation and repeat the process to intensify the effect. Alternatively, for an intense color boost, substitute the color burn blending mode for the soft light blending mode on the top layer. If that result is a little too intense, lower the opacity of the color layer before merging the layers.

The other variations use the same trick of blending a slightly blurred boosted color image with itself. At its simplest, duplicate the image, apply the vibrance tool (introduced into Photoshop several years ago), blur and blend as before using the color mode. Another more involved technique, referred to as blurred saturation layering, works on separate luminance and color files using the saturation tool. The color saturation tool on its own can be a bit heavy-handed and will introduce color noise. This technique slightly lowers the luminance of the RGB file before increasing its saturation and then combining it with an unadulterated luminance file. In practice, with the RGB file as the background image, create two new layers with the luminance image file. Set the blending mode of each monochrome image to luminosity, blur the middle layer by about a pixel and reduce its opacity to 40%. Merge this layer with the RGB layer beneath it and boost its color saturation to taste.

Both PixInsight and Photoshop exploit the LAB color mode and its unique way of breaking an image into luminosity and two color difference channels. Simply put, if equal and symmetrical S-curves are applied to the two color difference channels, a and b, the effect is to amplify the color differences while keeping the overall color balance. PixInsight flits between color modes and uses this technique behind the scenes to increase color saturation. Photoshop users have to use a procedure: in practice, convert the image mode to LAB, duplicate it and select the top layer. As before, change the blending mode to color and apply a small Gaussian blur to it. Open its channel dialog and select the a channel. Apply a gentle S-curve adjustment to it. (Use the readout box in the Curves dialog to make equal and opposite adjustments on each side of the center point.) Save the curve as a preset and apply it to the b channel as well. As before, the trick is to blend a slightly blurred, saturated image with the original image, using the color blend mode. With the LAB technique, the steeper the curve as it crosses the middle peak of the a and b histograms, the greater the color differentiation.
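
The LAB trick can be written out directly: apply the same symmetric adjustment about zero to the a and b channels, so color differences grow while neutral greys (a = b = 0) stay put. The sketch below uses scikit-image for the conversions and a simple symmetric gain in place of a drawn S-curve; the gain value is an illustrative assumption.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def lab_saturation_boost(rgb, gain=1.3):
    """Scale the a and b channels symmetrically about zero. A gentle S-curve
    could be substituted for the linear gain; either way, treating a and b
    equally preserves the overall color balance."""
    lab = rgb2lab(np.clip(rgb, 0.0, 1.0))
    lab[..., 1] *= gain    # a channel (green-magenta axis)
    lab[..., 2] *= gain    # b channel (blue-yellow axis)
    return np.clip(lab2rgb(lab), 0.0, 1.0)
```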

Combining Luminance with Color

Once the files are fully stretched it is time to combine the color information with the sharpened and deconvolved luminance data. In the particular case of separate L and RGB files, since the luminance and color information have been separately processed and stretched, as well as acquired with different exposures, the general image intensity may be quite different. For best results, the luminance data in both should be balanced before combining the images. For instance, if the luminance signal is substantially stronger than the inherent luminance in the RGB file, the combination has weak colors. If the ScreenTransferFunction settings are used in each case to stretch both L and RGB images, the two images should have a similar balance. If not, the distributions require equalizing with an adjustment to the endpoints and gain. Alternatively, the LRGBCombination tool in PixInsight has two controls that adjust the lightness contribution and the color saturation of the image. Moving these controls to the left increases the lightness contribution and saturation respectively. PI's LRGBCombination tool's default settings may produce a bland result; a boost in saturation and a slight decrease in lightness bring the image to life. In practice, the luminance file is selected as the source image for L and the R, G and B boxes are unchecked. The tool is then applied to the RGB image. This has the effect of replacing its luminance information with a combination of the inherent luminance of the RGB image and the separate luminance file.

An alternative method is to deconstruct the RGB file, extract its luminance and balance it to the stand-alone luminance file, assemble it back into the color file and then combine the separate luminance file with the RGB file. Thankfully this is easier than it sounds and uses the LinearFit tool and the color mode conversions in PI. In practice, extract the luminance channel from the RGB file with the tool of the same name and open the LinearFit tool. Select the original luminance file as the reference image and apply the tool to the extracted luminance channel. This balances the two luminance histograms. The RGB file is now essentially treated as the a and b color difference channels of a LAB file, and we repatriate it with its new, balanced luminance information.

fig124_11.jpg

fig.11 After boosting saturation, the LRGBCombination tool applies the separate luminance image to the RGB image. The lightness and saturation slider positions are nominally 0.5. In this example, the saturation is slightly increased and the lightness contribution decreased to achieve a better balance between color and detail in the final image. (This was necessary as the luminance values in the RGB and luminance files had been separately stretched and were not balanced beforehand. The text explains how to avoid this and balance the two before combination, achieving a better result with more finesse.) To apply, after selecting the luminance file and deselecting the R, G and B check-boxes, drag the blue triangle of the tool to the RGB image.

To do this, open the ChannelCombination tool (the same one used to assemble the RGB file in the first place) but this time select the CIE L*a*b* color space. Deselect the a and b check-boxes, select the newly adjusted (extracted) luminance file as the L source image and apply it to the RGB file. This effectively re-assembles the color information with a newly balanced luminance file. This luminance file is the original RGB luminance information, balanced tonally to the separate luminance file (the one that was acquired through a luminance filter and that contains all the deconvolved and sharpened information). Returning to the LRGBCombination tool, select the separate luminance file and, as before, combine it with the RGB file. The result, even with default LRGBCombination tool settings, is much improved. If an increase in color saturation is required, drag the saturation slider slightly to the left in the LRGBCombination tool's dialog. This marks another key point in the image processing workflow and it is a good idea to save this file with "LRGB" in its name before moving on to the final tuning stages.
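
The balancing step amounts to a least-squares fit of one luminance distribution onto another, followed by swapping the fitted channel back in as L. The sketch below reproduces that sequence with a simple linear fit and a Lab round-trip; it is a rough stand-in for LinearFit plus ChannelCombination/LRGBCombination, not their actual implementations.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def linear_fit(channel, reference):
    """Find the scale and offset that best match 'channel' to 'reference'
    in a least-squares sense, then apply them."""
    slope, intercept = np.polyfit(channel.ravel(), reference.ravel(), 1)
    return channel * slope + intercept

def replace_luminance(rgb, new_lum):
    """Swap the L channel of an RGB image for a separately processed
    luminance image (expected in the 0-1 range)."""
    lab = rgb2lab(np.clip(rgb, 0.0, 1.0))
    lab[..., 0] = np.clip(new_lum, 0.0, 1.0) * 100.0   # L* runs from 0 to 100
    return np.clip(lab2rgb(lab), 0.0, 1.0)

# Rough LRGB flow, following the text: fit the RGB's own luminance to the
# processed L file, put it back, then combine with the L file itself.
# rgb_lum  = rgb2lab(rgb)[..., 0] / 100.0
# balanced = replace_luminance(rgb, linear_fit(rgb_lum, lum_file))
# lrgb     = replace_luminance(balanced, lum_file)
```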

fig124_12.jpg

fig.12 In the continuing example, vertical 1-pixel-wide striations caused by a problem in the camera were noticeable in the LRGB image. To remove these, the ACDNR tool set to a scale of 1 is applied iteratively to the luminance information. The noise reduction is set by the standard deviation, amount and iterations values. A background threshold value is used in the bright sides edge protection section to prevent faint stars becoming blurred. Experimentation is again the key to success and the real-time preview helps enormously to find the Goldilocks value.

Noise Reduction and Sharpening

Noise Reduction

The example image suffers from excessive bias pattern noise, most noticeable in areas of faint nebulosity, even after calibration. In PixInsight, the ACDNR tool is effective for reducing noise in non-linear images, as is the MultiscaleLinearTransform tool; both can work at a defined structure size. (PixInsight is a dynamic program with constant enhancements and developments; a year ago the ATrousWaveletTransform tool would have been the tool of choice.) In this example, the sensor pattern noise occurs between alternating pixel columns, that is, at the smallest scale. It is selected with the structure size set to 1 and the noise is reduced by applying a weighted average over a 3x3 pixel area. With many noise reduction algorithms, little and often is better than a single coarse setting. This tool has a dual purpose: it can emphasize structure and reduce noise at the same spatial scale. With so many choices, the real-time preview comes into its own and is invaluable when you experiment with different settings and scales to see how the image reacts. In the case of the ACDNR tool, there are further options: it can remove luminance noise (the lightness tab) or color noise (the chrominance tab). It also has settings that prevent star edges from being blurred into the background (Bright Sides Edge Protection).

A good starting point for selective noise reduction, say with the MultiscaleLinearTransform (MLT) tool, is to progressively reduce the noise level at each scale. One suggestion is to set noise reduction levels of 3, 2, 1 and 0.5 for scales 1 to 4 and experiment with reducing the noise reduction amount and increasing the number of iterations at each scale.

fig124_13.jpg

fig.13 A repeat of fig.9, the MLT tool can be used for noise reduction and for emphasizing structures at the same time. Here a combination of mild sharpening and a boost of medium size structures emphasize the nebulosity structure and sharpens up the stars at the same time. The first three layers have a decreasing level of noise reduction. Layers 2, 3 and 4 have a subtle boost to their structures. The tool also has options for deringing and noise sensitive settings that discriminate between good and bad signal differences. For even more control, it can be applied to an image in combination with a mask, to prevent unwanted side effects.

fig124_14.jpg

fig.14 The HDRMultiscaleTransform can also be applied to the LRGB image to enhance structure. In this case, it has been applied to the luminance information in the combined LRGB image with deringing and a lightness mask enabled to avoid unwanted artefacts.

Finally, if green pixels have crept back into the image, apply the SCNR tool to fix the issue, or use the Photoshop process described in the last chapter, with the color range tool and a curve adjustment.

Sharpening

Noise reduction and sharpening go hand in hand, and in PixInsight, they are increasingly applied in combination within the same tool. The MLT tool can be used not only for progressive noise reduction at different scales, but also to boost local contrast at each scale, giving the impression of image sharpening. As with other multi-scale tools, each successive scale is double the last, that is 1, 2, 4, 8 pixels and so on. (There is also a residual scale, R, that holds the remaining large-scale structures within the image.) In the MLT tool, the bias and noise reduction level are individually set at each image scale. When the bias is set to zero, the structures at that scale are not emphasized. The noise settings, on the other hand, potentially blur structures at that scale and de-emphasize them. There is a real-time preview to quickly evaluate the effect of a setting. I often prefer to apply it to a small preview at 50% zoom level and directly compare it with a clone preview. I have no doubt you will quickly discover the undo / redo preview button and its shortcut (cmd-Z or ctrl-Z, depending on platform).
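
The mechanics behind these multiscale tools can be illustrated with a crude à-trous-style decomposition: successive blurs at doubling scales, with each detail layer being the difference between two blurs, plus a residual. A gain above one on a layer acts like a positive bias (sharpening); a gain below one suppresses that scale (noise reduction). The Gaussian stand-in and the per-layer gains below are assumptions for illustration; the real tools use spline kernels, thresholds and iterations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_layers(img, n_scales=4):
    """Split an image into detail layers at scales of roughly 1, 2, 4, 8...
    pixels, plus a residual that carries the large-scale structures."""
    layers, current = [], img
    for k in range(n_scales):
        blurred = gaussian_filter(current, sigma=2.0 ** k)
        layers.append(current - blurred)    # detail at this scale
        current = blurred
    return layers, current

def recombine(layers, residual, gains=(0.7, 0.9, 1.2, 1.1)):
    """Recombine with per-scale gains: <1 softens a scale, >1 boosts it.
    The defaults gently soften the two finest scales and lift the medium ones."""
    out = residual.copy()
    for layer, gain in zip(layers, gains):
        out = out + gain * layer
    return np.clip(out, 0.0, 1.0)
```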

A second tool that is used to create the impression of sharpening is the HDRMultiscaleTransform tool. This is applied either to the luminance file or to the luminance information in the LRGB image. In the running example it was applied to the luminance image (fig.8) and by way of comparison to the LRGB image (fig.14).

Curve Shaping and Local Contrast

The HistogramTransformation tool is not the last word in image stretching but it gets close. Curve adjustments, well known to Photoshop users, are the ultimate way to fine-tune the local contrast in an image. Judicious use of an S-curve can bring an image to life and boost faint nebulosity without increasing background noise. This adjustment is used with or without a mask, depending on need. These curve tools have many options, including selective adjustments to one or more color or luminance channels, saturation and hue. These adjustments are often subtle and it is good practice to have an unadjusted duplicate image and a neutral desktop background to compare against. This calls for a properly calibrated monitor and accurate image color profiling. Curve tools exist in most imaging programs. Photoshop has the advantage of applying curves in an adjustment layer together with a mask. Nebulosity creates its curve shapes with a Bezier function, not unlike the drawing tool in Adobe Illustrator. By dragging the two "handles", a wide variety of shapes are possible.

PixInsight has a further tool called LocalHistogramEqualization (you guessed it, LHE) that enhances structures in low contrast regions of the image (fig.15). In effect, it makes a histogram stretch but localizes the result. It is useful for enhancing faint structures in nebulosity and galaxies, although as mentioned before, it potentially reduces the saturation of the brighter parts of the enhanced image. It can be deployed at different kernel radii to select different structure sizes and at different strengths. No single setting may bring about the precise end result, and multiple applications at different settings may be needed. The local adaptive filter in Maxim DL has a similar function but is prone to amplify noise.
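
LocalHistogramEqualization is conceptually close to the well-known CLAHE algorithm: histogram equalization computed over local regions, with a clip limit to restrain the contrast gain. scikit-image ships a CLAHE implementation, so a hedged stand-in gives a feel for the kernel radius versus contrast limit trade-off; the mapping of these parameters onto PI's settings is only approximate.

```python
import numpy as np
from skimage.exposure import equalize_adapthist

def lhe_like(lum, kernel=256, clip=0.02, amount=0.5):
    """CLAHE as a rough analog of LHE. 'kernel' plays the role of the kernel
    radius (structure size), 'clip' the contrast limit, and 'amount' blends
    the result back into the original so the effect stays subtle."""
    equalized = equalize_adapthist(np.clip(lum, 0.0, 1.0),
                                   kernel_size=kernel, clip_limit=clip)
    return (1.0 - amount) * lum + amount * equalized
```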

fig124_15.jpg

fig.15 The LocalHistogramEqualization tool creates micro contrast over a defined scale. In this example it has been applied twice, at a small and medium scale to bring out the tracery within the bubble and the larger gas cloud structures around it.

fig124_16.jpg

fig.16 A selection of curve dialogs from Maxim DL (top left), Photoshop (above) and Nebulosity (left). Nebulosity uses Bezier curves which have the advantage of creating strong curve manipulations with a smooth transfer function.

Enhancing Structures with Photoshop

Whilst on the subject of local contrast enhancement, Photoshop users have a number of unique tools at their disposal, the first of which is the HDR Toning tool. This tool works on 32-bit images and is very effective. Indeed, it is a surprise to find out it was not designed for astrophotography in the first place. It is one of those things that you discover through patient experimentation and exploit for a purpose it was not designed for.

Real High Dynamic Range processing combines multiple exposures of different lengths into a single file. To compress all the tones into one image requires some extreme tonal manipulation. The obvious method simply compresses the image tonally between the endpoints of the extreme exposures; that produces a very dull result. More usefully, the HDR Toning tool in Photoshop can apply local histogram transformations to different areas of the image to increase contrast and then blend these together. On conventional images this creates a peculiar ghostly appearance that is not to everyone's taste. In astrophotography, it is particularly effective since the groups of image elements are often separated by a swathe of dark sky that disguises the manipulation. Just as with the multi-scale operations in PixInsight, local contrast is applied at a certain user-selectable scale. In the example in fig.17 and fig.18, the scale was selected to emphasize the detail within the bubble and outer clouds. The vibrance setting increases color saturation without clipping.

The second Photoshop technique used for enhancing structures uses the high pass filter and layer blending modes. This method is able to emphasize contrast at different scales in a broadly similar manner to the multi-scale processing in PixInsight. At its heart is the high pass filter. When applied to an image on its own, it produces a very unpromising grey image. When you look closer, you can just make out the boundaries of objects, picked out in paler and darker shades of grey. The radius setting in the high pass filter dialog determines what is picked out. The "a-ha" moment comes when you blend it with the original image using the overlay blending mode. Normality is resumed but the image structures are now generally crisper, with enhanced local contrast where there were faint lines of pale and darker grey in the high pass filter output. This enhancement is normally applied in conjunction with a luminosity mask, to protect the background. Just as with multi-scale processing, it is sometimes necessary to repeat the treatment at different pixel radii to show up multiple structures.
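
The high pass and overlay combination can be written out explicitly: the high pass layer is the image minus a blurred copy, re-centered on mid-grey, and the overlay blend darkens where that layer is dark and lightens where it is light. The formulas below follow the commonly published overlay definition, which may differ in detail from Photoshop's exact implementation; the radius and mask handling are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_pass(img, radius=4.0):
    """High pass layer: image minus a blurred copy, centered on 0.5 (mid-grey)."""
    return np.clip(img - gaussian_filter(img, radius) + 0.5, 0.0, 1.0)

def overlay_blend(base, blend):
    """Common overlay blend: multiply in the shadows, screen in the highlights."""
    return np.where(base < 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))

def local_contrast_boost(img, radius=4.0, lum_mask=None):
    """Blend the high pass layer over the original; optionally protect the
    background with a luminance mask (1 = full effect, 0 = untouched)."""
    boosted = overlay_blend(img, high_pass(img, radius))
    if lum_mask is None:
        return boosted
    mask = np.clip(lum_mask, 0.0, 1.0)
    return mask * boosted + (1.0 - mask) * img
```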

fig124_17.jpg

fig.17 When you first open the HDR Toning tool with a 32-bit image, set the method to Local Adaptation and reset the tone, detail and advanced sliders to their mid positions and the edge glow settings at a minimum. The tone curve should be a straight line. The tool settings interact and at first, just play with small changes to get a feel for the effects. The edge glow settings are particularly sensitive and define what image scales benefit from the contrast enhancement.

fig124_18.jpg

fig.18 After 10 minutes of experimentation a combination of a subtle tone curve, increased color vibrancy and increased detail make the image pop. A subtle amount of edge glow gives the nebula’s cloud fronts a 3D quality. In this example, the histogram starts at the midpoint as the TIFF output from PixInsight converts into an unsigned 32-bit integer.

Alternative Processing Options

One-Shot Color (OSC) / RGB

In the case where the image is formed of RGB data only, either from a sensor sandwiched to a Bayer filter (i.e. any conventional digital camera) or through separate RGB filters, there is no separate luminance channel to process. The same guidelines apply though; it is the luminance data that requires sharpening and deconvolution. In this case, extract the luminance information from the stacked RGB file and treat it as a separate luminance file through the remainder of the process. (Each of the astro imaging programs has a dedicated tool for this and, if you are a Photoshop user, you can accomplish the same by changing the image mode to Lab and extracting the lightness (L) channel from the channels dialog.)

Compared to straightforward RGB processing, image quality improves when a synthetic luminance channel is extracted, processed and combined later on. This is good advice for a color camera user. Those who use a filter wheel might be wondering, why not shoot luminance exposures too? After all, many texts confirm that it is only necessary to take full-definition luminance files, that lower resolution color information is sufficient, and that binning the color data gives the added bonus of shorter exposures and an improved signal to noise ratio. There is a school of thought, however, that believes LRGB imaging (where the RGB data is binned 2x2), in a typical country town environment with some light pollution, may be improved upon by using RGB imaging alone, without binning. Unlike the RGB Bayer array on a color camera, separate RGB filter sets for filter wheels are designed to exclude the principal light pollution wavelengths; the yellow sodium emission wavelength falls neatly between the red and green filter responses. The argument proposes that for any given overall imaging time, a better result is obtained by using RGB at 1x1 binning than L (1x1) and RGB (2x2). It reasons that the exposures through the luminance filter include a good deal of light pollution and its associated shot noise, and that this shot noise exceeds the shot noise associated with faint deep sky objects. Over the same overall imaging time, the separate RGB exposures have a better combined signal to noise ratio and, if binned 1x1, have the spatial information to provide a detailed image. At the same time, their narrower bandwidth is less likely to clip highlights on bright stars, as so frequently occurs in luminance exposures.
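
To make the shot-noise part of this argument concrete, a small back-of-envelope calculation helps. The only point of the sketch is the formula: the sky rate sits under the square root alongside the object signal, so a broadband luminance filter that admits more light pollution pays a direct noise penalty. All of the rates and times below are entirely hypothetical placeholders; real values depend on the site, filters and target, which is why a proper back-to-back test is needed.

```python
import math

def snr(object_rate, sky_rate, exposure_s):
    """Shot-noise-limited SNR of a faint source over a sky background:
    object signal divided by the square root of all collected photons."""
    signal = object_rate * exposure_s
    noise = math.sqrt((object_rate + sky_rate) * exposure_s)
    return signal / noise

# Hypothetical photon rates (e-/pixel/s) for the same object and exposure time,
# first through a broadband L filter, then through a narrower color filter.
print(snr(0.5, 8.0, 1800))   # luminance filter: more sky, more shot noise
print(snr(0.5, 2.0, 1800))   # color filter: sodium line largely excluded
```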

fig124_19.jpg

fig.19 The high pass filter in Photoshop can act as a multi-scale enhancement tool. From the top, the original file. Next, the background layer is duplicated twice and the high pass filter (filter>other>high pass) with a radius setting of 4 is applied to the top layer. The blending mode is set to overlay and the whole image sharpens up. In the third box, the two top layers are merged and a luminance layer mask is added. To do this, the background image is copied (cmd-A, cmd-C) and the mask is selected with alt-click. This image is pasted in with cmd-v. (These are Mac OSX keyboard shortcuts. The Windows shortcuts usually use ctrl instead of cmd.) This mask image is slightly stretched to ensure the background area is protected by black in the mask. Clicking on the image shows the final result. In the final image, the stars and nebula details are both sharpened to some degree. In practice, some use a star mask to protect the stars from being affected or remove the stars altogether before applying this technique.

I have tried both approaches from my back yard and believe that separate RGB images processed with a synthetic luminance (channel combination weighted by their noise level) certainly give excellent results with rich star fields and clusters; the star definition and color were excellent. This interesting idea requires more research and two successive clear nights for a proper back-to-back comparison. This is not a frequent occurrence in the UK!

Tips and Tricks

Image Repairs

If things have gone well, image repairs should not be necessary. If some problems remain, Photoshop is in its element with its extensive cosmetic correction tools. In the latest versions it has intelligent cloning tools that replace a selection with chameleon-like camouflage. One such tool uses the new "content-aware" option in the fill tool. The problem area is simply lassoed and filled, ticking the content-aware box. The result is remarkable. A similar tool, particularly useful for small round blemishes, is the spot healing brush. In practice, select a brush radius to match the problem area and select the "proximity match" option before clicking on the problem. These tools were originally designed for fixing blemishes, particularly on portraits. As the T-shirt slogan says, "Photoshop, helping the ugly since 1988"!

fig124_20.jpg

fig.20 If there is no separate luminance exposure, the trick is to create one. After processing the color information, up to the point of non-linear stretching, extract the luminance information and process it separately as a luminance file as in fig.2. (For users of one-shot color cameras, extract the luminance information just before non-linear stretching.) The quality improvement is substantial over RGB-only processing.

Correcting Elongated Stars

In addition to the MorphologicalTransformation tool in PixInsight (one of its options identifies and distorts stars back into shape), Photoshop users have a few options of their own to correct slight star elongation. If the image is just a star field, you may find the following is sufficient: duplicate the image into a new layer, set its blending mode to darken and move the image 1 pixel at a time. This will only work up to a few pixels and may create unwanted artefacts in galaxies or nebulosity. Another similar technique is more selective: in the "pixel offset technique", rotate the image so the elongation is parallel to one axis, duplicate it into another layer and select the darken blend mode. Using the color range tool, select bright stars in the duplicated layer and add to the selection, until most of the stars are identified. Modify the selection by enlarging by 1 or 2 pixels and feather by a few pixels to create a soft-edged selection of all the stars. Now choose the offset filter (filter>other>offset) and nudge by 1 or 2 pixels. Once the desired effect is achieved, flatten the layers and de-rotate.
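
The "darken and nudge" trick is easy to express directly: shift a copy of the image by a pixel and keep, at every position, the darker of the two values, which trims the trailing edge of slightly elongated stars. The numpy sketch below assumes the elongation runs along one image axis (and ignores the wrap-around at the edges that np.roll introduces); in Photoshop the same thing is done with a duplicated layer, the darken blend mode and the offset filter.

```python
import numpy as np

def trim_elongation(img, shift=1, axis=1, star_mask=None):
    """Darken-blend the image with a copy shifted along 'axis' by 'shift'
    pixels. If a star mask is supplied, only masked areas are affected."""
    shifted = np.roll(img, shift, axis=axis)   # edge wrap ignored for illustration
    darkened = np.minimum(img, shifted)
    if star_mask is None:
        return darkened
    mask = np.clip(star_mask, 0.0, 1.0)
    return mask * darkened + (1.0 - mask) * img
```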

Big oblong stars pose a unique problem. One way to fix these is to individually blur them into a circle with the radial blur tool (filter>blur>radial blur) and then reduce their size with the spherize filter (filter>distort>spherize). In Adobe CS6, the image has to be in 8-bit mode for this tool to become available and for that reason, this cosmetic fix should be one of the last operations on an image. This tool will likely distort neighboring stars or move them. If you duplicate the image and apply the filter to the duplicate, you can paste the offending stars back into their correct positions from the background image.

Correcting Colored Star Fringing

Even though the RGB frames are matched and registered, color fringes may occur on stars as a result of focusing issues and small amounts of chromatic distortion. This is a common occurrence on narrowband images too, caused by the significant differences in stretching required to balance each channel. Photoshop users have a few tools with which to tackle these, depending on the rest of the image content.

fig124_21.jpg

fig.21 Photoshop has several cosmetic defect tools. Here is an evaluation of a content-aware fill and a spot healing brush set to “proximity match”. The grey halo around a prominent star in the middle of the bubble is selected by two circular marquees and feathered by a few pixels. The image on the right shows the result of a content aware fill and the one on the left, the spot healing brush. Note the spot healing brush has also pasted in several small stars that can be seen at the 10 o’clock position.

If the color of the fringe is unique, one can select it with the color range tool and neutralize it by adding the opposing color to the selection. If the stars are mixed up with nebulosity of the same color, this technique will also drain the color from the nebulosity. In this case, a star mask may not work, as each star may end up with a small grey halo around it. An alternative solution is to try the chromatic aberration tools in Photoshop (filter>lens correction>custom>chromatic aberration) and adjust the sliders to remove the offending color. Be careful not to go too far, or it will actually introduce fringes.

Extending Faint Nebulosity

When an image has an extended faint signal it is useful to boost this without affecting the brighter elements of the image. The PI tool of choice for emphasizing faint details is the LocalHistogramEqualization process. If set to a large radius, it emphasizes the contrast between large structures rather than working at a pixel level and emphasizing noise. The trick is to apply the LHE to the image through a mask that excludes stars and the brighter areas of the image. This is accomplished by a compound mask, made up of a star mask and one that excludes bright values. Combining these two masks breaks the ice with the PixelMath tool (figs.22, 23). In the ongoing example of the Bubble Nebula, we first check the settings in a small preview and then apply them to the full image to see the overall effect (fig.24).

In the first instance, we use the StarMask tool to make a normal star mask. It should be distinct and tight; there is no need to grow the selection by much, and select a moderate smoothness. If the smoothness is set too high, especially in a dense star field, the softened edges of the mask join up and there is too much protection of the intervening dark sky. Set the scale to ensure the largest stars are included. Having done that and checked the resulting mask file, minimize it for later use. Now open the RangeSelection tool and click on the real-time preview. Increase the lower limit until the brighter areas of nebulosity show up in white. Apply a little smoothness to remove the hard edges and apply this tool to the image. We now have two separate masks and these are combined with PixelMath. You can see in fig.23 that combining the images in this case is a simple sum (or max) of the two images. Ensure the output is not re-scaled, so the result combines both masks and clips them to black and white. (If the output were re-scaled, the mask would be tri-tone: white, black and grey.)
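
The PixelMath combination is just a per-pixel sum or maximum of the two masks, clipped to the 0–1 range so the result remains a simple protection mask. A numpy equivalent, for illustration:

```python
import numpy as np

def combine_masks(star_mask, range_mask, use_max=False):
    """Combine a star mask and a range (brightness) mask into one. Without
    rescaling, the clipped sum behaves like a logical OR of the two protected
    regions; taking the maximum gives much the same result."""
    if use_max:
        return np.maximum(star_mask, range_mask)
    return np.clip(star_mask + range_mask, 0.0, 1.0)
```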

This mask is now applied to the image and inverted, to protect the light areas. With it in place, the subtle red nebulosity is boosted with the LocalHistogramEqualization tool, with a kernel radius set around 100–300 (to emphasize large cloud structures) and the contrast limit between 1 and 3. After checking the settings in a preview window, it is applied to the main image. Side by side, the effect is subtle and may not read well off the printed page. In this example, the faint wispy areas of nebulosity are more prominent in relation to the dark sky and the image has an overall less-processed look. This particular example uses a color image but it is equally effective when applied to the luminance file.

fig124_22.jpg

fig.22 The StarMask and RangeSelection tools are adjusted to select stars and bright areas of nebulosity. The critical area is the vicinity of the bubble and the preview is used to quickly check the mask extent.

fig124_23.jpg

fig.23 The two masks generated in fig.22 are combined using PixelMath. Click on the Expression Editor and select the filenames from the drop down list. Open up the Destination settings and check create new file and deselect re-scale output. Apply this mask to the image by dragging its tab to the image left hand border. Invert the mask to protect the highlights.

fig124_24.jpg

fig.24 With the mask in place, check the LHE tool settings using the real-time preview and then apply to the image. In this example, the original image is on the left for comparison. The right hand image shows lighter wispy detail in the star field.

fig124_25.jpg

fig.25 The best time to remove bad pixels is during image calibration. Sometimes a few slip through and may be obtrusive in lighter areas. The two tools opposite will detect a dark pixel and replace it with a lighter value. The CosmeticCorrection tool has a simple slider that sets the threshold limit for cold pixels. The real-time preview identifies which pixels will be filled in. Alternatively, PixelMath can do the same thing with a simple equation and give a little more control. Here, if a pixel is lower than 0.1, it is replaced with a blend of the pixel value and the median pixel value of the entire image (which is typically a similar value to the average background value). A little experimentation on a preview window determines the detection threshold and the degree of blending. If a dark pixel has spread, try slightly under-correcting the problem but repeat with two passes and with a slightly higher threshold on the second pass.

Removing Dark Pixels

Image calibration sometimes introduces black pixels into an image, or they occur with later image manipulation that creates an increase in local contrast. Even without dark frame auto-scaling, the image calibration in PixInsight or Maxim may over-compensate and conflict with the camera's own dark processing. This introduces random dark pixels. While these are not noticeable in the background, they detract from the image when they occur in brighter areas or after stretching. Isolated black pixels also seem to resist noise-reduction algorithms. The solution is to replace these cold pixels with an average of their surroundings. The CosmeticCorrection tool has the ability to detect cold and hot pixels and has the convenience of generating a preview of the defect map. Dragging the Cold Sigma slider to the left increases the detection sensitivity and the number of pixels selected. These pixels are replaced with a blend of surrounding pixels.

An alternative is a simple conditional statement using PixInsight's PixelMath tool. This selects pixels with a value lower than a defined threshold and substitutes them with an average background value. The sensitivity is determined by the threshold value in the equation; in this case it is 0.1. In their simplest form, these substitutions have no blending effect and literally replace pixel values. For this reason, defects are best removed before the cold pixel boundary has blurred into neighboring pixels, or the fixed pixels may retain a small dark halo. Alternative blending equations can be used to combine the current pixel value with another. The tool can also be applied iteratively to great effect.
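
The conditional replacement described above is a one-line operation on the pixel array. The sketch below mirrors the idea shown in fig.25 (any pixel below a threshold is replaced by a blend of its own value and the image median), with the threshold and blend fraction as adjustable, and here purely illustrative, parameters.

```python
import numpy as np

def fill_cold_pixels(img, threshold=0.1, blend=0.5):
    """Replace pixels darker than 'threshold' with a mix of their own value
    and the image median (a stand-in for the average background level)."""
    background = np.median(img)
    replacement = blend * img + (1.0 - blend) * background
    return np.where(img < threshold, replacement, img)
```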
