Narrowband Image Processing

For astrophotographers living in light-polluted areas, narrowband imaging is a savior; to everyone else, it is a thing of wonder.


Following along the same lines as the previous chapter on CFA imaging, a whole new world opens up with the introduction of processing images taken through narrowband filters. These filters select a precise emission wavelength and almost completely reject light pollution (and moonlight), with the potential to lower sky noise and hence deliver a better signal to noise ratio. This permits many astrophotographers to image successfully from light-polluted urban areas. Images taken with narrowband filters are quite distinct; their raison d'être is gloriously colored nebulous clouds, punctuated by small, richly colored stars. These particular goals require a unique approach and flexibility in image acquisition and processing. For starters, the exposures required to collect sufficient signal are much longer than in RGB imaging and will likely demand an entire night's imaging for each filter. Even so, the relative signal strengths of the common emission wavelengths are quite different and are dominated by the deep red of hydrogen alpha (Hα). Another anomaly is that the commonly imaged wavelengths do not correspond to red, green and blue, which encourages individual interpretation. Image "color" is whatever you choose it to be. Typically, exposures are made with two or more filters and the image files are assigned and/or combined into the individual channels of an RGB file.

The assignment of each image to a color channel is arbitrary. There are six possible combinations and swapping this assignment completely alters the hue of the end result. Two of the most famous assignments are the Hubble Color Palette (HCP), which maps SII to red, Hα to green and OIII to blue, and the Canada France Hawaii Telescope palette (CFHT), which maps Hα to red, OIII to green and SII to blue. A simple assignment will likely produce an almost monochromatic red image, and a large part of the processing workflow balances the relative signal strengths to boost the color gamut. Without care, the more extreme image stretches required to boost the weaker OIII and SII signals can cause unusual star color and magenta fringes around bright stars. Some imagers additionally expose a few hours of standard RGB images to create a natural color star field. They neutralize and shrink or remove the stars altogether in the narrowband image and substitute the RGB stars, typically by using the RGB star layer set to color blend mode (Photoshop).

In practice, there are many options and alternative workflows. Some of the most common are shown in fig.1, with 2- or 3-channel images, with and without separate luminance and RGB star image workflows. The first light assignments highlight some additional twists. The cosmos is your oyster!

Color Differentiation

The most striking images maximize the visible differences between the common emission wavelengths. With two reds and a turquoise, assigning these to the three primary colors on the color wheel already has a big visual impact. Of course, these assigned color channels are just a starting point. One can alter selective hues of the image to increase the visual impact (color contrast) between overlapping gas clouds. These amazing opportunities and challenges are well met with Photoshop blending modes and by re-mapping selective color hues to emphasize subtle differences. PixInsight tackles the challenges with different tools which, although broadly equivalent, may steer the final image to a slightly different conclusion. There is no "right" way and once you understand the tools at your disposal, the only obstacle is one's own imagination and time.

Narrowband and RGB

Before diving into the detail it is worth mentioning further options that combine narrowband exposures with RGB information in the main image. For those astrophotographers unlucky enough to image from light-polluted areas, the subtle colored details of the heavens are often masked by the overall background light level and accompanying shot noise. One way of injecting some more detail into these images is to enhance the RGB channels with narrowband information. A popular combination is Hα (deep red) with red, and OIII (turquoise) with green and blue. In each case, the narrowband information has greater micro contrast and it is this that adds more bite to the RGB image, without adding much of the shot noise from light pollution. This is not quite as easy as it sounds. Hα emissions are far more abundant than OIII and unless this is taken into account during the image channel balancing, the Hα/red channel dominates the final image and an almost monochromatic red image will result.

fig125_1.jpg

fig.1 The first steps in narrowband imaging often begin with introducing narrowband exposures into existing LRGB data that has already been non-linearly stretched. Here, the narrowband data is used to enhance the RGB data using a selected blend mode. Most commonly, abundant Hα data is combined with the red channel to enhance faint nebulosity but OIII and SII data can be used too, if available. The particular method of blending is a matter of personal preference and experimentation.

Processing Fundamentals

In many ways processing a narrowband image follows the same path as a conventional RGB image, with and without luminance information. The main differences lie in the treatment and balance of the color channels, and additionally, the choices concerning luminance generation and processing. In some cases the luminance information is sourced from the narrowband data itself, or it may come from additional exposures using a clear filter. Narrowband imaging still follows the same calibration and linear processing paths up to a point. The challenges lie in the comparative strengths of the common narrowband signals and the fact that the colors associated with these narrowband emissions do not fall conveniently into red, green and blue wavelengths. This chapter concentrates on the unique processing steps and assumes the reader is familiar with the concepts outlined in the previous linear and non-linear processing chapters.

Combining RGB and Narrowband Data

Let us start by introducing narrowband data into an RGB image. The unique processing step here is to enhance the RGB data with another data source. This occurs after the separate images have been stretched non-linearly. The individual R, G and B filters typically have a broad pass-band of about 100 nm each, and even if the red and green filters exclude the dominant yellow sodium vapor lamp wavelength, they still pass considerable light pollution from broadband light sources, with its accompanying shot noise. Narrowband filter passbands are typically 3–7 nm; they pass more than 90% of the emission signal while blocking about 90% of the light-pollution bandwidth of a normal color filter, with an associated 4x reduction in shot noise. (This is significant; to achieve a similar improvement in SNR would require 16x the number or length of exposures.)

The key processing step is to blend the narrowband and RGB data together. Most commonly this involves blending Hα data with the red channel, although there is no limitation here and if you have the sky time, OIII and SII image data can be blended with the other color channels too. There are a number of options for how to combine the data, principally around the Hα data. Again, there is no right way; you will need to experiment with different options and decide which gives the desired effect. In increasing sophistication, three common methods are: employing the lighten blend mode, combining the channels in proportion to their filter bandwidths, or adding a weighted narrowband contrast term. The formulas below translate directly into PixelMath in PixInsight. In each case, the channel names are substituted for the open image filenames.

Lighten Blend Mode

The first of the three example combining modes is adapted from a popular Photoshop action. In Photoshop, the Hα layer is placed above the red layer and the blending mode set to lighten. In this blending mode, after flattening the layers, each red pixel R is replaced by the maximum of the corresponding pixels in the red and Hα images. In mathematical terms:

Rnew = max(R, Hα)


One issue that arises from the lighten blend mode is that it also picks up noise in the red channel's background. A common temptation is to over-stretch the Hα data before combining it with the red channel. Although its contribution to the red channel is controlled by the opacity or scaling factors, it is better to go easy on the non-linear stretch and use similar factors for all the narrowband inputs.
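As an illustration, the lighten blend with an opacity control reduces to a few lines of Python; the function name and the flat lists of normalized (0–1) pixel values are hypothetical stand-ins for real image data, not any Photoshop or PixInsight API:

```python
def lighten_blend(red, h_alpha, opacity=1.0):
    # Each output pixel is the brighter of R and H-alpha, mixed back
    # with the original red pixel according to the opacity (0-1).
    return [(1 - opacity) * r + opacity * max(r, ha)
            for r, ha in zip(red, h_alpha)]

red = [0.20, 0.55, 0.10]
ha = [0.35, 0.40, 0.10]
print(lighten_blend(red, ha))       # full strength: pixel-wise maximum
print(lighten_blend(red, ha, 0.5))  # gentler, half-opacity blend
```

At opacity 1.0 this is exactly the lighten mode; lower values mimic reducing the layer opacity before flattening.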

Proportional Combine

This concept combines the Hα and R channels with an equivalent light pollution weighting. In mathematical terms, the contributions of the Hα and R channels are made inversely proportional to their approximate filter bandwidths. In the following example, the Hα filter has a 7-nm bandwidth and the R filter 100 nm. Here, we use the "*" symbol to denote multiply, as used in PixInsight's PixelMath equation editor:

Rnew = (Hα*100 + R*7) / (100 + 7)

Some simplify this to:

Rnew = 0.93*Hα + 0.07*R

Although this approach has some logic, it ultimately discounts a large proportion of the red channel’s data and it is easy for small stars in the R channel to disappear.
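The inverse-bandwidth weighting can be sketched as follows; this is a minimal Python illustration assuming normalized pixel lists and the nominal 7 nm and 100 nm bandwidths (the function name is hypothetical):

```python
def proportional_combine(red, h_alpha, bw_r=100.0, bw_ha=7.0):
    # Weight each channel inversely to its filter bandwidth, so the
    # narrow H-alpha filter dominates, then normalize the weights.
    w_ha, w_r = 1.0 / bw_ha, 1.0 / bw_r
    total = w_ha + w_r
    return [(w_ha * ha + w_r * r) / total
            for r, ha in zip(red, h_alpha)]

# A bright star present only in R survives at only ~7% of its level,
# which is why small stars can vanish with this method.
print(proportional_combine([1.0], [0.0]))
```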

Hα Contrast Enhancement

An alternate scheme enhances the R channel contrast by adding in a weighted Hα contrast value. The Hα contrast value is calculated by subtracting the median value from each pixel, where f is a factor, typically about 0.5:

Rnew = R + f*(Hα - median(Hα))

This is often used in conjunction with a star mask and additionally can be used with a mask made up of the inverted image. The effect emphasizes the differences in the dim parts of the image and I think improves upon the two techniques above. A mask is effectively a multiplier of the contrast adjustment. The operator “~” inverts an image, so ~R is the same as (1-R) in the equation:

Rnew = R + ~R*f*(Hα - median(Hα))
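In Python, the contrast-enhancement equation with the inverted-image mask might look like this (a sketch on flat lists of normalized pixels; the clipping to the 0–1 range is my addition to keep the result valid):

```python
from statistics import median

def ha_contrast_enhance(red, h_alpha, f=0.5, use_inverse_mask=True):
    # Add a weighted H-alpha contrast term (pixel minus median) to R.
    # The ~R mask (1 - R) weights the boost towards the dim regions.
    med = median(h_alpha)
    out = []
    for r, ha in zip(red, h_alpha):
        mask = (1.0 - r) if use_inverse_mask else 1.0
        out.append(min(1.0, max(0.0, r + mask * f * (ha - med))))
    return out
```

Pixels at the Hα median are unchanged; dim red pixels with above-median Hα receive the largest boost.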

In PixInsight, there is a script that can do this for you and, not surprisingly, it is called the NarrowBandRGB script, or NBRGB for short. This does the pixel math for you and has a facility to evaluate different combination factors and filter bandwidths. In each case it allows you to enhance a color RGB file with narrowband data. In the case of OIII data, since it lies within both the blue and green filter bandwidths, it is often used to bolster both. The script uses a complex, non-linear algorithm that takes filter bandwidths into account, beyond the scope of this discussion.

fig125_2.jpg

fig.2 Most of Photoshop's blending modes have a direct PixelMath equivalent, or at least a close approximation. These equations use PixelMath notation. The "~" symbol denotes the inverse of an image (1-image) and the "| |" symbol is the magnitude operator.

PixInsight Iterative Workflow

Never one to be complacent, the folks at Pleiades Astrophoto have devised yet another approach that captures the best of both worlds. This revised workflow keeps control over the image and retains accurate color representation of both line emission and broadband emission objects. At the same time, it minimizes the SNR degradation to narrowband data. The workflow can be automated and it is planned for release in a future PixInsight update. In essence the process has three steps, which are repeated to keep the finer stellar detail from the broadband red image:

1) intermediate image, C = R / Hα

2) apply strong noise reduction to intermediate image

3) new R = C * Hα

4) repeat 1–3 for the desired effect

This is a good example of where a process container can store the three steps so they can be repeatedly applied to an image.
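The loop can be sketched in Python; the moving-average smooth() is only a crude stand-in for a real noise-reduction tool such as TGVDenoise, and the list-based images and eps guard are illustrative assumptions:

```python
def smooth(img, width=3):
    # Stand-in for strong noise reduction: a simple moving average.
    n, half = len(img), width // 2
    out = []
    for i in range(n):
        window = img[max(0, i - half):min(n, i + half + 1)]
        out.append(sum(window) / len(window))
    return out

def iterative_ha_combine(red, h_alpha, iterations=2, eps=1e-6):
    # 1) C = R / Ha   2) denoise C   3) R = C * Ha, then repeat.
    r = list(red)
    for _ in range(iterations):
        c = [rv / (ha + eps) for rv, ha in zip(r, h_alpha)]
        c = smooth(c)
        r = [cv * ha for cv, ha in zip(c, h_alpha)]
    return r
```

Dividing by Hα before denoising flattens the nebulosity, so the smoothing acts mostly on the residual (stellar and noise) detail; multiplying back restores the Hα structure.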

Blending Modes and PixelMath

At this point it is worth taking a short detour to discuss Photoshop blending modes. For me, PS blending modes have always had a certain mystery. In reality, blending modes simply combine layers with a mathematical relationship between pixels in the image. The opacity setting proportionally mixes the global result with the underlying layer and a mask does the same but at a pixel level. The same end result can be achieved in PixelMath using simple operators, once you know the equivalent equation for the Photoshop blending mode. Using PixelMath may take a few more grey cells at first, but crucially offers more extensive control. For instance, PixelMath can also use global image statistics (like median and mean) in its equations as well as work on combining more than two images at a time.

The two equations for combining R and Hα above blend the two channels together using simple additive math and some statistics. For Photoshop users, there is no simple equivalent to the two equations above but I dare say it could be done with a combination of layer commands if one feels so inclined. More frequently, a simple blending mode is used to similar effect. Of these, the lighten blending mode is perhaps the most popular choice to combine narrowband and color images. Several Internet resources specify the corresponding mathematical formula for the Photoshop blending modes and it is possible to replicate these using the PixelMath tool. Some of the more common ones used in astrophotography are shown in fig.2, using R and Hα as examples.
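The standard mathematical definitions of the common Photoshop blend modes translate into one-liners; this Python table is a sketch using the widely published formulas on normalized (0–1) values (the dictionary and blend() helper are my own constructs):

```python
BLEND_MODES = {
    # b = base pixel (e.g. R), t = blend pixel (e.g. H-alpha), both 0-1
    "lighten":    lambda b, t: max(b, t),
    "darken":     lambda b, t: min(b, t),
    "multiply":   lambda b, t: b * t,
    "screen":     lambda b, t: 1 - (1 - b) * (1 - t),
    "overlay":    lambda b, t: 2 * b * t if b < 0.5 else 1 - 2 * (1 - b) * (1 - t),
    "difference": lambda b, t: abs(b - t),
}

def blend(base, top, mode, opacity=1.0):
    # Apply a named blend mode pixel-by-pixel, mixed by opacity.
    f = BLEND_MODES[mode]
    return [(1 - opacity) * b + opacity * f(b, t)
            for b, t in zip(base, top)]
```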

Narrowband Imaging

It is almost impossible to impose a regime on full narrowband imaging. The only limitation is time and imagination. Many of the tools for non-linear RGB processing apply equally, as do the principles of little and often, and of delicacy rather than searching for the magic wand. The unique challenges arise with the combination of weak and strong signals and the balancing and manipulation of color. After orientating ourselves with the general process, these will be the focus of attention.

Luminance Processing

The basic processing follows two paths as before; one for the color information and the other for the luminance. The luminance processing is identical to that in RGB imaging with one exception: the source of the luminance information. This may be a separate luminance exposure or more likely luminance information extracted from the narrowband data. When you examine the image stacks for the narrowband wavelengths, it is immediately apparent that the Hα has the cleanest signal by far. This makes it ideal for providing a strong signal with which to deconvolve and sharpen. The downside is that it will favor the Hα channel information if the Hα signal is also used solely as the information source for a color channel. Mixing the narrowband images together into the RGB channels overcomes this problem. Alternatively, if the intention is to assign one image to each RGB channel, extracting the luminance from a combination of all the images (preferably in proportion to their noise level to give the smoothest result) will produce a pseudo broadband luminance that boosts all the bright signals in the RGB image. Sometimes the star color arising from these machinations is rather peculiar and one technique for a more natural look is to shrink and de-saturate the stars in the narrowband image and replace their color information with that from some short RGB images, processed for star color, as depicted in fig.4.
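The noise-weighted combination mentioned above can be sketched simply; the per-channel noise estimates and function name are hypothetical inputs, in practice taken from the image statistics of each stack:

```python
def pseudo_luminance(ha, oiii, sii, noise):
    # Weight each stack inversely to its noise estimate so that the
    # cleanest channel (usually H-alpha) dominates the luminance.
    w = [1.0 / n for n in noise]
    total = sum(w)
    return [(w[0] * a + w[1] * o + w[2] * s) / total
            for a, o, s in zip(ha, oiii, sii)]
```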

fig125_3.jpg

fig.3 An example of a narrowband processing workflow. After preparing the linear images, a copy of the H(SII, OIII) is put aside for luminance processing (uniquely, deconvolution and sharpening). The narrowband channels are blended together, or simply assigned to the RGB channels and the resulting color image is processed to maximize saturation and color differential. Noise reduction on both the chrominance and luminance is repeated at various stages. Just before combining the stretched images, the RGB data is blurred slightly.

Color Processing

The broad concepts of color processing are similar to RGB processing with the exception that, as previously mentioned, the SII and to some extent the OIII signals are much weaker than the Hα. The separate images still require careful gradient removal and, when combined, the RGB image requires background neutralization and a white point adjustment (color balance) before non-linear stretching. The OIII and SII data require a more aggressive stretch to achieve a good image balance with Hα, with the result that their thermal and bias noise becomes intrusive. To combat this, apply noise reduction at key stages in the image processing, iteratively and selectively, using a mask to protect the areas with a stronger signal. As with broadband imaging, once the separate RGB and luminance images have been separately processed and stretched, combine them using the familiar principles used in LRGB imaging, to provide a fully colored image with fine detail. In many instances though, the narrowband data will be binned 1×1, as it will also be the source of the luminance information.

Fortunately spatial resolution is not as critical in the color information and the RGB image can withstand stronger noise reduction. Even so, strict adherence to a one image, one channel application may still produce an unbalanced colored result. To some extent the degree of this problem is dependent upon the deep sky target, and in each case, only careful experimentation will determine what suits your taste. As a fine-art photographer for 30 years I have evolved an individual style; to me the initial impact of over-saturated colored narrowband images wanes after a while and I prefer subtlety and detail that draw the viewer in. You may prefer something with more oomph. The two approaches are equally valid and how you combine the narrowband image data into each RGB channel is the key to useful experimentation and differentiation. With the basic palette defined, subsequent subtler selective hue shifts emphasize cloud boundaries and details. The narrowband first light assignments have some good examples of that subtlety.

fig125_4.jpg

fig.4 This shows a simplified workflow for narrowband exposures, with the added option of star color correction from a separate RGB exposure set. If using Photoshop, place the RGB image above the narrowband image, with a star mask, and select the color blending mode.

Color Palettes

Just as with an artist’s color palette, mixing colors is a complex and highly satisfying process. The unique aim of narrowband imaging is to combine image data from the narrowband images and assign them to each color channel. The remainder of this chapter looks at the unique techniques used to accomplish this.

Photoshop users' main tool is the Channel Mixer. This replaces one of the color channels with a mixture of all three channels' levels. By default, each of the R, G and B channels is set to 100% R, G or B, with the other channels set to zero contribution. Unlike an artist's palette, it can add or subtract channel data. The result is instantaneous and even if Photoshop is not part of your normal workflow, the Channel Mixer is a remarkably quick way of evaluating blending options. This freedom of expression has a gotcha, however. Photoshop has no means to auto-scale the result and it is easy to oversaturate the image by clipping the highlights of one of the color channels. Fortunately there is a warning flag, a histogram display and a manual gain control. Check the histogram does not have a peak at the far right, as it does in fig.5. Even so, the histogram tool is only an indicator. The most accurate way to determine clipping is to use the info box and run the cursor over the brightly colored parts of the nebula to ensure all values are below 255. Mixing it up not only changes the color but also the noise level of the image. For most of us with limited imaging time, blending some of the stronger Hα signal with the OIII and SII may dilute the color separation but it will improve the signal to noise ratio. It is all about trade-offs.
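The Channel Mixer's arithmetic, including the clipping check just described, is easy to model; a Python sketch on normalized 0–1 pixels (channel_mix and its clipping flag are my own illustrative constructs):

```python
def channel_mix(channels, weights, constant=0.0):
    # out = wR*R + wG*G + wB*B + constant for one output channel.
    # Flags clipping so highlights are not silently saturated.
    r, g, b = channels
    out, clipped = [], False
    for pr, pg, pb in zip(r, g, b):
        v = weights[0] * pr + weights[1] * pg + weights[2] * pb + constant
        if v > 1.0:
            clipped = True
        out.append(min(1.0, max(0.0, v)))
    return out, clipped
```

If clipped comes back True, reduce the constant (the equivalent of dragging Photoshop's constant slider to the left) or scale the weights down.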

fig125_5.jpg

fig.5 Photoshop’s Channel Mixer is an effective way to mix and match the narrowband images to the RGB channels. It is good practice to check for saturation with the histogram and eyedropper tools. If a channel starts to clip, reduce the overall level by dragging the constant slider to the left.

This is only the start; once the basic separation and color is established, the Selective Color tool in Photoshop provides a mechanism for fine-tuning the hue of the different colors in the image (fig.6). The Selective Color tool selects image content based on one of six primary or secondary colors, and the color sliders alter the contribution of the secondary colors in that selection. In this way, a particular hue is moved around the color wheel without affecting the others in the image. In one direction, a red changes to orange and yellow; in the other, it moves to magenta and blue. Additionally, the Hue/Saturation tool can select image content based on a color range and alter its hue and saturation. With the preview button enabled, Photoshop is in its element and there is no limit to your creativity.

PixInsight has equivalent controls, but without the convenience of a live preview in all cases. In the first instance, PixelMath provides a simple way to effect a precise blending of the three narrowband images into a color channel. Having used both programs, I confess to exporting a simply-assigned HαSIIOIII image to an RGB JPEG file and playing with Photoshop's Channel Mixer settings to establish a good starting point for PixInsight. You can then transfer the slider percentage settings back to a PixelMath equation to generate the initial red, green and blue channels. Having done that, the CurvesTransformation tool, unlike its cousin in Photoshop, provides the means to selectively change hue and saturation based on an image's color and saturation. The ColorSaturation tool additionally changes the image saturation based on image color. I think the graphical representations of the PixInsight tools are more intuitive than simple sliders and fortunately both these tools have a live preview function. The examples in figs.7, 8 and 9 show some sample manipulations. Although only a simple curve adjustment is required in each case, behind the scenes PixInsight computes the complex math for each color channel. The key here is to experiment with different settings. Having tuned the color, the luminance information is replaced by the processed luminance data; in PixInsight, using the LRGBCombination tool or, in Photoshop, placing the luminance image in a layer above the RGB image and changing the blending mode to "luminosity". (This has the same effect as placing an RGB file over a monochromatic RGB file and selecting the "color" blending mode.)

fig125_6.jpg

fig.6 The Selective Color tool in Photoshop, as the name suggests, is designed to selectively change colors. The color selection in this case is red and the slider setting of -57 cyan reduces the cyan content of strong reds in the image, shifting them towards orange and yellow. To avoid clipping issues, ensure the method is set to “Relative”. When a primary color is selected, an imbalance between the two neighboring secondary colors in the color wheel will shift the hue.

fig125_7.jpg

fig.7 The ColorSaturation tool in PixInsight can alter an image’s color saturation based on overall color. Here, the yellows and blues have a small saturation boost and the greens are lowered. It is important to ensure the curves are smooth to prevent unwanted artefacts.

fig125_8.jpg

fig.8 The PixInsight CurvesTransformation tool is very versatile. Here it is set to hue (H) and the curve is adjusted to alter pixel colors in the image. In this instance, yellows and blues are shifted towards green and turquoise respectively. This has an equivalent effect to the Selective Color tool in Photoshop but in graphical form.

fig125_9.jpg

fig.9 By selecting the saturation button (S), the CurvesTransformation tool maps input and output saturation. This S-curve boosts low saturation areas and lowers mid saturation areas. This manipulation may increase chrominance noise and should be done carefully in conjunction with a mask to exclude the sky background.

So, what kind of image colors can you get? All this theory is well and good. The following page shows a range of variations made by altering the assignment and the mix between the channels. My planned narrowband sessions were kicked into touch by equipment issues, so this example uses data generously supplied by my friend Sam Anahory. The images were captured on a Takahashi FSQ85 refractor on an EQ6 mount, using a QSI683 CCD camera from the suburbs of London. These images are not fully processed; they do not need to be at this scale, but they show the kind of color variation that is possible. Each of the narrowband images was registered and auto-stretched in PixInsight before combining and assigning to the RGB channels using PixelMath. A small saturation boost was applied for reproduction purposes.

fig125_10.jpg

fig.10 Canada France Hawaii palette: R=Hα, G=OIII, B= SII, a classic but not to my taste.

fig125_11.jpg

fig.11 Classic Hubble palette: R=SII, G=Hα, B=OIII (note the stars' fringes are magenta due to stretched OIII & SII data).

fig125_12.jpg

fig.12 R=OIII, G=Hα, B=SII; swapping the OIII and SII around makes a subtle difference (note the stars' fringes are magenta due to stretched OIII & SII data).

fig125_13.jpg

fig.13 R=Hα +(SII-median(SII)), G=OIII+(Hα/20), B=OIII-(SII/4); the result of just playing around occasionally produces an interesting result which can be developed further. Star colors look realistic without obvious fringes.

fig125_14.jpg

fig.14 R=SII+Hα, G=80%OIII+10%Hα, B=OIII; SII and Hα are both red, so mixing them together is realistic; OIII is given a little boost from Hα to make the green; OIII is used on its own for blue; stars' colors are neutral.

fig125_15.jpg

fig.15 R=SII+Hα, G=OIII+Hα, B=SII+OIII; each channel is a mixture of two narrowband images and produces a subtle result with less differentiation between the areas of nebulosity.
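Palette recipes like those in figs.13–15 are just per-pixel arithmetic. As a sketch, the fig.14 mix in Python; the list-based channels and the rescale() normalization (added here to avoid clipping) are illustrative assumptions:

```python
def rescale(img):
    # Normalize a channel to the 0-1 range (guard against flat data).
    lo, hi = min(img), max(img)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in img]

def fig14_palette(ha, oiii, sii):
    # R = SII + Ha, G = 80% OIII + 10% Ha, B = OIII (fig.14 recipe).
    r = rescale([s + h for s, h in zip(sii, ha)])
    g = rescale([0.8 * o + 0.1 * h for o, h in zip(oiii, ha)])
    b = list(oiii)
    return r, g, b
```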

PixInsight Narrowband Tools

A single 10-minute Hα exposure can differentiate more object detail than an hour of conventional luminance. It will, however, favor red structures if used as a straight substitute. As such, astrophotographers are continually experimenting with ways of combining narrowband and wideband images to have the best of both worlds. The NBRGB script described earlier enhances an existing RGB image with narrowband data. Several astrophotographers have gone further, evaluated more radical blending parameters and developed their own PixInsight scripts to assess them conveniently. These are now included in the PixInsight script group "Multichannel Synthesis". Of these I frequently use the SHO-AIP script. (Just to note, the references to RVB are equivalent to RGB, since the French word for "green" is "vert".) The script uses the normal RGBCombination and LRGBCombination tools along with simple PixelMath blending equations. It also uses ACDNR to reduce noise for the AIP mixing option (as a noise reduction tool, ACDNR has largely been replaced by TGVDenoise). This is a playground for the curious and there are a number of tips that make it more effective:

The files should preferably be non-linear but can be linear, provided the individual files have similar background levels and histogram distributions (LinearFit or MaskedStretch operations are recommended).

The star sizes should be similar in appearance to avoid color halos. This may require Deconvolution or MorphologicalTransformation to tune first.

When mixing the luminance, either using the Mixing Luminance tab or by some other means, avoid using strong contributions from weaker signals (e.g. SII) as they will increase image noise.

Process the luminance as required and set to one side.

Find the right color using the Mixing SHONRVB button.

When supporting a narrowband exposures with RGB data, start with proportions that add up to 100% and are in proportion to their respective SNR level.

When satisfied, try the Mixing L-SHONRVB button to add in the processed luminance.

If you enable AIP mixing, noise reduction is applied to the image in between progressive LRGBCombination applications, but the processing takes longer.

Avoid using STF options.

Extract the luminance from the outcome and combine it with an RGB star field image. Use a simple PixelMath equation and a close-fitting star mask to replace the stars' color with that of the RGB star field.

The output of the script should have good color and tonal separation. If one wishes to selectively tune the color further within PixInsight, the ColorMask utility script creates a hue-specific mask. Applying this mask to the image then allows indefinite RGB channel, hue and saturation tuning with CurvesTransformation. With care, a color can be shifted and intensified to a neighboring point on the color wheel.
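A hue-selective mask of the kind ColorMask builds can be approximated in a few lines; this sketch uses Python's standard colorsys module, with hue on a 0–1 scale and a hypothetical width parameter controlling the selection falloff:

```python
import colorsys

def hue_mask(rgb_pixels, target_hue, width=0.08):
    # 1.0 where the pixel hue matches target_hue (0=red, 1/3=green,
    # 2/3=blue), falling linearly to 0.0 at +/- width around it.
    mask = []
    for r, g, b in rgb_pixels:
        h, _, _ = colorsys.rgb_to_hsv(r, g, b)
        d = min(abs(h - target_hue), 1.0 - abs(h - target_hue))  # hue wraps
        mask.append(max(0.0, 1.0 - d / width))
    return mask
```

Multiplying a CurvesTransformation adjustment by such a mask confines the hue or saturation shift to the selected color.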

Narrowband imaging in false color is liberating. There is no “right” way. It does, however, require plenty of image exposure (for many objects, especially in SII and OIII) to facilitate expressive manipulation, as well as some judgement. Some examples resemble cartoons to my mind and lack subtlety and depth.

fig125_16.jpg

fig.16 The SHO-AIP script can handle the combination of 8 files in a classic RGBCombination mix, an LRGBCombination mix or using the AIP method, which progressively combines luminance with the generated RGB image, with noise reduction in between each step. In practice, this sacrifices some star color in return for a smoother image.
