C27 (Crescent Nebula) in Narrowband

A first-light experience of an incredible nebula, using new imaging equipment and software.

Chris Woodhouse

Equipment:

Refractor, 132 mm aperture, 928 mm focal length

TMB Flattener 68

QSI683 (Kodak KAF8300 sensor)

QSI 8-position filter wheel and off-axis guider

Starlight Xpress Lodestar (guide camera)

Paramount MX mount, Berlebach tripod

USB over Cat5 cable extender and interface box

Software: (Windows 7)

Sequence Generator Pro, PHD2, TheSkyX, AstroPlanner

PixInsight (Mac OSX)

Exposure: (Hα, OIII, RGB)

Hα, OIII bin 1; 12 × 1,200 seconds each

RGB bin 1; 10 × 300 seconds each

This faint nebula was the real first-light experience with my upgraded imaging equipment and software, using Sequence Generator Pro (SGP) as the image capture program, TheSkyX for telescope control, PHD2 for guiding and PinPoint 6 for plate solving. (SGP can also use Elbrus and PlateSolve 2 for astrometry, as well as the free astrometry.net service.) Plate solving is used for automatic, accurate target centering at the beginning of the sequence and after a meridian flip.

The nebula itself is believed to be formed by the stellar wind from a Wolf-Rayet star catching up and energizing a slower ejection that occurred when the star became a red giant. The nebula has a number of alternative colloquial names. To me, the delicate ribbed patterns of glowing gas resemble a cosmic brain. It has endless interpretations and I wanted to show the delicacy of this huge structure.

This fascinating object is largely composed of Hα and OIII. There is some SII content but it is very faint, and only the most patient imagers spend valuable imaging time recording it. Even so, this subject deserves a minimum of three nights for the narrowband exposures and a few hours to capture RGB data to enhance star color. The image processing here does not attempt a false color Hubble palette image but creates realistic colors from the red and blue-green narrowband wavelengths.

fig134_302_1.jpg

Equipment Setup

This small object is surrounded by interesting gaseous clouds and the William Optics FLT132 and the APS-C sized sensor of the QSI683 were a good match for the subject. The 1.2” / pixel resolution and the long duration exposures demanded accurate guiding and good seeing conditions. Sequence Generator Pro does not have an inbuilt autoguiding capability but interfaces intelligently to PHD2. PHD2 emerged during the writing of the first edition and continues to improve through open source collaboration. It works with the majority of imaging programs that do not have their own autoguiding capability.

The mount and telescope setup is not permanent and was simply assembled into position. Prior tests demonstrated my mounting system and ground locators achieve repeatability of ~1 arc minute and maintain polar alignment within 2 arc minutes. Guiding parameters are always an interesting dilemma with a new mount, since the mechanical properties dictate the optimum settings, especially for the DEC axis. Fortunately the Paramount MX mount has no appreciable DEC backlash, and guided well using PHD2’s guiding algorithm set to “hysteresis” for RA and “resist switching” for DEC (fig.3). To ensure there were no cable snags to spoil tracking, the camera connections were routed through the mount using short interconnecting looms. The only trailing cable was the one going to the dew heater tape, on account of its potential for cross-coupling electrical interference. At the beginning of the imaging session, the nebula was 1 hour from the meridian and 2 hours from the mount limit. Although the Paramount does not have an automatic meridian flip, SGP manages the sequence of commands to flip the mount, re-center the image, flip the guider calibration and find a suitable guide star. It even establishes whether there is insufficient time for the next exposure in the sequence before flipping and provides the option to flip early. If it were not for the uncertainty of the English weather, this automation would suffice for unsupervised operation, lacking only a cloud detector.
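For readers curious about the “hysteresis” guiding algorithm mentioned above, the underlying idea can be sketched in a few lines: a fraction of the recent correction history is blended with the current guide error, damping the response so the mount does not chase the seeing. This is a toy illustration only; PHD2’s actual implementation, parameter handling and min-move logic are more involved, and the function and parameter names here are my own.

```python
def hysteresis_correction(error: float, history: float,
                          hysteresis: float = 0.1,
                          aggressiveness: float = 0.7) -> float:
    """Toy guiding correction: blend the current guide error with a
    running history term, then scale by aggressiveness. Names and
    weighting are illustrative, not PHD2's internals."""
    blended = (1.0 - hysteresis) * error + hysteresis * history
    return aggressiveness * blended

# A 1.0 pixel error with a 0.5 pixel history yields a damped move:
print(hysteresis_correction(1.0, 0.5))  # ~0.665 pixels
```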

fig134_1.jpg

fig.1 Having bought Sequence Generator Pro a year ago, I had continued to persevere with Maxim DL 5. With the new camera and mount, I decided to use SGP for this chapter. It is powerful yet easy to learn. In the above screen, it is in the middle of the imaging sequence. It is set up here to autofocus if the temperature changes and automatically center the target after a meridian flip, which it fully orchestrates.

Acquisition

When focusing, I normally use the luminance filter for convenience. The focus position it finds, however, is not necessarily correct for the other filters. To achieve the best possible focus, the exposures were captured one filter event at a time, refocusing at each filter change and, between exposures, for every 1°C temperature change. This marks a change in my approach. Previously, my hardware and software were not sufficiently reliable for extended imaging sessions, which encouraged a round-robin approach to ensure an LRGB image from the briefest of sessions. (If you establish the focus offsets for each filter, cycling through filters carries little time penalty.)
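The focus offsets mentioned above amount to a simple lookup applied relative to an autofocus run on a reference filter. The offset values, dictionary and function name below are hypothetical, purely for illustration:

```python
# Hypothetical per-filter focus offsets, in focuser steps, measured
# once against the luminance filter (values are illustrative only).
FOCUS_OFFSETS = {"L": 0, "R": -12, "G": -10, "B": -15, "Ha": 25, "OIII": 18}

def focuser_position(base_position: int, filter_name: str) -> int:
    """Return the focuser position for a filter, offset from the
    position found by autofocusing with the luminance filter."""
    return base_position + FOCUS_OFFSETS[filter_name]

print(focuser_position(21500, "Ha"))  # 21525
```

With offsets like these in place, a filter change costs only a short focuser move rather than a full autofocus run.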

fig134_2.jpg

fig.2 This setup is the outcome of several upgrades and successive optimizations, using the techniques and precautions indicated throughout the book. The cabling is bundled for reliability and ease of assembly into my interface box. This event was a first for me. After three successive nights of imaging, just three little words come to mind … It Just Works.

fig134_3.jpg

fig.3 PHD2, the successor to the original, has been extensively upgraded. Its feature set now includes alternative display information, equipment configurations, DEC compensation and a more advanced interface with other programs. One of the contributors to this project is also responsible for Sequence Generator Pro. A marriage made for the heavens.

fig134_4.jpg

fig.4 The Hα shows the brain-like structure of the inner shock wave as well as general background clouds.

fig134_5.jpg

fig.5 The OIII signal appears as a wispy outer veil. The levels were balanced to the Hα channel, before non-linear stretching, using the LinearFit tool in PixInsight. This helps with the later emphasis of the OIII details in the color image.

fig134_6.jpg

fig.6 The synthetic luminance file picks out the prominent details of both narrowband images.

Exposure

A theoretical exposure calculation based on the background level versus the sensor bias noise suggests a 60-minute exposure, rather than the 5-minute norm for a luminance filter (such is the effectiveness of narrowband filters to block light pollution). That exposure, however, causes clipping in too many stars and the possibility of wasted exposures from unforeseen issues. I settled on 20-minute exposures in common with many excellent images of C27 on the Internet. I set up SGP to introduce a small amount of dither between exposures, executed through the autoguider interface. This assists in hot pixel removal during image processing. After the narrowband acquisition was complete, three hours of basic RGB exposures rounded off the third night, in shorter 5-minute exposures, sufficient to generate a colored star image without clipping.
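The kind of exposure calculation alluded to above can be sketched with a common rule of thumb: make the sub-exposure long enough that the sky background’s shot noise swamps the camera’s read noise. The read-noise and sky-flux figures below are illustrative assumptions, not measurements from this system:

```python
def min_sub_exposure(read_noise_e: float, sky_e_per_s: float,
                     swamp_factor: float = 10.0) -> float:
    """Shortest sub-exposure (seconds) for which sky shot noise swamps
    read noise, using the heuristic:
    sky_e_per_s * t >= swamp_factor * read_noise^2."""
    return swamp_factor * read_noise_e ** 2 / sky_e_per_s

# Illustrative assumptions only: ~8 e- read noise, a moderate sky flux
# through a luminance filter and a tiny one through a narrowband filter.
print(round(min_sub_exposure(8.0, 2.1)))   # 305 s: a few minutes
print(round(min_sub_exposure(8.0, 0.18)))  # 3556 s: about an hour
```

This illustrates why narrowband filters, by blocking most of the light pollution, push the theoretically ideal sub-exposure from minutes to around an hour.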

Image Calibration

Taking no chances this time, image calibration consisted of 150 bias frames and 50 darks and flats for each exposure time and filter combination. Previous concerns of dust build-up with a shuttered sensor were unfounded: after hundreds of shutter activations I was pleased to see one solitary spot in a corner of the flat frame. QSI cleverly house the sensor in a sealed cavity and place the shutter immediately outside. With care, these flat frames will be reusable for some time to come with a refractor system. The light frames were analyzed for tracking and focus issues and amazingly there were no rejects. I used the BatchPreProcessing script in PixInsight to create master calibration files and to calibrate and register the light frames. The light integration settings of the script sometimes need tuning for optimum results, so I integrated the light frames using the ImageIntegration tool with rejection criteria tuned separately to the narrowband and RGB images. The end result was 5 stacked and aligned 32-bit images: Hα, OIII, red, green and blue. (With very faint signals, a 16-bit image has insufficient tonal resolution to withstand extreme stretching. For instance, the integration of eight 16-bit noisy images potentially increases the bit depth to 19-bit.) These images were inspected with a screen stretch and then identically cropped to remove the small angle variations introduced between the imaging sessions.
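As an illustration of the rejection step that ImageIntegration performs, a single-pass sigma-clipped mean can be sketched in a few lines of numpy (PixInsight’s rejection algorithms are iterative and considerably more refined):

```python
import numpy as np

def sigma_clip_integrate(frames: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Per-pixel sigma-clipped mean across a registered stack
    (shape: n_frames x height x width). A single clipping pass only."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    # Keep only pixels within `sigma` standard deviations of the mean
    keep = np.abs(frames - mean) <= sigma * std + 1e-12
    return (frames * keep).sum(axis=0) / keep.sum(axis=0)

# A cosmic-ray-like outlier in one frame is rejected by the clip:
stack = np.ones((8, 2, 2))
stack[0, 0, 0] = 1000.0
print(sigma_clip_integrate(stack)[0, 0])  # 1.0
```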

Image Processing Options

The Psychology of Processing

Confidence is sometimes misplaced and it is tempting to plunge into image processing, following a well-trodden path that just happens to be the wrong one. I did just that at first before I realized the image lacked finesse. Even then it was only apparent after returning from a break.

An important consideration is to review the options and assess the available interpretations before committing to a processing path. When one is up close and personal to an image on the screen for many hours (or in the darkroom with a wet print) it is also easy to lose perspective. One simple tip is to check the image still looks good in the morning, or to keep a reference image to compare against (do not discard the intermediate files; save them as a PixInsight project).

In this particular case, we assume the final image will be made up of a narrowband image combined with stars substituted from a standard RGB image. Astrophotography is no different to any other art form in that there are several subliminal messages we may wish to convey: power, majesty, scale, isolation, beauty, delicacy and strangeness, to name a few. Echoing an Ansel Adams quote, if the exposure is the score, the image processing is the performance.

Here, the goal is to show the nebula with its two distinct shock waves: the brain-like Hα structure and the surrounding blue-green veil. The two narrowband image stacks in fig.4 and fig.5 target these specific features. Combining these to make a color image and a synthetic luminance file is the key to the image. Not only do different combinations affect the image color, but the balance of the two in the luminance file controls the dominance of that color in the final LRGB image. This point is worth repeating; even if the RGB combination reproduces both features in equal measure, if the Hα channel dominates the luminance file, the OIII veil will be less evident.

Alternative Paths

We can break this down into two conceptual decisions: the color balance of the RGB image, through the mixing of the two narrowband signals over the three color channels, and the contribution of each to the luminance channel, to emphasize one color or another. The balance equation for both does not have to be the same. In the case of image color, a Hubble palette did not seem appropriate, since Hα and OIII are naturally aligned to red and blue for a realistic rendition. The green channel is the open question. Using simple PixelMath equations and ChannelCombination, I evaluated a number of options, including leaving it blank, a simple 50:50 blend of Hα and OIII, and OIII on its own (since OIII is a turquoise color). To emphasize the fainter OIII signal and to prevent the entire nebula turning yellow, I selected a blend of 30% Hα and 70% OIII for the green channel.

fig134_7.jpg

fig.7 The “simplified” PixInsight processing sequence. (ATWT has now been superseded.)

The synthetic luminance channel needs to pick out the dominant features of both channels. After a similar number of blending experiments, I hit upon a novel solution: selecting the brighter of the Hα and OIII pixels at each position, using a simple PixelMath equation:

luminance = max(Hα, OIII)
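In numpy terms, “select the brighter pixel of either channel” is simply a per-pixel maximum; a minimal sketch, with array names assumed:

```python
import numpy as np

# Per-pixel maximum of the two (registered, linear-fitted) narrowband
# stacks; tiny sample arrays stand in for the real images.
ha = np.array([[0.8, 0.1], [0.3, 0.6]])
oiii = np.array([[0.2, 0.5], [0.3, 0.9]])

synthetic_lum = np.maximum(ha, oiii)
print(synthetic_lum)  # each pixel holds the brighter of the two inputs
```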

When this image (fig.6) was later combined with the processed color image, the blue veil sprang to life, without diminishing the dominant Hα signal in other areas. Since the two narrowband channels were balanced with the LinearFit tool, the synthetic green channel had similar levels too, which ensured the star color avoided the magenta hue often seen in Hubble palette images. To improve the star color further, one of the last steps of the process was to substitute the stars’ color information with that from an ordinary RGB star image.

Image Processing

The image processing in fig.7 follows three distinct paths: the main narrowband color image that forms the nebula and background; a synthetic luminance channel used to emphasize details; and a second color image route, made with the RGB filters and processed for strong star color. These three paths make extensive use of different masks, optimized for the applied tool. Generating these masks is another task that requires several iterations to tune the selection, size, brightness and feathering for the optimum result.

Narrowband Processing

After deciding upon the blend for the green channel, the narrowband processing followed a standard workflow. After carefully flattening the background (easier said than done on the Hα channel, on account of the copious nebulosity), the blue channel (OIII) was balanced to the red channel (Hα) by applying the LinearFit tool. The two channels were added together with the PixelMath formula below to form the synthetic green channel, before combining the three channels with the RGBCombination tool.

green = (0.3 × Hα) + (0.7 × OIII)
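The whole narrowband-to-RGB mapping can be sketched in numpy, using the 30:70 green blend chosen earlier (array handling simplified; PixInsight operates on its own image containers):

```python
import numpy as np

def narrowband_to_rgb(ha: np.ndarray, oiii: np.ndarray) -> np.ndarray:
    """Map the two narrowband channels to RGB: red = Ha, blue = OIII,
    green = 30% Ha + 70% OIII."""
    green = 0.3 * ha + 0.7 * oiii
    return np.stack([ha, green, oiii], axis=-1)

# Tiny uniform test images stand in for the real stacks:
ha = np.full((2, 2), 1.0)
oiii = np.full((2, 2), 0.5)
rgb = narrowband_to_rgb(ha, oiii)
print(rgb[0, 0])  # red 1.0, green 0.65, blue 0.5
```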

This still-linear RGB image has a small clear section of sky, free of nebulosity, and this was used as the background reference for the BackgroundNeutralization tool. Similarly, a preview window dragged over a selection of bright stars of varying hue was used as the white reference for the ColorCalibration tool. Before stretching, a small amount of noise reduction was applied to the entire image and then again, using a range mask to protect the brighter areas of nebulosity. Stretching was applied in two rounds with the HistogramTransformation tool, using the live preview to ensure the highlights were not over-brightened and hence desaturated. This image was put aside for later combination with the luminance data.
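For reference, the non-linear stretch at the core of HistogramTransformation is the midtones transfer function, which maps a chosen midtone m to 0.5 while leaving the black and white points fixed. A minimal sketch:

```python
import numpy as np

def mtf(m: float, x: np.ndarray) -> np.ndarray:
    """Midtones transfer function: maps the chosen midtone m to 0.5,
    keeping 0 and 1 fixed. x is assumed normalised to [0, 1]."""
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

x = np.array([0.0, 0.05, 0.5, 1.0])
print(mtf(0.05, x))  # 0.05 maps to 0.5: the faint signal is lifted strongly
```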

Luminance Processing

Linear Processing

A new file for luminance processing was synthesized by PixelMath using the earlier equation. A temporary screen stretch showed an extensive star field pervading the image, and in this case I decided to deconvolve only the stars, to prevent any tell-tale artefacts in the nebula. To do this I needed a star mask that excluded the nebulosity. After some experimentation with the noise threshold and growth settings in the StarMask tool, I was able to select nearly all the stars. About 20 stars were selected for the DynamicPSF tool to generate a point spread function (PSF) image. This in turn was used by the Deconvolution tool to give better star definition. Deconvolution can be a fiddle at the best of times. To prevent black halos, the image requires de-ringing, and the result is very sensitive to the Global Dark setting. I started with a value of 0.02 and made small changes. A setting optimized for the stars will almost certainly affect the background; the star mask prevents the tool from doing so. It took a few tries with modified star masks (using different Smoothness and Growth parameters) to ensure there was no residual effect from the Deconvolution tool on the surrounding dark sky and nebula.
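Deconvolution tools of this kind are typically built around the Richardson-Lucy algorithm; a minimal, unregularized sketch (with no de-ringing, hence the black halos the text warns about) looks like this:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Minimal Richardson-Lucy deconvolution. Real tools add
    regularisation, de-ringing and mask support on top of this."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# A point "star" blurred by a Gaussian PSF...
y, x = np.mgrid[-3:4, -3:4]
psf = np.exp(-(x**2 + y**2) / 2.0)
psf /= psf.sum()
star = np.zeros((21, 21))
star[10, 10] = 1.0
observed = fftconvolve(star, psf, mode="same")

# ...is visibly re-concentrated by the deconvolution:
restored = richardson_lucy(observed, psf)
print(restored[10, 10] > observed[10, 10])  # True
```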

Having sharpened the stars, noise reduction was applied to the background with the ATWT tool (now superseded) using a simple range mask. This mask was created with the RangeSelection tool: first, a duplicate luminance image was stretched non-linearly and the upper limit slider adjusted to select the nebulosity and stars. I then used the fuzziness and smoothness settings to feather and smooth the selection. This range mask was put aside for later use.
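A simplified stand-in for the range mask workflow, with a simple threshold and a Gaussian feather in place of RangeSelection’s fuzziness and smoothness controls, might look like:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def range_mask(image: np.ndarray, lower: float, upper: float,
               smoothness: float = 2.0) -> np.ndarray:
    """Select pixels between two limits, then feather the hard edges
    with a Gaussian blur. A crude approximation of RangeSelection."""
    hard = ((image >= lower) & (image <= upper)).astype(float)
    return gaussian_filter(hard, sigma=smoothness)

img = np.zeros((9, 9))
img[3:6, 3:6] = 0.8  # a patch of bright "nebulosity"
mask = range_mask(img, 0.5, 1.0, smoothness=1.0)
print(mask[4, 4] > 0.5, mask[0, 0] < 0.1)  # True True
```

Inverted, a mask like this protects the bright areas so noise reduction acts only on the background sky.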

Non-Linear Processing

This image has many small stars and just a few very bright ones. In this case, I chose to stretch the image non-linearly with two passes of the HistogramTransformation tool, taking care not to clip the nebulosity luminance. To address the inflated bright stars I used the MorphologicalTransformation tool, set to Erosion mode. This shrinks big stars and reduces their intensity, which allows them to retain color; at the same time, however, small stars disappear. After establishing a setting that realistically shrank the inflated stars, I generated a star mask that revealed only the bright stars. This was done by carefully selecting a noise threshold value in the StarMask tool high enough to exclude the bright nebulosity, and a larger scale setting that identifies the largest stars. With an inverted mask in place, the MT tool tames the excess blooming on the brightest stars. The last luminance processing step was to enhance the detail in the nebulosity using the MMT tool. Sharpening a linear image is difficult and may create artefacts; the MMT tool, working on a non-linear image, does not. In this case, a mild bias increase on layers 2 and 3 improved the detail in the nebulosity. This fully processed image was then used as the luminance channel for both RGB images, the narrowband image and the star image. In this way, when the two LRGB images were finally combined, they blended seamlessly.
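The erosion operation itself replaces each pixel with the minimum of its neighbourhood, which is what shrinks stellar profiles. A toy numpy/scipy illustration (MorphologicalTransformation adds structuring-element shapes, amount blending and mask support on top of this):

```python
import numpy as np
from scipy.ndimage import grey_erosion

# A small synthetic star profile, peaking at 1.0 in the centre:
star = np.array([
    [0.0, 0.1, 0.2, 0.1, 0.0],
    [0.1, 0.4, 0.7, 0.4, 0.1],
    [0.2, 0.7, 1.0, 0.7, 0.2],
    [0.1, 0.4, 0.7, 0.4, 0.1],
    [0.0, 0.1, 0.2, 0.1, 0.0],
])

# Each pixel takes the minimum of its 3x3 neighbourhood:
eroded = grey_erosion(star, size=(3, 3))
print(eroded[2, 2])  # 0.4: the peak is suppressed and the profile shrinks
```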

fig134_8.jpg

fig.8 I selected the brightest stars with a star mask and applied the MorphologicalTransformation tool to them. This tamed the star bloat to some extent and lowered the luminance values. These lower luminance values also help later on with color saturation in the LRGB composite.

RGB Star Processing

After the complex processing of the narrowband images, the RGB star image processing was light relief. The separate images had their backgrounds flattened with the DBE tool before being combined into an RGB file. The background was neutralized and the image color calibrated as normal. The transformation to non-linear used a medium histogram stretch. This image is all about star color, and over-stretching clips the color channels and reduces color saturation. The star sharpness was supplied by the previously processed luminance file and so the ATWT tool was used to blur the first two image scales of the RGB file to ensure low chrominance noise before its color saturation was boosted a little.

fig134_9.jpg

fig.9 The wide-field shot, showing the nebula in context of its surroundings; the red nebulosity in the background and the myriad stars of the Milky Way.

Image Combination

Bringing it all together was very satisfying. The narrowband color image and the luminance were combined as normal using the LRGBCombination tool: checking the L channel, selecting the luminance file and applying the tool to the color image by dragging the blue triangle across. This image was subtly finessed with the MMT tool to improve the definition of the nebula structure, with further noise reduction on the background using TGVDenoise, both using a suitable mask support to direct the effect. (In both cases the tools’ live previews give convenient, swift feedback on their settings, especially when they are tried out first on a smaller preview window.)

Similarly, the RGB star image was combined with the same luminance file with LRGBCombination to form the adopted star image. Bringing the two LRGB files together was relatively easy, provided I used a good star mask. This mask selected most if not all the stars, with minimal structure growth. This mask was then inverted and applied to the narrowband image, protecting everything apart from the stars. Combining the color images was surprisingly easy with a simple PixelMath equation that just called out the RGB star image. The mask did the work of selectively replacing the color information. (As the luminance information was the same in both files, it was only the color information that changed.) Clicking the undo/redo button had the satisfying effect of instantly changing the star colors back and forth.
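The mask-driven color substitution can be sketched as a per-pixel blend between the two LRGB images (image and mask names are assumptions for illustration); because both share the same luminance, only the chrominance effectively changes:

```python
import numpy as np

def substitute_stars(narrowband_rgb, star_rgb, star_mask):
    """Blend two color images through a star mask: where the mask is
    white the RGB star color wins, elsewhere the narrowband color
    is kept."""
    m = star_mask[..., np.newaxis]  # broadcast the mask over channels
    return narrowband_rgb * (1.0 - m) + star_rgb * m

nb = np.full((2, 2, 3), 0.2)     # narrowband LRGB color
stars = np.full((2, 2, 3), 0.9)  # RGB star color
mask = np.array([[1.0, 0.0], [0.0, 0.0]])  # a single "star" pixel
out = substitute_stars(nb, stars, mask)
print(out[0, 0, 0], out[1, 1, 0])  # 0.9 0.2
```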

After examining the final result, although technically accurate, the combination of OIII and Hα luminosity in the image margins resembled poor calibration. Using the CurvesTransformation tool, I applied a very slight S-curve to the entire image luminance and then, using the ColorSaturation tool, in combination with a range mask, increased the relative color saturation of the reds slightly in the background nebulosity.
