IC1396A (Elephant’s Trunk Nebula)

An astonishing image, from an initially uninspiring appearance.

Equipment:

Refractor, 132 mm aperture, 928 mm focal length

TMB Flattener 68

QSI683wsg, 8-position filter wheel, Astrodon filters

Paramount MX

SX Lodestar guide camera (off-axis guider)

Software: (Windows 10)

Sequence Generator Pro, ASCOM drivers

TheSkyX Pro

PHD2 autoguider software

PixInsight (Mac OS)

Exposure: (RGB, Hα, SII, OIII)

Hα, OIII, SII bin 1; 40 × 1200 seconds each

RGB bin 1; 20 × 300 seconds each

Just occasionally, everything falls into place. It does not happen very often but when it does, it gives enormous satisfaction. It is certainly a case of “fortune favors the prepared mind”. This image is one of my favorites and is the result of substantial, careful acquisition coupled with best practices during image calibration and processing. A follow-on attempt to image the NGC 2403 (Caldwell 7) galaxy challenged my acquisition and processing abilities to the point that I waited another year to acquire more data, with better tracking and focusing.

This nebula is part of the larger IC1396 complex in Cepheus and its fanciful name describes the sinuous gas and dust feature, glowing as a result of intense ultraviolet radiation from the massive triple star system HD 206267A. For astrophotographers it is a favorite subject for narrowband imaging, as it has an abundance of Hα, OIII and SII nebulosity. Interestingly, when one looks at the individual Hα, OIII and SII acquisitions, only the Hα looks promising; the other two are pretty dull after a standard screen stretch (fig.1). Surprisingly, when the files are matched to each other and combined using the standard HST palette, the outcome is the reverse: clear opposing SII and OIII gradients provide a rainbow-like background, with less obvious detail in the green channel (assigned to Hα).

Acquisition

This image received a total exposure of 45 hours, the combination of an unusual run of clear nights and all-night imaging with an automated observatory. The acquisition plan used 6 filters: R, G, B, Hα, SII and OIII. By this time I had learned to be more selective with exposure plans and this one did not waste imaging time on general wide-band luminance exposures. All exposures were acquired with a 132-mm refractor and field flattener, mounted on a Paramount MX. The observatory used my own Windows and Arduino applications with an ASCOM dome driver, and the images were acquired automatically with Sequence Generator Pro. It was cool to set it going, go to bed and find it had parked itself, shut down and closed up by morning. Guiding was ably provided by PHD2, using an off-axis guider on the KAF8300-based QSI camera. A previously acquired 300-point TPoint model and ProTrack made guiding easy, with long-duration guide exposures that delivered a tracking error of less than 0.4 arc seconds RMS. Only four light frames were rejected out of a total of 180.

In my semi-rural environment, a 45-hour integration time overcomes the image shot noise from light pollution. A darker site would achieve similar results in less time.
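As a back-of-the-envelope check of that claim: for a sky-limited target the signal grows linearly with total exposure while the sky's shot noise grows only as its square root, so halving the sky flux (a darker site) reaches the same signal-to-noise ratio in half the time. A minimal sketch, using made-up illustrative rates rather than measurements from this image:

```python
import math

def sky_limited_snr(signal_rate, sky_rate, t_total):
    """SNR for a sky-noise-limited target: signal grows as t,
    shot noise from the sky background as sqrt(sky_rate * t)."""
    return signal_rate * t_total / math.sqrt(sky_rate * t_total)

# Halving the sky brightness matches the SNR of a 45-hour
# integration in half the time (22.5 hours).
bright_site = sky_limited_snr(1.0, 100.0, 45.0 * 3600)
dark_site = sky_limited_snr(1.0, 50.0, 22.5 * 3600)
```

Since SNR here scales as the square root of total time, quadrupling the integration only doubles the SNR, which is why long totals such as 45 hours pay off slowly but surely.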

fig.1 During acquisition, a single screen-stretched Hα narrowband exposure looked promising, but the OIII and SII looked dull in comparison (the light frames shown here are in PixInsight, before image calibration and with a simple screen stretch). The combination, however, provides stunning rainbow colors due to a subtle opposing gradation in the narrowband intensities.

Processing

The processing workflow (fig.8) used the best practices that have evolved over the last few years. Initially, all the individual lights were batch-preprocessed to generate calibrated and registered images and then integrated into a stack for each filter. These in turn were cropped with DynamicCrop and had noise reduction applied, in the form of MURE Denoise, according to the sensor gain, noise, integration count and interpolation algorithm during registration (in this case, Lanczos 3). These 6 files flowed into three processing streams for color, star and luminance processing.

Luminance Processing

As usual, deconvolution, sharpening and enhancement are performed on luminance data. In this case, the luminance information is buried in all 6 exposure stacks. To extract it, the 6 stacks were integrated (without pixel rejection) using a simple scaling, based on MAD noise levels, to form an optimized luminance file. (This is one of the reasons I no longer bin RGB exposures, since interpolated binned RGB files do not combine well.) After deconvolution, the initial image stretch was carried out using MaskedStretch, set up to deliberately keep clipping to a minimum. On the subject of deconvolution, after fully processing this image, I noticed small dark halos around some stars and returned to this step to increase the Deringing setting. Before rolling back the changes, I dragged the processing steps that followed deconvolution from the luminance’s History Explorer tab into an empty ProcessContainer (fig.2). It was then a simple matter to re-do the deconvolution and apply the process container to the result, returning to the prior status quo. (It is easy to see how this idea can equally be used to apply similar process sequences to several images.)
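The noise-weighted combination at the heart of the synthetic luminance step can be sketched in a few lines of NumPy. The function names are mine, and ImageIntegration's real weighting is more sophisticated (scale matching, per-channel statistics), so treat this only as the underlying idea:

```python
import numpy as np

def mad_noise(img):
    """Robust noise estimate: scaled median absolute deviation."""
    med = np.median(img)
    return 1.4826 * np.median(np.abs(img - med))

def synthetic_luminance(stacks):
    """Combine filter stacks using inverse-variance weights derived
    from each stack's MAD noise estimate, so quieter stacks
    contribute more to the optimized luminance."""
    weights = np.array([1.0 / mad_noise(s) ** 2 for s in stacks])
    weights /= weights.sum()
    return sum(w * s for w, s in zip(weights, stacks))
```

With this weighting, a stack with twice the noise of another receives a quarter of its weight, which is why a noisy binned RGB stack adds little and mostly dilutes the result.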

One of the key lessons from previous image processing is to avoid stretching too much or too early. Every sharpening or enhancement technique generally increases contrast and, in doing so, pushes bright stars or nebulosity to brightness levels perilously close to clipping. When a bright luminance file is combined with an RGB file, no matter how saturated, the result is washed-out color. With that in mind, three passes of LocalHistogramEqualization (LHE) at scales of 350, 150 and 70 pixels were applied. Before each, I applied HistogramTransformation (HT) with just the endpoints extended out by 10%. On first appearance the result is lackluster, but it is easy to expand the tonal range at the end by moving the HT endpoints back in again. These three passes of LHE emphasized the cloud structures in the nebulosity. To sharpen the features further, the first four scales were gently exaggerated using MultiscaleMedianTransform (MMT) in combination with a linear mask. With the same mask (inverted this time to protect the highlights), MMT was applied again, only this time set to reduce the noise levels over the first 5 scales. The processed luminance is completely filled with nebulous clouds (fig.3) and so, crucially, neither it nor any of the color channels had its background equalized with DynamicBackgroundExtraction (DBE), which would have removed much of these fascinating features. It is useful to note that MaskedStretch sets a target background level; if the same setting is kept between applications, the images will automatically have similar background median values after stretching.
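The endpoint trick can be pictured as a reversible linear remap: pushing the endpoints out compresses the data away from 0 and 1, leaving headroom for LHE's contrast boost, and the inverse remap restores the tonal range afterwards. A minimal sketch, assuming a simple symmetric 10% extension (only an approximation of what the HT shadow and highlight sliders do):

```python
import numpy as np

def extend_endpoints(img, amount=0.10):
    """Compress pixel values away from 0 and 1, leaving headroom so
    subsequent local contrast enhancement cannot clip."""
    return (img + amount) / (1.0 + 2.0 * amount)

def restore_endpoints(img, amount=0.10):
    """Inverse remap: pull the endpoints back in at the end."""
    return img * (1.0 + 2.0 * amount) - amount
```

The two functions are exact inverses, so as long as the enhancement in between stays inside the compressed range, nothing is lost.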

fig.2 To retrace one’s steps through a processing sequence, drag the processes from the image’s History Explorer tab into an empty ProcessContainer. Apply this to an image to automatically step through all the processes (including all the tool settings).

fig.3 The fully processed luminance file, comprising data from all 6 filters. It is deliberately subtle and none of the pixels are saturated, which helps to retain color saturation when applied to the RGB file.

Star Processing

The purpose of processing the RGB stacks was to generate a realistic color star field. To that end, after each of the channels had been stretched, they were linear-fitted to each other and then combined into an RGB file. One thing I discovered along the way is to use the same stretch method for the color and luminance data; this helps achieve a neater fit when the two are combined later on. Linear-fitting the three files to each other before combining generally approximates a good color match. I went further, applying BackgroundNeutralization and then ColorCalibration to a group of stars to fine-tune the color fidelity. After removing green pixels with SCNR, the star color was boosted with a gentle saturation curve using the CurvesTransformation tool.
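Linear-fitting one channel to another is just a least-squares straight-line match of pixel values. A bare-bones sketch (PixInsight's LinearFit also rejects pixels outside configurable bounds, which is omitted here):

```python
import numpy as np

def linear_fit(channel, reference):
    """Match one channel's brightness scale to a reference channel
    via a least-squares straight-line fit of pixel values."""
    slope, intercept = np.polyfit(channel.ravel(), reference.ravel(), 1)
    return intercept + slope * channel
```

Applied pairwise (for example, fitting green and blue to red), this equalizes the channels' median levels and dispersions, which is why the combined result lands close to a neutral color balance before any explicit calibration.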

Color Processing

The basic color image was surprisingly easy to generate using the SHO-AIP script. This utility provides a convenient way to control the contribution of narrowband, luminance and color channels to an RGB image. In this case, I used the classic Hubble palette, assigning SII to red, Hα to green and OIII to blue. After checking the noise levels of the RGB files against their narrowband counterparts, I added a 15% contribution from the RGB files to the final result (fig.4). This improves star appearance and color. There are endless combinations and it is easy to blend image stacks across one or more RGB channels too (as in the case of a bi-color image). There are two main options: to generate a file with just color information, or to combine it with luminance data too. In the latter case, the luminance file created earlier was used as the luminance reference during the script’s internal LRGBCombination operation.
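The channel assignment and the 15% broadband blend can be expressed directly. The helper below is a hypothetical name of mine, and the SHO-AIP script offers far more control (per-channel percentages, luminance mixing) than this linear sketch:

```python
import numpy as np

def hubble_palette(s2, ha, o3, r, g, b, rgb_fraction=0.15):
    """Classic SHO (Hubble) channel assignment, with a small
    broadband contribution blended into each output channel to
    improve star appearance and color."""
    red = (1 - rgb_fraction) * s2 + rgb_fraction * r
    green = (1 - rgb_fraction) * ha + rgb_fraction * g
    blue = (1 - rgb_fraction) * o3 + rgb_fraction * b
    return np.stack([red, green, blue], axis=-1)
```

Setting `rgb_fraction` to zero gives the pure false-color palette; raising it pulls star colors back toward their natural broadband rendition.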

fig.4 This script allows one to quickly evaluate a number of different channel mix and assignment options, allowing up to 8 files to be combined. Here, I blended a little of the RGB channels with their Hubble palette cousins to improve star rendition and color.

Backing up a bit, the relative signal strengths of the three narrowband channels are often quite different. The same was true here and, although I did not equalize them in any explicit manner during processing, another by-product of the MaskedStretch operation is to produce similar image value distributions.

fig.5 The natural-color RGB file, processed to enhance star color, ready for combining with the final image, using PixelMath and a star mask. This image takes its color information from the RGB combination process and uses LRGBCombination to adjust it to the luminance from the master color image.

fig.6 After applying MorphologicalTransformation to reduce star sizes, the overall image had some bite and twinkle added by a small dose of sharpening at the smaller scales.

I viewed these stretched narrowband files and applied the SHO-AIP script without further modification; pleased with the result, I saw no reason to alter the balance. The image showed promise but with intentionally low contrast, on account of the subtle luminance data (fig.3). To bring the image to life I used CurvesTransformation to gently boost the overall saturation and apply a gentle S-curve, followed by selective saturation with the ColorSaturation tool. Finally, the HistogramTransformation tool was applied to adjust the endpoints and lift the mid-tones slightly for reproduction.
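A gentle S-curve of the kind applied here can be modeled by blending the image with a smoothstep of itself, which darkens shadows and brightens highlights while leaving the endpoints fixed. A sketch only; `strength` is an invented blending parameter, not a CurvesTransformation setting:

```python
import numpy as np

def s_curve(img, strength=0.2):
    """Gentle contrast S-curve: blend the image with the smoothstep
    function. Values below 0.5 are darkened, values above 0.5 are
    brightened, and 0, 0.5 and 1 are left unchanged."""
    smooth = img * img * (3.0 - 2.0 * img)  # smoothstep on [0, 1]
    return (1.0 - strength) * img + strength * smooth
```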

Star Substitution

At this point the stars had unusual coloring and needed replacing. The dodge here was to extract the luminance from the main RGB image and then use LRGBCombination to apply it to the RGB star image (fig.5). This matched the intensity of both files and it was then a simple matter to apply a star mask to the false-color nebulosity image and overwrite it with the RGB star data, using a simple PixelMath equation to effectively replace the star color. Well, almost. The crucial step here was the star mask. It needed to be tight to the stars, otherwise they acquired natural-colored dark boundaries over the false-color nebulosity. The solution was to generate the star mask with low growth settings and then create a series of versions with progressive applications of MorphologicalTransformation, set to erode. It was very quick to try each mask in turn, examining a few stars of different sizes at 100% zoom.
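The substitution itself is a simple mask-weighted blend, of the kind a PixelMath expression like `nebulosity*(1 - mask) + stars*mask` would perform (the exact expression used is not recorded here). Sketched in NumPy terms, with names of my own choosing:

```python
import numpy as np

def replace_stars(nebulosity, stars, star_mask):
    """Mask-weighted blend: where the mask is 1, take the
    natural-color star image; where it is 0, keep the false-color
    nebulosity. Intermediate mask values blend the two."""
    m = star_mask[..., None]  # broadcast the 2-D mask over RGB
    return nebulosity * (1.0 - m) + stars * m
```

The blend makes clear why a loose mask fails: wherever the mask feathers beyond the star, the darker natural-color background bleeds over the bright false-color nebulosity, producing the dark boundaries described above.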

fig.7 The MorphologicalTransformation tool set to subtly shrink stars.

After viewing the final image at different zoom levels, I decided to alter the visual balance between the stars and nebulosity, and blend the stars’ luminance and color boundaries at the same time. With the aid of a normal star mask, an application of MorphologicalTransformation (set to Morphological Selection) drew in the star boundaries and lessened their dominance (fig.7). To put some twinkle back and add further crispness, I followed up by boosting the small-scale bias in the lighter regions using the MultiscaleMedianTransform tool together with a non-inverted linear mask (fig.6).
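Erosion, the operation used both to tighten the star mask earlier and (within Morphological Selection, which blends erosion with dilation) to shrink the stars themselves, replaces each pixel with the minimum of its neighborhood. A plain NumPy sketch of the pure erode case:

```python
import numpy as np

def erode(img, size=3):
    """Grayscale erosion: each pixel becomes the minimum of its
    size x size neighborhood, shrinking bright features such as
    stars. Edges are padded by replication."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = padded[:h, :w].copy()
    for dy in range(size):
        for dx in range(size):
            out = np.minimum(out, padded[dy:dy + h, dx:dx + w])
    return out
```

Repeated applications shrink bright features by roughly one pixel per pass, which is why a series of progressively eroded masks gives a convenient range of tightnesses to audition.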

It normally takes two or three tries with an image dataset before I am satisfied with the result, or I return later to try something I have learned, to push the quality envelope. Though this processing sequence looks involved, I accomplished it in a single afternoon: a rare case where everything just slotted into place and the base data was copious and of high quality. It is a good note on which to end the practical assignments section.

fig.8 The processing workflow for this image uses data only from colored filters and, unusually, does not equalize background levels, on account of the abundant nebulosity. The color information is shared across the three main branches that process the luminance, RGB star and nebulosity data. This workflow also uses MaskedStretch rather than HistogramTransformation and S-curves to reduce background levels (and apparent noise), and keeps luminance levels low until the final fine-tuning.
