M27 (Dumbbell Nebula)

A lesson in observation and ruthless editing to improve definition.

Equipment:

250 mm f/8 RCT (2,000 mm focal length)

QSI 683 (Kodak KAF8300 sensor)

QSI filter wheel (Astrodon filters)

Starlight Xpress Lodestar off-axis guider

Paramount MX mount

Software: (Windows 10)

Sequence Generator Pro, ASCOM drivers

TheSkyX Professional, PHD2

PixInsight (Mac OSX), Photoshop CS6

Exposure: (HOLRGB)

Hα, OIII bin 1; 25 × 1,200 seconds,

L bin 1; 50 × 300 seconds, RGB bin 2; 25 × 300 seconds each

Planetary nebulae, formed by an expanding, glowing shell of ionized gas ejected from an old red giant, are fascinating subjects to image. Rather than form an amorphous blob, many take on amazing shapes as they grow. M27 is a big, bright object; it was the first planetary nebula to be discovered and was an ideal target for my first attempt. The object fitted nicely into the field of view of a 10-inch Ritchey-Chrétien telescope. Ionized gases immediately suggest using narrowband filters, and a quick perusal of other M27 images on the Internet confirmed that its emission is largely Hα and OIII, with subtle SII content. This assignment then also became a good subject with which to show how to process a bi-color narrowband image.

Lumination Rumination

The acquisition occurred during an unusually clear spell in July, and I aimed to capture many hours of L, Hα and OIII, as well as a few hours of RGB for star color correction. As it turned out, this object provided a useful learning experience in exposure planning: as I later discovered during image processing, the extensive luminance frames were hardly used in the final image, and the clear skies would have been better spent acquiring further OIII frames. It was a lesson in gainful luminance-data capture.

fig139_328_1.jpg

Image Acquisition

The main exposure plan was based on 1×1 binned exposures. Although this is slightly over-sampled for the effective resolution of the RCT at 0.5 arc seconds / pixel, I found that binning 2×2 caused bright stars (not clipped) to bloom slightly and look like tear-drops. The plan delivered 25 × 20-minute exposures each of Hα and OIII and 50 × 5-minute luminances, along with 6 hours' worth of 2×2 binned RGB exposures to provide low-resolution color information. During the all-important thrifting process (using the PixInsight SubframeSelector script) I discarded a short sequence of blue exposures, caused by a single rogue autofocus cycle coincident with passing thin cloud. (I had set Sequence Generator Pro's autofocus to run automatically with each 1°C ambient change. After the loss of half a dozen consecutive frames, I have changed that to also trigger autofocus based on time or frame count.)
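For reference, the sampling figure above follows from the usual plate-scale arithmetic; with the KAF-8300's 5.4 µm pixels (my assumption, not stated in the text) it works out to roughly 0.56 arc seconds per pixel at bin 1. A minimal Python sketch, purely for illustration:

```python
# Plate scale in arc seconds per pixel: 206.265 * pixel size [um] / focal length [mm].
def image_scale(pixel_um: float, focal_mm: float, binning: int = 1) -> float:
    return 206.265 * pixel_um * binning / focal_mm

print(image_scale(5.4, 2000))     # ~0.56"/pixel at bin 1 (slightly over-sampled)
print(image_scale(5.4, 2000, 2))  # ~1.11"/pixel at bin 2
```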

During the image acquisition, it was evident from a simple screen stretch that both the Hα and OIII emissions were strong (fig.1). This is quite unusual; the OIII signal is usually much weaker and requires considerably more exposure to achieve a satisfactory signal-to-noise ratio. At the start of one evening I evaluated a few 20-minute SII exposures. These showed faint and subtle detail in the core of M27 (fig.1), but for now I disabled the event in Sequence Generator Pro. At the same time, I should have looked more carefully at the luminance information: when you compare the definition of the three narrowband images with the luminance image in fig.1, it is quite obvious that the luminance does not provide the necessary detail or pick up the faint peripheral nebulosity over the light pollution. This is in contrast to imaging galaxies, in which the luminance data is often used to define the broadband light output of the galaxy and any narrowband exposures are used to accentuate fine nebulosity.

fig139_1.jpg

fig.1 Visual inspection of the luminance candidates shows a remarkable difference in detail and depth. The Hα and OIII images both show extensive detailed structure within the nebula, and peripheral tendrils too. The SII data is faint by comparison, and the luminance file, itself an integration of over 4 hours, admittedly shows more stars but lacks the definition within the core and misses the external peripheral plumes altogether. This suggests creating a luminance image by combining Hα and OIII.

Linear Processing

I used the BatchPreprocessing script to calibrate and register all frames. Just beforehand, I defined a CosmeticCorrection process on an Hα frame to remove some outlier pixels and selected its process instance in the batch script. The narrow field of view had fewer incidences of aircraft or meteor trails, and the image integration used comparatively high settings for pixel rejection. The MURE noise reduction script was applied to all the image stacks, using the camera gain and noise settings determined from the analysis of two light and two dark frames at the same binning level. (This is described in detail in the chapter on noise reduction.) This algorithm has the remarkable property of attacking noise without destroying faint detail and is now a feature of all my linear processing, prior to deconvolution.
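The gain and read-noise figures that the MURE script asks for can be estimated with the classic two-frame photon-transfer calculation. Whether or not that is exactly what the script's own analysis does, the idea is sketched below in Python; light1/light2 and dark1/dark2 are placeholder names for two matched, evenly illuminated light frames and two matched dark frames at the same binning:

```python
import numpy as np

def gain_and_read_noise(light1, light2, dark1, dark2):
    """Photon-transfer estimate: gain in e-/ADU, read noise in electrons."""
    light1, light2 = light1.astype(float), light2.astype(float)
    dark1, dark2 = dark1.astype(float), dark2.astype(float)
    signal = light1.mean() + light2.mean() - dark1.mean() - dark2.mean()
    light_var = np.var(light1 - light2)   # 2 * (shot + read) variance in ADU^2
    dark_var = np.var(dark1 - dark2)      # 2 * read variance in ADU^2
    gain = signal / (light_var - dark_var)
    read_noise = gain * np.std(dark1 - dark2) / np.sqrt(2)
    return gain, read_noise
```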

In normal LRGB imaging, it is the luminance that receives the deconvolution treatment. In this instance, the narrowband images are used for both color and luminance information, so the Hα and OIII image stacks went through the same treatment, each with its own PSF and optimized settings for ringing and artefact softening. This improved the star sizes and delineated fine detail in the nebula at the same time.
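A common choice for this step is a regularized Richardson-Lucy algorithm. The bare, unregularized iteration at its heart looks roughly like the Python sketch below; it is illustrative only and omits the ringing and artefact-softening protections mentioned above, which are essential in practice:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=30):
    """Bare Richardson-Lucy deconvolution: no regularization or de-ringing."""
    image = image.astype(float)
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```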

Non-Linear Processing

M27 lies within the star-rich band of the Milky Way, and the sheer number of stars can easily compete with and obscure faint nebulosity in an image. To keep the right visual emphasis, a masked stretch was used as the initial non-linear transformation on the three "luminance" files, followed by several applications of LocalHistogramEqualization at scales between 60 and 300 to emphasize the structures within the nebula. This combination reduced star bloat. The RGB files, intended for low-resolution color support and star color, were gently stretched using the MaskedStretch tool and put to one side. One useful by-product of this tool is that it also sets a target background level, which makes image combination initially balanced, similar in a way to the LinearFit tool.
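The essence of a masked stretch is many small non-linear stretches, each applied through a mask built from the image itself, so bright stars are stretched less than the faint nebulosity. The Python sketch below is a conceptual illustration only; PixInsight's MaskedStretch differs in detail, and target_bg and steps are placeholder parameters:

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: maps x = m to 0.5, keeps 0 and 1 fixed."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def masked_stretch(img, target_bg=0.10, steps=50):
    """Conceptual masked stretch: repeated gentle stretches blended through
    a mask made from the current image, protecting the brighter pixels."""
    img = np.clip(img.astype(float), 0.0, 1.0)
    for i in range(steps):
        bg = float(np.median(img))
        if bg >= target_bg:
            break
        step_target = bg + (target_bg - bg) / (steps - i)  # small move toward the target
        m = mtf(bg, step_target)        # midtones balance that maps bg onto step_target
        stretched = mtf(img, m)
        mask = img                      # brighter pixels receive more protection
        img = mask * img + (1.0 - mask) * stretched
    return img
```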

Creating the Luminance File

A little background reading uncovered the useful Multichannel Synthesis SHO-AIP script. The SHO stands for Sulfur, Hydrogen, Oxygen, and the script comes from the French association Astro Images Processing. The script, by Bourgon and Bernhard, allows one to try out different combinations of narrowband and broadband files, in different strengths and with different blending modes, similar to those found in Photoshop. In practice I applied it in two stages; the first to establish a blended luminance file and the second to mix the files into the RGB channels and perform an automatic LRGB combination.

It is also worth experimenting by bypassing LRGB combination with a separate luminance channel if your RGB channels are already deconvolved and have better definition. In this case, manipulate the color file using its native luminance. This works quite well if the channels are first deconvolved separately and then have MaskedStretch applied to make them non-linear.

fig139_2.jpg

fig.2 After some experimentation, I formed a luminance channel by just combining Hα and OIII. As soon as I included the broadband luminance information, the stars became bloated and the fine detail in the sky was washed out.

fig139_3.jpg

fig.3 Bubble, bubble, toil and trouble. This is the mixing pot for endless experimentation. On the advice of others, I did not enable the STF (ScreenTransferFunction) options. Altering the balance of the OIII contribution between the G and B channels alters the color of the nebula to a representative turquoise.

In the first instance I tried different weightings of narrowband and luminance to form an overall luminance file. The aim was to capture the detail from both narrowband stacks. There are various blending modes, which will be familiar to Photoshop users. I chose the lighten mode, in which the final image reflects the maximum signal from the weighted files (fig.2). The final combination had a very minor contribution from luminance (just to add more stars in the background) and was almost entirely a blend of Hα and OIII (after they had been matched with LinearFit). This concerned me for some time until I realized that a luminance file is all about definition and depth. In future, when I am planning an image acquisition, I will examine the luminance and narrowband files for these attributes and decide whether luminance acquisition through a clear filter is a good use of imaging time. When imaging dim nebulae, I increasingly use the luminance filter for autofocus / plate-solving and use a light pollution filter for broadband luminance, combining it with narrowband data to manufacture a master luminance file.
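To make the lighten blend concrete: the synthetic luminance is simply the pixel-by-pixel maximum of the weighted, stretched stacks. The Python sketch below is illustrative only, and the weights are placeholders rather than the values used for this image:

```python
import numpy as np

def lighten_blend(ha, oiii, lum=None, w_ha=1.0, w_oiii=0.9, w_lum=0.2):
    """Pixel-wise maximum of the weighted files (Photoshop-style 'lighten' mode)."""
    layers = [w_ha * ha, w_oiii * oiii]
    if lum is not None:
        layers.append(w_lum * lum)  # minor broadband contribution for extra faint stars
    return np.maximum.reduce(layers)
```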

Creating the RGB File

The second stage is the melting pot of color. Here I blended the OIII data into both the Green and Blue channels (Green is Vert in French, hence the "V"), along with some of the RGB data too (fig.3). After setting the OIII contribution to the Green and Blue channels, I balanced the background with Green and Blue broadband data to keep it neutral. This script does not have a live preview, but hitting either of the two mixing buttons refreshes an evaluation image. The other options on the tool perform noise reduction and LRGB combination using existing PixInsight tools. Some typical settings are shown in fig.3.
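Conceptually, the bi-color mix puts Hα into red and splits OIII between green and blue, with a small broadband contribution to keep the background neutral. The sketch below is only an approximation of what the SHO-AIP script does interactively, and every weight is a placeholder, not a value from fig.3:

```python
import numpy as np

def hoo_mix(ha, oiii, r, g, b, oiii_to_g=0.45, oiii_to_b=0.55, broadband=0.10):
    """Assemble an HOO-style color image from narrowband and broadband stacks."""
    red   = (1.0 - broadband) * ha + broadband * r
    green = (1.0 - broadband) * oiii_to_g * oiii + broadband * g
    blue  = (1.0 - broadband) * oiii_to_b * oiii + broadband * b
    return np.stack([red, green, blue], axis=-1)
```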

Fine Tuning

The image still required further improvement to star color, background levels, sharpening, color saturation, noise reduction, and star size. The RGB channels were combined and tuned for star color before being linear-fitted to the nebula RGB image. This matching of intensities makes the combination process seamless. With a simple star mask and a PixelMath equation, the star color in the nebula was quickly replaced with something more realistic.
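The PixelMath expression itself is not given here, but the substitution amounts to blending the broadband star color into the narrowband color image through the star mask. A hypothetical Python equivalent (inputs are NumPy arrays):

```python
def replace_star_color(narrowband_rgb, broadband_rgb, star_mask):
    """Blend broadband RGB into the narrowband image wherever the star mask is bright."""
    m = star_mask[..., None]  # broadcast a mono mask across the RGB channels
    return m * broadband_rgb + (1.0 - m) * narrowband_rgb
```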

The main image background was then treated with DynamicBackgroundExtraction and, with the help of a range mask, the nebula was sharpened using the MultiscaleMedianTransform tool (fig.5). Saturation was increased slightly and background noise was reduced using TGVDenoise in combination with a mask. The star sizes were then reduced by making a star contour mask (fig.4) and applying an eroding MorphologicalTransformation to the image. A gentle "S-curve" was then applied to make the image pop, and as it still had some intrusive background noise, further noise reduction, in the form of the MMT tool, was carefully applied to the dark areas. The resulting workflow is outlined in fig.6.
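For the star reduction step, the idea is to erode the image slightly and let that change through only where the star contour mask is bright, so the nebula and star cores are largely untouched. A conceptual Python sketch; the structure size and mask are placeholders, and PixInsight's MorphologicalTransformation offers far more control:

```python
import numpy as np
from scipy.ndimage import grey_erosion

def shrink_stars(img, contour_mask, size=3):
    """Erode, then blend the eroded result back in through the star contour mask."""
    eroded = grey_erosion(img, size=(size, size))
    return contour_mask * eroded + (1.0 - contour_mask) * img
```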

fig139_4.jpg

fig.4 The myriad stars in the image can detract from the nebula. Here, I made a star contour mask, characterized by small donut mask elements. When this is applied to the image and the MorphologicalTransformation tool is set to erode, the star sizes shrink without their cores fading too much.

fig139_5.jpg

fig.5 The details within the nebula were emphasized with the MMT tool, increasing the bias to medium-sized scales. Here, I used it with an external range mask that was fitted to the nebula and also excluded the brightest stars.

Summary

This is an interesting case study that encouraged me to think more carefully about image capture, taught me new tools, and showed the subtleties of balancing truthful rendition with aesthetics. A key part of this was the realization that I had to optimize and process the separate narrowband files prior to combining them. There is no perfect interpretation, and I explored some interesting variations before settling on the final image. For this version I simply used MaskedStretch to transform the linear images and, after using the Multichannel Synthesis script to produce the color image, I used HDRMultiscaleTransform (HDRMT) to enhance the details. This produced excellent definition in the nebula without resorting to multiple applications of LHE (interspersed with small range extensions with HT to avoid clipping). The image just needed a small amount of sharpening, a gentle S-curve to emphasize the faint peripheral nebulosity, a subtle increase in saturation and some gentle noise reduction in the background.

It may sound easy, but the various processing attempts took many hours, so one should expect to try things out several times before being satisfied with the end result. If you have not already realized, the trick here is to save each major processing attempt as a PixInsight project, which makes it possible to pick it up again on another day, or to duplicate it and try out different parallel paths from an established starting point. This may be useful if, in the future, I decide to dedicate a few nights to recording SII exposures and create a tri-color version of the image.

fig139_6.jpg

fig.6 The image processing workflow is deceptively simple here and hides a number of subtleties. First, all the files are de-noised with MURE immediately following integration and before any manipulation. The narrowband files are also treated in two roles, both as color files and as the source of the luminance detail. Unlike many other examples in the book, the left-hand workflow handles both the luminance and color information simultaneously. In the workflow opposite I have included the original luminance file for completeness; although it was not actually used in this case, it may be of service with a different subject. The RGB broadband files were principally used for star color but were also useful in balancing the background color by including them in the SHO-AIP script (fig.3).
