A reminder of how it used to be …
Equipment:
Refractor, 132 mm aperture, 928 mm focal length
TMB Flattener 68
Starlight Xpress SXVR-H18 (Kodak KAF8300 sensor)
Starlight Xpress 2” Filter Wheel (Baader filters)
Starlight Xpress Lodestar off-axis guider
Skywatcher NEQ6 mount
Software: (Windows 7)
Maxim DL 5.24, ASCOM drivers
EQMOD
PixInsight (Mac OSX), Photoshop CS6
Exposure: (LRGB)
L bin 1; 23 × 300 seconds, RGB bin 2; 20 × 200 seconds each
The Whirlpool Galaxy is a popular target for astronomers. It is a perfect jewel in the sky; measuring only about 10 arc minutes square, it is especially intriguing due to the neighboring galaxy with which it appears to be interacting. It was also the first “nebula” in which spiral structure was observed. It is amazing to think that this occurred in 1845, when the Earl of Rosse observed it with his 72-inch telescope. The term “galaxy”, however, did not replace “nebula” for these small, deep-sky objects until the 1920s. Armed with a small refractor and a digital sensor, it is easy for us to underestimate the extraordinary efforts that past astronomers made to further science.
This early example is a reminder of what it is like to start imaging. It took place over three nights, at the end of each of which the equipment was packed away. The camera remained on the telescope between sessions for repeatability, and simple repositioning (in the absence of plate solving) placed the galaxy in the middle of the frame. The Starlight Xpress camera was screw-thread coupled to my longest focal length refractor and, for this small galaxy, used a non-reducing field-flattener. The KAF8300 sensor has obvious thermal noise and warm pixels, so a small amount of dither was introduced between each exposure to help with their later statistical removal. After adjustment, the NEQ6 mount still had a small amount of residual DEC backlash, and a deliberate slight imbalance about the DEC axis minimized its effect on tracking. Very few of the exposures were rejected.
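The statistical removal that dithering enables can be illustrated with a toy numeric sketch (synthetic values, not camera data): a warm pixel sits at a fixed sensor coordinate, but dithering shifts the sky between exposures, so after registration the defect lands on a different image pixel in each frame and a sigma-clipped stack can reject it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, width = 20, 50
sky = 100 + rng.normal(0, 5, (n_frames, width))   # registered sky signal

# Without dither, the warm pixel hits the same position in every frame.
undithered = sky.copy()
undithered[:, 25] += 500

# With dither, the defect lands somewhere new in each registered frame.
dithered = sky.copy()
for i in range(n_frames):
    dithered[i, rng.integers(0, width)] += 500

def sigma_clipped_mean(stack, kappa=3.0):
    """Per-pixel mean after rejecting outliers beyond kappa robust sigmas."""
    med = np.median(stack, axis=0)
    sigma = 1.4826 * np.median(np.abs(stack - med), axis=0)  # MAD -> sigma
    clipped = np.where(np.abs(stack - med) <= kappa * sigma, stack, np.nan)
    return np.nanmean(clipped, axis=0)

stack_u = sigma_clipped_mean(undithered)   # warm pixel survives at column 25
stack_d = sigma_clipped_mean(dithered)     # isolated outliers are rejected
```

In the undithered stack the defect is the majority value at its pixel, so no amount of rejection can remove it; with dither it is a minority outlier everywhere and clipping discards it.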
This is one of the last images I took with my original imaging setup. I sold the camera and mount shortly afterwards, before I had completed the image processing. Although I retained the Maxim DL master calibration files, I foolishly discarded the unprocessed exposures. The masters were each made from sets of 50 files and, with hindsight, really required a larger sample set to improve the bias frame quality. As we shall see later on, this was not the only issue with the master files to surface when they were used by PixInsight for the image processing.
The eighty or so light frames and the Maxim master calibration files were loaded into the PixInsight batch preprocessing script for what I thought would be a straightforward step. After cropping, the background was equalized using the DynamicBackgroundExtraction tool, making sure not to place sample points near the galaxy or its faint perimeter. Viewing the four files with a temporary screen stretch showed immediate problems; the background noise was blotchy and, surprisingly, dust spots were still apparent in the image. Undeterred, I carried on with the standard dual processing approach on the luminance and color information, following the steps summarized in fig.1. Of note, the luminance information required two passes of the noise-reducing ATWT tool, using a mask to protect galaxy and stars (fig.2). To increase the detail within the galaxy I employed the HDRMT tool; also applied in two passes at different layer settings to emphasize the galaxy structure at different scales. Color processing was more conventional, with the standard combination of neutralizing the background, color calibration and removing green pixels. Finally, these were combined into a single RGB file for non-linear processing.
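DynamicBackgroundExtraction is PixInsight's own implementation, but the principle behind it can be sketched with a simple stand-in (not the actual DBE algorithm): fit a smooth, low-order surface to background sample points placed away from the target, then subtract that model from the image.

```python
import numpy as np

rng = np.random.default_rng(1)
h, w = 100, 100
yy, xx = np.mgrid[0:h, 0:w]
gradient = 50 + 0.3 * xx + 0.1 * yy               # synthetic sky gradient
image = gradient + rng.normal(0, 2, (h, w))
image[40:60, 40:60] += 300                        # a "galaxy" to avoid sampling

def poly_terms(x, y):
    """Design matrix terms for a 2nd-order 2D polynomial surface."""
    return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=-1)

# Background samples on a grid, excluding the region around the galaxy --
# the equivalent of keeping DBE sample points off the target and its halo.
sy, sx = np.mgrid[5:h:10, 5:w:10]
keep = ~((sy > 35) & (sy < 65) & (sx > 35) & (sx < 65))
sy, sx = sy[keep], sx[keep]

A = poly_terms(sx.astype(float), sy.astype(float))
coeffs, *_ = np.linalg.lstsq(A, image[sy, sx], rcond=None)
model = poly_terms(xx.astype(float), yy.astype(float)) @ coeffs
flattened = image - model                          # equalized background
```

The flattened result has a near-zero background while the galaxy signal is untouched, which is why sample placement near the target or its faint perimeter matters: a sample on real signal drags the background model upwards.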
A temporary screen stretch of the RGB file showed some worrying color gradients that for some reason had escaped the color calibration tools. Zooming into the image showed high levels of chroma noise too. Cropping removed the worst gradients, and the background noise levels were improved and neutralized with noise reduction at scales of one and two pixels (with a mask), followed by another background neutralization. Even so, I was not entirely happy with the quality of the background, although I knew its appearance would improve when I adjusted the shadow clipping point during non-linear stretching.
All four files were stretched with the HistogramTransformation tool in two passes, with mask support for the second stretch to limit amplification of background noise. The shadow clipping point was carefully selected in the second pass to clip no more than a few hundred pixels and to set the background level. After stretching, the background noise was subdued a little more with an application of the ACDNR tool (now effectively superseded by MLT/MMT/TGVDenoise).
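Choosing a shadow clipping point that sacrifices no more than a few hundred pixels amounts to reading the low end of the histogram. A small sketch (synthetic normalized data, a hypothetical `shadow_clip_point` helper) of that selection:

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.normal(0.08, 0.01, 1_000_000)   # synthetic normalized background

def shadow_clip_point(img, max_clipped=300):
    """Black point below which at most max_clipped pixels fall:
    simply the max_clipped-th smallest pixel value."""
    return np.partition(img.ravel(), max_clipped)[max_clipped]

black = shadow_clip_point(image, 300)
# Apply the black point as a linear rescale, clipping shadows to zero.
stretched = np.clip((image - black) / (1.0 - black), 0.0, 1.0)
clipped_count = np.count_nonzero(stretched == 0.0)
```

Picking the black point from an ordered pixel count, rather than by eye, guarantees the number of clipped pixels is bounded regardless of how the histogram is shaped.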
Prior to combining the RGB and L files, the color saturation of the RGB file was increased using the ColorSaturation tool (fig.3). This particular setting accentuates the saturation of yellows, reds and blues but suppresses greens. Finally, the luminance in the RGB file and the master luminance were balanced using the now familiar LinearFit process: the luminance channel is extracted from the RGB file, matched to the processed luminance file with the LinearFit tool, and recombined into the RGB file with ChannelCombination using the CIE L*a*b* setting. The processed luminance is then applied to the RGB file with the LRGBCombination tool, with fine adjustments to saturation and lightness.
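What LinearFit does under the hood is a least-squares match: solve for a gain and offset that bring one image's values into line with a reference. A minimal numpy sketch of that idea (synthetic data, not PixInsight's exact implementation):

```python
import numpy as np

rng = np.random.default_rng(3)
# Reference: the processed luminance. Source: luminance extracted from
# the RGB file, here simulated at a different gain and offset.
reference = rng.uniform(0.1, 0.9, 10_000)
source = 0.5 * reference + 0.1 + rng.normal(0, 0.005, reference.size)

# Solve reference ~= a * source + b in the least-squares sense.
A = np.stack([source, np.ones_like(source)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, reference, rcond=None)
fitted = a * source + b        # source rescaled to match the reference
```

Once the two luminances share the same scale, swapping the processed L into the RGB image via LRGBCombination does not shift the overall brightness or wash out the color.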
After applying LRGBCombination, the galaxy looked promising but the background still needed more work. This came in the form of a masked desaturation, using a soft-edged mask tuned to remove some of the chroma noise in the background and the faint tails of the galaxy. As there were still several dust spots in the cropped image, I used the PixInsight clone tool (fig.5) to carefully blend them away before increasing the saturation of the galaxy with the saturation option of the CurvesTransformation tool (with mask support). The background had another dose of smoothing, this time with the TGVDenoise tool, set to CIE mode, to reduce luminance and chroma noise. Finally a fruitful half hour was spent using the MMT tool to enhance small and medium structures a little, protecting the background with a mask. The settings were perfected on an active preview (fig.4).
There was something bothering me about this image and the quality of the original files. They required considerable noise reduction, more than usual. I went back to my very first efforts using Maxim DL for the entire processing sequence only to find that the background noise was considerably lower and the dust spots were almost invisible. As an experiment I repeated the entire processing sequence, only this time using Maxim DL to generate the calibrated and registered RGB and luminance files. Bingo! There was something clearly wrong with the combination of Maxim master calibration files and the PixInsight batch preprocessing script. A web search established the reason why: Master darks are created differently by Maxim DL and PixInsight. I had in effect introduced bias noise and clipping by subtracting the master bias from the master dark frames twice.
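The failure mode can be demonstrated with idealized numbers (a toy sketch, not real frame data): a PixInsight-style master dark keeps the bias pedestal in, a Maxim master dark has it already removed, and subtracting the master bias from the Maxim dark a second time drives it below zero, where unsigned camera data clips.

```python
import numpy as np

rng = np.random.default_rng(4)
bias = 500.0                                   # bias pedestal in ADU
thermal = rng.gamma(2.0, 10.0, 100_000)        # dark current, with a warm tail

# PixInsight convention: the master dark still contains the bias pedestal.
pi_dark = bias + thermal
# Maxim DL convention: the master bias has already been subtracted.
maxim_dark = thermal.copy()

# Treating the Maxim dark as if it were a PixInsight dark and removing
# the master bias again takes the pedestal off twice; in unsigned camera
# units the negative result clips at zero, destroying the dark signal.
double_subtracted = np.clip(maxim_dark - bias, 0, None)
frac_clipped = np.mean(double_subtracted == 0)
```

With the pedestal removed twice, nearly every pixel of the "dark" clips to zero, so the thermal signal (and its warm pixels) is no longer removed from the lights, while the extra bias subtraction adds its read noise back in: exactly the blotchy background and surviving dust/warm-pixel artifacts seen earlier.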
A little more research established the key differences between PixInsight and Maxim DL (fig.6). This outlines the assumptions behind Maxim’s default calibration settings and PixInsight best practice, as embedded in its BatchPreProcessing script. The differences between the calibrated and stretched files can be seen in fig.7. These files are the result of calibrating and stacking frames with a similar histogram stretch and a black clipping point set to clip exactly 100 pixels. These images also indicate that the PixInsight registration algorithm is more accurate than the plate-solving technique that Maxim used. The final image using the Maxim stacks was less noisy and flatter.
My processing and acquisition techniques have moved on considerably since this early image. This example is retained, however, as it provides useful diagnostic insights and a healthy realism about working with less than perfect data. Its challenges will resonate with newcomers to the hobby. In this case it is easy to add the bias back onto the Maxim master darks, using PixelMath, before processing in PixInsight. Better still would have been to keep the original calibration files and regenerate the masters in an optimized all-PixInsight workflow. Hindsight is a wonderful thing.
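The repair itself is a one-liner: restore the bias pedestal to the Maxim master dark so it matches the convention PixInsight expects. In PixelMath the expression is essentially the master dark plus the master bias image; the numpy equivalent (synthetic frames):

```python
import numpy as np

rng = np.random.default_rng(5)
master_bias = 500 + rng.normal(0, 2, 100_000)     # synthetic bias frame
thermal = rng.gamma(2.0, 10.0, master_bias.size)  # synthetic dark current
maxim_dark = thermal.copy()                       # Maxim master: bias removed

# Add the bias pedestal back, producing a PixInsight-style master dark.
repaired_dark = maxim_dark + master_bias
```

With the pedestal restored, subtracting the master bias from this dark during calibration now yields the pure thermal signal, as intended, instead of a clipped frame.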
type | PixInsight (batch preprocessing) | Maxim DL5 (default settings)
master bias | simple average: output normalization = none; rejection normalization = none; scale estimator = median absolute deviation from the median (MAD) | simple average of sigma-clipped values
master darks | simple scaled average, without bias subtraction: output normalization = none; rejection normalization = none; scale estimator = MAD | simple average of sigma-clipped values, with the master bias subtracted; normalization options include scaling by exposure time or RMS noise, useful when the exposure time is not in the FITS file header
master flats | average of scaled values, with bias and dark subtracted: output normalization = multiplicative; rejection normalization = equalize fluxes; scale estimator = iterative k-sigma / biweight midvariance (IKSS) | simple average of sigma-clipped values, with master bias and dark subtracted (the dark is often ignored for short flat exposures)
calibrated lights | calibrated light = (light - dark) / flat; frames are normalized using additive and scaling factors, estimated by IKSS | calibrated light = (light - dark - bias) / flat; several normalization options, including scale and offset for exposure and changing background levels
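The two calibration equations in the table can be checked against each other with a toy numeric example (idealized values, no noise or flat normalization): when each program's master dark matches its own convention, both formulas recover the same calibrated frame.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
bias, thermal = 500.0, 20.0
flat = rng.uniform(0.9, 1.1, n)          # vignetting / dust pattern
sky = 1000.0
light = sky * flat + thermal + bias      # idealized raw light frame

# PixInsight: the dark includes the bias pedestal.
pi_dark = bias + thermal
pi_cal = (light - pi_dark) / flat                 # (light - dark) / flat

# Maxim DL: the dark excludes the bias, so it is subtracted separately.
maxim_dark = thermal
maxim_cal = (light - maxim_dark - bias) / flat    # (light - dark - bias) / flat
```

Both conventions yield the same sky value, which is why mixing the masters between programs, rather than either convention itself, was the source of the problem.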