M3 (Globular Cluster), revisited

The journey continues; the outcome of three years of continual refinement.


Equipment:

250 mm f/8 RCT (2,000 mm focal length)

QSI 680wsg-8 (KAF8300 sensor), Astrodon filters

Starlight Xpress Lodestar guide camera

Paramount MX mount, on pier

Mk2 interface box, remote-controlled NUC

AAG Cloudwatcher, Lakeside Focuser

Software:

Sequence Generator Pro (Windows 10)

PHD2 autoguiding software (Windows 10)

TheSkyX Pro (Windows 10)

PixInsight (OSX)

Exposure: (LRGB)

L: bin 1, 33 x 300 seconds; R, G and B: bin 1, 33 x 300 seconds each

I think I said somewhere in Edition 1 that astrophotography was a journey and a continual learning process. Like many hobbies it has a diminishing return on expenditure and effort and I thought it worthwhile to compare and contrast the incremental improvements brought about by “stuff” and knowledge over the last three years.

Globular clusters appear simple but they ruthlessly reveal poor technique. If the exposure, tracking, focus and processing are not all just so, the result is an indistinct mush with overblown highlights. Moving on, nearly everything has changed: the camera, filters, mount, telescope, acquisition and processing software and, most importantly, technique. In the original, I was keen to show that not everything goes to plan and suggested various remedies. Those issues are addressed at source in this new version, and then some. It is still not perfect by any means but it certainly does convey more of the serene beauty of a large globular cluster. I had been using a tripod-mounted telescope and although I had optimized the setup times to under 20 minutes, with the vagaries of the English weather I was not confident enough to leave the equipment out in the open and go to bed. This placed a practical limit on the imaging time for each night.

This version of M3 was the first image from a pier-mounted Paramount MX in my fully automated observatory. The new setup enables generous exposures and at the same time M3 is an ideal target to test out the collimation of the 10-inch Ritchey-Chrétien telescope. At my latitude M3 is available earlier in the summer season than the larger M13 cluster and provides a more convenient target for extended imaging.

Acquisition (Tracking)

The Paramount is a “dumb” mount insofar as it depends upon the TheSkyX application for its intelligence. On the surface, this software is a full planetarium, complete with catalogs. Underneath, the PC and Mac versions control mounts too and, in the case of Software Bisque’s own Paramount models, add tracking, PEC and advanced modeling capabilities. TheSkyX is also a fully-fledged image acquisition program, with imaging, focusing, guiding and plate solving. It has an application programming interface (API) that allows external control too. The ASCOM telescope driver for TheSkyX uses this API to enable external programs to control any telescope connected to TheSkyX. In my configuration, I connect PHD2, Sequence Generator Pro (SGP) and my observatory application to this ASCOM driver.

The MX’s permanent installation makes developing an extensive pointing and tracking model a good use of a night with a full moon. With the telescope aligned to within 1 arc minute of the pole, I created a 200-point TPoint model. TheSkyX makes this a trivial exercise as the software does everything, from determining the sync points to acquiring images, plate solving, slewing the mount and creating and optimizing the models. Although the resulting unguided tracking is excellent, it is not 100% reliable at a long focal length during periods of rapid atmospheric cooling.

Some consider unguided operation a crusade; I prefer to use clear nights for imaging and treat the improved tracking as an opportunity to enhance autoguiding performance, using a low-aggression setting or long 10-second guide exposures. The Paramount has negligible backlash and low inherent periodic error and is a significant step up in performance (and in price) from the popular SkyWatcher NEQ6. It responds well to guiding and, when this is all put together, the residual tracking errors and the random effects of atmospheric seeing are effectively eliminated. In practice PHD2’s RMS tracking error is typically less than 0.3 arc seconds during acquisition, well within the pixel resolution.
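To put that guiding figure in context, the image scale follows directly from the KAF8300's 5.4-micron pixels and the 2,000 mm focal length. A quick sketch (the function name is my own, not from any of the software mentioned):

```python
# Image scale in arcseconds per pixel: 206.265 * pixel size (um) / focal length (mm)
def image_scale(pixel_um: float, focal_mm: float) -> float:
    return 206.265 * pixel_um / focal_mm

# KAF8300 (5.4 um pixels) behind the 250 mm f/8 RCT (2,000 mm focal length)
scale = image_scale(5.4, 2000.0)
print(f"{scale:.2f} arcsec/pixel")  # prints 0.56
```

At roughly 0.56 arc seconds per pixel, a 0.3 arc second RMS guide error sits comfortably inside one pixel.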

fig140_1.jpg

fig.1 A worthwhile improvement in noise level is achieved by combining the luminance information from all four image stacks, weighted by their image noise level.

Acquisition (Exposure)

By this time a QSI camera had replaced my original Starlight Xpress H18 camera system, combining the off-axis guider tube, sensor and an 8-position filter wheel in one sealed unit. Although both camera systems use the stalwart KAF8300 sensor, the QSI’s image noise is better, though its download times are noticeably longer. More significantly, the sensor is housed in a sealed cavity filled with dry argon gas, and the shutter is external, eliminating the possibility of shutter-disturbed dust contaminating the sensor surface over time.

For image acquisition I used SGP, testing its use in unattended operation, including its ability to start and finish automatically, manage image centering, meridian flips and the inclusion of an intelligent recovery mode for all of life’s gremlins. In this case, after selecting a previously defined RCT equipment profile and selecting the LRGB exposure parameters, I queued up the system to run during the hours of darkness and left it to its own devices. The ASCOM safety monitor and additional rain sensors were then set up to intervene if necessary and once the sequence had run out of night, set to save the sequence progress, park the mount and shut the roof. I monitored the first few subframes to satisfy myself that PHD2 had a good calibration and a clear guide star, good focus and to check over the sequence options one more time. (Most sequence options can be changed on the fly during sequence execution.) The next morning, to my relief, my own roof controller had done its stuff and the mount was safely parked, roof closed and the camera cooling turned off. On the next clear night, I simply double-clicked the saved sequence and hit “run” to continue.

fig140_2.jpg

fig.2 The overall image processing workflow for this new version of M3 is extensive and makes best use of the available data. It features a combination of all the image stacks to create a deeper “superLum” with lower noise and, during the image stretching process, ensures that the highlight range is extended to give some headroom and avoid clipping. This image also benefits from a more exhaustive deconvolution treatment, optimized for star size and definition. RGB color saturation is also enhanced by adding saturation processes before mild stretching and blending star cores to improve even color before combining with the luminance data.

The subframe exposures were determined (as before) to just clip the very brightest star cores. This required doubling the exposures to 300 seconds to account for the slower f/8 aperture. I also decided to take all the exposures with 1x1 binning, partly as an experiment and partly because the KAF8300 sensor has a habit of blooming on bright stars along one axis in its binned modes. The overall integration time, taking the aperture changes into account, was 3.5x longer, which approximately doubled the signal-to-noise ratio. The sub-exposures were taken one filter at a time and without dithering between exposures, both of which ensured an efficient use of sky time. In this manner I collected 11 hours of data over a few weeks and with little loss of sleep.
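The "approximately doubled" claim follows from the usual square-root law: for sky-limited stacks, SNR grows with the square root of total integration time. A minimal check, using the 3.5x aperture-adjusted figure quoted above:

```python
import math

# For sky-limited imaging, stacked SNR grows as the square root of the
# total integration time (at a fixed image scale and sky brightness).
time_ratio = 3.5  # aperture-adjusted increase in integration, as quoted in the text
snr_gain = math.sqrt(time_ratio)
print(f"SNR improvement: {snr_gain:.2f}x")  # prints 1.87x, i.e. roughly double
```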

Acquisition (Focus)

It is critical with this image to nail the focus for every exposure. I quickly established that if the collimation is not perfect, an RCT is difficult to focus using HFD measurements. During the image acquisition period there were several beta releases of SGP trialling new autofocus algorithms. These improved HFD calculation accuracy, especially for out-of-focus images from centrally-obstructed telescopes. These new algorithms are more robust to “donuts” and exclude hot pixels in the aggregate HFD calculation. To ensure the focus was consistent between frames, I set the autofocus option to run after each filter change and for an ambient temperature change of 1°C or more since the last autofocus event. Of the 132 subframes I discarded a few with large FWHM values which, from their appearance, were caused by poor guiding conditions.
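For readers unfamiliar with the metric being optimized, the Half-Flux Diameter can be sketched as a flux-weighted star diameter. This toy version (my own, not SGP's algorithm) omits the hot-pixel rejection and donut handling mentioned above:

```python
import numpy as np

# Toy Half-Flux Diameter for a background-subtracted star cutout:
# HFD = 2 * sum(flux * distance-from-centroid) / sum(flux).
def half_flux_diameter(cutout: np.ndarray) -> float:
    img = np.clip(cutout, 0, None).astype(float)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    cy = (img * ys).sum() / total   # flux-weighted centroid (row)
    cx = (img * xs).sum() / total   # flux-weighted centroid (column)
    r = np.hypot(ys - cy, xs - cx)  # radial distance of each pixel
    return 2.0 * (img * r).sum() / total

# A single-pixel "point" star gives an HFD of zero; defocus spreads
# flux to larger radii and the HFD grows accordingly.
```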

Image Calibration

During the image calibration process I discovered that my camera had developed additional hot pixels since I created an extensive master bias and dark library. I also became more familiar with the PixInsight calibration process, which does not necessarily simply subtract matching dark frames from subframes of the same exposure and temperature. The optimize option in the Master Dark section of the ImageCalibration tool instructs PI to scale the darks before subtraction to minimize the overall noise level. This sometimes has the effect of leaving behind lukewarm pixels. For those, and the few new hot pixels, I applied the CosmeticCorrection tool. Its useful real-time preview allows one to vary the Sigma sliders in the Auto Detect section to a level that just eliminates the defects. (An instance of this tool can also be used as a template in the BatchPreprocessing script to similar and convenient effect.) The master flat frames for this target used my new rotating A2 electroluminescent panel, mounted to the observatory wall (described in Summer Projects). Although the RCT has an even illumination over the sensor, its open design does attract dust over time. I used to expose my flat frames indoors using a white wall and a diffuse tungsten halogen lamp. Moving the heavy RCT potentially degrades its collimation and the pointing/tracking model, however, and I now take all flat frames in situ.
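The dark-scaling behavior described above can be illustrated with a least-squares sketch: choose the factor k that minimizes the variance of (light - k * dark). This is only an illustration of the principle (the names are mine), not PixInsight's actual implementation:

```python
import numpy as np

# Least-squares dark scale factor: minimizing var(light - k * dark)
# over k gives k = cov(light, dark) / var(dark).
def dark_scale(light: np.ndarray, dark: np.ndarray) -> float:
    l = light.ravel().astype(float)
    d = dark.ravel().astype(float)
    d0 = d - d.mean()
    return float(np.dot(l - l.mean(), d0) / np.dot(d0, d0))

# If the light's thermal signal is exactly twice the dark's, k comes out as 2.
```

A pixel whose dark current no longer matches the library dark is under- or over-corrected by k * dark, which is why CosmeticCorrection is still needed for the stragglers.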

These calibrated and registered frames were then integrated carefully, following the processes described in the Pre-Processing chapter and using the noise improvement readout at the end of the integration process to optimize rejection settings. The resulting four image stacks were then combined using the ImageIntegration tool once more to form a “superLum”. The tool settings in this case perform a simple average of the scaled images, weighted by their noise level but with no further pixel rejection (fig.1). This superLum and the RGB linear data then passed into the image processing workflow laid out in fig.2.
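The noise-weighted average in fig.1 can be sketched with NumPy. This hypothetical helper mimics the weight step (inverse-variance weights from each stack's noise estimate) with pixel rejection disabled; it is not ImageIntegration's exact formula:

```python
import numpy as np

# Combine registered image stacks into a "superLum" by averaging,
# weighted by the inverse variance implied by each stack's noise estimate.
def super_lum(stacks, noise_sigmas):
    weights = 1.0 / np.asarray(noise_sigmas, dtype=float) ** 2
    weights /= weights.sum()  # normalize weights to sum to 1
    return np.tensordot(weights, np.stack(stacks), axes=1)

# Example: two 2x2 stacks; the noisier one (sigma 2 vs 1) contributes
# only one fifth of the weight in the combined result.
a, b = np.full((2, 2), 1.0), np.full((2, 2), 3.0)
combined = super_lum([a, b], noise_sigmas=[1.0, 2.0])
print(round(float(combined[0, 0]), 6))  # prints 1.4
```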

Image Processing Highlights

By now, I assume your familiarity with PixInsight removes the need to explain every step of the workflow. There are some novel twists though, to ensure that stars are as colorful as possible and the star field is extensive and delicate. In the luminance processing workflow, after careful deconvolution (using the methods described in the Seeing Stars chapter) the non-linear stretching is divided between the HistogramTransformation and MaskedStretch tools, with additional highlight headroom introduced during the first mild stretch. Stars are further reduced in size using the MorphologicalTransformation tool through a contour star mask. This shrinking process also dims some small stars; their intensity is recovered with the MMT tool, selectively boosting small-scale bias through a tight star mask. These techniques are also described in detail in the Seeing Stars chapter. At each stage, the aim was to keep peak intensities below 0.9. A few bright stars still had a distinctive plateau at their center; these were selected with a star mask and gently blurred for a more realistic appearance.

The RGB processing followed more familiar lines, calibrating the color and removing green pixels. Noise reduction on the background and a light convolution (blur) was applied to the entire image followed by a more aggressive blur, through a star mask, to evenly blend star color and intensity.

The LRGBCombination process has the ability to change brightness and color saturation, and it always takes a few goes to reach the desired balance. After LRGBCombination, the contrast was tuned with CurvesTransformation, using a subtle S-shaped luminance curve. The relative color saturation of the blue and red stars was balanced using the ColorSaturation tool. Finally, a blend of noise reduction and bias changes in MultiscaleMedianTransform balanced the image clarity and noise levels.

If you compare the images in fig.3 and fig.4, you will notice a big difference in resolution and depth. One might think this is a result of the larger aperture and longer focal length of the RCT. In practice, the resolution of the 250-mm RCT is not significantly better than that of the excellent 132-mm William Optics refractor, on account of the diffraction introduced by its significant central obstruction and the limits imposed by seeing conditions. What is significant, however, is the generous 11-hour exposure, accurate focus, better tracking and the sensitive treatment during image processing to prevent the core of the cluster from blowing out.
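The diffraction argument is easy to quantify with the Rayleigh criterion (this simple form ignores the central obstruction, which degrades contrast further rather than improving resolution):

```python
import math

# Rayleigh diffraction limit in arcseconds: theta = 1.22 * lambda / D,
# evaluated here at 550 nm (green light).
def rayleigh_limit_arcsec(aperture_mm: float, wavelength_nm: float = 550.0) -> float:
    theta_rad = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    return math.degrees(theta_rad) * 3600.0

for aperture in (132, 250):
    print(aperture, round(rayleigh_limit_arcsec(aperture), 2))
# prints 132 1.05 then 250 0.55 - both well below typical 2-3 arc second
# seeing, so exposure depth and technique drive the visible improvement.
```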

Those imagers who have the benefit of dark skies will certainly be able to achieve similar results with considerably less exposure. My semi-rural position still has appreciable light pollution. The associated sky noise adds to the dark and bias noise and necessitates extended exposures to average out. I also need to have a word with a neighbor, who turns on their 1 kW insecurity light so their dog can see where it is relieving itself!

fig140_3.jpg

fig.3 The original image from 2013, taken with a 132 mm f/7 refractor, 1.5 hours exposure, KAF8300 sensor, NEQ6 mount, processed in Maxim DL and Photoshop.

fig140_4.jpg

fig.4 The revisited image from 2016, taken with a 10-inch truss model RCT, 11 hours exposure, KAF8300 sensor, Paramount MX mount, automated acquisition in Sequence Generator Pro and processed in PixInsight.
