My first mosaic, using special tools in Sequence Generator Pro and PixInsight
Equipment:
Refractor, 132 mm aperture, 928 mm focal length
TMB Flattener 68
QSI683wsg, 8-position filter wheel, Astrodon filters
Paramount MX
SX Lodestar guide camera (off-axis guider)
Software: (Windows 10)
Sequence Generator Pro, ASCOM drivers
TheSkyX pro
PHD2 autoguider software
PixInsight (Mac OSX)
Exposure (for each tile): (LRGBHα)
LPS P2 bin 1: 15 × 300 seconds; Hα bin 1: 5 × 1,200 seconds
G & B bin 2: 10 × 300 seconds; R bin 2: 15 × 300 seconds
It is one thing to take and process a simple mosaic comprising a few images, and quite another when the tile count grows and each tile is taken through 5 separate filters. Not only does the overall imaging time increase dramatically but so does the time taken to calibrate, register and process the images up to the point of a seamless linear image. This 5-tile mosaic in LRGBHα comprised over 300 exposures and 25 integrated image stacks, all of which had to be prepared for joining. When you undertake this substantial task (which can only be partially automated), you quickly develop a healthy respect for those photographers, like Rogelio Bernal Andreo, who make amazing expansive vistas.
One reason for taking mosaics is to achieve a wide-field view that gives an alternative perspective, away from the piecemeal approach of a single object. The blue nebulosity of M45, for instance, is all the more striking when seen in the context of its dusty surroundings of the Integrated Flux Nebula (IFN). In this assignment, I chose a mosaic to create an unorthodox framing, with 5 images in a line following the form of the long thin California Nebula (NGC 1499), some of which is shown opposite. With a range of star intensities and varying intensities of nebulosity, it is a good example to work through in order to explore mosaic image acquisition and processing in practice.
The images were taken with a 132-mm refractor, fitted with a non-reducing field flattener. Using the mosaic tool in SGP (fig.1), I planned the mosaic over a monochrome image from the Deep Sky Survey (DSS) using 5 overlapping tiles. SGP then created a sequence with five targets. I chose to image through IDAS LPS-P2 light-pollution and Hα filters (unbinned) and added general coloration with binned RGB exposures (fig.2). The nebula emerges end-on over my neighbor’s roof. Reversing the target order allowed me to maximize the imaging time: each session started with the emerging end and finished on the trailing one as it disappeared over the western horizon. I used moonless nights for the LRGB images and switched to Hα when there was more light pollution. To ensure accurate framing to within a few pixels, I used the new sync offset feature in SGP. (SGP has always offered a synchronization feature that uses the ASCOM sync command. Some mounts, however, do not support this command, as it can interfere with pointing and tracking models. In common with some other acquisition packages, SGP now calculates the pointing error itself and issues a corrective slew to center the image on the target.)
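The solve-and-center loop behind a sync offset feature can be sketched in a few lines. This is a hedged illustration, not SGP code: the function names, the five-pixel tolerance and the simplifications (no RA 0h/24h wrap handling) are my assumptions.

```python
import math

def pointing_correction(target_ra, target_dec, solved_ra, solved_dec):
    """Offset in degrees that a corrective slew must apply so the
    plate-solved image center lands on the target. The RA term is
    scaled by cos(dec) to express it as an on-sky angle.
    (Simplified: ignores the RA 0h/24h wrap.)"""
    d_dec = target_dec - solved_dec
    d_ra = (target_ra - solved_ra) * math.cos(math.radians(target_dec))
    return d_ra, d_dec

def within_tolerance(d_ra, d_dec, arcsec_per_pixel, max_pixels=5):
    """True when the residual pointing error is within a few pixels."""
    return math.hypot(d_ra, d_dec) * 3600.0 <= max_pixels * arcsec_per_pixel
```

In practice the acquisition software repeats solve, correct, solve until the residual is within tolerance, with no ASCOM sync issued and hence no disturbance to the mount's pointing model.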
The overall process flow is shown in fig.9. It concentrates on the specific mosaic-related processes up to what I would consider normal LRGB processing activities (starting with color calibration, deconvolution and stretching). Following the standardized workflow for merging images, outlined in the mosaic chapter, each image was calibrated, registered and integrated to form 25 individual traditional stacks. (The binned RGB images were registered using B-Spline interpolation to avoid ringing artifacts.) After applying MureDenoise to them (noting the image count, noise, gain and interpolation method in each case), I cropped each set and carefully calibrated the background levels using DynamicBackgroundExtraction. Only a few background samples were selected on each image on account of the extensive nebulosity. I placed sample points near each image corner, on the assumption that sampling equivalent points in each tile helps with image blending during mosaic assembly.
There are a number of ways forward at this point. Mosaic images can be made with the individual image stacks for each filter, or a combination of stacks for simplicity. In this case I chose to have a luminance and color image for each tile. I used the Hα channel to enhance both. In one case, I created a “superRed” channel from a combination of Red and Hα (before using RGBCombination) and in the other a “superLum”, from Hα and Luminance channels. In each case I used LinearFit to equalize images to the Hα channel before combination. This makes the combination more convenient to balance and allows simpler and more memorable ratios in PixelMath of the form:
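The PixelMath expressions themselves are not reproduced in this excerpt. As an illustration only, the LinearFit-then-blend idea can be mimicked in numpy; the 0.4/0.6 weighting below is an arbitrary stand-in, not the ratio used for this image.

```python
import numpy as np

def linear_fit(image, reference):
    """Least-squares scale and offset matching `image` to `reference`,
    a one-channel simplification of PixInsight's LinearFit tool."""
    a, b = np.polyfit(image.ravel(), reference.ravel(), 1)
    return a * image + b

def super_channel(ha, other, w_ha=0.4):
    """Blend Ha with a channel first equalized to it, in the spirit of
    the 'superRed' and 'superLum' combinations; the weight is purely
    illustrative."""
    return w_ha * ha + (1.0 - w_ha) * linear_fit(other, ha)
```

Equalizing to Hα first is what makes a simple, memorable weighting possible: once the channels share the same intensity scale, the blend ratio alone controls the result.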
Image integration removes the RA and DEC information from the FITS header; the ImageSolver script reinstates it. Using an approximate center and an accurate image scale, it reports the center and corner coordinates as well as the image orientation. I ran the script on each of the image stacks on the assumption that it helps with later registration. I then created a blank star canvas, using the Catalog Star Generator script, with dimensions slightly exceeding the 5 × 1 tile matrix and centered on tile 3 (fig.3). Tile registration to the canvas followed, in this case using the StarAlignment tool with its working mode set to Register/Union - Separate (fig.4). This produced two images per registration; the plain star canvas is discarded, leaving behind a solitary tile on a blank canvas (fig.6).
Before combination, the registered image tiles require further equalization. As described in the mosaic chapter, the LinearFit algorithm struggles with images that have black borders. To equalize the tiles, I used David Ault’s DNA Linear Fit PixInsight script, which effectively matches tile intensity between two frames in the overlap region and ignores areas of blank canvas. This was done progressively, tile to tile: 1 ≫ 2, 2 ≫ 3 and so on, first for the luminance views and then for the RGB views (fig.5).
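The key trick of an overlap-aware fit, matching intensities only where both tiles have data, can be sketched as follows. This is a simplified stand-in for what the DNA Linear Fit script does, not its actual code; it treats zero pixels as blank canvas.

```python
import numpy as np

def fit_in_overlap(tile, reference):
    """Match `tile` to `reference` using only pixels where both are
    non-zero (the overlap region), ignoring the blank canvas that
    defeats a naive LinearFit."""
    mask = (tile > 0) & (reference > 0)
    a, b = np.polyfit(tile[mask], reference[mask], 1)
    fitted = a * tile + b
    fitted[tile == 0] = 0.0   # keep the blank canvas black
    return fitted
```

Applied progressively (fit tile 2 to tile 1, tile 3 to the fitted tile 2, and so on), each tile inherits a consistent intensity scale across the whole strip.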
Both mosaic images were created with GradientMergeMosaic (GMM), which works best if there are no non-overlapping black areas. My oversize star canvas had a generous margin all around, so I needed to crop all the canvases down to the overall mosaic image area. This was done by creating a rough image using a simple PixelMath equation of the form:
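The exact PixelMath expression is not reproduced in this excerpt. One plausible form, a per-pixel maximum across the registered tiles, can be sketched in numpy (an illustrative guess, not the author's formula):

```python
import numpy as np

def rough_mosaic(tiles):
    """Per-pixel maximum of the registered tiles. Since each tile is a
    lone image on a blank (zero) canvas, max() stitches a quick-look
    mosaic that is good enough to define a crop region."""
    return np.maximum.reduce(tiles)
```

The result is only a preview for framing the DynamicCrop; the proper blend comes later from GMM.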
I then used the DynamicCrop tool on this image to trim off the ragged edges and applied an instance of the tool to all 5 pairs of tiles.
The next stage was to create the two mosaic images (one for luminance and one for color) using GradientMergeMosaic. I first saved the five pairs of tiles to disk, as this tool works on files rather than PixInsight views. The outcome from GMM looked remarkable, with no apparent tile intensity mismatches and good blending, but upon close examination a few stars on the join required some tuning (fig.8). Rerunning GMM with an increased feather radius fixed most of them. A single telltale star remained; I removed it from one of the overlapped images by cloning a black circle over it and ran GMM once more. As far as registration was concerned, this refractor has good star shapes and low distortion across the image, and the technique of registering each tile to a common star canvas ensured good alignment in the overlap regions.
In this particular image the nebulosity fades off at one end and the prior equalization processes had caused a gradual darkening too. This was quickly fixed with another application of DynamicBackgroundExtraction on both mosaic images, taking care to sample away from areas of nebulosity.
Armed with two supersized images, I carefully color-calibrated the RGB image after removing green pixels with SCNR. I lightly stretched the image, then blurred it with the Convolution tool; this reduces the noise and improves star color information. Colors were enhanced with the saturation control in the CurvesTransformation tool.
The luminance file was processed normally: first with Deconvolution, then stretched with a combination of HistogramTransformation and MaskedStretch. After a little noise reduction, the cloud details were enhanced with the LocalHistogramEqualization (LHE) and MultiscaleMedianTransform (MMT) tools. Between each stretching or sharpening action, the HistogramTransformation tool was applied to extend the dynamic range by 10% to reduce the chance of image clipping.
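The range-extension step is arithmetically simple: extending the highlights range to 1.1 rescales every pixel so the former white point sits at about 0.91, leaving roughly 10% headroom before anything clips. A minimal numpy equivalent, reflecting my reading of the step rather than the tool itself:

```python
import numpy as np

def extend_range(image, headroom=0.10):
    """Divide by (1 + headroom) so the former white point maps to
    1/1.1, creating ~10% of headroom before clipping. My
    interpretation of moving the HistogramTransformation highlights
    slider beyond 1.0, not a transcription of the tool."""
    return image / (1.0 + headroom)
```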
It is sometimes difficult to find the right LRGBCombination settings for color and luminosity. One way to make the result more predictable is to balance the luminance information beforehand. To do this I first converted the RGB image to CIE L*a*b* using ChannelExtraction and applied LinearFit to the L channel, using the luminance master as the reference image. Using ChannelCombination, I recreated the RGB file and then used LRGBCombination to create the final image.
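The balancing step can be sketched outside PixInsight. This is a simplified interpretation: it uses a linear Rec.709 luma in place of a true CIE L*a*b* conversion, and the function name is my own.

```python
import numpy as np

def match_luminance(rgb, lum, coeffs=(0.2126, 0.7152, 0.0722)):
    """Scale the RGB pixels so their luma tracks the processed
    luminance master, in the spirit of extracting L, LinearFitting it
    to the luminance, and recombining before LRGBCombination."""
    k = np.asarray(coeffs).reshape(1, 1, 3)
    luma = (rgb * k).sum(axis=2)
    a, b = np.polyfit(luma.ravel(), lum.ravel(), 1)
    fitted = a * luma + b                  # LinearFit to the reference
    gain = np.ones_like(luma)
    nz = luma > 0
    gain[nz] = fitted[nz] / luma[nz]
    return rgb * gain[..., None]
```

With the two luminances already in agreement, LRGBCombination's transfer settings have far less work to do and the default values behave predictably.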