Some useful plugins

Most of the plugins listed in this section have been obtained from http://rsb.info.nih.gov/ij/plugins/index.html, which at the time of writing hosted about 500 of them. Our selection is consequently only a small subset, but we hope it shows how versatile ImageJ can be. Please note that the following sections are not a brief user manual: even a short set of instructions for each plugin would take a lot of space. Consider them a personal compilation that gives a taste of what can be accomplished with different plugins.

LOCI Bio-Formats

As we commented in the second chapter, there are a number of image formats that ImageJ can read natively. There are plugins that allow it to read other formats, and indeed the original installation package includes several of them by default under the plugins/Input-Output folder.

One particularly powerful plugin for file input and output operations is Bio-Formats, developed by the Laboratory for Optical and Computational Instrumentation (LOCI), a biophotonics research laboratory at the University of Wisconsin-Madison. The Bio-Formats library can read and write several dozen imaging formats, with different levels of support depending on the specific format. The entire list of supported formats can be accessed from http://loci.wisc.edu/software/bio-formats. The good thing about this library for us ImageJ users is that it is also offered as an ImageJ plugin (http://loci.wisc.edu/bio-formats/imagej). When this plugin is installed, a LOCI menu will appear, and we can use its capabilities to open a much greater number of formats.

Another great thing about this plugin is that we can use it in our own macros or plugins, so even if we are working with formats that a plain ImageJ installation does not support, we can still automate our analysis procedures.

Image segmentation

The term image segmentation refers to the general concept of dividing an image into the regions of interest and the background elements we do not want to measure. There exist many different methods for doing this, and in fact the simple act of drawing a rectangular selection over the image is itself a form of segmentation. Some ImageJ plugins allow us to perform more fine-grained selections or run semi-automatic processes based on prior learning.

Auto Threshold and Auto Local Threshold

In the previous chapters, we learned that objects in the image can be segmented by classifying pixels as background or object depending on their intensity level (thresholding). There are several methods for calculating this threshold automatically. If you want to display, in a single step, the result of applying every method to your image, use the plugin called Auto Threshold (http://fiji.sc/wiki/index.php/Auto_Threshold), which creates a montage with the result of every available automatic thresholding method applied to your image. One limitation of these methods is that they calculate a global threshold, which is the same for all pixels in the image. Other methods apply a different threshold to every pixel based on the surrounding values (this is called local thresholding). You can also test all the available local thresholding methods on your image in a single step with the Auto Local Threshold plugin (http://fiji.sc/Auto_Local_Threshold).
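To make the global-versus-local distinction concrete, here is a minimal numpy sketch of both ideas: a global Otsu threshold (one of the methods Auto Threshold offers) and a simple local-mean threshold. The function names and the tiny neighborhood logic are our own illustration, not the plugins' code:

```python
import numpy as np

def otsu_threshold(img):
    """Global Otsu threshold: pick the level that maximizes the
    between-class variance of background and foreground pixels."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0.0, 0.0
    for t in range(256):
        w_bg += hist[t]                     # background weight up to level t
        if w_bg == 0:
            continue
        w_fg = total - w_bg                 # foreground weight
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def local_mean_threshold(img, radius=1, c=0):
    """Local threshold: each pixel is compared to the mean of its own
    neighborhood, so the threshold varies across the image."""
    padded = np.pad(img.astype(float), radius, mode='edge')
    out = np.zeros(img.shape, dtype=bool)
    for (i, j), v in np.ndenumerate(img):
        window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
        out[i, j] = v > window.mean() - c
    return out
```

With a uniformly lit image both approaches agree, but when the illumination drifts across the field of view the local version keeps separating objects where a single global threshold fails.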

Trainable Weka Segmentation

This plugin uses the Weka machine learning library (http://www.cs.waikato.ac.nz/ml/weka/) and allows ImageJ to perform segmentation through machine learning techniques. It can be downloaded from http://fiji.sc/Trainable_Weka_Segmentation. On that same page there are instructions for working with it; in summary, it is a two-step process:

  1. Train the classifier. You will have to manually segment your image, assign the different regions to different classes (two are created by default, but you can add more), and then click on the Train classifier button. The classifier then learns the features of the different classes. By default, it uses a random forest method, but several more are available.
  2. Once the classifier has been trained, you can apply it to other similar images to obtain the proposed segmentation.
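The train-then-apply workflow can be sketched in a few lines of numpy. To keep the example self-contained we stand in a trivial nearest-class-mean classifier for the plugin's random forest; the function names and the label convention (-1 = unlabelled) are our own assumptions, not the plugin's API:

```python
import numpy as np

def train(image, labels):
    """Step 1: learn one mean intensity per class from the pixels the
    user has labelled. `labels` holds a class id per pixel, -1 meaning
    unlabelled. (A stand-in for the plugin's random forest.)"""
    classes = np.unique(labels[labels >= 0])
    return {c: image[labels == c].mean() for c in classes}

def apply_classifier(model, image):
    """Step 2: assign every pixel of a new image to the class whose
    learned mean intensity is closest."""
    classes = sorted(model)
    dists = np.stack([np.abs(image.astype(float) - model[c]) for c in classes])
    return np.array(classes)[np.argmin(dists, axis=0)]

# Train on a few labelled pixels, then segment a similar image.
train_img = np.array([[10, 12, 200], [11, 198, 202]])
labels = np.array([[0, -1, 1], [0, 1, -1]])
model = train(train_img, labels)
segmentation = apply_classifier(model, np.array([[9, 205], [14, 190]]))
```

The real plugin works on many features per pixel (edges, textures, and so on) rather than raw intensity alone, but the two-step structure is the same.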

SIOX (Simple Interactive Object Extraction)

SIOX (http://www.siox.org/) is an advanced algorithm used to extract the foreground objects in an image from the background elements that are of no interest. It works in a similar way to the previous plugin: the user needs to train the method using the native ImageJ ROI tools, so that it can learn which elements should be considered foreground and which ones background. The plugin then proceeds to isolate the objects that we consider relevant and creates a mask that can be applied to the original image to remove the background. The trained classifier can be saved, as in the case of the Weka tool, in order to apply it to several similar images.

Clustering

There are also a number of plugins that help to segment an image using unsupervised machine learning techniques. These methods try to automatically separate the different image regions based on the pixel values, without the need for user intervention. Two of these plugins are k-means clustering (http://ij-plugins.sourceforge.net/plugins/segmentation/k-means.html), which can be used for static images, and jClustering (https://github.com/HGGM-LIM/jclustering), which is intended as a framework for implementing unsupervised clustering algorithms for dynamic images (2D + time or 3D + time) and groups image regions according to their time activities.
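As an illustration of what "unsupervised" means here, the following is a minimal 1D k-means (Lloyd's algorithm) over pixel intensities in plain numpy. It is a sketch of the general technique, not the code of either plugin:

```python
import numpy as np

def kmeans_pixels(img, k=2, iters=20, seed=0):
    """Cluster pixel intensities into k groups with no user-provided
    labels: alternate between assigning pixels to the nearest center
    and moving each center to the mean of its pixels."""
    vals = img.astype(float).ravel()
    rng = np.random.default_rng(seed)
    centers = rng.choice(vals, size=k, replace=False)  # random initial centers
    for _ in range(iters):
        # assignment step: nearest center per pixel
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        # update step: recompute each center
        for j in range(k):
            if np.any(labels == j):
                centers[j] = vals[labels == j].mean()
    return labels.reshape(img.shape), centers
```

For dynamic images, jClustering applies the same idea to the time-activity curve of each pixel instead of a single intensity value.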

Image registration

This is the process of aligning two or more images so that their corresponding features can easily be related. If you have two images that represent the same real-world object but were acquired under different conditions (the acquisition technique differs, or the sample has been imaged at two time points), you can try synchronizing both windows by navigating to Analyze | Tools | Synchronize Windows, but the coordinates will not match if the images are not properly aligned. This alignment process is called image registration, and it consists of calculating the geometrical transformation that has to be applied to one of the images so that it aligns with the other.

Geometrical transformations are divided into two main groups: linear and non-linear. A geometrical transformation modifies the coordinates of the pixels in your image, so it changes pixel positions in space. If the transformation applied to an image is linear, straight lines in the image will still be straight after the transformation. Non-linear transformations, on the other hand, have no restrictions on how pixel coordinates are modified, so straight lines can be transformed into curves. You may be wondering why you need to know all this. The answer is that, depending on your registration problem, a different transformation may be the right solution. The following diagram shows different types of linear transformations:

[Diagram: types of linear transformations]

The linear transformations in the diagram are sorted from fewer to more parameters involved. A rigid transformation only allows translating and rotating the image; we can then add scaling (which can be different in every direction) and also shears (again, one in every direction). If you try to align two images of the same object in which the acquisition device has simply been displaced, a rigid transformation may be enough. If the distance from the sample to the camera has changed between acquisitions, you may need to add some scaling, and if the sample has been deformed in one of the images, a linear transformation won't be enough and you will have to look for a non-linear one.
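These transformations are conveniently written as 3x3 matrices acting on homogeneous pixel coordinates. The sketch below (our own helper names, assuming 2D images) builds a rigid and a full linear (affine) transform and applies them to points; note that the rigid one preserves distances, while the affine one only preserves straightness:

```python
import numpy as np

def rigid(theta, tx, ty):
    """Rigid transform: rotation by theta plus translation (3 parameters)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0,  1]])

def affine(sx, sy, shear, theta, tx, ty):
    """Full linear transform: scaling and shear composed with a rigid one."""
    return rigid(theta, tx, ty) @ np.array([[sx, shear, 0],
                                            [0,  sy,   0],
                                            [0,  0,    1]])

def transform(points, T):
    """Apply a 3x3 homogeneous transform to an (N, 2) array of coordinates."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ T.T)[:, :2]

# Three collinear points stay collinear under any linear transform.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
warped = transform(pts, affine(2.0, 1.0, 0.5, 0.3, 5.0, -2.0))
```

A non-linear transformation has no such matrix form: each pixel can move independently, which is what allows straight lines to bend.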

If image registration is the process of finding the geometrical transformation necessary to align the images, registration plugins will try to help you in this search. They can do this automatically or manually. Automatic image registration measures how well the images are aligned (with a function called a similarity measure) and modifies the geometrical transformation in an iterative process until the similarity measure reaches its maximum. This whole process is transparent to the user. Manual registration plugins ask the user to mark corresponding features in both images and calculate the transformation that aligns those points. Let's take a look at one example of the first type.
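The automatic loop — propose a transformation, score it with a similarity measure, keep the best — can be sketched for the simplest case, pure translation, using normalized cross-correlation as the similarity measure. This is an illustration of the principle under our own simplifications (integer shifts, wrap-around borders), not the algorithm any particular plugin uses:

```python
import numpy as np

def ncc(a, b):
    """Similarity measure: normalized cross-correlation of two images
    (1.0 means perfectly aligned, values drop as alignment worsens)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def register_translation(fixed, moving, max_shift=3):
    """Try every integer shift within max_shift and keep the one that
    maximizes the similarity measure. (np.roll wraps around the borders,
    a simplification that real registration code handles properly.)"""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = ncc(fixed, shifted)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best, best_score
```

Real plugins search a much larger parameter space (rotations, scalings, and so on) with smarter optimizers than brute force, but the structure — iterate until the similarity measure peaks — is the same.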

StackReg

This plugin (available from http://bigwww.epfl.ch/thevenaz/stackreg/) aligns a stack of images, using every image in the stack as a template to align the next one. You also need to install a second plugin called TurboReg (http://bigwww.epfl.ch/thevenaz/turboreg/), since it provides functions used by StackReg.

From the previous chapters, you know that an image stack can represent a 3D object, with every 2D image representing a slice of that object. Imagine you acquire every slice as an independent picture and are consequently unable to place every 2D slice of your sample in exactly the same position as the previous one. The resulting stack will have images that are misaligned as the Z coordinate changes. This plugin is the right one to solve this problem.

If you want to test how StackReg works, load the test image called tuberculosis_stack_unreg.tif. You will probably notice that there is something wrong with this image: the three color channels are not correctly aligned (open tuberculosis.tif to compare with the original image if you don't believe this). StackReg can solve this problem, but it accepts a stack as input, not a multichannel image. You can convert this example image into a stack by navigating to Image | Color | Split Channels (three images are created) and then to Image | Stacks | Images to Stack. Now select this new stack and run StackReg. You only need to select the type of transformation that the registration algorithm will look for. In this example, the images are only translated, so translation is enough. If you select a transformation with more parameters than needed, the worst that can happen is an incorrect solution, so it is better to select the simplest transformation that matches your problem. After you click on OK, the plugin will run and, when finished (in less than one minute), the misaligned stack will be replaced by an aligned one. You can check whether the result is correct by creating a composite image again by navigating to Image | Color | Make Composite.

3D volume rendering

Displaying stack data with the Orthogonal Views tool (accessible by navigating to the Image | Stacks menu) is helpful when exploring your 3D data. But if you want a 3D representation of your data that you can interact with, Volume Viewer is your plugin.

Volume Viewer

Although this plugin is already installed in the ImageJ distribution, we recommend you download the latest version (http://rsbweb.nih.gov/ij/plugins/volume-viewer.html), as the improvements are worth it. You need to delete the old version from your plugins folder before copying the new one, or ImageJ will complain about duplicates. Load the sample image T1 head and navigate to Plugins | 3D | Volume Viewer in order to test the plugin. Any volume rendering technique creates a two-dimensional image from your 3D data by passing rays through the image and applying some function to the pixel values that each ray encounters along its path. Depending on your data, the proper values for that function may be difficult to set.

Volume Viewer has several modes for creating the render. Slice and Slice & Borders do not pass any rays through your 3D data; they just create a slice with unrestricted orientation, which you can modify by clicking and dragging the mouse on the 3D display. Max projection and Projection are real rendering modes, since the value of each pixel in the final image will be the maximum or the sum of all the pixels encountered by the corresponding ray. These modes are not especially interesting for our sample image (which is a magnetic resonance study), but are very useful if we are interested in very bright pixels in our data or if the sum of all the pixel values is meaningful, as in computed tomography. Finally, the Volume mode is the one that offers the most possibilities. There are many options you can play with; we will give you some initial help on the basics:

  • Distance is used to select part of your original volume to be used in the rendering process. If you set it to the minimum value, the whole stack will be used.
  • The Scale parameter zooms in or out your 3D view.
  • The main parameter for changing the way your volume render is created is the alpha value in Transfer Function (the orange line in the plot on the right). It indicates the transparency of every pixel depending on its intensity. Pixels with low intensity values are usually more transparent (low alpha), while pixels with high intensity values are more opaque (high alpha). You can modify this behavior with the mouse.
  • You can also add some Light to your 3D view, with several parameters that will affect the result.
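For rays cast along the stack axis, the three ray-based modes reduce to simple operations per pixel column. The following numpy sketch (our own function names, a deliberately simplified front-to-back compositing model for the Volume mode) shows the idea:

```python
import numpy as np

def max_projection(stack):
    """Max projection mode: brightest value met by each ray."""
    return stack.max(axis=0)

def sum_projection(stack):
    """Projection mode: sum of all values along each ray."""
    return stack.sum(axis=0)

def alpha_composite(stack, alpha):
    """Volume mode sketch: front-to-back compositing. `alpha` is the
    transfer function, mapping intensity to opacity in [0, 1]; pixels
    whose alpha is 0 are fully transparent and contribute nothing."""
    out = np.zeros(stack.shape[1:], dtype=float)
    remaining = np.ones(stack.shape[1:], dtype=float)  # light not yet absorbed
    for slice_ in stack:
        a = alpha(slice_)
        out += remaining * a * slice_   # this slice's visible contribution
        remaining *= (1.0 - a)          # opaque pixels block what lies behind
    return out
```

Making low intensities fully transparent in the transfer function, as the alpha bullet above describes, means `remaining` stays at 1 through empty space, so the interesting structures behind it still show through.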

The following screenshot shows the T1 head image loaded and rendered in the Volume mode. Some parameters have been modified from the default values: scale has been increased, alpha values have been modified (low intensity values are completely transparent), and light has been turned on:

[Screenshot: T1 head rendered in Volume Viewer's Volume mode]

Other utilities

There are ImageJ plugins available for almost every operation you can perform on an image. Some of them do not fit in the preceding categories, so in this section we present a brief list of other plugins that we have found to be useful or are excellent examples of the adaptability of ImageJ.

MosaicJ

MosaicJ (http://bigwww.epfl.ch/thevenaz/mosaicj/) implements the necessary algorithms to perform image stitching, that is, the composition of a mosaic from individual images.

FigureJ

ImageJ is so versatile that some of the plugins serve for purposes you would not have imagined. FigureJ (http://imagejdocu.tudor.lu/doku.php?id=plugin:utilities:figurej:start) is an ImageJ plugin designed to ease the sometimes tedious job of preparing figures for scientific publications.

Study anonymization

For those of us working with medical images, the removal of patient data is a very important step prior to sharing or processing images. There are several ImageJ plugins that remove sensitive information from DICOM headers. Two of them are DICOM Rewriter (http://rsb.info.nih.gov/ij/plugins/dicom-rewriter.html) and Anonymize IJ DICOM (http://rsbweb.nih.gov/ij/plugins/anonymize-ij-dicom/index.html).

Note

We are only listing these plugins here as examples of what ImageJ can do. We are not endorsing their use in any way, so if you are going to use them to anonymize your studies, please first check that the resulting files do not contain any sensitive information.
