Filters

First of all, remember that a digital image is a matrix in which each cell contains a value, either a single number or a group of numbers (think of RGB images, which contain a triplet per pixel). That value defines an intensity within a certain scale (8 bit, 16 bit, or 32 bit images) and certain conventions (black is 0, white is the maximum value). When you change the value of a pixel, the change is reflected in the grayscale or color representation on the screen. Filters are mathematical functions that generate new pixel values from old ones. There are many applications of this, from noise removal to edge detection.

Image filtering in the spatial domain

The process of assigning new pixel values depending on the values of each pixel and its neighbors is called filtering in the spatial domain, and is achieved through a mathematical operation called convolution. In our context, it consists of taking the original image and a second, smaller one, called a kernel or mask. The values in the mask are called weights. The convolution operation takes every pixel in the original image, places the mask on top of it with the mask center on the pixel whose value we wish to change, and calculates the new pixel value as the weighted sum of the original one and its neighbors.

The weights for every pixel are the values of the mask. The process is depicted in the following diagram:

[Figure: Original image to be processed (right) and convolution mask (left) containing the weights to be applied to each surrounding pixel.]

The original image is shown to the right. If we want to calculate the new value for the shaded pixel, which currently has the value 106, we will need to apply the following formula:

new value = w1·p1 + w2·p2 + w3·p3 + w4·p4 + w5·p5 + w6·p6 + w7·p7 + w8·p8 + w9·p9

where p1, ..., p9 are the pixel values under the mask (p5 = 106 is the central, shaded pixel) and w1, ..., w9 are the corresponding weights.

This process has to be applied to every pixel in the image. Without specific weight values, the purpose of this process may not be obvious, so let's suppose that all weights (w1, w2, ..., w9) are equal to 1/9. In that case, the new value of every pixel is the average of the original pixel and its eight neighbors. Depending on the values that we introduce in the mask, a different process will be applied to the original image.
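
If you want to see this arithmetic explicitly, the following ImageJ macro sketch computes the convolved value of a single pixel by hand. The coordinates and the mask (a 3 x 3 average) are hypothetical examples; getPixel() is a standard macro function:

    // Compute the new value of the pixel at (x, y) as a weighted sum
    x = 10; y = 10;                       // hypothetical pixel coordinates
    weights = newArray(1/9, 1/9, 1/9,     // w1..w9, read row by row
                       1/9, 1/9, 1/9,
                       1/9, 1/9, 1/9);
    sum = 0;
    k = 0;
    for (j = -1; j <= 1; j++) {           // mask rows
        for (i = -1; i <= 1; i++) {       // mask columns
            sum = sum + weights[k] * getPixel(x + i, y + j);
            k++;
        }
    }
    print("New value for (" + x + ", " + y + "): " + sum);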

Note that a problem arises when computing the new value of a pixel placed on the image edge, as part of the kernel falls outside the limits of the image. In those cases, several solutions can be applied: either we pretend that the outside of the image is composed of black pixels (value = 0), or we artificially duplicate the edge pixel values so that every mask weight has a pixel value beneath it. ImageJ uses this second solution internally, so you do not need to worry about it.

Let's work a little on the meaning of a mask that has all weights equal to 1/9. This filtering is called smoothing: when the original pixel has an intensity value similar to its neighbors, the resulting value will not be very different from the original. But pixels with values very different from their neighbors will be replaced by a value more similar to their neighborhood. Overall, the image will look blurred. Although this blurring may not seem interesting, it removes noise, converting noisy areas of the image into homogeneous ones. As a side effect, it also smooths the edges.

The larger the mask, the greater the number of neighboring pixels included in the convolution. The weights need not all be the same: if we give more weight to the central pixels and less to the surrounding ones, we will be calculating a weighted average, and consequently the image will be smoothed with less degradation of the edges than with the standard average.

There are several ways to apply smoothing filters in ImageJ (scripted equivalents are shown after this list), which are as follows:

  • Navigating to Process | Smooth applies an average filter with a 3 x 3 mask.
  • Navigating to Process | Filters | Convolve... shows an interface that lets you create your own odd-sized mask by specifying the weights. For instance, three rows with values 1 1 1 are equivalent to the 3 x 3 average, provided the kernel is normalized so that the weights sum to 1.
  • Navigating to Process | Filters | Mean... lets you specify the radius of the mask, and also has a Preview checkbox.
  • One common weighted average is available by navigating to Process | Filters | Gaussian Blur.... In this case, a gaussian function centered on the current pixel is used to calculate the weights. You will be asked to provide the standard deviation (sigma) of this gaussian function; higher values result in more smoothing. The sigma value is in pixels by default, although a checkbox lets you use scaled image units instead.
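
If you prefer to script these operations, the same filters can be invoked from an ImageJ macro with run() calls like the ones the macro recorder produces (the parameter values here are just examples):

    run("Smooth");                          // 3 x 3 average
    run("Mean...", "radius=2");             // mean filter, radius in pixels
    run("Gaussian Blur...", "sigma=1");     // gaussian weighted average
    // A custom 3 x 3 averaging kernel through Convolve...:
    run("Convolve...", "text1=[1 1 1\n1 1 1\n1 1 1\n] normalize");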

Let's put all these concepts together in the following exercise. Open the CT_rat_head.tif test image four times (navigate to File | Open four times to obtain four equivalent windows, or open it once and navigate to Image | Duplicate... (Ctrl + Shift + D) to make three copies). The first one will be kept unchanged. Process the second with a mean filter with radius 1.5 pixels and the third with a mean filter with radius 3.0 pixels (available by navigating to Process | Filters | Mean...). Finally, apply a gaussian blur with sigma = 1.0 to the last one (Process | Filters | Gaussian Blur...). The last step is to measure the difference between the filters. For this, we need an identical oval selection at the center of every image. Draw the selection on one of the images, select the window of the next one, and navigate to Edit | Selection | Restore Selection (Ctrl + Shift + E) to create the same selection there.

Finally, we want to measure the standard deviation of the intensity values inside the selection for every image (original and processed). Make sure that standard deviation is one of the parameters you will measure (by navigating to Analyze | Set Measurements...). Now navigate to Analyze | Measure (Ctrl + M) with each image selected in turn. The Results window should look similar to the following screenshot:

[Figure: The CT_rat_head.tif test image (top left) and filtered versions: mean with radius 1.5 (top right), mean with radius 3.0 (middle left), and gaussian blur with sigma 1.0 (middle right). The Results window at the bottom corresponds to the measurements in the green selection.]
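
The whole exercise can also be automated; here is a hedged macro sketch that assumes CT_rat_head.tif is already open and uses arbitrary placeholder coordinates for the oval selection:

    // Duplicate the active image three times and filter each copy
    original = getTitle();
    run("Duplicate...", "title=mean_1.5");
    run("Mean...", "radius=1.5");
    selectWindow(original);
    run("Duplicate...", "title=mean_3.0");
    run("Mean...", "radius=3");
    selectWindow(original);
    run("Duplicate...", "title=gaussian_1.0");
    run("Gaussian Blur...", "sigma=1");
    // Measure the standard deviation inside the same oval on every image
    run("Set Measurements...", "mean standard redirect=None decimal=3");
    titles = newArray(original, "mean_1.5", "mean_3.0", "gaussian_1.0");
    for (i = 0; i < lengthOf(titles); i++) {
        selectWindow(titles[i]);
        makeOval(60, 60, 80, 80);    // hypothetical selection coordinates
        run("Measure");
    }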

What we can learn from these results is that all smoothing filters decrease the standard deviation in the measured selection, because that area is more homogeneous after filtering. For the mean filter, a larger radius corresponds to a larger filter and consequently a lower standard deviation (more noise removed), but also more blurring of the edges. Finally, with the gaussian blur we obtain a standard deviation similar to that of the mean filter with radius 1.5, but with slightly better edge preservation.

The last filter we will discuss, which can be very useful for certain kinds of noise, is the median filter. In this case, instead of applying the mask to the image with convolution, the original pixel value is substituted by the median value of the pixels under the mask. This is a non-linear process (while convolution is linear) that has two main advantages:

  • The median filter removes pixels that have a much higher or much lower value than their neighbors (what is usually called salt and pepper noise)
  • The median filter preserves edges, because it does not introduce new values into the image (as the mean filter does)

Open the CT_rat_head_Salt_and_Pepper.tif test image. We have to recognize that this image is really noisy. Now apply a median filter with radius 1.0 by navigating to Process | Filters | Median.... You will be surprised by the following result:

[Figure: Image with salt and pepper noise (left) and the result of a median filter with radius 1.0 (right).]
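
The scripted equivalent is a single recorder-style call (it acts on the active image):

    run("Median...", "radius=1");    // replaces each pixel with its neighborhood median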

Tip

The spatial filters that we have discussed can also be applied to stacks in 3D. This means that the mask has three dimensions, so that pixels are averaged (for the mean filter) not only in the X and Y directions, but also in the Z direction. You can find these filters by navigating to Process | Filters; they have 3D in their names.

The Process | Noise | Remove Outliers... menu option applies the median filter of the specified radius only to those pixels that deviate from the local median by more than a configurable threshold.
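
Its recorder-style macro call looks like this (the radius, threshold, and outlier type are example values):

    // Replace only pixels that deviate from the local median by more than
    // the threshold; "which" selects bright or dark outliers
    run("Remove Outliers...", "radius=2 threshold=50 which=Bright");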

Edge detection

One particularly useful application of spatial filtering is edge detection. An edge in a grayscale image is a discontinuity that separates two regions. There are several ways we can detect edges, the most immediate of them being a filter in the spatial domain.

Consider the next image (the vborder.tif test image):

[Figure: the vborder.tif test image]

As you can see in the information line located above the image, this is a very small matrix (10 x 9, note the zoom level!), but it is enough for us to discuss simple edge detection methods. The image has only two different pixel values: 0 on the left side (black) and 255 on the right side (white). The border is exactly in the middle, separating the black region from the white one.

The most direct way to detect a border of this kind is to use a simple convolution kernel. Consider the following case, written in the format you would use when running the Process | Filters | Convolve... tool:

-1 0 1
-2 0 2
-1 0 1

This is called the (vertical) Sobel operator. You have already seen how spatial filtering works: the original image is convolved with this matrix and the resulting pixel values are used to form a new image with the filtered values. Now think about what this matrix does as it moves over the preceding screenshot:

  • When the whole matrix is over a homogeneous area, the resulting value is zero. You can check that this is true independently of the value of the pixels under the kernel, because the weights sum to zero.
  • When the kernel straddles the edge (suppose it moves from left to right, so the values under its rightmost column, or under its rightmost and central columns, differ from the rest), the result is nonzero. When the mask passes completely to the other side, it is zero again.

You can test this yourself by navigating to Process | Filters | Convolve... and writing the preceding matrix. You should get a result similar to the following one:

[Figure: result of convolving vborder.tif with the vertical Sobel kernel]

As you can see, the vertical edge in the original image has been detected by the filter. There is an equivalent horizontal Sobel operator, which is the previous one rotated 90º; that filter can be used to detect horizontal edges. By navigating to the Process | Find Edges menu option we can run both operators at once; the result is the square root of the sum of their squared outputs:

[Figure: result of Process | Find Edges on vborder.tif]

This operation finds all the edges in the image. You can try running that command on the CT_slice_test.dcm test image; this image is much more complex than the simple one we have used to explain how these filters work, but the underlying mechanism remains exactly the same. The following screenshot shows both the original image on the left and the edges detected by the Find Edges option on the right. Note that all the edges have been found. As an exercise, try convolving the original image only with the horizontal or vertical kernels, and see what result you obtain:

[Figure: the CT_slice_test.dcm test image (left) and the edges detected by Find Edges (right)]
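
That exercise can be scripted with the following recorder-style calls; the sketch works on copies so that each filter sees the original image:

    // Work on duplicates so each filter starts from the original
    original = getTitle();
    run("Duplicate...", "title=vertical_sobel");
    run("Convolve...", "text1=[-1 0 1\n-2 0 2\n-1 0 1\n]");   // vertical edges
    selectWindow(original);
    run("Duplicate...", "title=horizontal_sobel");
    run("Convolve...", "text1=[-1 -2 -1\n0 0 0\n1 2 1\n]");   // horizontal edges
    selectWindow(original);
    run("Duplicate...", "title=all_edges");
    run("Find Edges");                                        // both combined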

Tip

Probably, when you applied the Find Edges operation, the resulting image looked completely black. This is due to the window/level settings of the image, so you will have to adjust the contrast (by navigating to Image | Adjust | Brightness/Contrast... (Ctrl + Shift + C) and clicking on Auto). This situation is quite common, so always check the display settings before concluding that an operation has erased your image.

There are other operators that you might want to try:

  • The Prewitt kernels are very similar to the Sobel ones, but do not give extra weight to the central row or column (vertical version on the left, horizontal on the right):

    -1 0 1       1  1  1
    -1 0 1       0  0  0
    -1 0 1      -1 -1 -1

These kernels are, in effect, differential operators: they compute the first order derivative of the image along the axis they are applied to. There is also a kernel that computes the second order derivative: the Laplacian operator. It detects more subtle borders, but precisely because of that, it is much more sensitive to noise. There are several different formulations, depending on whether they include the diagonals or not and on where the negative sign is placed:

 0  1  0      0 -1  0      1  1  1     -1 -1 -1
 1 -4  1     -1  4 -1      1 -8  1     -1  8 -1
 0  1  0      0 -1  0      1  1  1     -1 -1 -1
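
Any of these formulations can be tested through Convolve...; for example, the first 4-neighbor version can be applied with the following recorder-style call (no normalization, since the weights sum to zero):

    // 4-neighbor Laplacian: second order derivative, very sensitive to noise
    run("Convolve...", "text1=[0 1 0\n1 -4 1\n0 1 0\n]");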

In the following screenshot, you can see several different applications of these kernels on the CT image shown previously. The window/level settings have been set to exactly the same values (from -32741 to -32684) for all images, so you can see the difference in effects:

[Figure: the Laplacian kernels applied to the CT image. The lower row corresponds to the kernels that include the diagonal directions; the left images have the center pixel with the negative value.]

Tip

All the different spatial filters that we have presented (smoothing, median, edge detection, and so on) can be applied only to a certain region of the image. Draw a selection before running the filtering process, and the only pixels affected will be those inside the selection.

The Fourier transform

The Fourier transform, named after Jean Baptiste Joseph Fourier (1768-1830), is a mathematical operation that gives us another way to represent our image. Instead of spatial coordinates, in Fourier space the argument is frequency: every position corresponds to the weight that a certain frequency has in our image. The algorithm that ImageJ (and many other programs) uses to convert an image to the frequency space is called the FFT (Fast Fourier Transform).

You are probably wondering what the entire previous paragraph meant. As we have done several times before, let's explain it with an example. Open the sin_pattern_1.tif and sin_pattern_2.tif images:

[Figure: the sin_pattern_1.tif (left) and sin_pattern_2.tif (right) test images]

Both images contain a sinusoidal pattern of intensity levels that changes as we move in the vertical direction over the image. Let's now calculate the Fourier transform of each image by navigating to Process | FFT | FFT:

[Figure: The result of calculating the FFT of sin_pattern_1.tif (left) and sin_pattern_2.tif (right). The window and level settings have been adjusted to improve visualization.]

The bright spots in the FFT represent frequencies that are present in the image. In the first image, the sinusoidal pattern varies more rapidly than in the second, which means a higher frequency. The zero frequency (the origin) is at the center of Fourier space, and greater distance from the center means higher frequency. The Fourier transform has some symmetries (the explanation is beyond the scope of this book), so the bright spots always appear in pairs with respect to the origin. What you have to understand is that in Fourier space the positions near the center correspond to low frequencies, and greater distances from the origin represent higher frequencies.

What happens if we rotate the original image? In our example, the bright spots in the FFT appeared on the vertical axis, because the sinusoidal intensity pattern ran in the vertical direction. If the pattern ran horizontally, the FFT would be rotated accordingly. Apart from these basic images, we can also calculate the FFT of real images, as we will see in the next section.
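
To reproduce this from a macro, open a test image and compute its FFT; the file path below is a placeholder for wherever you stored the sample images:

    open("/path/to/sin_pattern_1.tif");   // hypothetical location
    run("FFT");                           // the result appears in a new window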

Image filtering in the frequency domain

Not only can the FFT of any image be obtained; the inverse operation is also available. In the previous examples, if you select the FFT of one image and navigate to Process | FFT | Inverse FFT, you will recover the original image. But what if we modify the FFT before coming back to the spatial domain? In this section we will learn some applications of this process.

Repeat the previous steps (navigating to File | Open... and then to Process | FFT | FFT) with sinc_pattern_1.tif. Now we will remove the two bright spots far from the center, which correspond to the sinusoidal pattern. In order to do this, draw two circular selections (press Shift to add the second one) over the areas we want to remove. We want to fill those two areas with the value zero, so first select the fill color by navigating to Image | Color | Color Picker... (Ctrl + Shift + K) and choosing black (red = 0, green = 0, blue = 0). Then navigate to Edit | Fill (Ctrl + F) to fill the selection with zeros. Now, calculate the inverse FFT (by navigating to Process | FFT | Inverse FFT). Your original image, its FFT, and the inverse should look as follows:

[Figure: the original image (left), its FFT with the two bright spots removed (middle), and the inverse FFT (right)]

In our result (the right-most image in the screenshot), the high-frequency sinusoidal pattern has disappeared, because we have removed that frequency from the FFT and only the lower-frequency components remain.
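
The whole mask-and-invert procedure can also be scripted; in this sketch the oval coordinates are placeholders that you would adapt to where the bright spots appear in your FFT window:

    run("FFT");
    setForegroundColor(0, 0, 0);        // fill color: black (zero)
    makeOval(120, 40, 20, 20);          // first bright spot (hypothetical)
    fill();
    makeOval(120, 200, 20, 20);         // its symmetric twin (hypothetical)
    fill();
    run("Select None");
    run("Inverse FFT");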

Another application is the removal of sinusoidal patterns that appear on an image as a result of some interference. The following screenshot shows the MR_interf.tif test image to the left, and its FFT in the middle:

[Figure: the MR_interf.tif test image (left), its FFT with the green selection (middle), and the inverse FFT after filtering (right)]

In order to remove the interference, draw the green selection on the FFT as shown in the screenshot (use a rectangular selection, and then remove its center by pressing the Alt key while you draw another rectangle surrounding the center of the FFT). When you calculate the inverse FFT, you will obtain the right-most image in the previous screenshot. As you can see, the artifact has been completely removed without degrading the details in the image.
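
In a macro, that ring-shaped selection can be built by subtracting a smaller rectangle from a larger one with setKeyDown() (all coordinates here are hypothetical):

    run("FFT");
    makeRectangle(80, 80, 96, 96);      // outer rectangle (hypothetical)
    setKeyDown("alt");                  // the next selection is subtracted
    makeRectangle(104, 104, 48, 48);    // inner rectangle sparing the center
    setKeyDown("none");
    setForegroundColor(0, 0, 0);
    fill();                             // zero out the selected frequencies
    run("Select None");
    run("Inverse FFT");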

We have learned what the FFT of an image is, and that manipulating the Fourier space affects the whole image when we transform it back. This has been only an introduction to the possibilities that the frequency domain offers for image processing, but these concepts give you the basics for further analysis.
