Immediate Mode Imaging Model

Because the Image class was designed primarily for asynchronous handling of image data, it often cannot easily provide the functionality required for advanced image processing tasks. For this reason, we've been using PixelGrabbers to collect all the data before processing it. For simple processing this worked well, but as things became more complex, we required Rasters, ColorModels, and a series of other classes for data storage and interpretation. In practice, this extra code not only makes your software more difficult to write and understand, but it also provides opportunities for errors to creep in. For these reasons, the immediate mode imaging model and its associated classes were developed and introduced in the Java 2D package. Basically, this model provides memory allocation and storage of all image data, making it available to the programmer at all times, just as if you had collected all the pixel data using a PixelGrabber in the older push model. There are also new classes of predefined image filters that provide much more functionality than the ImageFilter subclasses. These filters allow a particular destination pixel to be a function of more than one source pixel, something that wasn't easily done in the push model of image processing.

BufferedImages

Unlike its parent (java.awt.Image), a java.awt.image.BufferedImage allows easy access to the underlying pixel data. This is achieved by having each BufferedImage contain both a Raster and a ColorModel. Therefore, you can obtain the color components of a particular pixel location directly from the BufferedImage without having to worry about the underlying detail involving DataBuffers, SampleModels, and so on.
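For example, a minimal sketch of this direct pixel access might look like the following (here bi is assumed to be an existing TYPE_INT_ARGB BufferedImage, and x and y an existing pixel location):

// read the packed ARGB value at (x, y) and unpack its components
int argb  = bi.getRGB(x, y);
int alpha = (argb >> 24) & 0xff;
int red   = (argb >> 16) & 0xff;
int green = (argb >>  8) & 0xff;
int blue  =  argb        & 0xff;

// write a fully opaque red pixel back at the same location
bi.setRGB(x, y, (0xff << 24) | (0xff << 16));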

Note that because it extends the Image class, a BufferedImage can be used anywhere an Image is used (for example, in the Graphics class's drawImage methods). On the other hand, the conversion from an Image to a BufferedImage isn't as simple because a BufferedImage contains all the image data. The following list illustrates the required steps (also see Listing 4.7):

1. Make sure that all the image data is loaded.

2. Create a new BufferedImage using the Image width, height, and image data type (usually BufferedImage.TYPE_INT_ARGB).

3. Obtain the BufferedImage's Graphics2D object.

4. Using this graphics object, draw the Image onto the BufferedImage (as done earlier in the double buffering section).

Listing 4.7 createBufferedImage
package ch4;

import java.awt.Graphics;
import java.awt.Label;
import java.awt.Image;
import java.awt.MediaTracker;
import java.awt.image.BufferedImage;

/**
   BufferedImageConverter.java -- static class containing
   methods to convert a java.awt.Image into
   a java.awt.image.BufferedImage
*/
public final class BufferedImageConverter {

    // default version of createBufferedImage
    static public BufferedImage createBufferedImage(Image imageIn) {
        return createBufferedImage(imageIn,
                                   BufferedImage.TYPE_INT_ARGB);
    }

    static public BufferedImage createBufferedImage(Image imageIn,
                                                    int imageType) {
        //you can use any component here
        Label dummyComponent = new Label();
        MediaTracker mt = new MediaTracker(dummyComponent);
        mt.addImage(imageIn, 0);
        try {
            mt.waitForID(0);
        }
        catch (InterruptedException ie) {
        }
        BufferedImage bufferedImageOut =
            new BufferedImage(imageIn.getWidth(null),
                              imageIn.getHeight(null), imageType);
        Graphics g = bufferedImageOut.getGraphics();
        g.drawImage(imageIn, 0, 0, null);

        return bufferedImageOut;
    }
}
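As a quick usage sketch, you might convert an Image loaded through the Toolkit like this (the file name is the same sample image used later in Listing 4.8):

Image img = Toolkit.getDefaultToolkit().getImage("images/fruits.png");
BufferedImage bi = BufferedImageConverter.createBufferedImage(img);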

Step 2 mentions that the Image type needs to be specified. This is so the correct SampleModel, DataBuffer, and ColorModel subclasses can be used. For example, if a set of image pixels represents ARGB color components packed into a single integer, a DirectColorModel object, a SinglePixelPackedSampleModel object, and a DataBufferInt object will be used (see Table 4.5).

Table 4.5. Some Basic BufferedImage Types

BufferedImage Type    Description
TYPE_INT_RGB          8-bit RGB color components packed into an integer (1 pixel/int)
TYPE_INT_ARGB         8-bit ARGB color components packed into an integer (1 pixel/int)
TYPE_BYTE_BINARY      A byte-packed binary image (8 pixels/byte)
TYPE_USHORT_555_RGB   5-bit RGB color components packed into an unsigned short (1 pixel/ushort)
TYPE_BYTE_GRAY        An unsigned byte grayscale image (1 pixel/byte)

For the complete list of image types, see the BufferedImage documentation on the Java Web site (http://java.sun.com/j2se/1.4/docs/api/java/awt/image/BufferedImage.html).

Filtering

During our earlier discussion of the push imaging model, we described filter classes, such as CropImageFilter and RGBImageFilter, that could be used for image processing. The immediate mode imaging model also provides filter classes. Because the image data is always available in this model, there are many more types of filters than there are for the push model. For instance, filter classes for performing convolution and geometric transformations are available.

Interpolation

Before image filter classes are discussed, it is important to understand the concept of interpolation. To begin, assume that we have a very small (1x3) grayscale image with pixel values of 50, 100, and 150. Next assume that a destination image is set equal to this source image translated a distance equivalent to one third of a pixel horizontally (see Figure 4.9). Now, with respect to the middle destination pixel, the center of the source pixel containing a value of 50 lies two thirds of a pixel away from it, and the center of the source pixel with a value of 100 lies one third of a pixel away from it. The question that interpolation attempts to answer is what value to give this middle destination pixel. One technique would be to simply give it the value of whichever source pixel is closest (in this case 100). This technique is referred to as nearest neighbor interpolation. Another technique would be to compute a pixel value as the distance-weighted average of the surrounding source pixel values (in this case .666*100 + .333*50 = 83). This technique is referred to as linear interpolation. In Figure 4.9, (a) represents the destination pixel values using nearest neighbor interpolation, whereas (b) represents the destination pixel values using linear interpolation. In the second case, there is not enough information to calculate a value for the first destination pixel, so it is left blank. In Java 2D, the default value for these types of pixels is 0.
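The arithmetic just described can be sketched in a few lines of Java (the array, the shift, and the variable names here are ours for illustration and are not part of any Java 2D API):

// 1x3 grayscale source, translated one third of a pixel to the right
double[] src = { 50, 100, 150 };
double shift = 1.0 / 3.0;

// the middle destination pixel samples the source at x = 1 - shift = 2/3
double x = 1.0 - shift;
int x0 = (int) Math.floor(x);          // index 0, value 50
int x1 = x0 + 1;                       // index 1, value 100
double w = x - x0;                     // 2/3

double nearest = src[(int) Math.round(x)];         // 100
double linear  = (1 - w) * src[x0] + w * src[x1];  // about 83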

Figure 4.9. After a source array of pixel values gets translated, interpolation must be used to estimate the destination pixel values.


Each of these techniques can be useful depending on the situation. Nearest neighbor interpolation is very fast, but tends to appear choppy. Bilinear interpolation (which is linear interpolation in two dimensions) appears smoother, but can increase the image rendering time. For most cases, the increased image quality is worth the extra time required for bilinear interpolation. In the left image of Figure 4.10, nearest neighbor interpolation was performed, whereas in the right image, bilinear interpolation was performed.

Figure 4.10. A sheared white and black checkerboard. These images were scaled by a factor of 4 in both the x and the y direction for display purposes.


Tip

This isn't to say that bilinear interpolation is the best interpolation algorithm available; it is just the better of the two choices given here. In general, bilinear interpolation can cause the destination image to appear blurry.


Of course, because we've previously explained that pixel samples, not pixels, are the smallest unit of interest, the idea of pixel interpolation can be confusing. What actually occurs is that the pixel sample bands representing color components are interpolated separately. In other words, if you are using a packed integer representing RGB bands, the value used for interpolation isn't the integer value of the pixel; instead, the interpolation is done three times, once for each band.

As you'll soon see, many types of image filtering involve interpolation. For these filtering classes, the interpolation type can usually be specified by explicitly stating which type of interpolation to use or by providing an instance of a java.awt.RenderingHints object that contains information regarding the preferred interpolation method.
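As a sketch of both approaches, using the AffineTransformOp constructors discussed later in this chapter (the transform at is assumed to already exist):

// explicit interpolation type
BufferedImageOp op1 = new AffineTransformOp(at, AffineTransformOp.TYPE_BILINEAR);

// preferred interpolation supplied through a RenderingHints object
RenderingHints hints = new RenderingHints(
    RenderingHints.KEY_INTERPOLATION,
    RenderingHints.VALUE_INTERPOLATION_BILINEAR);
BufferedImageOp op2 = new AffineTransformOp(at, hints);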

Tip

When using a RenderingHints object, the KEY_INTERPOLATION hint does have a possible value of VALUE_INTERPOLATION_BICUBIC, but the Java 2D filter methods do not support it. The supported choices are VALUE_INTERPOLATION_NEAREST_NEIGHBOR and VALUE_INTERPOLATION_BILINEAR.


Tip

Most places that require an object of type RenderingHints will take a null value. This will be interpreted as setting all hints to their default values.


Filtering with Alpha Components

Often, the alpha (transparency) channel is treated as a color component because pixels frequently have samples representing alpha as well as samples representing color components. In these cases, it is worth considering what happens to the alpha channel during image filtering. In many cases, filtering the alpha channel doesn't make sense, such as in the case of color scaling. If you set up a filter that increases the color components, making the image brighter, it doesn't necessarily mean that you want the image to become more opaque as well. In the next few sections, when we discuss filters for BufferedImages and Rasters, we will describe how the alpha channel is handled for each type of filter.

As a quick introduction, filters for BufferedImages tend to give alpha special consideration whereas filters for Rasters don't. This is because BufferedImages contain a ColorModel that allows interpretation of the color components, and with a Raster no such interpretation is possible. If, for some reason, the special treatment imposed by the BufferedImage filter is unwanted, you can filter the Raster instead of the BufferedImage. The way to obtain the BufferedImage's Raster is as follows:

public WritableRaster getRaster()

BufferedImageOp and RasterOp Interfaces

When performing filtering using the Image class, much of the functionality of the used filter was defined in its parent class (that is, ImageFilter). When performing BufferedImage filtering, much of the functionality of the used filter will be defined by the BufferedImageOp interface. Similarly, when performing Raster filtering, much of the functionality of the used filter will be defined by the RasterOp interface.

In these latter two cases, there can always be a destination object that is separate from the source object. Thus, the filters can use any combination of source pixels to compute destination pixel values, making 2D convolution filters and 2D affine transformation filters possible.

It is of interest to take a closer look at the method that the BufferedImageOp uses to filter BufferedImages (see Figure 4.11):

BufferedImage filter(BufferedImage src, BufferedImage dest)

Figure 4.11. BufferedImageOp's filter method.


This method takes a source BufferedImage and converts it into a destination BufferedImage. Often, the alpha components are not filtered or are filtered differently than the color components. If the source and destination BufferedImages have different ColorModels, a color conversion will automatically occur. The reason this method also returns a BufferedImage is to provide the added functionality of cascading filters, so that the destination of one filter can be the source object for another. If a destination BufferedImage is provided, the returned BufferedImage will simply refer to the destination BufferedImage. If the destination BufferedImage is null, an appropriate BufferedImage will be allocated and returned. This saves the user from having to create the destination BufferedImage in advance. Another feature of classes implementing this interface is that for certain filtering classes, it is possible to have the same BufferedImage object for the source and the destination. This subset of classes is analogous to the set of classes described by the ImageFilter class for use in the push model, in that a destination pixel can only be dependent on its original pixel value and its location.
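The cascading idea can be sketched as follows (convolveOp and rescaleOp are hypothetical names for two ops of the kinds covered later in this section, and sourceBI is an existing BufferedImage):

// the BufferedImage returned by one filter becomes the source of the next
BufferedImage blurred    = convolveOp.filter(sourceBI, null);
BufferedImage brightened = rescaleOp.filter(blurred, null);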

The RasterOp interface is similar to the BufferedImageOp interface except that it allows filtering of Rasters instead of BufferedImages (see Figure 4.12). The method that RasterOp classes use to filter Rasters is the following:

WritableRaster filter(Raster src, WritableRaster dest)

Figure 4.12. RasterOp's filter method.


This method converts all components from the source Raster into the components for the destination Raster. The alpha component is not given special treatment.

The main difference between filtering BufferedImages and filtering Rasters is that a BufferedImage contains a ColorModel, which allows interpretation of the pixel samples. Therefore, a BufferedImage filter can process the alpha component differently than the color components. With a Raster, all components are treated equally.

The following five classes all implement both the BufferedImageOp and the RasterOp interfaces: AffineTransformOp, RescaleOp, ConvolveOp, LookupOp, and ColorConvertOp. As we discuss them, we'll point out how they perform both Raster and BufferedImage filtering. The last class we will examine, BandCombineOp, implements only the RasterOp interface, so it can only filter Rasters.

AffineTransformOp

One class that implements both the RasterOp and the BufferedImageOp interfaces is the java.awt.image.AffineTransformOp class. Objects of this class contain an affine transformation (java.awt.geom.AffineTransform) that will either be applied to a source BufferedImage to create a destination BufferedImage or to a source Raster to create a destination Raster.

In order to best explain an affine transformation, it is beneficial to first review two more restrictive groups of transformations: the Euclidean transformation group and the similarity transformation group. The Euclidean group of transformations is characterized by the fact that distance and area don't change. In other words, if the distance between two points is 5 units, after a Euclidean transformation that distance will still be 5 units regardless of the Euclidean transformation used. Such transformations consist of rotations and translations. The equations representing a 2D Euclidean transformation are as follows:

x' = cosθ x - sinθ y + tx

y' = sinθ x + cosθ y + ty

where x, y is the location of the source point, x', y' is the location of this point after the transformation, θ is the rotation angle, tx is the translation in the horizontal direction, and ty is the translation in the vertical direction. Note that rotation angles are represented in radians, with the conversion from degrees to radians being

angle in radians = angle in degrees * (Math.PI/180.0)

Similarity transformations extend this group to include global scaling. Under this group of transformations, distance can change, but shape can't. In other words, a square will remain a square after a similarity transformation. The equations representing a 2D similarity transformation are as follows:

x' = S(cosθ x - sinθ y + tx)

y' = S(sinθ x + cosθ y + ty)

where S is the global scaling factor.

By increasing the generality of the transformation group once again, you arrive at the group of affine transformations, in which shape and area can change, but linearity and parallelism can't. In other words, a line will remain a line after an affine transformation, and two lines that are parallel will remain parallel after an affine transformation. The two additional types of transformation allowed are general scaling and shearing. For example, a transformation that only contains general scaling (as opposed to global scaling, where the x and y scale factors are the same) would be as follows:

x' = Sx x

y' = Sy y

with Sx and Sy being the two scaling coefficients. Likewise, a transformation that only contains shearing would be as follows:

x' = x + Shx y

y' = Shy x + y

with Shx and Shy being the two shearing coefficients. An example of a transformation involving shearing components of (.2, 0) is shown in Figure 4.10.

Tip

In Java, the coordinate system's origin is the top left corner with x increasing as you move right and y increasing as you move down.


Thus, the affine transformations contain all the transformations in the Euclidean group (translations, rotations), plus those of the similarity group (global scaling), along with general scaling and shearing. The equations representing a 2D affine transformation are as follows:

x' = m00 x + m01 y + m02

y' = m10 x + m11 y + m12

where mrc is the array element at row r and column c in the selected AffineTransform array.
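If you already know these six elements, you can pass them straight to the AffineTransform constructor, which takes them in the order m00, m10, m01, m11, m02, m12. A small sketch (the values here are an arbitrary example):

// identity rotation/scaling part plus a translation of (10, 15)
AffineTransform t = new AffineTransform(1.0, 0.0, 0.0, 1.0, 10.0, 15.0);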

Tip

Affine transformations are linear transformations, so procedures such as image warping cannot be done using the AffineTransformOp class.


Because an affine transformation is made up of combinations of rotations, translations, scalings, and shearings, the AffineTransform class has a series of methods that allow you to specify these transformation groups. For example,

//rotate theta radians around the origin
public void rotate (double theta);

//rotate theta radians around point x,y
public void rotate (double theta, double x, double y);

//scale by sx in the x direction and sy in the y direction
public void scale (double sx, double sy);

//translate by tx in the x direction and ty in the y direction
public void translate(double tx, double ty);

//shear using multipliers of shx and shy
public void shear(double shx, double shy);

Note that the initial matrix is set to the identity in the AffineTransform constructor, and each of these method calls concatenates a new transformation onto the stored affine transformation. For this reason, the order of the method calls makes a difference in the final affine transformation. In other words, the affine transformation created using

rotate(.5);
translate(10, 15);

will be different from the affine transformation created using

translate(10, 15);
rotate(.5);

In general, there are two ways you can transform a coordinate space: absolute coordinate system transformations and relative coordinate system transformations. In an absolute coordinate system transformation, the axes and coordinate system remain fixed and everything in them gets transformed. In a relative coordinate system transformation, the axes and coordinate system get transformed and everything in them remains constant with respect to these axes. By default, the AffineTransform transformations are done as relative coordinate system transformations. As an example, let's assume that a rotation was performed followed by a translation along the x axis. In an absolute coordinate system transformation, the translation would be to the right regardless of the preceding rotation because the axes haven't moved. In a relative coordinate system transformation, the x axis moved with the rotation, so the translation direction is dependent upon the preceding rotation. If this rotation was 90 degrees, a translation along the x axis would be down.
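One way to get the absolute behavior, sketched here under the assumption that you want the translation applied in the fixed screen frame, is preConcatenate (which Listing 4.8 also uses):

// relative (default): the translation follows the rotated x axis, so it moves down
AffineTransform relative = new AffineTransform();
relative.rotate(Math.PI / 2);
relative.translate(10, 0);

// absolute: preConcatenate applies the translation in the original, unrotated frame,
// so this one moves to the right regardless of the rotation
AffineTransform absolute = new AffineTransform();
absolute.rotate(Math.PI / 2);
absolute.preConcatenate(AffineTransform.getTranslateInstance(10, 0));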

Tip

If it appears as if the AffineTransform is applying your instructions in the reverse order, you are probably designing your instructions for absolute coordinate system transformations.


Last, for the AffineTransformOp, the source and destination must be different; otherwise, an IllegalArgumentException will be thrown.

The constructors for AffineTransformOp are as follows:

AffineTransformOp(AffineTransform xform, int interpolationType)

AffineTransformOp(AffineTransform xform, RenderingHints hints)

where the interpolationType can be AffineTransformOp.TYPE_BILINEAR or AffineTransformOp.TYPE_NEAREST_NEIGHBOR.

In Listing 4.8, an affine transformation is created to rotate an image by 45 degrees around the image's center. Because this would normally map some source pixels to points with a negative x or y value, the image will also be translated in both the x and y directions to make sure that the entire image can be represented by the destination BufferedImage.

Listing 4.8 RotateImage45Degrees
package ch4;

import java.awt.*;
import javax.swing.*;
import java.awt.image.*;
import java.awt.geom.*;
import java.io.*;

/**
   RotateImage45Degrees.java -
   1. scales an image's dimensions by a factor of two
   2. rotates it 45 degrees around the image center
   3. displays the processed image
 */
public class RotateImage45Degrees extends JFrame {
    private Image inputImage;
    private BufferedImage sourceBI;
    private BufferedImage destinationBI = null;
    private Insets frameInsets;
    private boolean sizeSet = false;

    public RotateImage45Degrees(String imageFile) {
        addNotify();
        frameInsets = getInsets();
        inputImage = Toolkit.getDefaultToolkit().getImage(imageFile);

        MediaTracker mt = new MediaTracker(this);
        mt.addImage(inputImage, 0);
        try {
            mt.waitForID(0);
        }
        catch (InterruptedException ie) {
        }

        sourceBI = new BufferedImage(inputImage.getWidth(null),
                                     inputImage.getHeight(null),
                                     BufferedImage.TYPE_INT_ARGB);

        Graphics2D g = (Graphics2D)sourceBI.getGraphics();
        g.drawImage(inputImage, 0, 0, null);

        AffineTransform at = new AffineTransform();

        // scale image
        at.scale(2.0, 2.0);

        // rotate 45 degrees around image center
        at.rotate(45.0*Math.PI/180.0,
                  sourceBI.getWidth()/2.0,
                  sourceBI.getHeight()/2.0);


        /* translate to make sure the rotation
           doesn't cut off any image data
        */
        AffineTransform translationTransform;
        translationTransform = findTranslation(at, sourceBI);
        at.preConcatenate(translationTransform);

        // instantiate and apply affine transformation filter
        BufferedImageOp bio;
        bio = new AffineTransformOp(at, AffineTransformOp.TYPE_BILINEAR);

        destinationBI = bio.filter(sourceBI, null);

        int frameInsetsHorizontal = frameInsets.right + frameInsets.left;
        int frameInsetsVertical = frameInsets.top + frameInsets.bottom;
        setSize(destinationBI.getWidth() + frameInsetsHorizontal,
                destinationBI.getHeight() + frameInsetsVertical);
        show();
    }


    /*
      find proper translations to keep rotated image
      correctly displayed
    */
    private AffineTransform findTranslation(AffineTransform at,
                                            BufferedImage bi) {
        Point2D p2din, p2dout;

        p2din = new Point2D.Double(0.0,0.0);
        p2dout = at.transform(p2din, null);
        double ytrans = p2dout.getY();

        p2din = new Point2D.Double(0, bi.getHeight());
        p2dout = at.transform(p2din, null);
        double xtrans = p2dout.getX();

        AffineTransform tat = new AffineTransform();
        tat.translate(-xtrans, -ytrans);
        return tat;
    }


    public void paint(Graphics g) {
        if (destinationBI != null)
            g.drawImage(destinationBI,
                        frameInsets.left, frameInsets.top, this);
    }

    public static void main(String[] args) {
        if (args.length != 1) {
            new RotateImage45Degrees("images/fruits.png");
        }
        else {
            new RotateImage45Degrees(args[0]);
        }
    }
}

With regard to alpha, the alpha component is treated the same as any other component, meaning that the alpha value of the destination pixel is found by interpolating the alpha channel just as the blue component of the destination pixel is found by interpolating the blue channel. Thus, transforming a BufferedImage is identical to transforming a Raster. Last, you cannot use the same source and destination object when filtering.

ConvolveOp

The java.awt.image.ConvolveOp class convolves a kernel with a source image in order to produce a destination image. A kernel can be thought of as a two-dimensional array with an origin. During the convolution, the origin of the array is overlaid on each pixel of the source image. This origin value is multiplied by the pixel value it is over, and all surrounding kernel array values are multiplied by the pixel values that they are over. Finally, all these values are summed together, and the resulting number replaces the pixel corresponding to the kernel origin. For example, consider the following kernel with an origin at (1, 1):

(1/9) (1/9) (1/9)

(1/9) (1/9) (1/9)

(1/9) (1/9) (1/9)

For each image pixel, its value and the values of its eight neighbors will each be multiplied by 1/9. When these values are added together, the original image pixel is replaced by the average of itself and its eight neighbors. The effect of this kernel is to make the destination image appear like a smoothed version of the input image.

When you are using a convolution algorithm, edge pixels present a difficulty because they don't have all the neighboring pixels that a non-edge pixel does. Under these conditions, the convolution can't be computed as usual, and some instruction is required as to how these edge pixels should be handled. One of the ConvolveOp constructors takes an integer parameter called edgeCondition. If this value is set to ConvolveOp.EDGE_NO_OP, the edge pixels in the destination object will be identical to those of the source object. If this value is set to ConvolveOp.EDGE_ZERO_FILL, the edge pixels will be set to 0. This latter value is the default.

The two ConvolveOp constructors are as follows:

ConvolveOp(Kernel kernel)
ConvolveOp(Kernel kernel, int edgeCondition, RenderingHints hints)
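A minimal sketch of the smoothing kernel described above, applied to an assumed source BufferedImage named sourceBI:

float ninth = 1.0f / 9.0f;
float[] smoothing = {
    ninth, ninth, ninth,
    ninth, ninth, ninth,
    ninth, ninth, ninth
};
Kernel kernel = new Kernel(3, 3, smoothing);
ConvolveOp smooth = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP, null);
BufferedImage smoothed = smooth.filter(sourceBI, null);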

With regard to the filtered object, if the source object is a BufferedImage with an alpha component, this component isn't convolved separately. Instead, the color components are first multiplied by the pixel's normalized alpha value and then convolved independently. Finally, the alpha value of the source pixel is divided back out of the resulting components and given to the destination pixel as its alpha value. If this behavior isn't wanted, you can filter the BufferedImage's Raster instead, in which case all components, including alpha, are convolved independently. You cannot use the same source and destination object when filtering.

RescaleOp

This class multiplies each pixel sample by a scaling factor before adding an offset to it. Mathematically, this can be expressed as follows:

dstSample = (srcSample*scaleFactor) + offset

As in the ConvolveOp class, any value above the maximum allowed value (usually 255) gets clipped to the maximum value, and any value below 0 gets clipped to 0. You can use the source image as the destination image for this filtering operation. The constructors for this class are as follows:

RescaleOp(float scaleFactor, float offset, RenderingHints hints)

RescaleOp(float[] scaleFactors, float[] offsets, RenderingHints hints)

In the first constructor, only a single scale factor can be given, but in the second constructor, any number of scale factors can be given. Table 4.6 illustrates how the choice of constructor and the choice of the object to be filtered affect the destination pixels.

Table 4.6. RescaleOp Behavior

Object Filtered   Number of scaleFactors        Filtering
BufferedImage     Number of color components    Each color component scaled separately; alpha not changed
BufferedImage     Number of components          Each component scaled separately
BufferedImage     1                             Each color component scaled identically; alpha not changed
Raster            Number of components          Each component scaled separately
Raster            1                             Each component scaled identically
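As a sketch, brightening an assumed BufferedImage named sourceBI by 20 percent (the 1.2 factor is arbitrary) while leaving alpha alone:

// single scale factor: each color component is scaled identically, alpha is unchanged
RescaleOp brighten = new RescaleOp(1.2f, 0.0f, null);
BufferedImage brighter = brighten.filter(sourceBI, null);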

LookupOp

The java.awt.image.LookupOp object provides a means to filter Rasters and BufferedImages using a lookup table (LUT). In the LookupOp class, the LUT is simply an array in which the source pixel samples are treated as array indices. The corresponding destination pixel samples get their values from the array elements. In other words:

dstSample = LUTarray[srcSample]

A LookupTable contains one or more of these lookup arrays, which allow you to process individual bands differently. The LookupOp class contains a filter method for Rasters and for BufferedImages with slightly different behaviors (see Table 4.7).

Table 4.7. LookupOp Behavior

Object Filtered   Number of Bands in LookupTable   Filtering
BufferedImage     Number of color components       Each color component filtered separately; alpha not changed
BufferedImage     Number of components             Each component filtered separately
BufferedImage     1                                Each color component filtered identically; alpha not changed
Raster            Number of components             Each component filtered separately
Raster            1                                Each component filtered identically

There are two main LookupTable subclasses, ByteLookupTable and ShortLookupTable; the ByteLookupTable assumes that the input image's pixel samples all lie between 0 and 255 inclusive, whereas the ShortLookupTable assumes that they lie between 0 and 65535 inclusive. Last, you can use the same source and destination object when filtering.
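A minimal sketch of a LookupOp that inverts an 8-bit image (sourceBI and the variable names are assumptions for illustration):

byte[] invert = new byte[256];
for (int i = 0; i < 256; i++) {
    invert[i] = (byte) (255 - i);
}
ByteLookupTable table = new ByteLookupTable(0, invert);
LookupOp invertOp = new LookupOp(table, null);
BufferedImage negative = invertOp.filter(sourceBI, null);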

ColorConvertOp

The java.awt.image.ColorConvertOp class performs a pixel by pixel color conversion of the image source into the image destination. This is done by converting the pixels from the source image's color space into the destination image's color space. This class has three main constructors that can take zero, one, or two ColorSpaces as parameters. These three constructors are as follows:

ColorConvertOp(RenderingHints hints)
ColorConvertOp(ColorSpace cspace, RenderingHints hints)
ColorConvertOp(ColorSpace srcCspace, ColorSpace dstCspace, RenderingHints hints)

When this operation is to be performed on BufferedImages, no ColorSpace is necessary in the ColorConvertOp's constructor because the BufferedImages contain ColorModels that already represent a particular ColorSpace. Alternatively, you can provide a single ColorSpace if a null destination BufferedImage is going to be used in the filter method. In this case an appropriate BufferedImage with the provided ColorSpace will be created and returned by the filter method. Unlike BufferedImages, Rasters do not contain ColorModels, so for Raster filtering, two ColorSpace objects must be provided in the ColorConvertOp's constructor. Last, you can use the same source and destination object when filtering.
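For example, a sketch that converts an assumed BufferedImage named sourceBI to grayscale by supplying a single destination ColorSpace (java.awt.color.ColorSpace) and a null destination image:

ColorSpace graySpace = ColorSpace.getInstance(ColorSpace.CS_GRAY);
ColorConvertOp toGray = new ColorConvertOp(graySpace, null);
BufferedImage grayImage = toGray.filter(sourceBI, null);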

BandCombineOp

The last filter that we will look at is the java.awt.image.BandCombineOp filter. Unlike the other filters discussed, this filter only implements the RasterOp interface and not the BufferedImageOp interface, meaning that it can only be used to filter Rasters. The purpose of this filter is to perform linear combinations of the Raster bands. In other words, the value of each band in the destination Raster will be found through a linear function of the bands in the source Raster. The constructor for this class is as follows:

BandCombineOp(float[][] matrix, RenderingHints hints)

where the number of rows in the matrix is equal to the number of bands in the destination Raster, and the number of columns is equal to either the number of bands in the source Raster or the number of bands in the source Raster plus one. In this latter case, an additional band whose value is always equal to one is appended to each source pixel. For example, consider a BandCombineOp filter that switches the red and blue bands of a Raster containing a red, green, and blue band. That would require the following matrix:

[destRedBand] [ 0 0 1 ] [sourceRedBand]

[destGreenBand] = [ 0 1 0 ] × [sourceGreenBand]

[destBlueBand] [ 1 0 0 ] [sourceBlueBand]

Similarly, a BandCombineOp filter that inverts the green band is as follows:

[destRedBand]       [ 1  0  0   0 ]   [sourceRedBand]

[destGreenBand]  =  [ 0 -1  0 255 ] × [sourceGreenBand]

[destBlueBand]      [ 0  0  1   0 ]   [sourceBlueBand]

                                      [ 1 ]

In this latter example, the number of columns in the matrix is equal to the number of bands in the Raster plus one. For this reason, an extra band was created with each element equal to 1. Last, for this filter class, the source Raster and the destination Raster can be the same.
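A sketch of the red/blue swap shown in the first matrix, assuming a BufferedImage named sourceBI whose Raster has exactly red, green, and blue bands (for example, TYPE_INT_RGB):

float[][] swapRedAndBlue = {
    { 0.0f, 0.0f, 1.0f },
    { 0.0f, 1.0f, 0.0f },
    { 1.0f, 0.0f, 0.0f }
};
BandCombineOp swap = new BandCombineOp(swapRedAndBlue, null);
Raster src = sourceBI.getRaster();
WritableRaster dst = swap.filter(src, null);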
