The JAI class primarily contains a set of methods to create RenderedOp objects given an operation, a ParameterBlock object, and a RenderingHints object (refer to Listing 6.1). Its most common method is the static create method, that is,
static public RenderedOp create(String operationName, ParameterBlock param, RenderingHints renderingHints)
or if the RenderingHints object is null (meaning that default values should be used), you can use
static public RenderedOp create(String operationName, ParameterBlock param)
There are also a large number of other JAI create methods that allow you to perform an operation without using a ParameterBlock. In all listings in this chapter, ParameterBlocks will be used in the JAI's create methods, but, in practice, it is common to see method calls such as JAI.create("Fileload", filename) for loading an image file.
Note
The case of the operator isn't significant, so the operations add, Add, and ADD are treated identically.
There is another set of methods called createRenderable, which act similarly to the create methods but produce a RenderableOp instead of a RenderedOp. The RenderableOp class will be discussed in a later section entitled “RenderedOps Versus RenderableOps”. One last point is that when JAI's create method is used, numerous verifications occur with regard to the provided ParameterBlock and the operation String. For instance, the number of sources is checked, as are the number, types, and values of the parameters.
In the previous discussion, the operation to be performed must be one of the operations registered with the JAI package (refer to Listing 6.4). In Tables 6.1 to 6.12, the different operators are presented. In these tables, the format of the necessary ParameterBlock is provided along with a short description of each operator.
Tip
For more information about a particular operator, look at the documentation for its descriptor class. For instance, the add operator's descriptor class will be called javax.media.jai.operator.AddDescriptor.
Before examining these tables, a few points need to be made.
In many cases, the parameters provided to a ParameterBlock object are arrays. For example, in the Clamp operator, the two parameters are double arrays specifying a set of low values and a set of high values. An array is used instead of a single data value so that each image band can be processed differently. If the number of elements in the array is greater than or equal to the number of image bands, the array value used for a particular band will be constantArray[bandNumber]. On the other hand, if the number of array elements is less than the number of image bands, constantArray[0] will be used for every band, and all bands will be treated equally. Thus, the constant array value used to process each band is as follows:
if (constantArray.length >= dstNumBands)
    value = constantArray[bandNumber];
else
    value = constantArray[0];
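This band-selection rule can be sketched in plain Java. The class and method names below are illustrative, not part of the JAI API:

public class ConstantArrayDemo {
    /** Returns the constant to use for a given destination band,
        following the JAI rule: if the array has at least as many
        elements as there are bands, index by band number;
        otherwise use element 0 for every band. */
    public static double constantForBand(double[] constantArray,
                                         int bandNumber, int dstNumBands) {
        if (constantArray.length >= dstNumBands)
            return constantArray[bandNumber];
        else
            return constantArray[0];
    }

    public static void main(String[] args) {
        double[] perBand = {10.0, 20.0, 30.0};
        double[] shared  = {5.0};
        // three-band image, three constants: each band gets its own value
        System.out.println(constantForBand(perBand, 2, 3)); // 30.0
        // three-band image, one constant: every band gets element 0
        System.out.println(constantForBand(shared, 2, 3));  // 5.0
    }
}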
In the JAI API documentation, the operators are listed as requiring Object parameters. For example, whenever an integer array is needed, instead of int[], it will be listed as Integer[]. In all cases, you can use either, so we decided to use the primitive data types for simplicity. Also, many of the parameters required for an operation have default values. In order to use a default value, you can just use null for that parameter value. In the upcoming tables, default values will be listed when available.
In general, the output of all operators is clamped according to the data type of the destination image. In other words, each data type has a minimum and maximum allowable value: any destination value higher than the maximum will be set to the maximum, and any destination value lower than the minimum will be set to the minimum.
Note
Images composed of data types float or double are clamped to the range [0.0, 1.0].
Also, the output of all operators is rounded if the destination data type isn't float or double.
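The clamping and rounding behavior for a byte destination can be sketched as follows (a minimal plain-Java illustration; the class and method names are ours, not JAI's):

public class ClampDemo {
    /** Rounds a computed result and clamps it to the byte range
        [0, 255], mimicking what happens when the destination
        image's data type is byte. */
    public static int clampToByte(double value) {
        long rounded = Math.round(value);
        if (rounded > 255) return 255;
        if (rounded < 0)   return 0;
        return (int) rounded;
    }

    public static void main(String[] args) {
        System.out.println(clampToByte(300.0));  // 255 (above maximum)
        System.out.println(clampToByte(-12.3));  // 0   (below minimum)
        System.out.println(clampToByte(127.6));  // 128 (rounded)
    }
}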
As discussed in Chapter 4, a pixel isn't the smallest element of an image. Each pixel is composed of one or more samples, where each sample corresponds to a particular image band. Thus, an image with three bands (possibly red, green, and blue) will have three samples per pixel. Most of the JAI operators work directly on samples, although they are often described as operating on pixels. For example, when it is said that the Invert operator inverts pixels, it actually inverts each sample in each pixel.
As will be discussed in the “Extending JAI” section, a natural operator grouping exists based on the OpImage subclass that the operator implementation extends. Although this grouping is functionally useful, we have chosen different operator groupings in order to present smaller, more descriptive groups.
Associated with each pixel is a location. Pixel operators iteratively go through all pixel locations in a PlanarImage and carry out some type of computation. These computations are performed independently on each location without considering any other pixel locations within that PlanarImage. These operations can be grouped into two categories: single source pixel operators and multisource pixel operators.
Single source pixel operators calculate destination pixel values directly from the corresponding pixel value in a source image. A more mathematical form is destination[c][r][b] = function(source[c][r][b]), where c is the column number, r is the row number, and b is the band number. These operations can be further broken down into one group that requires no parameters such as Absolute, Exp, Format, Invert, Log, and Not (see Table 6.1) and one group that does require parameters such as Clamp, ColorConvert, Lookup, Rescale, and Threshold (see Table 6.2).
Operator | Parameter Block Format/Description |
---|---|
Absolute | addSource(PlanarImage pi); The Absolute operator computes the absolute value of all pixels in pi. |
Format | addSource(PlanarImage pi); add(int datatype); The Format operator reformats an image by casting each of its data samples to a different data type, where datatype can be one of the following: DataBuffer.TYPE_BYTE (default value), DataBuffer.TYPE_SHORT, DataBuffer.TYPE_USHORT, DataBuffer.TYPE_INT, DataBuffer.TYPE_FLOAT, or DataBuffer.TYPE_DOUBLE. See Listing 6.12 for an example of this operator. |
Exp | addSource(PlanarImage pi); The Exp operator computes the exponential of all pixels in pi. |
Invert | addSource(PlanarImage pi); The Invert operator computes the inverse of all pixels in pi. If pi's datatype is signed, a sample's inverse is the negation of the sample's value. If pi's datatype is unsigned, the sample's inverse is the maximum value of that datatype minus the sample's value. |
Log | addSource(PlanarImage pi); The Log operator computes the natural log of all pixels in pi. |
Not | addSource(PlanarImage pi); The Not operator performs bitwise logical NOT on all pixels in pi. |
Table 6.2 provides a list of the single source pixel operators requiring one or more parameters. Be sure to refer to the previous section “Use of Constant Arrays” to understand how the operators use the array parameters. Unless otherwise noted, the parameters don't have default values.
Multiple source pixel operators calculate a destination pixel value directly from the corresponding pixel values of more than one source. Mathematically, using two sources, this can be expressed as follows: destination[c][r][b] = function(source1[c][r][b], source2[c][r][b]), where c is the column number, r is the row number, and b is the band number. This group of operators can be broken down into two groups. The first group uses multiple image sources and no parameters, and the second group uses a single source image and a constant array parameter. In these operators, this constant array acts like a second image source.
Examples of the first group of operators are Add, AddCollection, And, Divide, DivideComplex, Max, Min, Multiply, MultiplyComplex, Or, Subtract, and Xor (see Table 6.3). Note that AddCollection is the only operator that allows more than two sources. Examples of the second group of operators are AddConst, AndConst, DivideByConst, DivideIntoConst, MultiplyConst, OrConst, SubtractConst, SubtractFromConst, and XorConst (see Table 6.4).
In Table 6.3 there are two operations involving complex data, that is, DivideComplex and MultiplyComplex. A complex image is simply a PlanarImage with an even number of bands in which the odd-numbered bands (first, third, and so on) will be interpreted as making up the real part of the image, whereas the even-numbered bands (second, fourth, and so on) will be interpreted as making up the imaginary part of the image.
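The real/imaginary band layout can be illustrated by the per-sample arithmetic a complex multiply performs. This is a plain-Java sketch with illustrative names; JAI's MultiplyComplex applies the same arithmetic across whole images:

public class ComplexDemo {
    /** Multiplies two complex samples stored as {real, imaginary}
        pairs, the same layout a complex PlanarImage uses across
        each pair of bands. */
    public static double[] multiply(double[] a, double[] b) {
        double real = a[0] * b[0] - a[1] * b[1];
        double imag = a[0] * b[1] + a[1] * b[0];
        return new double[]{real, imag};
    }

    public static void main(String[] args) {
        // (1 + 2i) * (3 + 4i) = -5 + 10i
        double[] result = multiply(new double[]{1, 2}, new double[]{3, 4});
        System.out.println(result[0] + ", " + result[1]); // -5.0, 10.0
    }
}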
In Table 6.4, each ParameterBlock contains a single image source and a constant array that can be considered a second image source. Be sure to refer to the preceding section “Use of Constant Arrays” to understand how the operators use the array parameters.
The pixel operators that don't fit in any of the previous groups are presented here. They are the BandCombine, BandSelect, Composite, Constant, MatchCDF, Overlay, Pattern, and Piecewise operators (see Table 6.5). Because of the complexity of these operators, examples of many of them are provided following this table. Unless otherwise noted, the parameters don't have default values.
Operator | Parameter Block Format/Description |
---|---|
BandCombine | addSource(PlanarImage pi); add(double[][] matrix); The BandCombine operator linearly combines the bands in pi according to the matrix array. The number of columns in matrix equals the number of bands in pi plus one. The number of rows in matrix equals the number of bands in the destination image. This operator is similar to the java.awt.image.BandCombineOp described in Chapter 4 (see Listing 6.6). |
BandSelect | addSource(PlanarImage pi); add(int[] bandIndices); The BandSelect operator copies bands in pi to the destination image in the order specified by bandIndices (see Listing 6.7). |
Composite | addSource(PlanarImage pi1); addSource(PlanarImage pi2); add(PlanarImage alpha1); add(PlanarImage alpha2); add(Boolean alphaPremultiplied); add(Integer destAlpha); The Composite operator combines corresponding pixels in pi1 and pi2 using the alpha values provided in alpha1 and alpha2. The destAlpha parameter can be one of the following: CompositeDescriptor.NO_DESTINATION_ALPHA, CompositeDescriptor.DESTINATION_ALPHA_FIRST, or CompositeDescriptor.DESTINATION_ALPHA_LAST. Default values are alpha2 = null (opaque); alphaPremultiplied = false; destAlpha = CompositeDescriptor.NO_DESTINATION_ALPHA (see Listing 6.8). |
Constant | add(Float width); add(Float height); add(Number[] constants); The Constant operator creates a new image of size width, height where each pixel is set equal to constants (see Listing 6.9). |
MatchCDF | addSource(PlanarImage pi); add(float[][] CDF); The MatchCDF operator attempts to make pi's cumulative density function (CDF) match the provided CDF. The format of CDF is as follows: CDF[numberOfBands][numberOfBinsInBand] where, for a particular band, each subsequent CDF value must be nonnegative and nondecreasing. The final value for each band must be 1.0. |
Overlay | addSource(PlanarImage pi1); addSource(PlanarImage pi2); The Overlay operator covers pixels on pi1 with pixels from pi2 wherever the bounds of the two source images intersect. |
Pattern | add(int width); add(int height); add(Raster pattern); The Pattern operator creates a destination image of dimensions (width, height) made up of a repeated pattern specified by pattern. The tile dimensions in the destination image will be the dimensions of pattern. |
Piecewise | addSource(PlanarImage pi); add(float[][][] breakPoints); The Piecewise operator performs a piecewise linear mapping of pixel values in pi, where breakPoints is defined as breakPoints[numBands][2][numBreakPoints]. When the array's second index is equal to 0, the breakPoints array represents a list of possible source sample values. When this array index is equal to 1, the breakPoints array represents a list of possible destination sample values. Thus, the breakPoints array maps a set of source sample values to a set of destination sample values. Any source sample value that isn't contained in the set of source sample values will have its destination value computed using the closest source values that do exist in this set along with their corresponding destination values (see Listing 6.9). |
Note
Listings 6.6 through 6.9 are not standalone applications, but are methods belonging to an application named OtherPointOperatorsTester.java.
In Listing 6.6, an example of a method using the "BandCombine" operator is shown. Assuming that an image has three color components and is using an RGB color space, the general equation for a particular band in the destination image is
x*sourceRedComponent + y*sourceGreenComponent + z*sourceBlueComponent + t
where x, y, z, and t are variables. Thus, a band in the destination image is created by linearly combining bands in a source image and adding an offset. Using the "BandCombine" operator, the x, y, z, and t variables are contained in a two-dimensional double array.
/** BandCombine operation in which the destination band components are:
    destinationRedComponent   = 255 - sourceRedComponent
    destinationGreenComponent = sourceBlueComponent
    destinationBlueComponent  = sourceGreenComponent
*/
public PlanarImage bandCombine(PlanarImage pi) {
    double[][] matrix = {
        { -1.0D, 0.0D, 0.0D, 255.0D },
        {  0.0D, 0.0D, 1.0D,   0.0D },
        {  0.0D, 1.0D, 0.0D,   0.0D },
    };
    ParameterBlock param = new ParameterBlock();
    param.addSource(pi);
    param.add(matrix);
    return JAI.create("BandCombine", param);
}
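The per-pixel arithmetic behind this matrix can be checked with a small plain-Java sketch (the class and method names are illustrative, not part of JAI):

public class BandCombineDemo {
    /** Applies a BandCombine-style matrix to one pixel. Each
        destination band is a weighted sum of the source bands,
        with the last matrix column acting as an offset. */
    public static double[] combine(double[][] matrix, double[] srcPixel) {
        double[] dst = new double[matrix.length];
        for (int row = 0; row < matrix.length; row++) {
            double sum = matrix[row][srcPixel.length]; // offset term
            for (int col = 0; col < srcPixel.length; col++)
                sum += matrix[row][col] * srcPixel[col];
            dst[row] = sum;
        }
        return dst;
    }

    public static void main(String[] args) {
        // same matrix as Listing 6.6: invert red, swap green and blue
        double[][] matrix = {
            { -1.0, 0.0, 0.0, 255.0 },
            {  0.0, 0.0, 1.0,   0.0 },
            {  0.0, 1.0, 0.0,   0.0 },
        };
        double[] dst = combine(matrix, new double[]{10, 20, 30});
        System.out.println(dst[0] + ", " + dst[1] + ", " + dst[2]);
        // 245.0, 30.0, 20.0
    }
}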
In Listing 6.7, an example of a method using the "BandSelect" operation is shown. The general equation for a particular destination band is

destinationBand[bandNumber] = sourceBand[bandSelectArray[bandNumber]]

where bandSelectArray is a one-dimensional int array with as many elements as there are bands in the destination image.
/** BandSelect method used to reverse the second and third bands.
    Thus, if the initial band order is red, green, blue, the
    destination band order will be red, blue, green.
*/
public PlanarImage bandSelect(PlanarImage pi) {
    int[] array = {0, 2, 1};
    ParameterBlock param = new ParameterBlock();
    param.addSource(pi);
    param.add(array);
    return JAI.create("BandSelect", param);
}
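The band-reordering rule itself can be demonstrated on a single pixel in plain Java (illustrative names, not JAI API):

public class BandSelectDemo {
    /** Reorders the samples of one pixel:
        dst[b] = src[bandIndices[b]]. */
    public static int[] select(int[] srcPixel, int[] bandIndices) {
        int[] dst = new int[bandIndices.length];
        for (int b = 0; b < bandIndices.length; b++)
            dst[b] = srcPixel[bandIndices[b]];
        return dst;
    }

    public static void main(String[] args) {
        // swap the second and third bands, as in Listing 6.7
        int[] dst = select(new int[]{100, 150, 200}, new int[]{0, 2, 1});
        System.out.println(dst[0] + ", " + dst[1] + ", " + dst[2]);
        // 100, 200, 150
    }
}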
In Listing 6.8, an example of a method using the "Composite" operation is shown.
/** Performs compositing of PlanarImages pi1 and pi2 using normalized
    alpha values of .5 for pi1 and normalized alpha values of 1.0
    (opaque) for pi2. Thus, the destination pixels will be made up of
    equal parts of pi1 and pi2.
*/
public PlanarImage composite(PlanarImage pi1, PlanarImage pi2) {
    byte alpha1Value = (byte)128; // normalized value of .5
    byte alpha2Value = (byte)255; // normalized value of 1.0
                                  // (note that (byte)256 would wrap to 0)
    ParameterBlock param = new ParameterBlock();
    param.addSource(pi1);
    param.addSource(pi2);
    param.add(makeAlpha(pi1.getWidth(), pi1.getHeight(),
                        alpha1Value, alpha1Value, alpha1Value));
    param.add(makeAlpha(pi2.getWidth(), pi2.getHeight(),
                        alpha2Value, alpha2Value, alpha2Value));
    param.add(new Boolean(false));
    param.add(CompositeDescriptor.NO_DESTINATION_ALPHA);
    return JAI.create("Composite", param);
}

/** Returns a PlanarImage containing 3 bands with samples of type byte.
    All samples in band0 will be set to alpha0, all samples in band1
    will be set to alpha1, and all samples in band2 will be set to alpha2.
*/
private PlanarImage makeAlpha(float width, float height,
                              byte alpha0, byte alpha1, byte alpha2) {
    byte[] alphaValues = new byte[3];
    alphaValues[0] = alpha0; // alpha value for 1st band
    alphaValues[1] = alpha1; // alpha value for 2nd band
    alphaValues[2] = alpha2; // alpha value for 3rd band
    ParameterBlock param = new ParameterBlock();
    param.add(width);
    param.add(height);
    param.add(alphaValues);
    RenderedOp ro = JAI.create("Constant", param);
    return ro;
}
The "Composite" operator combines two source PlanarImages in such a way that by taking into account each pixel's corresponding alpha (transparency) values, the two images appear together (see Figure 6.6).
Alpha values are supplied by interpreting the pixel values of two other PlanarImages as the alpha values for the two source images. In Listing 6.8, in order to create these alpha PlanarImages, a method using the "Constant" operation is used.
For more details on how this compositing is performed, let
pi1Value = sample value of PlanarImage1
pi2Value = sample value of PlanarImage2
pi1Alpha = normalized alpha value for PlanarImage1
pi2Alpha = normalized alpha value for PlanarImage2
(where normalized alpha values range from 0.0 to 1.0).
The "Porter-Duff over" composite rule (which is the composite rule used) can then be defined as
destinationValue = pi1Value*pi1Alpha + (1-pi1Alpha)*(pi2Value*pi2Alpha)
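This rule can be applied directly to one pair of samples in plain Java (illustrative names; the Composite operator applies it across whole images):

public class CompositeDemo {
    /** Porter-Duff "over" for one pair of samples, using normalized
        (0.0 to 1.0) alpha values. */
    public static double over(double pi1Value, double pi1Alpha,
                              double pi2Value, double pi2Alpha) {
        return pi1Value * pi1Alpha + (1 - pi1Alpha) * (pi2Value * pi2Alpha);
    }

    public static void main(String[] args) {
        // alpha .5 over an opaque background: equal parts of each source
        System.out.println(over(100, 0.5, 200, 1.0)); // 150.0
    }
}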
In Figure 6.6, the left and middle images are the source images. The last image is the result of the composite operator applied to these two source images. In this operation, all pixels in the first source image were given a normalized alpha value of .5, whereas all pixels in the second source image were given an alpha value of 1.0. Thus, the destination image represents each of the two source images equally.
In Listing 6.9, an example of a method using the "Piecewise" operator is shown.
/** Performs piecewise linear mapping of a PlanarImage with 3 bands.
    In this example:
      all values under 50 become 100
      all values over 200 become 255
      all other values become linearly interpolated between the two,
      i.e., 100 + (value-50)*(255-100)/(200-50)
*/
public PlanarImage piecewise(PlanarImage pi) {
    float[][][] breakPoints = new float[3][2][2];
    for (int band = 0; band < 3; band++) {
        breakPoints[band][0][0] = 50;   // source break points
        breakPoints[band][0][1] = 200;
        breakPoints[band][1][0] = 100;  // destination break points
        breakPoints[band][1][1] = 255;
    }
    ParameterBlock param = new ParameterBlock();
    param.addSource(pi);
    param.add(breakPoints);
    return JAI.create("Piecewise", param);
}
This operator requires a three-dimensional float array often called breakPoints. The format of this array is as follows:
float breakPoints[numBands][2][numBreakPoints]
In order to understand this operator, it is best to think of this array as two separate arrays; that is, sourceBreakPoints and destinationBreakPoints where
sourceBreakPoints[bandNumber][breakPoint] = breakPoints[bandNumber][0][breakPoint]
and
destinationBreakPoints[bandNumber][breakPoint] = breakPoints[bandNumber][1][breakPoint]
Thus, this operator maps the values in the sourceBreakPoints array into the values in the destinationBreakPoints array.
For the sourceBreakPoints and the destinationBreakPoints arrays, each subsequent break point must have a higher value than the one before it. For example, for a single-band image, the source breakpoints could be {2, 4, 6, 8} and the destination breakpoints could be {1, 4, 12, 20}.
The purpose of these arrays is to map source pixel values into destination pixel values. If a source pixel value corresponds to a source breakpoint, its destination pixel value will simply be the corresponding destination breakpoint value. If a source pixel value falls between two breakpoints, its destination pixel value will be linearly computed according to the two closest source breakpoints and their corresponding destination breakpoints. For example, using the source and destination breakpoints listed previously, any source pixel value less than or equal to 2 will have a destination pixel value of 1. Any source pixel value of 2 or 3 will have a destination pixel value of
1+ (value-2)*(4-1)/(4-2)
Any source pixel value of 4 or 5 will have a destination pixel value of
4+(value-4)*(12-4)/(6-4)
Any source pixel value of 6 or 7 will have a destination pixel value of
12+(value-6)*(20-12)/(8-6)
Any source pixel value of 8 or above will have a destination pixel value of 20.
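The mapping just described can be written as a short plain-Java method and checked against the worked values above (the class and method names are illustrative, not part of JAI):

public class PiecewiseDemo {
    /** Piecewise linear mapping for one band: maps a source sample
        through matching (srcPts[i], dstPts[i]) break points,
        clamping values outside the break-point range. */
    public static double map(double value, double[] srcPts, double[] dstPts) {
        if (value <= srcPts[0]) return dstPts[0];
        int last = srcPts.length - 1;
        if (value >= srcPts[last]) return dstPts[last];
        int i = 0;
        while (value >= srcPts[i + 1]) i++;  // find the enclosing segment
        double slope = (dstPts[i + 1] - dstPts[i]) / (srcPts[i + 1] - srcPts[i]);
        return dstPts[i] + (value - srcPts[i]) * slope;
    }

    public static void main(String[] args) {
        double[] src = {2, 4, 6, 8};
        double[] dst = {1, 4, 12, 20};
        System.out.println(map(1, src, dst));  // 1.0  (below first point)
        System.out.println(map(3, src, dst));  // 2.5  (1 + (3-2)*3/2)
        System.out.println(map(5, src, dst));  // 8.0  (4 + (5-4)*8/2)
        System.out.println(map(7, src, dst));  // 16.0 (12 + (7-6)*8/2)
        System.out.println(map(9, src, dst));  // 20.0 (above last point)
    }
}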
Unlike point operators, when computing the value of a destination pixel, area operators generally need to use more than a single pixel within a source image. For example, a smoothing filter can compute a destination pixel's value by averaging its corresponding source pixel with a region containing that source pixel's neighbors. The listed area operators are Border, BoxFilter, Convolve, Crop, GradientMagnitude, and MedianFilter. Because of the confusion that often arises between borders and the related concept of border extenders, a section titled “Creating Borders and Border Extenders” immediately follows Table 6.6 to discuss these concepts.
Operator | Parameter Block Format/Description |
---|---|
Border | addSource(PlanarImage pi); add(int leftBorderSize); add(int rightBorderSize); add(int topBorderSize); add(int bottomBorderSize); add(BorderExtender extenderType); The Border operator puts a border around the source image pi. The extenderType describes which type of border to use. This choice is usually specified by using BorderExtender.createInstance(int type), where type is one of the following: BorderExtender.BORDER_COPY BorderExtender.BORDER_ZERO BorderExtender.BORDER_REFLECT BorderExtender.BORDER_WRAP Alternatively, the extenderType can be specified through new BorderExtenderConstant(double[] constant) (see Listing 6.10) |
BoxFilter | addSource(PlanarImage pi); add(int boxWidth); add(int boxHeight); add(int boxXOrigin); add(int boxYOrigin); The BoxFilter operator convolves pi with a box kernel with dimensions of boxWidth, boxHeight and a center located at boxXOrigin, boxYOrigin. Each element of the box filter has a weight equal to 1/(boxWidth*boxHeight). |
Convolve | addSource(PlanarImage pi); add(KernelJAI kernel); The Convolve operator convolves pi with kernel kernel, where this kernelJAI object contains the kernel's shape, origin, and element values. |
Crop | addSource(PlanarImage pi); add(int xOrigin); add(int yOrigin); add(int width); add(int height); The Crop operator crops pi using a rectangle with an origin at xOrigin, yOrigin and dimensions of width, height. |
GradientMagnitude | addSource(PlanarImage pi); add(KernelJAI kernel1); add(KernelJAI kernel2); The GradientMagnitude operator computes the magnitude of the two values found by convolving pi with kernel1 and kernel2 independently. |
MedianFilter | addSource(PlanarImage pi); add(int maskShape); add(int maskSize); The MedianFilter operator performs median filtering of pi using a mask of size maskSize and a shape of one of the following: MedianFilterDescriptor.MEDIAN_MASK_SQUARE, MedianFilterDescriptor.MEDIAN_MASK_PLUS, MedianFilterDescriptor.MEDIAN_MASK_X, or MedianFilterDescriptor.MEDIAN_MASK_SQUARE_SEPARABLE, where this latter mask shape uses a square mask but, instead of computing the median of all pixels in the square, first computes a median value for each row and then computes the median of the calculated row medians. |
There are two ways to provide pixel data at locations past an image's natural boundaries. The first way is through the Border operation as described in Table 6.6. This method creates a border around an image by extending the image dimensions and filling the border area as specified: copy, constant, reflect, wrap, or zero.
copy— Border pixels replicate the values of the nearest edge and corner pixels.
constant— Border pixels are set to provided constant values.
reflect— Border pixels mirror the image's pixels across the image edge.
wrap— Border pixels repeat the image periodically, as if the image were tiled.
zero— Border pixels are set to zero.
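These fill rules are easy to see in one dimension. The sketch below is a plain-Java illustration under our own naming (it is not the BorderExtender API, and JAI's exact reflect behavior should be confirmed against the class documentation):

public class BorderDemo {
    /** Returns the value an extended pixel at position x would take
        for a 1-D "image" of length img.length, for each fill rule. */
    public static int extend(int[] img, int x, int type) {
        int len = img.length;
        switch (type) {
            case 0: // copy: clamp to the nearest edge pixel
                return img[Math.max(0, Math.min(len - 1, x))];
            case 1: // zero: everything outside the image is 0
                return (x < 0 || x >= len) ? 0 : img[x];
            case 2: // wrap: the image repeats periodically
                return img[((x % len) + len) % len];
            default: // reflect: the image mirrors at the edges
                if (x < 0) x = -x - 1;
                if (x >= len) x = 2 * len - x - 1;
                return img[x];
        }
    }

    public static void main(String[] args) {
        int[] img = {10, 20, 30};
        System.out.println(extend(img, -1, 0)); // copy:    10
        System.out.println(extend(img, -1, 1)); // zero:    0
        System.out.println(extend(img, -1, 2)); // wrap:    30
        System.out.println(extend(img, -1, 3)); // reflect: 10
        System.out.println(extend(img, 3, 3));  // reflect: 30
    }
}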
Listing 6.10 illustrates how the different border descriptors are used (see Figure 6.7).
In Figure 6.7, the top left image is the source image. In the following images (presented in order from top row to the bottom row), the following border types are illustrated: copy, zero, reflect, wrap, and constant.
The main purpose of the Border operator is to extend the image dimensions for visual purposes. It is often mistakenly used to supply pixel values beyond the normal image dimensions when an operation requires them, a situation that is very common with area operators such as Convolve, BoxFilter, and MedianFilter. The reason this type of image extension should be avoided is that once the Border operator is used, the created border becomes part of the image: it will be processed by all subsequent operators and will appear when the image is displayed. A better way to provide these additional pixel values is to use a border extender instead of a border.
Like a border, a border extender provides pixel values to operators that require values beyond the dimensions of an image. Unlike a border, border extenders are otherwise invisible. Thus, they don't extend the dimensions of the image, they don't get processed by other operators, and they don't appear when the image is displayed. In Figure 6.8 the first column depicts an original source image, the source image with a border extender and the source image with a border of width 10 pixels on each side. The second column depicts these three images filtered using a 19x19 box filter with an origin at (10, 10). Note that the images in this figure were created using Listing 6.11.
An important thing to remember about border extenders is that they aren't operators, but rendering hints. Thus, they are used in a rendering by creating a rendering hint key/value pair with the key being JAI.KEY_BORDER_EXTENDER and the value being a javax.media.jai.BorderExtender object. This key/value pair is then added to a RenderingHints object, that is,
BorderExtender extender;
extender = BorderExtender.createInstance(BorderExtender.BORDER_ZERO);
RenderingHints.Key extenderKey = JAI.KEY_BORDER_EXTENDER;
RenderingHints renderHints = new RenderingHints(extenderKey, extender);
This RenderingHints object can then be passed to JAI's create method when the RenderedOp is created. Listing 6.11 provides an example of the use of both borders and border extenders. This application produced the images shown in Figure 6.8.
package ch6;

import java.awt.*;
import javax.swing.*;
import java.io.*;
import java.awt.image.renderable.ParameterBlock;
import javax.media.jai.JAI;
import javax.media.jai.PlanarImage;
import javax.media.jai.BorderExtender;
import javax.media.jai.BorderExtenderConstant;
import javax.media.jai.RenderedOp;
import javax.media.jai.RenderedImageList;

/** BordersAndBorderExtenders -- this class illustrates box filtering
    using border extenders and using borders */
public class BordersAndBorderExtenders extends JFrame {

    public BordersAndBorderExtenders(String filename) {
        setTitle("ch6.BordersAndBorderExtenders");

        int extenderType = BorderExtender.BORDER_REFLECT;
        BorderExtender extender;
        extender = BorderExtender.createInstance(extenderType);
        RenderingHints.Key extenderKey = JAI.KEY_BORDER_EXTENDER;
        RenderingHints renderHints;
        renderHints = new RenderingHints(extenderKey, extender);

        RenderedOp sourceImage = loadImageFile(filename);
        RenderedOp filteredImage = filter(sourceImage);

        RenderedOp sourceImageWithExtender;
        sourceImageWithExtender = loadImageFile(filename, renderHints);
        RenderedOp filteredImageWithExtender;
        filteredImageWithExtender = filter(sourceImage, renderHints);

        // create image with black border of width 10 pixels on each side
        RenderedOp sourceImageWithBorder;
        sourceImageWithBorder = createBorderedImage(sourceImage, 10);
        RenderedOp filteredImageWithBorder;
        filteredImageWithBorder = filter(sourceImageWithBorder);

        getContentPane().setBackground(Color.white);
        getContentPane().setLayout(new GridLayout(3,2));
        getContentPane().add(new ch6Display(sourceImage));
        getContentPane().add(new ch6Display(filteredImage));
        getContentPane().add(new ch6Display(sourceImageWithExtender));
        getContentPane().add(new ch6Display(filteredImageWithExtender));
        getContentPane().add(new ch6Display(sourceImageWithBorder));
        getContentPane().add(new ch6Display(filteredImageWithBorder));

        printSize(sourceImage, "sourceImage");
        printSize(filteredImage, "filteredImage");
        printSize(sourceImageWithExtender, "sourceImageWithExtender");
        printSize(filteredImageWithExtender, "filteredImageWithExtender");
        printSize(sourceImageWithBorder, "sourceImageWithBorder");
        printSize(filteredImageWithBorder, "filteredImageWithBorder");

        /* add a little extra space so viewer can distinguish
           between the different images */
        Insets insets = getInsets();
        int xsize = 2*(sourceImage.getWidth()+40);
        xsize += (insets.left+insets.right);
        int ysize = 3*(sourceImage.getHeight()+40);
        ysize += (insets.top+insets.bottom);
        setSize(xsize, ysize);
        show();
    }

    private void printSize(PlanarImage pi, String name) {
        System.out.print("Size of " + name + " is ");
        System.out.println(pi.getWidth() + ", " + pi.getHeight());
    }

    private RenderedOp loadImageFile(String filename) {
        ParameterBlock pb = new ParameterBlock();
        pb.add(filename);
        return JAI.create("fileload", pb);
    }

    private RenderedOp loadImageFile(String filename, RenderingHints rh) {
        ParameterBlock pb = new ParameterBlock();
        pb.add(filename);
        return JAI.create("fileload", pb, rh);
    }

    private RenderedOp createBorderedImage(PlanarImage pi, int length) {
        ParameterBlock borderParams = new ParameterBlock();
        borderParams.addSource(pi);
        borderParams.add(new Integer(length)); // left
        borderParams.add(new Integer(length)); // right
        borderParams.add(new Integer(length)); // top
        borderParams.add(new Integer(length)); // bottom
        int extenderType = BorderExtender.BORDER_REFLECT;
        borderParams.add(BorderExtender.createInstance(extenderType));
        return JAI.create("Border", borderParams);
    }

    /** filter using a 19x19 box filter with an origin of 10,10 */
    private RenderedOp filter(PlanarImage pi) {
        ParameterBlock param = new ParameterBlock();
        param.addSource(pi);
        param.add(19);
        param.add(19);
        param.add(10);
        param.add(10);
        return JAI.create("Boxfilter", param);
    }

    private RenderedOp filter(PlanarImage pi, RenderingHints rh) {
        ParameterBlock param = new ParameterBlock();
        param.addSource(pi);
        param.add(19);
        param.add(19);
        param.add(10);
        param.add(10);
        return JAI.create("Boxfilter", param, rh);
    }

    public static void main(String[] args) {
        if (args.length != 1) {
            System.err.print("USAGE: ");
            System.err.println("BordersAndBorderExtenders imageFilename");
        } else
            new BordersAndBorderExtenders(args[0]);
    }
}
The typical output for Listing 6.11 is the following:
Size of sourceImage is 256, 256
Size of filteredImage is 256, 256
Size of sourceImageWithExtender is 256, 256
Size of filteredImageWithExtender is 256, 256
Size of sourceImageWithBorder is 276, 276
Size of filteredImageWithBorder is 276, 276
Note that the border extenders didn't increase the image dimensions.
Geometric operators calculate destination pixel values by spatially transforming a destination image. In other words, each location in a destination image is transformed into a location in a source image. Because these new pixel locations might not correspond to integer values, interpolation must be used in order to derive an appropriate value for that location using the surrounding source pixel values. That calculated value will then be applied to the original destination pixel location. (For more information regarding interpolation, see Chapter 4.)
Because of this need for interpolation, most of the geometric operators require an interpolation type to be specified. This is done by providing an instance of a subclass of the javax.media.jai.Interpolation class. The possible subclasses are javax.media.jai.InterpolationNearest (the default), javax.media.jai.InterpolationBilinear, javax.media.jai.InterpolationBicubic, and javax.media.jai.InterpolationBicubic2, for nearest-neighbor, bilinear, and two different types of bicubic polynomial interpolation, respectively.
For many of these geometric operations, there will be times when the operator requires image data that isn't available. For example, a translation of 20 pixels in the x direction will leave 20 columns in the destination image not containing data from the source image. One way to control what pixel values are placed in these columns is to specify a border extender as described in the previous section. The geometric operators are Affine, Rotate, Scale, Shear, Translate, Transpose, and Warp (see Table 6.7).
It is important to note a discrepancy between the previous discussion and the operator descriptions in Table 6.7. As is commonly done, the operators are described as applying some type of transformation to a source image. What actually occurs is that the inverse transformation is applied to the destination image. This is done in order to obtain destination pixel values in the manner just described.
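The inverse-mapping idea can be sketched for a pure translation in plain Java (illustrative names; real geometric operators also interpolate at fractional source positions and honor border extenders):

public class BackwardMapDemo {
    /** Computes a translated destination image by inverse mapping:
        each destination pixel (x, y) looks up source (x - tx, y - ty),
        using 0 where no source data exists. */
    public static int[][] translate(int[][] src, int tx, int ty) {
        int w = src.length, h = src[0].length;
        int[][] dst = new int[w][h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++) {
                int sx = x - tx, sy = y - ty; // inverse transform
                if (sx >= 0 && sx < w && sy >= 0 && sy < h)
                    dst[x][y] = src[sx][sy];
                else
                    dst[x][y] = 0; // no source data available here
            }
        return dst;
    }

    public static void main(String[] args) {
        int[][] src = { {1, 2}, {3, 4} };
        int[][] dst = translate(src, 1, 0); // shift right by one pixel
        System.out.println(dst[1][0] + ", " + dst[1][1]); // 1, 2
        System.out.println(dst[0][0]);                    // 0
    }
}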
Two operators, ErrorDiffusion and OrderedDither, are used for situations in which the output device cannot represent the colors contained in the image. For example, a monitor might be limited to only displaying 256 colors, whereas a JPEG image might have thousands of different colors that need to be represented (see Table 6.8).
Statistical operators are unique because they don't change any of the pixel values in the source image (see Table 6.9). Their only effect is to add one or more properties to a PlanarImage. For example, the Extrema operator adds a property called "minimum", which represents the minimum value in each band; a property called "maximum", which represents the maximum value in each band; and a property called "extrema", which represents both the minimum and maximum values in each band. Thus, this line of code:
double[] minValuesForEachBand = (double [])planarImage.getProperty("minimum");
has the same effect as these two lines:
double[][] extrema = (double[][])planarImage.getProperty("extrema"); double[] minValuesForEachBand = extrema[0];
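The layout of the "extrema" property can be mimicked with a plain-Java computation over per-band sample arrays (illustrative names, not the JAI property mechanism itself):

public class ExtremaDemo {
    /** Computes per-band minimum and maximum values, mirroring the
        "extrema" property layout: result[0] holds the minima and
        result[1] holds the maxima, one entry per band. */
    public static double[][] extrema(double[][] samplesPerBand) {
        int bands = samplesPerBand.length;
        double[][] result = new double[2][bands];
        for (int b = 0; b < bands; b++) {
            double min = samplesPerBand[b][0];
            double max = samplesPerBand[b][0];
            for (double s : samplesPerBand[b]) {
                if (s < min) min = s;
                if (s > max) max = s;
            }
            result[0][b] = min;
            result[1][b] = max;
        }
        return result;
    }

    public static void main(String[] args) {
        double[][] bands = { {3, 1, 4}, {10, 7, 9} };
        double[][] ext = extrema(bands);
        // minimum of band 0 and maximum of band 1
        System.out.println(ext[0][0] + ", " + ext[1][1]); // 1.0, 10.0
    }
}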
In this section, we will examine the operators for converting to and from the frequency domain, as well as some other operators that are useful for frequency domain filtering. These operators are Conjugate, DCT, DFT, IDCT, IDFT, ImageFunction, Magnitude, MagnitudeSquared, PeriodicShift, Phase, and PolarToComplex (see Table 6.10). Prior to this examination, we first need to take a closer look at complex images.
As previously discussed, a complex image is similar to a regular image except that it has two components: a real component and an imaginary component. Thus, a grayscale complex image requires two bands to represent it, and an RGB complex image requires six bands. For this reason, in any operator that converts from the spatial domain to the frequency domain (DCT and DFT), the number of bands in the returned image will be twice that of the source image. Likewise, in any operator converting from the frequency domain to the spatial domain (IDCT and IDFT), the number of bands in the returned image will be half that of the source image.
An example of many of these operators can be found in Listing 6.12. In this listing, the DFT of a source image is computed and the resulting complex image is processed for display purposes (see Figure 6.9).