Pixel Storage and Conversion

Each location in an Image is associated with a single pixel, but a pixel isn't the smallest unit of interest. Each pixel contains one or more samples representing the different bands in the Image. For example, pixels representing a color image could have samples of red, green, and blue, or of alpha, red, green, and blue, where alpha is a measure of transparency rather than a color component. Similarly, pixels representing a grayscale image might contain only one sample. Output devices must also be considered: a pixel may represent bands of red, green, and blue, but an output device, such as a printer, might expect bands of cyan, magenta, and yellow. So when working with image data, two steps are required:

1. Extract the samples from a pixel given that pixel's location.

2. Interpret and convert (if necessary) these samples.

These tasks are performed by the java.awt.image.Raster and java.awt.image.ColorModel classes, respectively.

Rasters

A Raster is made up of two main objects, a java.awt.image.DataBuffer and a java.awt.image.SampleModel. The DataBuffer stores the Image's pixel data, and the SampleModel understands how that storage is organized. Thus, the SampleModel can get the appropriate pixel samples from the DataBuffer given a pixel location. The entire process of converting a pixel location into pixel samples proceeds as follows (see Figure 4.3):

1. A Raster is passed a pixel location.

2. It gives this location to its SampleModel.

3. The SampleModel obtains the correct samples from its corresponding DataBuffer.

4. The SampleModel then gives these values back to the Raster so that they can be passed on for interpretation and conversion.

Figure 4.3. The Raster's SampleModel uses its corresponding DataBuffer to convert a pixel's location into samples.


The Raster class provides methods to access the data contained in a DataBuffer, whereas a Raster subclass, the WritableRaster, adds the capability to change this data. One other point regarding Rasters is that they do more than simply pass coordinates to the SampleModel and return the results. A Raster allows image data to be used with an x and/or y offset, whereas a SampleModel doesn't (SampleModels always have an origin of 0, 0). In order to find the difference between the SampleModel's origin and the Raster's origin, you can use the Raster's public int getSampleModelTranslateX() and public int getSampleModelTranslateY() methods. So, in addition to containing a DataBuffer and a SampleModel, a Raster contains a java.awt.Point representing its origin.

DataBuffers and SampleModels

A DataBuffer stores the pixel data as one or more arrays of some primitive data type. For example, for image data containing bands of alpha, red, green, and blue, with each band consisting of 1 byte, three common ways to provide storage are as follows:

  • SinglePixelPacked technique— Each array element represents all the pixel samples for a particular location. In this case, these packed samples are held in a DataBuffer containing a single array of type integer (see Figure 4.4). This is by far the most common method of storing image data. An example of how the different 8-bit components are extracted from a 32-bit integer, as well as how they are packed back into an integer, can be found in Listing 4.3.

    Figure 4.4. In this integer array, each element contains all the samples from a single pixel. Each sample uses eight of the integer's 32 bits.

  • BandedSample technique— Each array element represents a single sample with all the alpha components in one array, the red components in another array, and likewise, the green and blue components in two other arrays. In this case, the DataBuffer object would contain four arrays of type byte. To find all of the samples for pixel number n, simply take the nth element from each array (see Figure 4.5).

    Figure 4.5. In the top byte array, each element represents the alpha sample from a different pixel. The red, green, and blue samples are held in three other arrays.

  • PixelInterleaved technique— Each array element represents a single sample with interleaved alpha, red, green, and blue components. In this case, the DataBuffer would contain a single array of type byte. To find all of the samples for pixel n, simply take elements 4*n, 4*n+1, 4*n+2, and 4*n+3 (see Figure 4.6).

    Figure 4.6. In this byte array, the different samples alternate within a single array.

There is one more common image storage method, but this one is for single-band images (images with a single sample per pixel), such as grayscale images. In this MultiPixelPacked technique, a single packed primitive data type holds the samples of more than one pixel (see Figure 4.7).

Figure 4.7. The gray samples from four different pixels are packed into a single integer.


Two of the four techniques (SinglePixelPacked and MultiPixelPacked) are packed techniques—meaning that each array element represents more than one sample. The other two techniques (Banded and Interleaved) are component techniques—meaning that each array element represents one and only one sample.
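To make this distinction concrete, the following sketch (the class name and sample values are made up for illustration, not taken from any listing in this chapter) manually recovers pixel n's four samples from a SinglePixelPacked integer array and from a PixelInterleaved byte array holding the same two-pixel image:

public class PackedVsInterleaved {
    public static void main(String[] args) {
        int n = 1;    // pixel number to examine

        // SinglePixelPacked: one int holds all four 8-bit samples
        int[] packed = {0xff102030, 0xff405060};
        int pixel = packed[n];
        int alpha = (pixel >> 24) & 0xff;
        int red   = (pixel >> 16) & 0xff;
        int green = (pixel >>  8) & 0xff;
        int blue  =  pixel        & 0xff;
        System.out.println(alpha + " " + red + " " + green + " " + blue);

        // PixelInterleaved: samples alternate A,R,G,B within one byte array
        byte[] interleaved = {(byte)0xff, 0x10, 0x20, 0x30,
                              (byte)0xff, 0x40, 0x50, 0x60};
        alpha = interleaved[4*n]   & 0xff;
        red   = interleaved[4*n+1] & 0xff;
        green = interleaved[4*n+2] & 0xff;
        blue  = interleaved[4*n+3] & 0xff;
        // both lines print 255 64 80 96 -- the same pixel stored two ways
        System.out.println(alpha + " " + red + " " + green + " " + blue);
    }
}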

As you've just seen, there are many different ways to represent pixel color components using arrays, so it would be difficult to communicate directly with the DataBuffer object. For this reason, SampleModels are used. A SampleModel can be thought of as the brains behind the data storage because each one understands the organization of its corresponding DataBuffer. Given an x, y location, it can obtain the corresponding pixel samples from the array or arrays in the DataBuffer without the user having to know anything about the actual data allocation. Different subclasses of the SampleModel class know how to find pixel samples from DataBuffers that use different storage techniques (see Table 4.2 and the sketch that follows it).

Table 4.2. SampleModel Subclasses
SampleModel Subclass Description
SinglePixelPackedSampleModel Knows how to obtain pixel samples when the DataBuffer is storing all of a pixel's samples in one array element (refer to Figure 4.4).
ComponentSampleModel Knows how to obtain pixel samples when the DataBuffer is storing each sample in a separate array element. Parent class of BandedSampleModel and PixelInterleavedSampleModel.
BandedSampleModel Knows how to obtain pixel samples when the DataBuffer contains separate arrays for each band (refer to Figure 4.5).
PixelInterleavedSampleModel Knows how to obtain pixel samples when the DataBuffer contains a single array whose elements alternate between the different bands (refer to Figure 4.6).
MultiPixelPackedSampleModel Knows how to obtain pixel samples when the image data represents a single band and the DataBuffer is storing more than one sample in a single array element (refer to Figure 4.7).
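As a small illustration of the SampleModel acting as the brains behind the storage, the following sketch (the class name and the 2x2, three-band image data are made up for illustration) asks a BandedSampleModel to convert an x, y location into samples without the caller knowing how the DataBuffer's arrays are laid out:

import java.awt.image.BandedSampleModel;
import java.awt.image.DataBuffer;
import java.awt.image.DataBufferByte;

public class BandedLookup {
    public static void main(String[] args) {
        int w = 2, h = 2;
        byte[][] bands = {               // one array per band:
            {10, 11, 12, 13},            //   red samples
            {20, 21, 22, 23},            //   green samples
            {30, 31, 32, 33}             //   blue samples
        };
        DataBufferByte db = new DataBufferByte(bands, w * h);
        BandedSampleModel sm =
            new BandedSampleModel(DataBuffer.TYPE_BYTE, w, h, 3);

        // The SampleModel turns location (1, 0) into samples for us
        int[] samples = sm.getPixel(1, 0, (int[])null, db);
        // prints "11 21 31" -- the nth element from each band array
        System.out.println(samples[0] + " " + samples[1] + " " + samples[2]);
    }
}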

Creating and Using Rasters

The easiest way to create a Raster is to provide a SampleModel, a DataBuffer, and an offset Point to the Raster class's static createRaster or createWritableRaster methods:

static Raster createRaster(SampleModel sm, DataBuffer db, Point location)

static WritableRaster createWritableRaster(SampleModel sm,
                                           DataBuffer db, Point location)

As mentioned, this offset Point is used to translate the origin of the Raster because the SampleModel's origin is always (0, 0). If you don't want to translate the origin of the Raster, you can simply use null for the offset Point. It should be noted that it is also common to create Rasters without first creating a SampleModel. This is done by using the Raster's createBandedRaster, createInterleavedRaster, or createPackedRaster methods, which internally create a BandedSampleModel, a PixelInterleavedSampleModel, or either a SinglePixelPackedSampleModel or a MultiPixelPackedSampleModel, respectively.
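As a minimal sketch of this (the class name, dimensions, and band count are arbitrary), the following creates one interleaved WritableRaster with the default origin and another whose origin has been translated, then checks the translation:

import java.awt.Point;
import java.awt.image.DataBuffer;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class MakeRasters {
    public static void main(String[] args) {
        // Factory method; a PixelInterleavedSampleModel is built internally
        WritableRaster wr = Raster.createInterleavedRaster(
            DataBuffer.TYPE_BYTE, 100, 50, 4, null);
        System.out.println(wr.getMinX() + "," + wr.getMinY());  // 0,0

        // The same kind of raster, with its origin translated to (10, 20)
        WritableRaster shifted = Raster.createInterleavedRaster(
            DataBuffer.TYPE_BYTE, 100, 50, 4, new Point(10, 20));
        System.out.println(shifted.getMinX() + "," + shifted.getMinY()); // 10,20
        System.out.println(shifted.getSampleModelTranslateX());          // 10
    }
}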

In Listing 4.3, we used bit operations to extract the alpha, red, green, and blue color components from a packed integer. In a sense, we were acting like a SinglePixelPackedSampleModel because we knew how the data was stored and were able to convert a pixel location into a set of pixel samples; in this case, alpha, red, green, and blue. Under these conditions, this is a reasonable way to obtain these samples (in fact, if you know where and how your data is stored, bitwise operations are the most efficient way to work with pixels), but in general it isn't the most robust. If you rely on bitwise mathematics, you're forced to understand how your data is stored, and, more importantly, you won't be able to write generic algorithms for pixel processing. In other words, your pixel processing methods should simply take a Raster or WritableRaster argument without worrying about the SampleModel being used. Inside your methods, you can use these Raster methods for obtaining either individual pixel samples or an array of a pixel's samples:

int getSample(int x, int y, int bandNumber)

where bandNumber is the number of the band whose sample you want. Usually, 0 = red, 1 = green, 2 = blue, and 3 = alpha, although the band order ultimately depends on how the SampleModel was defined; in Listing 4.5, for example, the band masks make band 0 the alpha band.

int[] getPixel(int x, int y, int[] iArray)

where iArray is an integer array whose size is greater than or equal to the number of samples in the pixel. If this value isn't null, it will also be the returned object. If this value is null, an appropriate array is allocated, filled, and returned. If a WritableRaster is used, the following two methods for setting pixel values are available:

void setSample(int x, int y, int bandNumber, int sampleValue)

where sampleValue will be the new value of the pixel sample corresponding to band number bandNumber.

void setPixel(int x, int y, int[] iArray)

where iArray is an integer array holding the pixel's new sample values (one sample per array element). To see how these methods are used, we will redo the earlier GrabandFade example (from Listing 4.3), this time using DataBuffers, SampleModels, and Rasters (see Listing 4.5).

Listing 4.5 GrabandFadewithRasters
package ch4;

import java.awt.*;
import java.applet.*;
import java.net.*;
import java.awt.image.PixelGrabber;
import java.awt.image.MemoryImageSource;
import java.awt.image.DataBuffer;
import java.awt.image.DataBufferInt;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;
import java.awt.image.SampleModel;
import java.awt.image.SinglePixelPackedSampleModel;


/**
 * GrabandFadewithRasters.java -- displays provided image
 * and then slowly fades to black
 */
public class GrabandFadewithRasters extends Applet {
    private Image originalImage;
    private Image newImage;
    private MemoryImageSource mis;
    private int width;
    private int height;
    private int index = 10;
    private int[] originalPixelArray;
    private boolean imageLoaded = false;
    private WritableRaster raster;
    private String imageURLString = "file:images/peppers.png";

    public void init() {
        URL url;
        try {
            url = new URL(imageURLString);
            originalImage = getImage(url);
        }
        catch (MalformedURLException me) {
            showStatus("Malformed URL: " + me.getMessage());
        }

        try {
            PixelGrabber grabber = new PixelGrabber(originalImage,
                                                    0, 0, -1, -1, true);
            if (grabber.grabPixels()) {
                width = grabber.getWidth();
                height = grabber.getHeight();
                originalPixelArray = (int[])grabber.getPixels();

                mis = new MemoryImageSource(width, height,
                                            originalPixelArray,0, width);
                mis.setAnimated(true);
                newImage = createImage(mis);
            }
            else {
                System.err.println("Grabbing Failed");
            }
        }
        catch (InterruptedException ie) {
            System.err.println("Pixel Grabbing Interrupted");
        }

        DataBufferInt dbi = new DataBufferInt(originalPixelArray,
                                              width*height);

        int bandmasks[] = {0xff000000,0x00ff0000,0x0000ff00,0x000000ff};
        SampleModel sm;
        sm = new SinglePixelPackedSampleModel(DataBuffer.TYPE_INT,
                                              width, height, bandmasks);

        raster = Raster.createWritableRaster(sm, dbi, null);
    }

    public void update(Graphics g) {
        paint(g);
    }

    public void paint(Graphics g) {
        int sourceRed, sourceGreen, sourceBlue;
        if (newImage != null) {
            g.drawImage(newImage, 0, 0, this);
            if (imageLoaded == false) {
                imageLoaded = true;
                for (int x =0; x < width; x+=1)
                    for (int y =0; y < height; y+=1) {
                        // Band 0 holds alpha; bands 1, 2, and 3 hold red,
                        // green, and blue (see the bandmasks array above)
                        sourceRed = raster.getSample(x,y,1);
                        sourceGreen = raster.getSample(x,y,2);
                        sourceBlue = raster.getSample(x,y,3);

                        if (sourceRed > index) {
                            sourceRed-=index;
                            imageLoaded = false;
                        }
                        else
                            sourceRed = 0;

                        if (sourceGreen > index) {
                            sourceGreen-=index;
                            imageLoaded = false;
                        }
                        else
                            sourceGreen = 0;

                        if (sourceBlue > index) {
                            sourceBlue-=index;
                            imageLoaded = false;
                        }
                        else
                            sourceBlue = 0;

                        raster.setSample(x,y,1,sourceRed);
                        raster.setSample(x,y,2,sourceGreen);
                        raster.setSample(x,y,3,sourceBlue);
                    }
                mis.newPixels();
            }
        }
    }
}

In the previous discussion as well as in Listing 4.3, we only considered a single pixel at a time. In practice, it is more efficient to deal with arrays of pixels, and both the getPixels/setPixels methods and the getSamples/setSamples methods allow you to do this.
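For example, the inner loop of Listing 4.5 could be rewritten as the following method sketch (not part of the original listing, and assuming the imports from Listing 4.5), which fades every sample with a single getPixels/setPixels pair. It assumes, as in Listing 4.5, that band 0 holds alpha:

    // fadePixels -- a bulk-processing sketch; fades all color samples by
    // index and reports whether the fade is complete
    static boolean fadePixels(WritableRaster raster, int width,
                              int height, int index) {
        boolean done = true;
        int[] samples = raster.getPixels(0, 0, width, height, (int[])null);
        for (int i = 0; i < samples.length; i++) {
            if (i % 4 == 0)            // band 0 is alpha; leave it alone
                continue;
            if (samples[i] > index) {
                samples[i] -= index;
                done = false;          // at least one sample is still fading
            }
            else
                samples[i] = 0;
        }
        raster.setPixels(0, 0, width, height, samples);
        return done;
    }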

Last, there is one other way to get and set pixel data from a Raster and that is with the Raster's getDataElements/setDataElements methods:

Object getDataElements(int x, int y, Object outData)

void setDataElements(int x, int y, Object inData)

where outData and inData are references to arrays of the type defined by the Raster's getTransferType method. These getDataElements/setDataElements methods transfer the samples in a form that depends on the type of SampleModel being used (see Table 4.3). For example, in a SinglePixelPackedSampleModel, the pixel samples are held in a packed primitive data type, which is the transfer type. For a MultiPixelPackedSampleModel, the pixel sample is taken out of its packed primitive data type and returned in the smallest data type that can represent it. For ComponentSampleModels, the samples are returned in an array of whatever type held the samples. Because the getDataElements methods return pixel samples differently depending on the underlying SampleModel, care must be taken when using them. On the other hand, they are useful for efficiently transferring data between Rasters with similar SampleModels, that is,

raster1.setDataElements(x, y, raster2.getDataElements(x, y, null))

or

raster1.setDataElements(x, y, w, h, raster2.getDataElements(x, y, w, h, null))

where x and y represent either the pixel location (in the first method) or the origin of the rectangle to be copied (in the second method). Likewise, w and h represent the width and height of this rectangle. As you'll see, these methods are also useful for transferring data between Rasters and ColorModels.

Table 4.3. Raster Transfer Types
Raster's SampleModel Class Raster's Transfer Type
SinglePixelPackedSampleModel Packed primitive data type
MultiPixelPackedSampleModel Smallest primitive data type that can represent an unpacked sample
ComponentSampleModel Array of whatever type held the samples

ColorModels

When looking at Listing 4.5, it appears that all the pieces necessary to convert pixels into color components are available, but one thing is still missing. That piece is the java.awt.image.ColorModel. The ColorModel takes pixel samples returned by the Raster and converts them to color components. As you've noticed in the previous examples, there are times when a pixel's samples are identical to the output device's required color components. For example, let's assume that our output device is a color monitor that requires color components of red, green, and blue with each component being an integer value between 0 and 255. If each pixel of our image data contains three samples representing red, green, and blue with each sample being between 0 and 255, a ColorModel object isn't necessary. On the other hand, what if the pixel samples are packed into a short integer (16 bits) instead of an integer (32 bits)? Then a reasonable scheme would be for each sample to be represented by 5 bits, allowing 32 possible values per sample. If you tried to use these pixel samples directly as color components, the image would appear too dark when displayed (a maximum 5-bit sample of 31 would be treated as a very dark value on a 0-255 scale). In this case, a ColorModel is necessary to make the correct conversions.

Because the ColorModel is concerned with converting pixel samples to color components and vice versa, it requires two sets of methods: one to convert pixel samples to color components and one to convert color components to pixel samples. These method groups are the getComponents and the getDataElement methods, respectively. (Note that when we talk about color components, we also include alpha when it is relevant.) The two main getComponents methods are as follows:

int[] getComponents(Object pixel, int[] components, int offset)

int[] getComponents(int pixel, int[] components, int offset)

In the first method, the pixel parameter is expected to be an array of the ColorModel's transfer type, which, for compatibility, should be the same as the Raster's transfer type (refer to Table 4.3). The components parameter is an integer array used to hold the color components. If this array is non-null, a reference to it will also be returned by this method. If the components array is null, an appropriately sized integer array will be allocated and returned. Last, the offset parameter specifies where to begin putting the color components in the components array. The second method is really a special case of the first one. This special case occurs when you are using a ColorModel subclass that expects the pixel samples to be packed into a single integer. As you'll see in the next section, this ColorModel subclass is called a DirectColorModel.

As mentioned, the getDataElement methods convert color components to pixel samples. The two main getDataElement methods are the following (note that the first method is called getDataElements and the second is called getDataElement):

Object getDataElements(int[] components, int offset, Object obj)

int getDataElement(int[] components, int offset)

Again, the first method is concerned with arrays of the transfer type, where the components parameter holds the color components, the offset parameter describes where the first color component is in the components array, and the obj parameter is an array of the transfer type that will hold the pixel samples. If the obj array is non-null, a reference to it will be returned. If this array is null, an appropriate array will be allocated and returned. In the second method, the pixel samples will be returned packed into a single integer.
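The following sketch (the class name and component values are arbitrary) shows the round trip for a DirectColorModel: getDataElement packs the color components into a single-integer pixel, and getComponents recovers them:

import java.awt.image.DirectColorModel;

public class RoundTrip {
    public static void main(String[] args) {
        DirectColorModel dcm = new DirectColorModel(32,
                0x00ff0000, 0x0000ff00, 0x000000ff, 0xff000000);

        int[] components = {30, 20, 10, 255};   // red, green, blue, alpha
        // Color components -> packed pixel samples
        int pixel = dcm.getDataElement(components, 0);
        System.out.println(Integer.toHexString(pixel));   // ff1e140a

        // Packed pixel samples -> color components
        int[] back = dcm.getComponents(pixel, null, 0);
        System.out.println(back[0] + " " + back[1] + " " + back[2]); // 30 20 10
    }
}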

Creating and Using ColorModels

Because the Raster and the ColorModel need to work together, some care is required to make sure that they are compatible. For instance, the number of bands of pixel samples must match the number of components expected by the ColorModel. Also, the transfer type must be compatible. In other words, if the SampleModel is sending four pixel samples packed into a single integer, that is how the ColorModel should expect them (see Table 4.4).

Table 4.4. Typical Correspondence Between SampleModels and ColorModels
SampleModel Subclass ColorModel Subclass
SinglePixelPackedSampleModel DirectColorModel (subclass of abstract PackedColorModel)
BandedSampleModel ComponentColorModel
PixelInterleavedSampleModel ComponentColorModel
MultiPixelPackedSampleModel IndexColorModel

The DirectColorModel is used when the image pixels represent red, green, and blue (and possibly alpha) samples, and these samples are packed together into an integer, short integer, or byte. A ComponentColorModel is used when each image pixel represents all of its color (and possibly alpha) information as separate samples, and all samples are stored in separate data elements. The IndexColorModel is used when the image pixels represent indices into an array containing the actual pixel samples. This is a common technique for grayscale images that are used with an output device, such as a monitor, that expects pixel samples of red, green, and blue. For example, without indexing, if there are three samples per pixel (red, green, and blue) and each one takes 8 bits, the memory required is 8*3*imageWidth*imageHeight bits. If we are using a grayscale image, so that the red, green, and blue samples must be equal, there are only 256 different combinations of pixel samples that can be used. Thus, a grayscale image only requires 8*imageWidth*imageHeight bits of memory if each image pixel is a single byte used as an index to obtain the red, green, and blue pixel samples from three arrays. Of course, this latter calculation isn't completely accurate because the red, green, and blue arrays must be allocated, which would take up an additional 3*256 bytes.
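A minimal sketch of this indexing scheme (the class name and the simple identity gray ramp are made up for illustration) follows; each of the 256 palette entries holds identical red, green, and blue samples, and a one-byte pixel value selects one of them:

import java.awt.image.IndexColorModel;

public class GrayIndex {
    public static void main(String[] args) {
        byte[] gray = new byte[256];
        for (int i = 0; i < 256; i++)
            gray[i] = (byte)i;        // palette entry i maps to gray level i

        // Identical red, green, and blue arrays give a grayscale palette
        IndexColorModel icm = new IndexColorModel(8, 256, gray, gray, gray);

        int pixel = 200;              // a one-byte pixel value used as an index
        System.out.println(icm.getRed(pixel) + " " + icm.getGreen(pixel)
                           + " " + icm.getBlue(pixel));   // 200 200 200
    }
}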

As an example of the ColorModel's role in interpreting pixel samples, consider Listing 4.6. In this listing, two DirectColorModels are created. The first one expects 8-bit pixel samples of red, green, blue, and alpha. The second one expects 5-bit pixel samples of red, green, and blue. In both cases, the red, green, and blue samples are the same, but the normalized color components (color components whose values vary from 0.0 to 1.0) are very different. In other words, the ColorModels know the allowable range of the sample values, and they consider the ratio of the sample value to its maximum value.

Listing 4.6 FindComponents
package ch4;

import java.awt.image.DirectColorModel;
public class FindComponents {
    DirectColorModel dcm32;
    DirectColorModel dcm16;
    int[] components;
    float[] componentsf;
    int value32;
    short value16;
    int red8, green8, blue8, alpha8;
    short red5, green5, blue5;

    /**
     * FindComponents.java -- prints out normalized color components for two
     * different DirectColorModels
     */
    public FindComponents() {
        red8 = red5 = 30;
        green8 = green5 = 20;
        blue8 = blue5 = 10;
        alpha8 = 255;

        dcm32 = new DirectColorModel(32, 0x00ff0000, 0x0000ff00,
                                     0x000000ff, 0xff000000);
        value32 = (alpha8<<24) + (red8<<16) + (green8<<8) + blue8;
        components = dcm32.getComponents(value32, null, 0);
        componentsf = dcm32.getNormalizedComponents(components,0,null,0);
        System.out.println("Normalized components are: ");
        for(int i=0;i<componentsf.length;i++)
            System.out.println("\t" + componentsf[i]);

        dcm16 = new DirectColorModel(16, 0x7c00, 0x3e0, 0x1f);
        value16 = (short)((red5<<10) + (green5<<5) + blue5);
        components = dcm16.getComponents(value16, null, 0);
        componentsf = dcm16.getNormalizedComponents(components,0,null,0);
        System.out.println("Normalized components are: ");
        for(int i=0;i<componentsf.length;i++)
            System.out.println("\t" + componentsf[i]);
    }

    public static void main(String[] args) {
        new FindComponents();
    }
}

When run, the output of this listing would be the following:

Normalized components are:
        0.11764706
        0.078431375
        0.039215688
        1.0
Normalized components are:
        0.9677419
        0.6451613
        0.32258064

ColorSpaces

Another interesting example, which leads us to the concept of color space, is if we are using a printer for our output device. In this situation, the color components might need to be cyan, magenta, and yellow (CMY); in which case, we'll need to convert each pixel into three color components just as we did for the color monitor that needed red, green, and blue (RGB) components. So clearly just converting pixels into color components isn't enough: We still need to interpret these components. This interpretation is the job of the java.awt.color.ColorSpace. In other words, the number, order, and interpretation of color components for a ColorModel is specified by its ColorSpace. Thus, given three color components, you would need to look at the ColorModel's ColorSpace in order to understand whether they are CMY, RGB, or something else entirely.

Although there are many different color spaces, in Java the two most important are the sRGB color space and the CIEXYZ color space. All ColorSpaces have methods to convert to and from these two color spaces. The sRGB color space is a proposed standard RGB color space that all ColorModels use by default. For more information regarding this color space, see http://www.w3.org/pub/WWW/Graphics/Color/sRGB.html. Because most people are familiar with representing colors using red, green, and blue components, this color space is easy to work with. A minor problem with the sRGB color space, however, is that it is possible to lose information if you convert from one color space to another by going through an intermediate sRGB color space. The CIEXYZ color space, on the other hand, can be used to convert between any two color spaces without worrying about lost information. Besides the ideal sRGB and the ideal CIEXYZ, Java provides a few other ideal color spaces, such as GRAY.
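As a brief sketch of such a conversion (the class name and component values are arbitrary), a color can be moved from one ideal color space to another by passing through CIEXYZ:

import java.awt.color.ColorSpace;

public class SpaceHop {
    public static void main(String[] args) {
        ColorSpace srgb = ColorSpace.getInstance(ColorSpace.CS_sRGB);
        ColorSpace gray = ColorSpace.getInstance(ColorSpace.CS_GRAY);

        float[] rgb = {0.9f, 0.4f, 0.1f};   // normalized sRGB components
        float[] xyz = srgb.toCIEXYZ(rgb);   // into the conversion space
        float[] g   = gray.fromCIEXYZ(xyz); // out to the single-band gray space

        System.out.println("gray level: " + g[0]);
    }
}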

Conversions between different color spaces are performed using profiles. Profiles define the transformation between a particular color space and something called a Profile Connection Space (PCS). Each profile describes how to transform a color from its color space to the PCS and vice versa. Therefore, using profiles, you can convert a color in any color space to any other color space by going through the PCS. Of course, no input or output device is ideal, so if exact color replication is desired, profiles can also be used to transform to and from a non-ideal, device-dependent color space. For more information on profiles, see the International Color Consortium Web site at http://www.color.org.

So to summarize this section, if you are working with red, green, and blue color components, you can ignore the ColorSpace class most of the time. Similarly, if you are using packed integer ARGB or RGB data, you often don't need to use a Raster because the bit manipulation isn't that difficult. On the other hand, to create generic, robust code you will need Rasters, ColorModels, and ColorSpaces, and you will need them to be compatible. As an illustration, if we augment Figure 4.3, you can see how the output of the Raster interacts with the ColorModel so that the ColorModel can extract and interpret (via its ColorSpace) the color components (see Figure 4.8). Later in this chapter, we will introduce the BufferedImage class, which contains all these objects, thus greatly simplifying the coding of image processing software.

Figure 4.8. The Raster's SampleModel uses its corresponding DataBuffer to convert a pixel's location into samples. These samples get passed to a ColorModel for conversion into color components in the appropriate color space.

