Java Images: A Raster and a ColorModel

The previous example is a simple description of what would be referred to as a Raster with a grayscale ColorModel in Java. In other words, the Raster consists of a rectangular array of the pixel values, and the ColorModel contains methods to convert pixel data into colors. Together, they provide the information we need to render the image.
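To make this concrete, here is a minimal sketch of how a BufferedImage exposes both pieces; the image size and type are arbitrary choices for illustration:

import java.awt.image.BufferedImage;
import java.awt.image.ColorModel;
import java.awt.image.Raster;

public class RasterAndColorModel {
    public static void main(String[] args) {
        // A small grayscale image; the size and type are arbitrary choices here.
        BufferedImage image = new BufferedImage(4, 3, BufferedImage.TYPE_BYTE_GRAY);

        // The Raster holds the rectangular array of pixel values...
        Raster raster = image.getRaster();
        // ...and the ColorModel knows how to turn those values into colors.
        ColorModel colorModel = image.getColorModel();

        System.out.println("Raster: " + raster.getWidth() + "x" + raster.getHeight()
                + ", samples per pixel: " + raster.getNumBands());
        System.out.println("ColorModel: " + colorModel.getClass().getSimpleName());
    }
}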

Two pieces make up the Raster: a DataBuffer containing the actual numbers, and a SampleModel, which groups the numbers into pixels. In the Java 2D API, the sample is the atomic unit of image data. In the case of the grayscale image described previously, one sample is equivalent to one pixel. However, in the case of color data, there will be multiple samples for each pixel. For example, in an RGB image there are three samples per pixel, one each for red, green, and blue; an image with transparency (ARGB) adds a fourth sample for the alpha value. The data can be stored in a wide variety of orders (for example, all red values, followed by all green, followed by all blue, or alternatively interleaved in triplets of RGB, RGB, and so on).
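As a rough illustration of these two storage orders, the factory methods on Raster can build either layout; the 2x2 size here is an arbitrary choice:

import java.awt.Point;
import java.awt.image.DataBuffer;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class SampleOrderings {
    public static void main(String[] args) {
        int width = 2, height = 2, bands = 3;   // three samples (R, G, B) per pixel

        // Pixel-interleaved storage: samples stored as R, G, B, R, G, B, ...
        WritableRaster interleaved = Raster.createInterleavedRaster(
                DataBuffer.TYPE_BYTE, width, height, bands, new Point(0, 0));

        // Banded storage: all red samples, then all green, then all blue.
        WritableRaster banded = Raster.createBandedRaster(
                DataBuffer.TYPE_BYTE, width, height, bands, new Point(0, 0));

        // Either way, every pixel still consists of three samples.
        System.out.println(interleaved.getNumBands() + " bands (interleaved)");
        System.out.println(banded.getNumBands() + " bands (banded)");
    }
}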

To reiterate an important point, in order to flexibly handle the diversity of file formats and their interpretation, Java 2D uses the SampleModel to interpret the numbers stored in a DataBuffer. The DataBuffer simply holds the image data (the numbers) in storage, but the SampleModel contains the methods for grouping those numbers into pixels.
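The following sketch shows this division of labor for the grayscale case; the 3x2 size and the sample values are made up for the example:

import java.awt.image.DataBuffer;
import java.awt.image.DataBufferByte;
import java.awt.image.PixelInterleavedSampleModel;
import java.awt.image.SampleModel;

public class GroupingNumbersIntoPixels {
    public static void main(String[] args) {
        // The DataBuffer is just storage: six bytes, nothing more.
        byte[] rawData = { 0, 50, 100, (byte) 150, (byte) 200, (byte) 250 };
        DataBuffer dataBuffer = new DataBufferByte(rawData, rawData.length);

        // The SampleModel supplies the interpretation: a 3x2 grid with one
        // sample per pixel (pixel stride 1) and three samples per row
        // (scanline stride 3).
        SampleModel sampleModel = new PixelInterleavedSampleModel(
                DataBuffer.TYPE_BYTE, 3, 2, 1, 3, new int[] { 0 });

        // Together they let us ask for the samples of the pixel at (2, 1).
        int[] pixel = sampleModel.getPixel(2, 1, (int[]) null, dataBuffer);
        System.out.println("Pixel (2,1) value: " + pixel[0]);   // 250
    }
}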

Individually, a SampleModel and a DataBuffer aren't sufficient to produce an image. Only together can the raw data (stored in the DataBuffer) and the interpretation of that data (the SampleModel) constitute an image.

Whereas the SampleModel interprets the DataBuffer in terms of pixels, the ColorModel interprets the Raster (again, SampleModel plus DataBuffer) in terms of color. Some confusion might arise as to the difference between a ColorModel and a SampleModel. The distinction is this: By using a DataBuffer and SampleModel, it is possible to examine and process the raw values associated with each pixel, but unless the ColorModel is specified, there is no way to interpret the pixels as colors. Because of the SampleModel, we know which numbers in the DataBuffer are associated with each pixel, but we still don't know what they mean in terms of color. An image, then, is fully described by its raw data in the DataBuffer, the interpretation of the raw data into pixels via the SampleModel, and finally the interpretation of the pixel data into a color space through the ColorModel. Without all three of these components, we cannot render raw numbers into an image.
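Putting the three components together, the sketch below assembles a small grayscale image by hand; the dimensions and the sample value are arbitrary:

import java.awt.Point;
import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;
import java.awt.image.ColorModel;
import java.awt.image.ComponentColorModel;
import java.awt.image.DataBuffer;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class AssemblingAGrayscaleImage {
    public static void main(String[] args) {
        // Raster = DataBuffer + SampleModel: a 4x4, one-band grid of bytes.
        WritableRaster raster = Raster.createInterleavedRaster(
                DataBuffer.TYPE_BYTE, 4, 4, 1, new Point(0, 0));
        raster.setSample(0, 0, 0, 200);   // raw value 200 at pixel (0, 0)

        // ColorModel: interprets each 8-bit sample as a level of gray.
        ColorModel colorModel = new ComponentColorModel(
                ColorSpace.getInstance(ColorSpace.CS_GRAY),
                new int[] { 8 }, false, false,
                Transparency.OPAQUE, DataBuffer.TYPE_BYTE);

        // Only the combination of the two is a renderable image.
        BufferedImage image = new BufferedImage(colorModel, raster, false, null);
        System.out.printf("sRGB color at (0,0): 0x%08X%n", image.getRGB(0, 0));
    }
}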

In many cases, the rectangular area represented by the Raster will correspond to the entire image; however, the Raster can represent any rectangular area of the image. Therefore, whereas the coordinate space of the DataBuffer and SampleModel is always defined with an origin of (0,0), the Raster itself can be translated from this origin. The Raster, therefore, contains an X and Y translation. This seemingly esoteric digression will be important when we discuss image tiling and other topics in Chapter 6.
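For instance, in the minimal sketch below (the 10x10 size is arbitrary), a translated child shares the parent's data but reports a different origin:

import java.awt.Point;
import java.awt.image.DataBuffer;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class TranslatedRaster {
    public static void main(String[] args) {
        // A 10x10, one-band Raster whose origin is (0, 0).
        WritableRaster parent = Raster.createInterleavedRaster(
                DataBuffer.TYPE_BYTE, 10, 10, 1, new Point(0, 0));

        // The same data, but with the Raster translated so its origin is (6, 6);
        // the underlying DataBuffer and SampleModel are unchanged.
        Raster translated = parent.createTranslatedChild(6, 6);

        System.out.println(parent.getMinX() + ", " + parent.getMinY());          // 0, 0
        System.out.println(translated.getMinX() + ", " + translated.getMinY());  // 6, 6
    }
}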

The entire scheme will be discussed in further detail later when we examine the grayscale and color examples in the next several chapters, but it is important for you to pause here and reflect on how an image is represented in Java. To summarize, an image is ultimately made up of numbers; however, a lot of information is needed in order to interpret those numbers. The DataBuffer stores the numbers, whereas the SampleModel maps the numbers onto pixels. A rectangular array of pixels (DataBuffer plus SampleModel) is called a Raster. After the pixels are organized and interpreted with a Raster, the numbers are interpreted further with a ColorModel. All these components must be present in order to represent an image in Java.
