Creating and modifying images

In the preceding sections, we considered different ways of drawing images loaded from files. In this section, we see how to generate new images or alter an existing image by specifying its pixels directly.

A raster image is represented in memory as an array of pixels. An image with width w pixels and height h pixels consists of N = w * h pixels. Normally, the horizontal rows of the image lie sequentially in memory: the w pixels of the first row, then the second row, and so on, up to the h-th row.

The pixels of the image can hold differing amounts of information depending on the image type. In openFrameworks, the following types are used:

  • The OF_IMAGE_COLOR_ALPHA type denotes a colored image with transparency. Here, each pixel is represented by 4 bytes, holding red, green, blue, and alpha color components respectively, with values from 0 to 255.
  • The OF_IMAGE_COLOR type denotes a colored image without transparency. Here, each pixel is represented by 3 bytes, holding red, green, and blue components. Such images are used when transparent pixels are not needed. For example, JPG files and images from cameras are represented in openFrameworks by this type.
  • The OF_IMAGE_GRAYSCALE type denotes a grayscale image. Each pixel here is represented by 1 byte and holds only one color component. Most often, such images are used for representing masks. In most situations, we use colored images, but if your project needs a huge number of masks or halftone images, use the grayscale type, because it occupies less memory.

Tip

In this book, we are talking mainly about images of class ofImage, where each pixel component is represented by 1 byte, with integer values from 0 to 255 (type unsigned char). But, in some cases, more accuracy is needed. Such situations occur when using a buffer with gradual content erasing, or when using an image as a height map. For such purposes, openFrameworks has an image class, ofFloatImage. The methods of the class are the same as those of ofImage, but each pixel component holds a float value. For an example of how to use it, see examples/graphics/floatingPointImageExample.

Also, there is the class ofShortImage, which works with integer values in the range 0 to 65535; that is, the unsigned short type. Such images are the best fit for representing data from depth cameras, where pixels hold the distance to scene objects in millimeters.

See more details on using these image types in Chapter 9, Computer Vision with OpenCV, and Chapter 10, Using Depth Cameras.

Creating images

To create an image in code, we need to create a pixel array and then push it into the image using the image.setFromPixels( data, w, h, type ) method. Here, data is the pixel array, w is the image width, h is the image height, and type is the image type (OF_IMAGE_COLOR_ALPHA, OF_IMAGE_COLOR, or OF_IMAGE_GRAYSCALE).

The data should be an array of the unsigned char type. If we create a four-channel image with width w and height h pixels, then the array size will be w * h * 4 bytes. For a given x from 0 to w-1 and y from 0 to h-1, the red, green, blue, and alpha components of the pixel (x, y) are located in data[index], data[index + 1], data[index + 2], and data[index + 3] respectively, where index equals 4 * ( x + w * y ).
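This indexing rule can be sketched as a tiny stand-alone function (plain C++, independent of openFrameworks; the name rgbaIndex is ours, introduced just for illustration):

```cpp
//Index of the red component of pixel (x, y) in a pixel array of an
//RGBA image with width w; the green, blue, and alpha components
//follow at index + 1, index + 2, and index + 3.
int rgbaIndex( int x, int y, int w ) {
    return 4 * ( x + w * y );
}
```

For example, for a 512-pixel-wide image, the pixel (0, 1) at the start of the second row begins at byte 4 * 512 = 2048.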

In the following example, the image is generated on each call of the testApp::update() function, so it evolves with time.

Note

This is example 04-Images/04-ColorWaves.

#include "testApp.h"
ofImage image;       //Declare image object

void testApp::setup(){
}

void testApp::update(){
  //Creating image

  int w = 512;  //Image width
  int h = 512;  //Image height

  //Allocate array for filling pixels data
  unsigned char *data = new unsigned char[w * h * 4];

  //Fill array for each pixel (x,y)
  for (int y=0; y<h; y++) {
      for (int x=0; x<w; x++) {
           //Compute preliminary values,
           //needed for our pixel color calculation:

           //1. Time from application start
           float time = ofGetElapsedTimef();

           //2. Level of hyperbola value of x and y with
           //center in w/2, h/2
           float v = ( x - w/2 ) * ( y - h/2 );

           //3. Combining v with time for motion effect
           float u = v * 0.00025 + time;
           //Here 0.00025 was chosen empirically

           //4. Compute color components as periodic
           //functions of u, stretched to [0..255]
           int red = ofMap( sin( u ), -1, 1, 0, 255 );
           int green = ofMap( sin( u * 2 ), -1, 1, 0, 255 );
           int blue = 255 - green;
           int alpha = 255;  //Just constant for simplicity


           //Fill array components for pixel (x, y):
           int index = 4 * ( x + w * y );
           data[ index ] = red;
           data[ index + 1 ] = green;
           data[ index + 2 ] = blue;
           data[ index + 3 ] = alpha;
      }
  }

  //Load array to image
  image.setFromPixels( data, w, h, OF_IMAGE_COLOR_ALPHA );

  //Array is not needed anymore, so clear memory
  delete[] data;
}

void testApp::draw(){
  ofBackground(255, 255, 255);     //Set up white background
  ofSetColor( 255, 255, 255 );     //Set color for image drawing
  image.draw( 0, 0 );              //Draw image
}

Note that for time measurement, we use the ofGetElapsedTimef() function, which returns a float equal to the number of seconds since the application started. Also, we use the ofMap() function for mapping the result of sin(...) (lying in [-1, 1]) into the interval [0, 255]. See details in the Basic utility functions section in Chapter 1, openFrameworks Basics.
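The mapping that ofMap() performs (with clamping off) is plain linear interpolation; here is a minimal stand-alone sketch of the idea (the name mapRange is ours, not part of the openFrameworks API):

```cpp
//Linearly map value from the range [inMin, inMax] to the range
//[outMin, outMax], as ofMap() does when clamping is disabled.
float mapRange( float value, float inMin, float inMax,
                float outMin, float outMax ) {
    return outMin + ( value - inMin ) / ( inMax - inMin )
                    * ( outMax - outMin );
}
```

So sin( u ) = -1 maps to 0, sin( u ) = 1 maps to 255, and sin( u ) = 0 maps to the middle of the output range.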

After running the preceding code, you will see an animated image with moving color waves, as shown in the following screenshot:

Creating images

Modifying images

Instead of creating images from scratch, you can modify existing images. For this purpose, use the image.getPixels() function, which returns the pixel array of an image. After changing this array, call image.update() to apply the changes to the image. Actually, image.update() loads the changed image into video memory for drawing on the screen; see the Using ofTexture for memory optimization section for details.

Note

This is example 04-Images/05-ImageModify.

In the following example, we read and modify the pixels of the sunflower image and draw it on the screen. We alter the image just once, in testApp::setup(). In the code, we do not know in advance which type the sunflower.png image file has: OF_IMAGE_COLOR or OF_IMAGE_COLOR_ALPHA.

For this reason, we made the code universal by computing the number of pixel components, int components, which equals image.bpp / 8. Here, the image.bpp field holds the bits per pixel, that is, the number of bits allocated for each image pixel. It can be 8, 24, or 32, which corresponds to OF_IMAGE_GRAYSCALE, OF_IMAGE_COLOR, or OF_IMAGE_COLOR_ALPHA respectively. So, dividing this value by 8, we get the number of pixel components: 1, 3, or 4. In the example, we use a color image file, so components will be equal to either 3 or 4 (not 1).
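This computation, together with the generalized pixel indexing it enables, can be sketched outside openFrameworks as follows (helper names are ours, for illustration only):

```cpp
//Number of color components per pixel, given bits per pixel:
//8 -> 1 (grayscale), 24 -> 3 (RGB), 32 -> 4 (RGBA).
int componentsFromBpp( int bpp ) {
    return bpp / 8;
}

//Index of the first component of pixel (x, y) in a pixel array
//of an image with width w and the given number of components.
int pixelIndex( int x, int y, int w, int components ) {
    return components * ( x + w * y );
}
```

With components computed this way, the same loop body works for RGB and RGBA images alike.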

Tip

In this example, it is convenient to use the number of components. Sometimes it is handier to check the type of the image directly. The image type is held in the field image.type, which takes the values OF_IMAGE_GRAYSCALE, OF_IMAGE_COLOR, and OF_IMAGE_COLOR_ALPHA.

Always check the type or the number of color components of a given image in serious projects. Performing image modifications under an incorrect assumption about the image type leads to computations that rely on an incorrect pixel array size. This can cause memory errors or corrupted images.

The code is given as follows:

#include "testApp.h"
ofImage image;       //Declare image object

void testApp::setup(){
  image.loadImage( "sunflower.png" );  //Load image

  //Modifying image

  //Getting pointer to pixel array of image
  unsigned char *data = image.getPixels();

  //Calculate number of pixel components
  int components = image.bpp / 8;

  //Modify pixel array
  for (int y=0; y<image.height; y++) {
      for (int x=0; x<image.width; x++) {

          //Read pixel (x,y) color components
          int index = components * (x + image.width * y);
          int red = data[ index ];
          int green = data[ index + 1 ];
          int blue = data[ index + 2 ];

          //Calculate periodic modulation
          //(fabs is used to avoid the integer abs() overload)
          float u = fabs( sin( x * 0.1 ) * sin( y * 0.1 ) );

          //Set red component modulated by u
          data[ index ] = red * u;

          //Set green value as inverted original red
          data[ index + 1 ] = (255 - red);

          //Invert blue component
          data[ index + 2 ] = (255 - blue);

          //If there is alpha component or not, 
          //we don't touch it anyway
      }
  }
  //Calling image.update() to apply changes
  image.update();
}

void testApp::draw(){
  ofBackground( 255, 255, 255 );
  ofSetColor( 255, 255, 255 );
  image.draw( 0, 0 );         //Draw image
}

On running the project, you will see the sunflower image with non-linearly modified colors.

Modifying images

The preceding method of manipulating the image's pixels using image.getPixels() is fast, but sometimes not very convenient, because you need to work with each pixel's color components individually. So let's consider more convenient functions, which operate on a pixel's color using the ofColor type.

Working with the color of a single pixel

There are functions for getting and setting the color of an image's pixel without knowing the image type:

  • The image.getColor( x, y ) function reads the color of pixel (x, y) of the image. It returns an object of type ofColor, with fields r, g, b, and a, corresponding to the red, green, blue, and alpha color components (see details in the Colors section in Chapter 2, Drawing in 2D).
  • The image.setColor( x, y, color ) function sets the color of pixel (x, y) to the color value, where color has type ofColor. After changing pixels' colors using image.setColor(), you need to call the image.update() function for the changes to take effect.

Note

Be careful: the overall performance of code that uses these functions can be slightly lower than that of code using the image.getPixels() and image.setFromPixels() functions.
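To make the semantics of these accessors concrete, here is a sketch of the equivalent operations over a raw RGB pixel array (plain C++; the Color struct and the function names are ours, not the openFrameworks API):

```cpp
#include <vector>

struct Color { unsigned char r, g, b; };

//Read pixel (x, y) from a 3-component (RGB) pixel array of an image
//with width w, as image.getColor( x, y ) does for OF_IMAGE_COLOR.
Color getColorRGB( const std::vector<unsigned char> &data,
                   int x, int y, int w ) {
    int index = 3 * ( x + w * y );
    return Color{ data[index], data[index + 1], data[index + 2] };
}

//Write pixel (x, y), as image.setColor( x, y, color ) does.
void setColorRGB( std::vector<unsigned char> &data,
                  int x, int y, int w, Color c ) {
    int index = 3 * ( x + w * y );
    data[index]     = c.r;
    data[index + 1] = c.g;
    data[index + 2] = c.b;
}
```

The convenience comes from hiding the per-component indexing, which is also why a per-pixel call can be slower than working on the raw array directly.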

Let's consider an example of using these functions for geometrical distortion of an image.

A simple geometrical distortion example

This example distorts the geometry of an image by shifting its horizontal lines by a sine wave that also changes with time. To achieve this, we keep the original image, image, untouched and use it for building the distorted image, image2, in the testApp::update() function.

Note

This is example 04-Images/06-HorizontalDistortion.

#include "testApp.h"

ofImage image;       //Original image
ofImage image2;      //Modified image

//--------------------------------------------------------------
void testApp::setup(){

  image.loadImage( "sunflower.png" );  //Load image
  image2.clone( image );               //Copy image to image2
}

void testApp::update(){
  float time = ofGetElapsedTimef();

  //Build image2 using image
  for (int y=0; y<image.height; y++) {
      for (int x=0; x<image.width; x++) {
          //Use y and time for computing shifted x1
          float amp = sin( y * 0.03 );
          int x1 = x + sin( time * 2.0 ) * amp * 50.0;

          //Clamp x1 to range [0, image.width-1]
          x1 = ofClamp( x1, 0, image.width - 1 );

          //Set image2(x, y) equal to image(x1, y)
          ofColor color = image.getColor( x1, y );
          image2.setColor( x, y, color );
      }
  }

  image2.update();
}

void testApp::draw(){
  ofBackground(255, 255, 255);
  ofSetColor( 255, 255, 255 );
  image2.draw( 0, 0 );
}
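The ofClamp() call in the loop above simply constrains the shifted coordinate x1 to the valid column range; as a stand-alone sketch (the name clampRange is ours, introduced for illustration):

```cpp
//Constrain v to the range [lo, hi], as ofClamp( v, lo, hi ) does.
float clampRange( float v, float lo, float hi ) {
    if ( v < lo ) { return lo; }
    if ( v > hi ) { return hi; }
    return v;
}
```

Without this clamping, a shifted x1 outside [0, image.width - 1] would read outside the pixel array of image.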

Note that in the testApp::setup() function, we use the image2.clone( image ) method, which copies image to image2. In this example, it is required to allocate image2.

When you run the preceding code, you will see a waving sunflower image, as shown in the following screenshot:

A simple geometrical distortion example

Tip

Learn how to implement a similar image distortion using shaders in the A simple geometrical distortion example section in Chapter 8, Using Shaders.

We are about to finish discussing the methods of image modification. Now, let's consider useful functions for resizing, cropping, and rotating images.

The functions for manipulating the image as a whole

There are a number of functions that perform global image manipulations. They are as follows:

  • image.resize( newW, newH ) – resizes the image to a new size, newW × newH
  • image.crop( x, y, w, h ) – crops the image to a subimage with the top-left corner (x, y) and size w × h
  • image.rotate90( times ) – rotates the image clockwise by 90 * times degrees
  • image.mirror( vertical, horizontal ) – mirrors the image, where vertical and horizontal are bool values
  • image2.clone( image ) – copies image into image2 (we used this function in the preceding example)
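As an illustration of what such a whole-image operation does under the hood, here is a sketch of horizontal mirroring for a one-component (grayscale) pixel array (plain C++; this is our illustrative code, not the openFrameworks implementation):

```cpp
#include <algorithm>
#include <vector>

//Mirror a w x h grayscale pixel array horizontally, reversing each
//row in place, similar in effect to image.mirror( false, true )
//for an OF_IMAGE_GRAYSCALE image.
void mirrorHorizontal( std::vector<unsigned char> &data, int w, int h ) {
    for (int y = 0; y < h; y++) {
        std::reverse( data.begin() + y * w,
                      data.begin() + ( y + 1 ) * w );
    }
}
```

Operations like crop and resize follow the same pattern: they build a new pixel array with the target layout and replace the old one.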

Now we will discuss the relationship between the image in ordinary memory, used by the CPU, and in video memory, used by the video card. This is important for understanding and optimizing image processing.
