Geometrical transformations of images

Here we consider the different kinds of geometrical transformations that change the positions of the image's pixels. OpenCV performs operations such as image resizing and warping using interpolation, which suppresses the aliasing effect. Hence, using OpenCV operations is preferable to a custom pixel-by-pixel implementation, except when you implement your own transformation algorithm with antialiasing (which can be tricky), or when you actually need the aliasing effect. The following is a list of geometrical transformations that are applicable to ofxCv images:

  • The resize( w, h ) function changes the image size to w × h pixels. For example:
    image2 = image;
    image2.resize( image2.width * 0.5, image2.height * 0.5 );

    This code copies image into image2 and then resizes image2 to 50 percent of the original size. Such a procedure decreases the number of pixels by a factor of four, so the processing speed increases roughly four times, whereas the object localization accuracy along the x and y axes decreases only by a factor of two.

    So, decimating the input image lets you adjust the balance between the speed and the accuracy of your computer vision algorithm. If your algorithm works too slowly and high accuracy is not critical, try decimating the input image.

  • The scaleIntoMe( mom, interpolationMethod ) function is an advanced resizing function. It scales the content of the mom image into the image that calls the function, and additionally lets you choose an interpolation method using the interpolationMethod parameter. Its possible values are as follows:
    • CV_INTER_LINEAR – This method performs bilinear interpolation of pixel values. It is fast and gives moderate quality. It is used by default in all other functions that deal with resizing and warping.
    • CV_INTER_AREA – This method performs interpolation using the pixel area relation. It gives the highest quality for image decimation (downscaling), though it works slower than CV_INTER_LINEAR. Note that, like CV_INTER_NN, it does not work well for image zooming.
    • CV_INTER_NN – This method resizes using the "nearest neighbor" rule. It just selects the nearest pixel and hence does not perform interpolation at all. It is the fastest method but gives the poorest quality. It is useful for the pixelization effect.
    • CV_INTER_CUBIC – This method uses cubic splines for interpolation. It works well for image zooming. Compared to CV_INTER_LINEAR, it gives sharper edges but is slower.

    Note that you need to allocate the image before calling scaleIntoMe(); a short sketch of its usage follows this list.

  • The scale( scaleX, scaleY ) function resizes the content of the image proportionally to scaleX and scaleY. If both parameters are equal to 1.0, the image will not change.
  • The mirror( flipY, flipX ) function flips the image vertically if flipY is true and horizontally if flipX is true.
  • The translate( shiftX, shiftY ) function shifts the image by shiftX along the x axis and by shiftY along the y axis, where shiftX and shiftY are of the float type. This function works with subpixel accuracy. Free space in the image is filled with black.
  • The rotate( angle, centerX, centerY ) function rotates the image counterclockwise by angle (measured in degrees) around the point (centerX, centerY). All the parameters are of the float type. For example, if you need to rotate image by 45 degrees around its center, use the following call:
    image.rotate( 45, image.width/2, image.height/2 );

    Free space in the image is filled with black color.

  • The transform( angle, centerX, centerY, scaleX, scaleY, moveX, moveY ) function performs several transformations in one call: it scales the image, rotates it around the point (centerX, centerY), and then moves it.
  • The undistort( radialDistX, radialDistY, tangentDistX, tangentDistY, focalX, focalY, centerX, centerY ) function is a crucial function for correcting camera distortions such as the fish-eye effect. The camera calibration technique is outside the scope of this book, but you can play with the parameters to obtain "rubber" image distortions. For example, if you wish to apply such transformations to the sunflower.png image, try the following function calls:
    image.undistort( 0, 1, 0, 0, 200, 200, w/2, h/2 );
    image.undistort( 0, 1, 0.0, 0.2, 200, 200, w/2, h/2 );
    image.undistort( -0.5, 1, 0.2, 0.1, 2000, 150, w/2, h/2 );

    After applying the preceding transformations, you will obtain the following results:

    (Image: the three distorted versions of sunflower.png produced by the preceding undistort() calls)
  • The warpPerspective( A, B, C, D ) function performs a perspective transformation so that the points A, B, C, and D map to the corresponding corners of the image, that is, top-left, top-right, bottom-right, and bottom-left respectively. The points A, B, C, and D are of the ofPoint type. This function is exceptionally useful for correcting images of rectangular flat surfaces captured by a tilted camera. See the example in the Perspective distortion removing example section.
  • The warpIntoMe( mom, srcPoints, dstPoints ) function is an advanced version of the warpPerspective() function. It performs perspective warping of the mom image into the image that calls the function, so that the points srcPoints map to the points dstPoints. Here, srcPoints and dstPoints are arrays of four points, declared as ofPoint srcPoints[4] and ofPoint dstPoints[4] respectively.
  • The remap( mapX, mapY ) function lets you perform arbitrary image warping using float images mapX and mapY, so that the resulting value of the pixel (x, y) is taken from the pixel with the coordinates ( mapX( x, y ), mapY( x, y ) ). Note that mapX and mapY are pointers to OpenCV images of the IplImage* type. This function is useful for various nonlinear image deformations; see the Video morphing example section for more details and the sketch after this list for one way of building the maps.
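
To make the list above more concrete, here is a minimal sketch of scaleIntoMe(); it assumes that image is an ofxCvColorImage that already holds a loaded picture:

// Downscale image into image2 with high-quality area interpolation.
// The destination image must be allocated before calling scaleIntoMe().
ofxCvColorImage image2;
image2.allocate( image.width / 2, image.height / 2 );
image2.scaleIntoMe( image, CV_INTER_AREA );

The next sketch illustrates remap() under the same assumption; it builds the two map images using OpenCV's legacy C API and shifts each row horizontally by a sine wave, which produces a wavy distortion (the amplitude 10 and the frequency 0.05 are arbitrary values chosen just for illustration):

// Build float maps: the resulting pixel (x, y) takes its value from
// the source pixel ( mapX(x, y), mapY(x, y) ).
int w = image.width;
int h = image.height;
IplImage* mapX = cvCreateImage( cvSize( w, h ), IPL_DEPTH_32F, 1 );
IplImage* mapY = cvCreateImage( cvSize( w, h ), IPL_DEPTH_32F, 1 );
for ( int y = 0; y < h; y++ ) {
    for ( int x = 0; x < w; x++ ) {
        cvSetReal2D( mapX, y, x, x + 10.0 * sin( y * 0.05 ) );  // horizontal wave
        cvSetReal2D( mapY, y, x, y );                           // y is unchanged
    }
}
image.remap( mapX, mapY );
cvReleaseImage( &mapX );
cvReleaseImage( &mapY );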

Perspective distortion removing example

Perspective distortion is a geometrical distortion of an object's shape when it is captured by a camera, so that straight or parallel lines on the object become curved or nonparallel lines in the image. If you want to remove this distortion using strict mathematical modeling, you need to specify the camera's optical characteristics and information about the object's points in three dimensions, which can be hard to obtain.

Fortunately, we often just need to restore the image of a flat rectangular surface in space. This happens, for example, while creating interactive floors and tables using a color or depth camera. In this case, it is enough to specify the coordinates of the four surface corners in the image and then perform perspective warping using warpPerspective().

This method works well for cameras with optics close to the ideal pinhole model. However, if your camera has a wide-angle lens (with the fish-eye effect), the resulting image will not be an ideal rectangle. To obtain better results in this case, you need to undistort the image first using the undistort() function.

Let's consider an example that shows how to remove perspective distortion.

Note

This is example 09-OpenCV/03-PerspectiveRemoving.

Consider a camera-captured image table.png, which contains a sheet of paper. We want to restore the picture printed on the paper. The image has a size of 1024 × 768 pixels, and the coordinates of the paper's corners are A (192, 286), B (742, 188), C (950, 489), and D (215, 665), as shown in the following screenshot:

(Image: table.png with the paper's corners A, B, C, and D marked)

Assume that this image is loaded to the image object. To restore the picture printed on the paper, call the following function:

image.warpPerspective(
  ofPoint( 192, 286 ),
  ofPoint( 742, 188 ),
  ofPoint( 950, 489 ),
  ofPoint( 215, 665 ) );

The resulting image is shown here:

(Image: the restored picture after perspective warping)

You can see that the picture is restored quite well, except for the black area at the top of the image, which appears because the sheet of paper does not lie perfectly flat.

Note that the resulting image has proportions different from those of the original picture on the paper. The sheet of paper used is of A4 size (297 × 210 mm), so its sides are in the ratio of about 1.41:1 (that is, √2:1), whereas the image object has the size 1024 × 768 pixels, so its proportions are 4:3. To obtain an image with correct proportions, you need to use image2.warpIntoMe() instead of image.warpPerspective() and specify the size of image2 proportional to 297 × 210; for example, 297 × 210 or 594 × 420 pixels, as in the following sketch.
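
A minimal sketch of this approach could look as follows; it assumes that image holds table.png, that image2 is an ofxCvColorImage, and uses the corner coordinates given above:

// Allocate the destination image with A4-like proportions (√2 : 1)
ofxCvColorImage image2;
image2.allocate( 594, 420 );

// Corners of the paper in the source image...
ofPoint src[4] = { ofPoint( 192, 286 ), ofPoint( 742, 188 ),
                   ofPoint( 950, 489 ), ofPoint( 215, 665 ) };
// ...and the corresponding corners of image2
ofPoint dst[4] = { ofPoint( 0, 0 ),     ofPoint( 594, 0 ),
                   ofPoint( 594, 420 ), ofPoint( 0, 420 ) };
image2.warpIntoMe( image, src, dst );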

We have finished discussing basic image processing using the ofxCv image classes. Now we will apply it to the particularly important task of detecting objects in an image, and we will see how to use the ofxCvContourFinder class for this purpose.
