Using OpenCV functions

OpenCV is a large library with hundreds of functions, covering areas such as optical flow computation, feature detection and matching, and machine learning. Most of these functions are currently not wrapped in the addon ofxOpenCv. You can still use these capabilities by calling the OpenCV functions directly, by performing the following steps:

  1. First, in the testApp.h file, add the following line after the line #include "ofxOpenCv.h", which instructs the compiler to use OpenCV's namespace:
    using namespace cv;
  2. Now you can declare OpenCV images, which are objects of type Mat:
    Mat imageCV;
  3. To convert the ofxCv image image into imageCV, use the following code:
    imageCV = Mat( image.getCvImage() );

    Note that this is a fast operation that does not copy any data; imageCV and image will share the same memory region holding the pixel values. So, we suggest using imageCV only for reading and not for changing.

    Tip

    We do not suggest using the setNativeScale() function for images that will be converted to Mat objects and back, because it can cause an undesirable conversion of the pixel value range.

  4. If you need to change the Mat object, you should copy it. Remember that the = operator applied to Mat objects does not copy pixel values. So, to copy them, you need to use the clone() method explicitly:
    Mat imageCV2;
    imageCV2 = imageCV.clone();  //Copy imageCV to imageCV2
    //Processing imageCV2...
  5. If you want to show a Mat object on the screen, use the imshow() function:
    imshow( "Image", imageCV );

    The image will be shown in a separate window with the title "Image". This is very useful for debugging purposes. However, when debugging is finished, you should comment out these calls, because imshow() is a CPU-consuming operation.

    To use this function, you should add the line #include "highgui.h" after all other inclusions at the top of the testApp.cpp file.

  6. To convert the OpenCV result back to an ofxCv image, use the following code:
    IplImage iplImage( imageCV2 );
    image = &iplImage;          //Copy result to image

    The last operation makes a copy, so you can change imageCV2 further without affecting image. Note that the pixel value type and number of channels of image should be the same as in imageCV2.
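The shallow-copy semantics described in step 4 are worth emphasizing. The following is a minimal plain C++ sketch of the idea; SharedImage is a hypothetical stand-in for Mat (it is not part of OpenCV), built from a header that shares its pixel storage:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical stand-in for Mat: a small header sharing pixel storage.
struct SharedImage {
    std::shared_ptr<std::vector<unsigned char>> pixels;

    static SharedImage create(size_t n, unsigned char value) {
        SharedImage img;
        img.pixels = std::make_shared<std::vector<unsigned char>>(n, value);
        return img;
    }
    // The compiler-generated operator= copies only the header, so both
    // objects point at the same pixel buffer -- just like Mat.
    SharedImage clone() const {  // deep copy, like Mat::clone()
        SharedImage img;
        img.pixels = std::make_shared<std::vector<unsigned char>>(*pixels);
        return img;
    }
};

bool assignmentShares() {
    SharedImage a = SharedImage::create(4, 10);
    SharedImage b = a;            // shallow: b shares a's pixels
    (*b.pixels)[0] = 99;
    return (*a.pixels)[0] == 99;  // the change is visible through a
}

bool cloneCopies() {
    SharedImage a = SharedImage::create(4, 10);
    SharedImage c = a.clone();    // deep copy: independent pixels
    (*c.pixels)[0] = 99;
    return (*a.pixels)[0] == 10;  // a is unaffected
}
```

This is why, in step 4, modifying imageCV2 obtained via clone() leaves imageCV intact, while a plain assignment would not.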

Note

Currently, the described operation image = &iplImage raises an error when image is not allocated. This is caused by a small bug in the addon's code. To fix this, open addons/ofxOpenCv/src/ofxCvImage.cpp and find the following function definition:

void ofxCvImage::operator = ( const IplImage* mom )

In this function body, find the line with this command:

if( mom->nChannels == cvImage->nChannels && mom->depth == cvImage->depth )

Replace the preceding line with the following line:

if( !bAllocated || ( mom->nChannels == cvImage->nChannels && mom->depth == cvImage->depth ) )

We will demonstrate all these steps in an example of using optical flow.

Optical flow

Optical flow is a vector field that characterizes the motion of objects between two successive frames. Simply put, it is a two-channel image, where the first and second channels hold the x and y components of the pixels' shifts, respectively. There are many algorithms for computing optical flow. Most of them assume that the motion between frames is relatively small.
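To build intuition for what "pixel shift between frames" means, the following toy sketch estimates the shift of one pixel by brute-force matching in a small search window. This is our own minimal illustration in plain C++, not one of OpenCV's algorithms, which are far more elaborate:

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

struct Flow { int dx, dy; };  // pixel shift along x and y

// For the pixel (x, y) of frame f1, search a window of the given radius
// in frame f2 for the displacement with the smallest absolute difference.
// Images are stored row-major with width w; the caller must keep the
// search window inside the image.
Flow estimateShift(const std::vector<int>& f1, const std::vector<int>& f2,
                   int w, int x, int y, int radius) {
    Flow best{0, 0};
    int bestCost = std::abs(f1[x + w * y] - f2[x + w * y]);
    for (int dy = -radius; dy <= radius; dy++) {
        for (int dx = -radius; dx <= radius; dx++) {
            int cost = std::abs(f1[x + w * y] - f2[(x + dx) + w * (y + dy)]);
            if (cost < bestCost) { bestCost = cost; best = {dx, dy}; }
        }
    }
    return best;
}

// Demo: an 8x8 frame with one bright pixel at (3, 3) that moves to (5, 4).
Flow demo() {
    std::vector<int> f1(64, 0), f2(64, 0);
    f1[3 + 8 * 3] = 255;  // bright pixel in frame 1
    f2[5 + 8 * 4] = 255;  // the same pixel, shifted by (2, 1), in frame 2
    return estimateShift(f1, f2, 8, 3, 3, 2);
}
```

Running demo() recovers the shift (2, 1). A dense optical flow field is the result of such an estimate (done robustly, over patches) for every pixel.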

The applications of optical flow in interactive projects include:

  • Detecting areas of motion for tracking the user's activity. This method of motion detection works stably with ordinary (non-depth) cameras because it is relatively robust to changing light conditions. Also, it is possible to use the average motion vector for controlling particles and other objects.
  • Segmenting the image by the direction of optical flow, for finding the contours of moving objects and using them further.
  • Using optical flow for interpolating between the frames of a video. It lets you implement effects such as the famous Flo-Mo video effect. In general, this is a way to automate video morphing between arbitrary pairs of images. See the Video morphing example section.
  • Implementing a video effect similar to the datamoshing effect, by applying the optical flow data obtained from one video to shift the pixels of another video or still image. The key idea behind this is considered at the end of this section.

Consider an example of using optical flow for video morphing and warping.

Video morphing example

Let's take two images of the same size, calculate the optical flow between them, and use this data for warping the first image into the second image in correspondence with the morphing parameter morphValue, which lies in the range [0, 1]. The value 0 means no warping, and the value 1 means warping over the entire range of the optical flow.

Note

This is example 09-OpenCV/05-VideoMorphing.

Before running the example, fix a small bug in ofxOpenCv, as described in the information box in the Using OpenCV functions section.

Use the Project Generator wizard to create an empty project with the linked ofxOpenCv addon (see the Using ofxOpenCv section). Then, copy the images checkerBoard.png, hands1.png, and hands2.png into the bin/data folder of the project, and copy the example's source files to the src folder.

Here, we will consider just the main parts of the code related to computing optical flow and video morphing.

Declare images in the testApp class declaration as follows:

ofxCvColorImage color1, color2; //First and second original images
ofxCvGrayscaleImage gray1, gray2;  //Decimated grayscale images
ofxCvFloatImage flowX, flowY;      //Resulting optical flow
                                   //in x and y axes

At the beginning of the testApp::setup() function, implement loading and decimating the images. Decimating is needed for faster optical flow computation:

ofImage imageOf1, imageOf2;  //Load openFrameworks' images
imageOf1.loadImage("hands1.png");
imageOf2.loadImage("hands2.png");

color1.setFromPixels( imageOf1 );  //Convert to ofxCv images
color2.setFromPixels( imageOf2 );

float decimate = 0.3;              //Decimate images to 30%
ofxCvColorImage imageDecimated1;
imageDecimated1.allocate( color1.width * decimate, 
                          color1.height * decimate );

//High-quality resize
imageDecimated1.scaleIntoMe( color1, CV_INTER_AREA );
gray1 = imageDecimated1;

ofxCvColorImage imageDecimated2;
imageDecimated2.allocate( color2.width * decimate,
                          color2.height * decimate );
//High-quality resize
imageDecimated2.scaleIntoMe( color2, CV_INTER_AREA );
gray2 = imageDecimated2;

Now continue the testApp::setup() function body and compute the optical flow using Farneback's method. Currently, it is among the most stable optical flow algorithms in OpenCV. The resulting optical flow, flow, is held as a two-channel image, so we split it into the two separate images flowX and flowY that we declared earlier:

  Mat img1( gray1.getCvImage() );  //Create OpenCV images
  Mat img2( gray2.getCvImage() );
  Mat flow;                        //Image for flow
  //Computing optical flow
  calcOpticalFlowFarneback( img1, img2, flow,
                            0.7, 3, 11, 5, 5, 1.1, 0 );
  //Split flow into separate images
  vector<Mat> flowPlanes;
  split( flow, flowPlanes );
  //Copy float planes to ofxCv images flowX and flowY
  IplImage iplX( flowPlanes[0] );
  flowX = &iplX;
  IplImage iplY( flowPlanes[1] );
  flowY = &iplY;

To improve the sensitivity of detecting larger motions between images, it is desirable to smooth the images before computing optical flow, especially when the input images are binary or have hard edges.
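For instance, you could blur img1 and img2 before calling calcOpticalFlowFarneback (OpenCV provides GaussianBlur for this). As a self-contained illustration of what smoothing means, here is a minimal 3x3 box blur in plain C++ (our own sketch, with clamped borders):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Minimal 3x3 box blur with clamped borders, for a row-major float
// image of size w x h. Illustration only; in the example you would
// rather blur the images with an OpenCV function before computing flow.
std::vector<float> boxBlur3(const std::vector<float>& src, int w, int h) {
    std::vector<float> dst(src.size());
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float sum = 0.0f;
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int cx = std::min(std::max(x + dx, 0), w - 1);
                    int cy = std::min(std::max(y + dy, 0), h - 1);
                    sum += src[cx + w * cy];  //clamped neighbor
                }
            }
            dst[x + w * y] = sum / 9.0f;  //average of the 3x3 window
        }
    }
    return dst;
}
```

Blurring spreads hard edges over several pixels, which gives the gradient-based flow algorithm something to latch onto when the motion is larger than an edge's width.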

In testApp::draw(), we draw the original images and then draw the optical flow as blue lines. For this purpose, we access the optical flow values:

float *flowXPixels = flowX.getPixelsAsFloats();
float *flowYPixels = flowY.getPixelsAsFloats();

Now let's check the optical flow. Run the project.

Note

If you run the project, it might crash with an error in the line flowX = &iplX if you haven't fixed the small bug in ofxOpenCv yet. Fix it as described in the information box in the Using OpenCV functions section.

At the top of the screen, you will see the first image with the optical flow overlaid on it, and the second image just for reference:

Video morphing example

Note that, in general, the optical flow is computed correctly. Now let's see how to morph using the computed optical flow.

Using optical flow for morphing

Morphing will be implemented as warping using the remap() function, discussed in the Geometrical transformations of images section. So, we need to construct the ofxCvFloatImage images mapX and mapY, which specify how to warp along the x and y axes. For this purpose, we will use the optical flow and the morphing value morphValue:

mapX.allocate( w, h );    //w and h are the size of the gray1 image
mapY.allocate( w, h );
//Get pointers to pixels data
float *flowXPixels = flowX.getPixelsAsFloats();
float *flowYPixels = flowY.getPixelsAsFloats();
float *mapXPixels = mapX.getPixelsAsFloats();
float *mapYPixels = mapY.getPixelsAsFloats();
for (int y=0; y<h; y++) {
  for (int x=0; x<w; x++) {
      int i = x + w * y;	//index
      mapXPixels[ i ] = x + flowXPixels[ i ] * morphValue;
      mapYPixels[ i ] = y + flowYPixels[ i ] * morphValue;
  }
}
//Notify that pixel values were changed
mapX.flagImageChanged();
mapY.flagImageChanged();

Now we can perform the warping. The most important thing here is that our mapping (mapX, mapY) is a direct mapping, whereas the remap() function uses an inverse mapping. So, we invert it using our own function inverseMapping( mapX, mapY ); see the function definition in the project's code. Then, for warping, we just need to resize the mappings to the original images' size and perform remap() as follows:

  //bigMapX and bigMapY have type ofxCvFloatImage
  int W = color1.width;
  int H = color1.height;
  bigMapX.allocate( W, H );
  bigMapY.allocate( W, H );
  bigMapX.scaleIntoMe( mapX, CV_INTER_LINEAR );
  bigMapY.scaleIntoMe( mapY, CV_INTER_LINEAR );
  multiplyByScalar( bigMapX, 1.0 * W / w );
  multiplyByScalar( bigMapY, 1.0 * H / h );

  //Do warping
  morph = color1;
  morph.remap( bigMapX.getCvImage(), bigMapY.getCvImage() );
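The two key operations here, inverting the direct mapping and the per-pixel lookup performed by remap(), can be illustrated in 1D. This is our own plain C++ sketch, not the project's inverseMapping() or OpenCV's remap():

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// directMap[i] tells where source pixel i lands; remap() instead needs,
// for every destination pixel j, where to fetch it from. A simple way
// to invert is to forward-splat the direct map onto an identity map.
std::vector<float> invertMap(const std::vector<float>& directMap) {
    int n = (int)directMap.size();
    std::vector<float> inverseMap(n);
    for (int j = 0; j < n; j++) inverseMap[j] = (float)j;  //identity
    for (int i = 0; i < n; i++) {
        int j = (int)std::lround(directMap[i]);  //pixel i lands near j,
        if (j >= 0 && j < n) inverseMap[j] = (float)i;  //so fetch j from i
    }
    return inverseMap;
}

// 1D analogue of remap(): dst[j] = src[ map[j] ], with linear
// interpolation for fractional coordinates and clamping at the borders.
std::vector<float> remap1D(const std::vector<float>& src,
                           const std::vector<float>& map) {
    int n = (int)src.size();
    std::vector<float> dst(map.size());
    for (size_t j = 0; j < map.size(); j++) {
        float m = std::min(std::max(map[j], 0.0f), (float)(n - 1));
        int i0 = (int)std::floor(m);
        int i1 = std::min(i0 + 1, n - 1);
        float t = m - (float)i0;
        dst[j] = src[i0] * (1.0f - t) + src[i1] * t;
    }
    return dst;
}
```

For example, a direct map that shifts every pixel one position to the right inverts to a map that fetches each destination pixel from one position to the left. The roughness of this kind of splat-based inversion is one source of the artifacts discussed below.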

Let's see how it works. Run the project and look at the bottom of the screen. You will see the result of morphing between the first and second images. Move the mouse from left to right to change the morphing parameter. You will see how the hands in the first image continuously change their shape into the shape of the hands in the second image:

Using optical flow for morphing

The morphing result is quite good, but you can see some undesirable artifacts in the resulting image. There are several reasons for these: the decimation of the images before computing optical flow, mistakes in the resulting optical flow, and the roughness of the inverseMapping() function. However, this method is automatic, so it can be used in interactive projects for creating strange and interesting effects.

Tip

In our example, we have morphed just the geometry of the first image to match the shape of the second image. For morphing the colors of the objects, you need to morph the second image too, and then blend the colors using the morphValue parameter.

Applying morphing to another image

Having computed the optical flow, you can use it for morphing any other image, not necessarily the first input image. This is a very interesting effect: you will see how morphing reveals the structure of the original moving hands in this arbitrary image. Try it in our example by pressing 2. To return to the original morphing view, press 1. You will see the result of morphing applied to a checkerboard image:

Applying morphing to another image

Tip

In this example, we apply the optical flow to shift the pixels of the fixed checkerboard image. However, you can apply this transformation to the warped image obtained at the previous warping step. Then you will see a "smudge" of the original image, which looks like the datamoshing effect widely used in "glitch" videos.
