Processing multiple frames

Until now, we have seen examples of modifying and drawing video frames just like single images. Deeper processing involves analyzing several frames together.

If we compare two successive frames, we can find the direction and velocity of motion for each pixel of the frame. Such a vector field is called optical flow. It has many uses in video, graphics, and computer vision. Computing optical flow is a nontrivial computer vision task, and we will learn how to do it in Chapter 9, Computer Vision with OpenCV.

Another idea is to buffer a number of frames and then draw parts of those frames in different parts of the screen. The famous video effect called slit-scan, or time displacement, is based on this principle. In this effect, the horizontal lines of the resulting image are built from horizontal lines of several successive frames. Often, the bottom lines are taken from older frames and the top lines from the newest frames. So if an object moved horizontally in the original video, in the processed video you see its motion propagate slowly from the top to the bottom of the frame. A rotating object, such as a spinning dancer, will look like a twisted spiral. (See the screenshot in the Horizontal slit-scan section.)

Tip

The origins of the slit-scan effect lie in mechanical slit-photography technology developed in the 19th century. Nowadays, slit-scan is done with computers, and it is used in cinematography and art.

Slit-scan is implemented in the openFrameworks addon ofxSlitScan. Plugins for this effect also exist in video editors such as Adobe After Effects.

Radial slit-scan example

Here we consider the implementation of a circular version of the slit-scan effect, which can be called radial slit-scan. The mouse position defines the center, where a portion of the newest frame is drawn. All other pixels (x, y) are filled using older frames, where a frame's "oldness" depends on the distance between the mouse position and (x, y).

This example is based on the emptyExample project in openFrameworks. Before running it, copy the handsTrees.mov file into the bin/data folder of your project.

Note

This is example 05-Video/04-VideoSlitScan.

In the testApp.h file, inside the testApp class declaration, add declarations of a video player object video, a frames buffer frames, an output image image, and some other objects:

ofVideoPlayer video;           //Video player object

deque<ofPixels> frames;        //Frames buffer
int N;                         //Frames buffer size

//Pixels array for constructing output image
ofPixels imagePixels;
ofImage image;                 //Output image

//Main processing function, which computes
//the color of pixel (x, y) using the frames buffer
ofColor getSlitPixelColor( int x, int y );

You will note that the buffer of frames is declared here as deque<ofPixels> frames. The deque class is a C++ Standard Template Library container that holds items of any type; in our case, the type is ofPixels. You can think of frames as a dynamic array that can change its size during runtime. It provides indexed access to any item, such as frames[i], and, most importantly, it efficiently adds and removes items at both of its ends.

The deque class is very similar to the popular vector class of the C++ Standard Template Library. The vector class can also be resized and provides indexed access to its items, and it is even a little faster than deque. However, vector is slow at adding and removing elements at its front, and fast front insertion is crucial for our example. (For an example of using vector, see the Using image sequence example section.)
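
To see how deque behaves in practice, here is a tiny standalone sketch (a toy illustration using int items instead of ofPixels; it is not part of the example project):

#include <deque>
#include <iostream>
using namespace std;

int main() {
  deque<int> buffer;            //Toy stand-in for deque<ofPixels>
  int N = 3;                    //Buffer capacity

  for (int frame = 1; frame <= 5; frame++) {
    buffer.push_front( frame ); //Add the newest item at index 0
    if ( buffer.size() > N ) {
      buffer.pop_back();        //Remove the oldest item
    }
  }

  //Indexed access works just like in vector
  int n = buffer.size();        //Buffer now holds [5, 4, 3]
  for (int i = 0; i < n; i++) {
    cout << buffer[i] << " ";
  }
  cout << endl;                 //Prints: 5 4 3
  return 0;
}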

In the testApp.cpp file, the setup() function just loads and plays the video, and the draw() function draws the processed image on the screen:

void testApp::setup(){
  video.loadMovie( "handsTrees.mov" );  //Load video file

  //Play the video at 1/4 of its normal speed
  //to see the slit-scan effect better
  video.setSpeed( 0.25 );

  video.play();  //Start playing the video

  N = 150;       //Set buffer size
}

//--------------------------------------------------------------
void testApp::draw(){
  ofBackground(255, 255, 255);         //Set white background

  //Draw image
  ofSetColor( 255, 255, 255 );
  image.draw(0,0);
}

Let's consider the first part of the update() function. It gradually reads frames from the movie and stores the last N frames in the frames buffer, in such a way that newer frames have smaller indices:

void testApp::update(){
  video.update();            //Decode the new frame if needed

  //Do computing only if a new frame was obtained
  if ( video.isFrameNew() ) {
      //Push the new frame to the beginning of the frame list
      frames.push_front( video.getPixelsRef() );

      //If number of buffered frames > N,
      //then pop the oldest frame
      if ( frames.size() > N ) {
          frames.pop_back();
      }
  }

We use frames.push_front( video.getPixelsRef() ) to add the pixel array of the current video frame to the beginning of the buffer, and frames.pop_back() to remove the oldest frame. These two operations ensure that we always have the newest frame in frames[0], plus not more than N - 1 older frames. (When the project starts, the frames buffer is empty. Over time, its size gradually increases and then stays equal to N.)
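
For example, with N = 150, once the first 150 frames have arrived the buffer is full: frames[0] always holds the newest frame, frames[149] holds the oldest, and each newly decoded frame pushes the oldest one out of the buffer.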

The second part of the update() function computes the output image image using the getSlitPixelColor( x, y ) function, which will be discussed later.

  //It is possible that the video player has not finished decoding
  //the first frame at the first testApp::update() call,
  //so we need to check whether there are any frames
  if ( !frames.empty() ) {
      //Now constructing the output image in imagePixels

      //If imagePixels is not initialized yet, then initialize
      //it by copying from any frame.
      //This is the simplest way to create a pixel array
      //of the same size and type
      if ( !imagePixels.isAllocated() ) {
          imagePixels = frames[0];
      }

      //Get the video frame size to simplify the formulas
      int w = frames[0].getWidth();
      int h = frames[0].getHeight();

      //Scan all the pixels
      for (int y=0; y<h; y++) {
          for (int x=0; x<w; x++) {

              //Get "slit" pixel color
              ofColor color = getSlitPixelColor( x, y );

              //Set pixel to image pixels
              imagePixels.setColor( x, y, color );
          }
      }
      //Set new pixels values to the image
      image.setFromPixels( imagePixels );
  }
}

The main processing function of the example is getSlitPixelColor( x, y ). It computes and returns the color of pixel (x, y) corresponding to the radial slit-scan image. The function does its work using the frames buffer frames and the current mouse position (mouseX, mouseY):

ofColor testApp::getSlitPixelColor( int x, int y ){
  //Calculate the distance from (x,y) to the current 
  //mouse position mouseX, mouseY

  float dist = ofDist( x, y, mouseX, mouseY );

  //Main formula for connecting (x,y) with frame number
  float f = dist / 8.0;
  //Here "frame number" is computed as a float value.
  //We need it to get a smooth result
  //by interpolating colors later

  //Compute two frame numbers surrounding f
  int i0 = int( f );
  int i1 = i0 + 1;

  //Compute weights of the frames i0 and i1
  float weight0 = i1 - f;
  float weight1 = 1 - weight0;

  //Limit the frame numbers to the range from 0 to n = frames.size()-1
  int n = frames.size() - 1;
  i0 = ofClamp( i0, 0, n );
  i1 = ofClamp( i1, 0, n );

  //Getting the frame colors
  ofColor color0 = frames[ i0 ].getColor( x, y );
  ofColor color1 = frames[ i1 ].getColor( x, y );

  //Interpolate colors - this is the function result
  ofColor color = color0 * weight0 + color1 * weight1;

  return color;
}

This example is quite CPU-intensive, so we suggest running it in the Release mode of your development environment. Of course, it runs in the Debug mode too, but the performance can be slow.

Tip

To improve performance further, you can implement the algorithm using a fragment shader; see the Processing several images section in Chapter 8, Using Shaders.

Run this example and place the mouse cursor somewhere in the central area of the video frame. You will see radial waves of motion centered at the mouse position.

Now move the mouse cursor from the left hand to the right hand and back. You will see how your movement changes the space-time distribution of this interactive picture. When you move the mouse to some point, that part of the image shows the "future", and the other parts of the image gradually recede into the "past", with respect to the video. This is simple to understand from the algorithmic point of view: the closer a pixel is to the mouse position, the newer the frame used for its color. An example of the resulting frame is shown in the following screenshot. The mouse cursor points at the center of the right hand, so this region is undistorted:

(Screenshot: Radial slit-scan example)

The most important function of the example is getSlitPixelColor( x, y ), which returns the color of pixel (x, y) computed from the colors of the buffered frames. The main formula is:

float f = dist / 8.0;

It computes the desired frame number used to get the color of pixel (x, y). This number is equal to the distance between the pixel and the mouse position, divided by 8.0. If you change the constant 8.0 to another value, you will notice a change in the speed of the radial wave.
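
For example, doubling the constant is a quick experiment to try (an illustrative variant, not code from the example project):

float f = dist / 16.0;  //A larger constant makes the radial wave
                        //travel outward faster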

Horizontal slit-scan

Replace the main formula in getSlitPixelColor( x, y ) with the following line:

float f = y / 8.0;

It gives "classical" horizontal slit-scan effect (which is independent of the mouse position and hence is not interactive). The example of using this formula is shown in the following screenshot. You can observe the specific twisting of the hands and fingers.

(Screenshot: Horizontal slit-scan)

Play with the formula and try other slit-scan effects by yourself.
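
For instance, the following two variants are illustrative sketches you could try (they are not from the original project). The first gives a vertical slit-scan; the second fixes the center of the radial wave at the middle of the frame instead of the mouse position. Here w and h denote the frame dimensions, which inside getSlitPixelColor( x, y ) you can obtain as frames[0].getWidth() and frames[0].getHeight():

float f = x / 8.0;                         //Vertical slit-scan

float f = ofDist( x, y, w/2, h/2 ) / 8.0;  //Radial wave from the frame center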

Discussing color interpolation

The last important thing to discuss in this example is color interpolation. Notice that we compute the frame number as a float, not as an integer value:

float f = dist / 8.0;

The reason for this is our desire to visually smooth the borders between frames and get a better result. (Check how the borders look when truncating: float f = int( dist / 8.0 );.) We achieve the smoothing by interpolating colors between the two successive frames with numbers i0 and i1 surrounding f:

int i0 = int( f );
int i1 = i0 + 1;

Then we compute weights for these frame numbers in such a way that the sum of weight0 and weight1 equals 1. If f is closer to i0 than to i1, then weight0 is greater than weight1, and vice versa:

  float weight0 = i1 - f;
  float weight1 = 1 - weight0;

The correspondence between f, i0, i1, and weights is shown in the following diagram:

(Diagram: correspondence between f, i0, i1, and the weights)
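
For example, if f = 3.7, then i0 = 3 and i1 = 4, so weight0 = 4 - 3.7 = 0.3 and weight1 = 0.7. The resulting color is hence a blend of 30 percent of the color of frame 3 and 70 percent of the color of frame 4; the nearer frame contributes more.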

Finally, we construct the resulting color by interpolating colors of frames i0 and i1 using weights:

ofColor color = color0 * weight0 + color1 * weight1;

Tip

Despite the color interpolation, you may notice "interlacing-like" artifacts in the resulting video. The reason is that we show continuous space-time motion in the image using frames captured only at discrete moments of time. To reduce this "interlacing-like" effect, we would need to shift the pixels slightly during color interpolation, using the optical flow between frames i0 and i1. You will be able to construct such an algorithm after learning about optical flow in Chapter 9, Computer Vision with OpenCV.
