Time for action – tracking people with Horn-Schunck optical flow

First, we are going to demonstrate the method described by Horn and Schunck. A good example of its usage can be found by typing vision.OpticalFlow in the search box at the top-right corner of your MATLAB window. The help page for this System object includes an example based on the viptraffic.avi video. In this example, we will show some alternative steps for the same process, using a different video as input.

Since we will be using the Computer Vision System Toolbox for the optical flow algorithms, we might as well use another one of the videos included in its demos. The video is called atrium.avi and shows several people walking in an atrium in arbitrary trajectories. Our goal is to estimate their motions. Since the methods for the optical flow we will use can be applied only to grayscale videos, we will also convert our frames to grayscale of type uint8. Here, we will try to estimate the motion between the 89th and 90th frames.
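The steps below rely on the toolbox's vision.OpticalFlow object, which hides the Horn-Schunck math. For intuition, here is a minimal pure-Python sketch of the Horn-Schunck iterative update, u ← ū − Ix(Ixū + Iyv̄ + It)/(α² + Ix² + Iy²) (and similarly for v); the forward-difference gradients and replicated borders are simplifying assumptions of this sketch, not the toolbox's implementation:

```python
def horn_schunck(I1, I2, alpha=0.1, n_iter=200):
    """Sketch of Horn-Schunck flow (u, v) from frame I1 to frame I2."""
    h, w = len(I1), len(I1[0])
    # Spatial and temporal gradients (simplified forward differences).
    Ix = [[I1[y][min(x + 1, w - 1)] - I1[y][x] for x in range(w)] for y in range(h)]
    Iy = [[I1[min(y + 1, h - 1)][x] - I1[y][x] for x in range(w)] for y in range(h)]
    It = [[I2[y][x] - I1[y][x] for x in range(w)] for y in range(h)]
    u = [[0.0] * w for _ in range(h)]
    v = [[0.0] * w for _ in range(h)]

    def mean4(f, y, x):
        # 4-neighbour average with replicated borders (the smoothness term).
        return (f[max(y - 1, 0)][x] + f[min(y + 1, h - 1)][x] +
                f[y][max(x - 1, 0)] + f[y][min(x + 1, w - 1)]) / 4.0

    for _ in range(n_iter):
        ub = [[mean4(u, y, x) for x in range(w)] for y in range(h)]
        vb = [[mean4(v, y, x) for x in range(w)] for y in range(h)]
        for y in range(h):
            for x in range(w):
                t = (Ix[y][x] * ub[y][x] + Iy[y][x] * vb[y][x] + It[y][x]) \
                    / (alpha ** 2 + Ix[y][x] ** 2 + Iy[y][x] ** 2)
                u[y][x] = ub[y][x] - Ix[y][x] * t
                v[y][x] = vb[y][x] - Iy[y][x] * t
    return u, v

# A toy ramp pattern shifted one pixel to the right between frames:
I1 = [[float(x) for x in range(8)] for _ in range(8)]
I2 = [[float(x) - 1.0 for x in range(8)] for _ in range(8)]
u, v = horn_schunck(I1, I2)
# Interior horizontal flow should converge close to +1 (rightward motion).
```

The key idea visible here is the trade-off controlled by α: the data term pulls each vector toward satisfying the brightness-constancy constraint, while the neighbour average smooths the field globally.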

  1. First, we will load our video using the VideoReader function. Before we do that, we clear our workspace:
    >> clear all;
    >> videoObj = VideoReader('atrium.avi'), % Open video
  2. Then, we must create a system object for motion estimation:
    >> opticalFlow = vision.OpticalFlow('ReferenceFrameDelay', 1,...
    'Method','Horn-Schunck',...
    'OutputValue', 'Horizontal and vertical components in complex form'),
  3. Now, it is time to start the frame-by-frame processing of our video and estimate the motion between the pair of frames 89 and 90.
    >> for i = 89:90 % For frames 89 and 90
    frame = read(videoObj,i); % Load one frame at a time
    temp = rgb2gray(frame); % Convert frame to grayscale
    im(:,:,i-88) = single(temp); % Convert frame to single (for calculations)
    of(:,:,i-88) = step(opticalFlow, im(:,:,i-88)); % Estimate optical flow
    end
  4. The optical flow result is in complex form. This means that the matrix holding it contains elements of the form x+yi. The real part, x, is the flow along the x axis and the imaginary part, y, is the flow along the y axis. We can isolate these components by using the real and imag functions.
    >> xMotion = real(of);
    >> yMotion = imag(of);
  5. The absolute value of each complex element gives the magnitude of the optical flow. This measure depicts how large the motion of a pixel is, without carrying any information about its direction. It is given by:
    >> absMotion = abs(of);
  6. At this point, it is a good idea to display the two consecutive frames side-by-side (we have to convert them back to uint8):
    >> subplot(2,2,1),imshow(uint8(im(:,:,1))),title('89th frame'),
    >> subplot(2,2,2),imshow(uint8(im(:,:,2))),title('90th frame'),
  7. In the bottom row of the figure, we will demonstrate their difference using a color composite image and a normalized image of the absolute optical flow values:
    >> subplot(2,2,3)
    >> imshowpair(im(:,:,1),im(:,:,2), 'ColorChannels','red-cyan'), 
    >> title('Composite Image (Red – Frame 89, Cyan – Frame 90)'),
    >> subplot(2,2,4)
    >> imshow(mat2gray(absMotion(:,:,2)))
    >> title('Normalized absolute optical flow value'),
  8. In order to depict the direction of the optical flow, we can use a system object that draws lines (and other shapes) on an image, called ShapeInserter. To use it, we have to first initialize its settings:
    >> shapeInserter = vision.ShapeInserter('Shape','Lines','BorderColor','Custom', 'CustomBorderColor', 255);
  9. Then, we have to create a matrix containing the coordinates of the motion vector lines we want to draw, with their magnitudes amplified by a factor of our choice (here, a factor of two). For this purpose, we will use a helper function called videooptflowlines:
    >> lines = videooptflowlines(of(:,:,2), 2);
  10. Now, we can draw the motion vector map on our image using the step method, and display our result:
    >> out =  step(shapeInserter, im(:,:,2), lines);
    >> figure,imshow(uint8(out))
  11. In the previous picture, we can observe that some vectors near or on the walking persons appear to be unnaturally long, and that some vectors on the background are also longer than they should be. These values are often called outliers. Let's fix this by setting values whose magnitudes fall above or below our thresholds to zero:
    >> of(abs(of)>20)=0;
    >> of(abs(of)<5)=0;
  12. Now, let's re-draw our result:
    >> lines = videooptflowlines(of(:,:,2), 2);
    >> out =  step(shapeInserter, im(:,:,2), lines);
    >> figure,imshow(uint8(out))

What just happened?

This example may seem like a little too much, a little too quickly. However, most of the steps it covers are quite simple to follow. Let's go through them one by one.

In step 1, we loaded our video and in step 2, we created a system object that will be used to estimate the optical flow using the Horn-Schunck method.

Step 3 contains the core of our optical flow estimation process. It can easily be altered to estimate the optical flow for all frame pairs in the video; however, here it was used to estimate the flow only for one frame pair (frames 89 and 90).

Steps 4 and 5 demonstrated the nature of the optical flow results. Since we used the 'Horizontal and vertical components in complex form' setting, the result was a matrix with complex values. These two steps showed how we can decompose it into three matrices: one with the horizontal motions (xMotion), one with the vertical motions (yMotion), and one with the absolute motions (absMotion).
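This complex-form decomposition can be mirrored with plain complex arithmetic in any language. A pure-Python sketch with toy values (not the real flow field):

```python
# Toy optical flow values in complex form, mirroring MATLAB's x + y*1i layout.
flow = [1.5 - 0.5j, 0.0 + 2.0j, -3.0 + 4.0j]

x_motion = [f.real for f in flow]    # horizontal component (MATLAB: real(of))
y_motion = [f.imag for f in flow]    # vertical component   (MATLAB: imag(of))
abs_motion = [abs(f) for f in flow]  # magnitude            (MATLAB: abs(of))

# abs(f) is sqrt(x^2 + y^2); for example, |-3 + 4i| = 5.
```

Storing both components in one complex matrix is simply a packing convenience; the magnitude drops the direction, which is why step 8 needs the raw complex values to draw vectors.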

In steps 6 and 7, we displayed our two frames, their composite color image, and the absolute optical flow values as a grayscale image.

Next, in steps 8 and 9, we created a shape inserter object and made a matrix with the coordinates of the motion vector lines we want it to draw. Then, in step 10, we drew the lines on the frame and displayed the result.

Since our resulting image contained a lot of outliers, in step 11 we filtered out values that were too large (magnitude over 20 pixels) or too small (magnitude under 5 pixels) from our optical flow result. Finally, in step 12, we repeated the drawing process described previously.
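The outlier filtering of step 11 is just a magnitude band-pass on the complex values; a pure-Python sketch with toy values:

```python
# Toy complex flow values; zero out magnitudes above 20 or below 5,
# mirroring of(abs(of)>20)=0 and of(abs(of)<5)=0 in MATLAB.
flow = [0.3 + 0.2j, 3.0 + 4.0j, 30.0 + 5.0j, 2.0 + 1.0j]
filtered = [f if 5 <= abs(f) <= 20 else 0j for f in flow]
# Only 3+4i (magnitude exactly 5) survives the band-pass.
```

Note that zeroing, rather than removing, the outliers keeps the matrix shape intact, so the filtered result can still be passed to videooptflowlines and the shape inserter unchanged.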

From the results of steps 10 and 12, we can make the following observations:

  • The motion vectors seem to be centered on the moving persons, but they also seem rather arbitrary. This makes it doubtful whether they could be used to reconstruct the first frame from the second one and the optical flow.
  • This particular optical flow estimation method does not produce entirely useful results, since many outliers clutter the final result.
  • The optical flow information seems very useful, especially in surveillance systems. As you can easily observe, it has detected the hidden person on the right of the frames.

Have a go hero – estimating optical flow using Lucas-Kanade

Now, it's your turn. Since we have covered the whole process in much detail, you can now repeat it for the Lucas-Kanade method. The only thing you have to change is the 'Method' setting in the optical flow system object. You may also want to experiment with different frames of the atrium video, or with other videos. If you do that, you may also need to tweak the amplification factor in step 9, or the thresholds in step 11.
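For intuition before you try it, Lucas-Kanade differs from Horn-Schunck by assuming constant flow within a small window and solving a per-window least-squares system, [ΣIx² ΣIxIy; ΣIxIy ΣIy²][u v]ᵀ = −[ΣIxIt ΣIyIt]ᵀ. A minimal pure-Python sketch for a single window (simplified forward-difference gradients; illustrative, not the toolbox's implementation):

```python
def lucas_kanade_window(I1, I2, y0, x0, r=1):
    """Sketch: one Lucas-Kanade flow vector for the window centred at (y0, x0)."""
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(y0 - r, y0 + r + 1):
        for x in range(x0 - r, x0 + r + 1):
            ix = I1[y][x + 1] - I1[y][x]  # forward-difference gradients
            iy = I1[y + 1][x] - I1[y][x]
            it = I2[y][x] - I1[y][x]      # temporal difference
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    # Solve the 2x2 normal equations by Cramer's rule;
    # det must be non-zero, i.e. the window must contain texture.
    det = sxx * syy - sxy * sxy
    u = (-sxt * syy + syt * sxy) / det
    v = (-syt * sxx + sxt * sxy) / det
    return u, v

# Toy textured pattern I1[y][x] = x*y, shifted one pixel right in frame 2:
I1 = [[float(x * y) for x in range(8)] for y in range(8)]
I2 = [[float((x - 1) * y) for x in range(8)] for y in range(8)]
u, v = lucas_kanade_window(I1, I2, 3, 3)
```

The window assumption is why Lucas-Kanade tends to produce fewer but more locally consistent vectors than Horn-Schunck, at the cost of failing in textureless regions where the 2x2 system becomes singular.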
