Descriptor matchers

DescriptorMatcher is an abstract base class for matching keypoint descriptors that, as with DescriptorExtractor, makes programs more flexible than using matchers directly. With the Ptr<DescriptorMatcher> DescriptorMatcher::create(const string& descriptorMatcherType) function, we can create a descriptor matcher of the desired type, as shown in the sketch after the following list. These are the supported types:

  • BruteForce-L1: This is used for float descriptors. It uses the L1 distance, which is efficient and fast to compute.
  • BruteForce: This is used for float descriptors. It uses the L2 distance, which can be more accurate than L1 but requires more CPU time.
  • BruteForce-SL2: This is used for float descriptors. It uses the squared L2 distance, avoiding the square-root computation of L2, which requires high CPU usage.
  • BruteForce-Hamming: This is used for binary descriptors and calculates the Hamming distance between the compared descriptors.
  • BruteForce-Hamming(2): This is used for binary descriptors; it is a variant of the Hamming distance that operates on pairs of bits.
  • FlannBased: This is used for float descriptors. It is faster than brute force for large descriptor sets because it precomputes acceleration structures (as in database engines), at the cost of using more memory.
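
The following is a minimal sketch of how matchers are created through this factory method; the helper function createMatcherFor is our own illustrative name, not an OpenCV API:

#include "opencv2/features2d/features2d.hpp"

using namespace cv;

//Pick a matcher whose distance metric suits the descriptor type:
//binary descriptors (ORB, BRIEF, AKAZE) pair with Hamming distance,
//while float descriptors (SURF, SIFT) pair with L2 or FLANN
Ptr<DescriptorMatcher> createMatcherFor(bool binaryDescriptors)
{
    if (binaryDescriptors)
        return DescriptorMatcher::create("BruteForce-Hamming");
    return DescriptorMatcher::create("FlannBased");
}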

The void DescriptorMatcher::match(InputArray queryDescriptors, InputArray trainDescriptors, vector<DMatch>& matches, InputArray mask=noArray()) and void DescriptorMatcher::knnMatch(InputArray queryDescriptors, InputArray trainDescriptors, vector<vector<DMatch>>& matches, int k, InputArray mask=noArray(), bool compactResult=false) functions return the best k matches for each query descriptor, with k fixed at 1 for the first function.

The void DescriptorMatcher::radiusMatch(InputArray queryDescriptors, InputArray trainDescriptors, vector<vector<DMatch>>& matches, float maxDistance, InputArray mask=noArray(), bool compactResult=false) function finds, for each query descriptor, all the matches that are not farther than the specified distance. The major drawback of this method is that the magnitude of this distance is not normalized; it depends on the feature extractor and descriptor used.
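
As a brief sketch, the three matching functions can be called as follows, assuming that matcher, descriptors1 (query), and descriptors2 (train) have already been created, and noting that the maxDistance value of 100.0f is purely illustrative:

vector<DMatch> bestMatches;              //one best match per query descriptor
matcher->match(descriptors1, descriptors2, bestMatches);

vector<vector<DMatch> > knnMatches;      //up to 2 matches per query descriptor
matcher->knnMatch(descriptors1, descriptors2, knnMatches, 2);

vector<vector<DMatch> > radiusMatches;   //all matches closer than maxDistance
matcher->radiusMatch(descriptors1, descriptors2, radiusMatches, 100.0f);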

Tip

To get the best results, we recommend using a matcher whose distance metric suits the descriptor type. Although it is possible to mix binary descriptors with float-oriented matchers and vice versa, the results might be inaccurate.

Matching the SURF descriptors

SURF descriptors belong to the family of oriented-gradient descriptors. They encode statistical knowledge about the geometrical shapes present in the patch (via histograms of oriented gradients and Haar-like features). They are considered a more efficient substitute for SIFT. They are among the best-known multiscale feature description approaches, and their accuracy has been widely tested. They have two main drawbacks though:

  • They are patented
  • They are slower than binary descriptors

There is a common pipeline in every descriptor matching application that uses the components explained earlier in this chapter. It performs the following steps:

  1. Compute interest points in both images.
  2. Extract descriptors from the two generated interest point sets.
  3. Use a matcher to find connections between descriptors.
  4. Filter the results to remove bad matches.

The following is the matchingSURF example that follows this pipeline:

#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/nonfree/nonfree.hpp"

using namespace std;
using namespace cv;

int main( int argc, char** argv )
{
    if( argc != 3 )
    {
        cerr << "Usage: ./matchingSURF <image1> <image2>" << endl;
        return -1;
    }

    Mat img_orig = imread( argv[1], IMREAD_GRAYSCALE );
    Mat img_fragment = imread( argv[2], IMREAD_GRAYSCALE );
    if( img_orig.empty() || img_fragment.empty() )
    {
        cerr << " Failed to load images." << endl;
        return -1;
    }

    //Step 1: Detect keypoints using SURF Detector
    //Note: with OpenCV 2.4, the nonfree module must be initialized so
    //that the "SURF" algorithm is registered with the create() factory
    initModule_nonfree();

    vector<KeyPoint> keypoints1, keypoints2;
    Ptr<FeatureDetector> detector = FeatureDetector::create("SURF");

    detector->detect(img_orig, keypoints1);
    detector->detect(img_fragment, keypoints2);

    //Step 2: Compute descriptors using SURF Extractor
    Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("SURF");
    Mat descriptors1, descriptors2;
    extractor->compute(img_orig, keypoints1, descriptors1);
    extractor->compute(img_fragment, keypoints2, descriptors2);

    //Step 3: Match descriptors using a FlannBased Matcher
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
    vector<DMatch> matches12;
    vector<DMatch> matches21;
    vector<DMatch> good_matches;

    matcher->match(descriptors1, descriptors2, matches12);
    matcher->match(descriptors2, descriptors1, matches21);

    //Step 4: Filter results using cross-checking
    for( size_t i = 0; i < matches12.size(); i++ )
    {
        DMatch forward = matches12[i];
        //match() returns one DMatch per query descriptor in query order,
        //so matches21 can be indexed directly by the train index
        DMatch backward = matches21[forward.trainIdx];
        if( backward.trainIdx == forward.queryIdx )
            good_matches.push_back( forward );
    }

    //Draw the results
    Mat img_result_matches;
    drawMatches(img_orig, keypoints1, img_fragment, keypoints2, good_matches, img_result_matches);
    imshow("Matching SURF Descriptors", img_result_matches);
    waitKey(0);

    return 0;
}

The explanation of the code is as follows. As described earlier, following the application pipeline involves performing these steps:

  1. The first step to be performed is to detect interest points in the input images. In this example, the common interface is used to create a SURF detector with the line Ptr<FeatureDetector> detector = FeatureDetector::create("SURF").
  2. After that, the interest points are detected, and a descriptor extractor is created using the common interface Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create( "SURF"). The SURF algorithm is also used to compute the descriptors.
  3. The next step is to match the descriptors of both images, and for this purpose, a descriptor matcher is created using the common interface, too. The line, Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased"), creates a new matcher based on the Flann algorithm, which is used to match the descriptors in the following way:
    matcher->match(descriptors1, descriptors2, matches12)
  4. Finally, the results are filtered. Note that two matching sets are computed, as a cross-checking filter is performed afterwards. This filter only stores the matches that appear in both sets, that is, those found when using each image in turn as the query image (a reusable version of this filter is sketched after the figure). In the following screenshot, we can see the difference when a filter is used to discard matches:
    Results after matching SURF descriptors with and without a filter
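
The cross-check from step 4 can be factored into a standalone helper. The following is a hedged sketch; crossCheck is our own illustrative name, not an OpenCV API:

void crossCheck(const vector<DMatch>& matches12,
                const vector<DMatch>& matches21,
                vector<DMatch>& good_matches)
{
    for( size_t i = 0; i < matches12.size(); i++ )
    {
        const DMatch& forward = matches12[i];
        //match() returns one DMatch per query descriptor in query order,
        //so matches21 can be indexed directly by the train index
        const DMatch& backward = matches21[forward.trainIdx];
        //Keep the match only if both directions agree
        if( backward.trainIdx == forward.queryIdx )
            good_matches.push_back(forward);
    }
}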

Matching the AKAZE descriptors

KAZE and AKAZE are novel descriptors included in the upcoming OpenCV 3.0. According to published tests, both outperform the previous detectors included in the library by improving repeatability and distinctiveness for common 2D image-matching applications. AKAZE is much faster than KAZE while obtaining comparable results, so if speed is critical in an application, AKAZE should be used.

The following matchingAKAZE example matches descriptors computed with this novel algorithm:

#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
  if( argc != 3 )
  {
    cerr << "Usage: ./matchingAKAZE <image1> <image2>" << endl;
    return -1;
  }

  Mat img_orig = imread( argv[1], IMREAD_GRAYSCALE );
  Mat img_cam = imread( argv[2], IMREAD_GRAYSCALE );

  if( img_orig.empty() || img_cam.empty() )
  {
    cerr << " Failed to load images." << endl;
    return -1;
  }

  //Step 1: Detect the keypoints using AKAZE Detector
  Ptr<FeatureDetector> detector = FeatureDetector::create("AKAZE");
  std::vector<KeyPoint> keypoints1, keypoints2;

  detector->detect( img_orig, keypoints1 );
  detector->detect( img_cam, keypoints2 );

  //Step 2: Compute descriptors using AKAZE Extractor
  Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("AKAZE");
  Mat descriptors1, descriptors2;

  extractor->compute( img_orig, keypoints1, descriptors1 );
  extractor->compute( img_cam, keypoints2, descriptors2 );

  //Step 3: Match descriptors using a BruteForce-Hamming Matcher
  Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
  vector<vector<DMatch> > matches;
  vector<DMatch> good_matches;

  matcher->knnMatch(descriptors1, descriptors2, matches, 2);

  //Step 4: Filter results using the ratio test
  float ratioT = 0.6f;
  for( size_t i = 0; i < matches.size(); i++ )
  {
      //The second-nearest neighbour must exist before computing the ratio
      if( matches[i].size() >= 2 &&
          matches[i][0].distance < ratioT * matches[i][1].distance )
      {
          good_matches.push_back(matches[i][0]);
      }
  }

  //Draw the results
  Mat img_result_matches;
  drawMatches(img_orig, keypoints1, img_cam, keypoints2, good_matches, img_result_matches);
  imshow("Matching AKAZE Descriptors", img_result_matches);

  waitKey(0);

  return 0;
}

The explanation of the code is as follows. The first two steps are quite similar to the previous example; the feature detector and descriptor extractor are created through their common interfaces. We only change the string parameter passed to the create() factory method, as this time, the AKAZE algorithm is used.

Note

A BruteForce matcher that uses Hamming distance is used this time, as AKAZE is a binary descriptor.

It is created by executing Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming"). The matcher->knnMatch(descriptors1, descriptors2, matches, 2) function computes the matches between the image descriptors. The last integer parameter is worth noting, as it is required by the filtering step executed afterwards. This filter is called the Ratio Test, and it compares the distance of the best match against the distance of the second-best match. A match is considered good only when the best distance is smaller than the second-best distance multiplied by a ratio, which is set between 0 and 1. The closer this ratio is to 0, the stronger and more distinctive the accepted correspondences are. For example, with a ratio of 0.6, a best match at distance 30 is accepted only if the second-best match lies at a distance greater than 50.
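
The same filter can be written as a small reusable function. The following is a hedged sketch; ratioTest is our own illustrative name, not an OpenCV API:

void ratioTest(const vector<vector<DMatch> >& knnMatches, float ratio,
               vector<DMatch>& good_matches)
{
  for( size_t i = 0; i < knnMatches.size(); i++ )
  {
    //A match passes only when it is clearly better than its runner-up
    if( knnMatches[i].size() >= 2 &&
        knnMatches[i][0].distance < ratio * knnMatches[i][1].distance )
      good_matches.push_back(knnMatches[i][0]);
  }
}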

In the following screenshot, we can see the output when matching a book cover in an image where the book appears rotated:

Matching AKAZE descriptors in a rotated image

The following screenshot shows the result when the book does not appear in the second image:

Matching AKAZE descriptors when the train image does not appear
