DescriptorMatcher
is an abstract base class for matching keypoint descriptors that, as happens with DescriptorExtractor
, makes programs more flexible than using the matchers directly. With the Ptr<DescriptorMatcher> DescriptorMatcher::create(const string& descriptorMatcherType)
function, we can create a descriptor matcher of the desired type. The supported types are BruteForce (using the L2 distance), BruteForce-L1, BruteForce-Hamming, BruteForce-Hamming(2), and FlannBased.
The void DescriptorMatcher::match(InputArray queryDescriptors, InputArray trainDescriptors, vector<DMatch>& matches, InputArray mask=noArray())
and void DescriptorMatcher::knnMatch(InputArray queryDescriptors, InputArray trainDescriptors, vector<vector<DMatch>>& matches, int k, InputArray mask=noArray(), bool compactResult=false)
functions give the best k matches for each descriptor, k being 1 for the first function.
The void DescriptorMatcher::radiusMatch(InputArray queryDescriptors, InputArray trainDescriptors, vector<vector<DMatch>>& matches, float maxDistance, InputArray mask=noArray(), bool compactResult=false)
function finds, for each query descriptor, all the matches that are not farther than the specified distance. The major drawback of this method is that the magnitude of this distance is not normalized; it depends on the feature extractor and descriptor used.
SURF descriptors belong to the family of oriented-gradient descriptors. They encode statistical knowledge about the geometric shapes present in the patch (via histograms of oriented gradients and Haar-like features). They are considered a more efficient substitute for SIFT. They are among the best-known multiscale feature description approaches, and their accuracy has been widely tested. They have two main drawbacks though:
There is a common pipeline in every descriptor matching application that uses the components explained earlier in this chapter. It performs the following steps:
The following matchingSURF
example follows this pipeline:
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/nonfree/nonfree.hpp"

using namespace std;
using namespace cv;

int main( int argc, char** argv )
{
    Mat img_orig = imread( argv[1], IMREAD_GRAYSCALE );
    Mat img_fragment = imread( argv[2], IMREAD_GRAYSCALE );
    if( img_orig.empty() || img_fragment.empty() )
    {
        cerr << "Failed to load images." << endl;
        return -1;
    }

    //Step 1: Detect keypoints using SURF Detector
    vector<KeyPoint> keypoints1, keypoints2;
    Ptr<FeatureDetector> detector = FeatureDetector::create("SURF");
    detector->detect(img_orig, keypoints1);
    detector->detect(img_fragment, keypoints2);

    //Step 2: Compute descriptors using SURF Extractor
    Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("SURF");
    Mat descriptors1, descriptors2;
    extractor->compute(img_orig, keypoints1, descriptors1);
    extractor->compute(img_fragment, keypoints2, descriptors2);

    //Step 3: Match descriptors using a FlannBased Matcher
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
    vector<DMatch> matches12;
    vector<DMatch> matches21;
    vector<DMatch> good_matches;
    matcher->match(descriptors1, descriptors2, matches12);
    matcher->match(descriptors2, descriptors1, matches21);

    //Step 4: Filter results using cross-checking
    for( size_t i = 0; i < matches12.size(); i++ )
    {
        DMatch forward = matches12[i];
        DMatch backward = matches21[forward.trainIdx];
        if( backward.trainIdx == forward.queryIdx )
            good_matches.push_back( forward );
    }

    //Draw the results
    Mat img_result_matches;
    drawMatches(img_orig, keypoints1, img_fragment, keypoints2,
                good_matches, img_result_matches);
    imshow("Matching SURF Descriptors", img_result_matches);
    waitKey(0);
    return 0;
}
The explanation of the code is given as follows. As we described earlier, following the application pipeline implies performing these steps:

1. Create the feature detector: Ptr<FeatureDetector> detector = FeatureDetector::create("SURF"). The SURF algorithm is used to detect the keypoints.
2. Create the descriptor extractor: Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("SURF"). The SURF algorithm is also used to compute the descriptors.
3. Create the matcher: Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased") creates a new matcher based on the FLANN algorithm, which is then used to match the descriptors in the following way: matcher->match(descriptors1, descriptors2, matches12).
4. Filter the results by cross-checking: a forward match (from the first image to the second) is kept only when the backward match of its train descriptor points back at the same query descriptor.
KAZE and AKAZE are novel descriptors included in the upcoming OpenCV 3.0. According to published tests, both outperform the previous detectors included in the library by improving repeatability and distinctiveness for common 2D image-matching applications. AKAZE is much faster than KAZE while obtaining comparable results, so if speed is critical in an application, AKAZE should be used.
The following matchingAKAZE
example matches descriptors of this novel algorithm:
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    Mat img_orig = imread( argv[1], IMREAD_GRAYSCALE );
    Mat img_cam = imread( argv[2], IMREAD_GRAYSCALE );
    if( !img_orig.data || !img_cam.data )
    {
        cerr << "Failed to load images." << endl;
        return -1;
    }

    //Step 1: Detect the keypoints using AKAZE Detector
    Ptr<FeatureDetector> detector = FeatureDetector::create("AKAZE");
    vector<KeyPoint> keypoints1, keypoints2;
    detector->detect( img_orig, keypoints1 );
    detector->detect( img_cam, keypoints2 );

    //Step 2: Compute descriptors using AKAZE Extractor
    Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("AKAZE");
    Mat descriptors1, descriptors2;
    extractor->compute( img_orig, keypoints1, descriptors1 );
    extractor->compute( img_cam, keypoints2, descriptors2 );

    //Step 3: Match descriptors using a BruteForce-Hamming Matcher
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
    vector<vector<DMatch> > matches;
    vector<DMatch> good_matches;
    matcher->knnMatch(descriptors1, descriptors2, matches, 2);

    //Step 4: Filter results using the ratio test
    float ratioT = 0.6f;
    for( size_t i = 0; i < matches.size(); i++ )
    {
        // Check the size first to avoid reading a second match that
        // does not exist, then apply the ratio test.
        if( matches[i].size() >= 2 &&
            matches[i][0].distance < ratioT * matches[i][1].distance )
        {
            good_matches.push_back(matches[i][0]);
        }
    }

    //Draw the results
    Mat img_result_matches;
    drawMatches(img_orig, keypoints1, img_cam, keypoints2,
                good_matches, img_result_matches);
    imshow("Matching AKAZE Descriptors", img_result_matches);
    waitKey(0);
    return 0;
}
The explanation of the code is given as follows. The first two steps are quite similar to the previous example; the feature detector and descriptor extractor are created through their common interfaces. We only change the string parameter passed to the create functions since, this time, the AKAZE algorithm is used.
The matcher is created by executing Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming")
. The matcher->knnMatch(descriptors1, descriptors2, matches, 2)
function computes the matches between the image descriptors. The last integer parameter is worth noting, as it is needed by the filtering executed afterwards. This filtering is called the ratio test; it compares the distance of the best match with the distance of the second-best match for each query descriptor. The best match is accepted only when its distance is lower than the second-best distance multiplied by a ratio, which can be set between 0 and 1. The closer the ratio is to 0, the stricter the test and the stronger the accepted correspondences.
In the following screenshot, we can see the output when matching a book cover in an image where the book appears rotated:
The following screenshot shows the result when the book does not appear in the second image: