Contours and connected components

Contour extraction operations can be considered halfway between feature extraction and segmentation, since a binary image is produced in which image contours are separated from other homogeneous regions. Contours will typically correspond to object boundaries.

While a number of simple methods can detect edges in images (for example, the Sobel and Laplacian filters), the Canny method is a more robust algorithm for this task.

Note

This method uses two thresholds to decide whether a pixel is an edge. In what is called a hysteresis procedure, a lower and an upper threshold are used (see http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html). Since OpenCV already includes a good example of the Canny edge detector (in [opencv_source_code]/samples/cpp/edge.cpp), we do not include one here (but see the following floodFill example). Instead, we will go on to describe other highly useful functions based on detected edges.

To detect straight lines, the Hough transform is the classical method. While it is available in OpenCV (in the functions HoughLines and HoughLinesP; see, for example, [opencv_source_code]/samples/cpp/houghlines.cpp), the more recent Line Segment Detector (LSD) method is generally more robust. LSD works by finding alignments of pixels with high gradient magnitude, within a given alignment tolerance. It has been shown to be more robust and faster than the best previous Hough-based detector (the Progressive Probabilistic Hough Transform).

The LSD method is not available in the 2.4.9 release of OpenCV, although, at the time of this writing, it is already available in the source code repository on GitHub and will be included in Version 3.0. A short example in the library ([opencv_source_code]/samples/cpp/lsd_lines.cpp) covers this functionality; however, we will provide an additional example that shows some different features.

Note

To test the latest source code available in GitHub, go to https://github.com/itseez/opencv and download the library code as a ZIP file. Then, unzip it to a local folder and follow the same steps described in Chapter 1, Getting Started, to compile and install the library.

The LSD detector is a C++ class. The function cv::Ptr<LineSegmentDetector> createLineSegmentDetector(int _refine=LSD_REFINE_STD, double _scale=0.8, double _sigma_scale=0.6, double _quant=2.0, double _ang_th=22.5, double _log_eps=0, double _density_th=0.7, int _n_bins=1024) creates an object of the class and returns a pointer to it. Note that several arguments define the detector created. Understanding them requires knowledge of the underlying algorithm, which is beyond the scope of this book; fortunately, the default values suffice for most purposes, so we refer the reader to the reference manual (for Version 3.0 of the library) for special cases. Having said that, the _scale parameter roughly controls the number of lines returned: the input image is automatically rescaled by this factor, and at lower resolutions, fewer lines are detected.

Note

The cv::Ptr<> type is a template class for wrapping pointers. It is available in the 2.x API to facilitate automatic deallocation using reference counting, and it is analogous to std::shared_ptr from C++11.

Detection itself is accomplished with the method LineSegmentDetector::detect(const InputArray _image, OutputArray _lines, OutputArray width=noArray(), OutputArray prec=noArray(), OutputArray nfa=noArray()). The first parameter is the input image, while the _lines array will be filled with a (STL) vector of Vec4i objects, each storing the (x, y) location of one end of a line followed by the location of the other end. The optional parameters width, prec, and nfa return additional information about the lines detected; the first one, width, contains the estimated line widths. Lines can be drawn with the convenient (yet simple) method LineSegmentDetector::drawSegments(InputOutputArray _image, InputArray lines), which draws the lines on top of the input image, _image.

The following lineSegmentDetector example shows the detector in action:

#include "opencv2/opencv.hpp"
#include <iostream>

using namespace std;
using namespace cv;

vector<Vec4i> lines;
vector<float> widths;
Mat input_image, output;

inline float line_length(const Point &a, const Point &b)
{
    return sqrt((float)((b.x-a.x)*(b.x-a.x) + (b.y-a.y)*(b.y-a.y)));
}

void MyDrawSegments(Mat &image, const vector<Vec4i> &lines, const vector<float> &widths,
                    const Scalar &color, const float length_threshold)
{
    Mat gray;
    if (image.channels() == 1)
    {
        gray = image;
    }
    else if (image.channels() == 3)
    {
        cvtColor(image, gray, COLOR_BGR2GRAY);
    }

    // Create a 3 channel image in order to draw colored lines
    std::vector<Mat> planes;
    planes.push_back(gray);
    planes.push_back(gray);
    planes.push_back(gray);

    merge(planes, image);

    // Draw segments whose length exceeds the given threshold
    for (size_t i = 0; i < lines.size(); ++i)
    {
        const Vec4i& v = lines[i];
        Point a(v[0], v[1]);
        Point b(v[2], v[3]);
        if (line_length(a, b) > length_threshold)
        {
            int thickness = cvRound(widths[i]); // estimated line width
            line(image, a, b, color, thickness < 1 ? 1 : thickness);
        }
    }
}


void thresholding(int threshold, void*)
{
    input_image.copyTo(output);
    MyDrawSegments(output, lines, widths, Scalar(0, 255, 0), threshold);
    imshow("Detected lines", output);
}

int main(int argc, char** argv)
{
    input_image = imread("building.jpg", IMREAD_GRAYSCALE);
    if (input_image.empty())
    {
        cout << "Could not load building.jpg" << endl;
        return -1;
    }

    // Create an LSD detector object
    Ptr<LineSegmentDetector> ls = createLineSegmentDetector();

    // Detect the lines
    ls->detect(input_image, lines, widths);

    // Create window to show found lines
    output=input_image.clone();
    namedWindow("Detected lines", WINDOW_AUTOSIZE);

    // Create trackbar for line length threshold
    int threshold_value=50;
    createTrackbar( "Line length threshold", "Detected lines", &threshold_value, 1000, thresholding );
    thresholding(threshold_value, 0);

    waitKey();
    return 0;
}

The preceding example creates a window with the source image, which is loaded in grayscale, and mimics the drawSegments method. However, it allows you to impose a segment length threshold and specify the line color (drawSegments draws all the lines in red). Besides, lines are drawn with the thickness estimated by the detector. A trackbar associated with the main window controls the length threshold. The following screenshot shows an output of the example:


Output of the lineSegmentDetector example

Circles can be detected using the function HoughCircles(InputArray image, OutputArray circles, int method, double dp, double minDist, double param1=100, double param2=100, int minRadius=0, int maxRadius=0). The first parameter is a grayscale input image. The output parameter, circles, will be filled with a vector of Vec3f objects, each representing the (center_x, center_y, radius) components of a circle. The last two parameters specify the minimum and maximum search radii, so they affect the number of circles detected. OpenCV already contains a straightforward example of this function, [opencv_source_code]/samples/cpp/houghcircles.cpp; it detects circles with a radius between 1 and 30 and displays them on top of the input image.

Segmentation algorithms typically form connected components, that is, regions of connected pixels in a binary image. In the following section, we show how to obtain connected components and their contours from a binary image. Contours can be retrieved using the now classical findContours function. Examples of this function are available in the reference manual (also see the [opencv_source_code]/samples/cpp/contours2.cpp and [opencv_source_code]/samples/cpp/segment_objects.cpp examples). Also note that in the 3.0 release of OpenCV (and in the code already available in the GitHub repository), the ShapeDistanceExtractor class allows you to compare contours using the Shape Context descriptor (an example is available at [opencv_source_code]/samples/cpp/shape_example.cpp) and the Hausdorff distance. This class lives in a new module of the library called shape. Shape transformations are also available through the ShapeTransformer class (for example, [opencv_source_code]/samples/cpp/shape_transformation.cpp).

The new functions connectedComponents and connectedComponentsWithStats retrieve connected components. These functions will be part of the 3.0 release and are already available in the GitHub repository. OpenCV includes an example that shows how to use the first one, [opencv_source_code]/samples/cpp/connected_components.cpp.

Note

Connected components labeling functionality was actually removed in previous OpenCV 2.4.x versions and has now been added again.

We provide another example (connectedComponents) that shows how to use the second function, int connectedComponentsWithStats(InputArray image, OutputArray labels, OutputArray stats, OutputArray centroids, int connectivity=8, int ltype=CV_32S), which provides useful statistics about each connected component. These statistics are accessed via stats(label, column), where column can be one of the following:

CC_STAT_LEFT: The leftmost (x) coordinate, that is, the inclusive start of the bounding box in the horizontal direction

CC_STAT_TOP: The topmost (y) coordinate, that is, the inclusive start of the bounding box in the vertical direction

CC_STAT_WIDTH: The horizontal size of the bounding box

CC_STAT_HEIGHT: The vertical size of the bounding box

CC_STAT_AREA: The total area (in pixels) of the connected component

The following is the code for the example:

#include <opencv2/core/utility.hpp>
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
#include <sstream>

using namespace cv;
using namespace std;

Mat img;
int threshval = 227;

static void on_trackbar(int, void*)
{
    Mat bw = threshval < 128 ? (img < threshval) : (img > threshval);
    Mat labelImage(img.size(), CV_32S);

    Mat stats, centroids;
    int nLabels = connectedComponentsWithStats(bw, labelImage, stats, centroids);

    // Show connected components with random colors
    std::vector<Vec3b> colors(nLabels);
    colors[0] = Vec3b(0, 0, 0); // background stays black
    for(int label = 1; label < nLabels; ++label){
        colors[label] = Vec3b(rand() & 255, rand() & 255, rand() & 255);
    }
    Mat dst(img.size(), CV_8UC3);
    for(int r = 0; r < dst.rows; ++r){
        for(int c = 0; c < dst.cols; ++c){
            int label = labelImage.at<int>(r, c);
            Vec3b &pixel = dst.at<Vec3b>(r, c);
            pixel = colors[label];
        }
    }
    // Text labels with the area of each cc (except background)
    for (int i = 1; i < nLabels; i++)
    {
        int area = stats.at<int>(i, CC_STAT_AREA);
        Point org(centroids.at<double>(i, 0), centroids.at<double>(i, 1));
        std::ostringstream buff;
        buff << area;
        putText(dst, buff.str(), org, FONT_HERSHEY_COMPLEX_SMALL, 1,
                Scalar(255, 255, 255), 1);
    }

    imshow( "Connected Components", dst );
}

int main( int argc, const char** argv )
{
    img = imread("stuff.jpg", IMREAD_GRAYSCALE);
    if (img.empty())
    {
        cout << "Could not load stuff.jpg" << endl;
        return -1;
    }
    namedWindow( "Connected Components", WINDOW_AUTOSIZE );
    createTrackbar( "Threshold", "Connected Components", &threshval, 255, on_trackbar );
    on_trackbar(threshval, 0);

    waitKey(0);
    return 0;
}

The preceding example creates a window with an associated trackbar that controls the threshold applied to the source image. Inside the on_trackbar function, connectedComponentsWithStats is called on the result of the thresholding. This is followed by two sections of code: the first fills the pixels that belong to each connected component with a random color, using the labels in labelImage (the same label image is also produced by connectedComponents), and the second displays the area of each component as text positioned at its centroid. The following screenshot shows the output of the example:


The output of the connectedComponents example
