I am working on the following code:
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace std;
using namespace cv;

Mat src, grey;
int thresh = 10;
const char* windowName = "Contours";

void detectContours(int, void*);

int main()
{
    src = imread("C:/Users/Public/Pictures/Sample Pictures/Penguins.jpg");

    // Convert to grey scale
    cvtColor(src, grey, CV_BGR2GRAY);

    // Remove the noise
    cv::GaussianBlur(grey, grey, Size(3, 3), 0);

    // Create the window
    namedWindow(windowName);

    // Display the original image
    namedWindow("Original");
    imshow("Original", src);

    // Create the trackbar
    cv::createTrackbar("Thresholding", windowName, &thresh, 255, detectContours);
    detectContours(0, 0);

    waitKey(0);
    return 0;
}

void detectContours(int, void*)
{
    Mat canny_output, drawing;
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;

    // Detect edges using Canny
    cv::Canny(grey, canny_output, thresh, 2 * thresh);
    namedWindow("Canny");
    imshow("Canny", canny_output);

    // Find contours
    cv::findContours(canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    // Set the output image to black
    drawing = Mat::zeros(canny_output.size(), CV_8UC3);

    // Draw each contour in white
    for (size_t i = 0; i < contours.size(); i++)
    {
        cv::drawContours(drawing, contours, (int)i, Scalar(255, 255, 255), 1, 8, hierarchy, 0, Point());
    }
    imshow(windowName, drawing);
}
Theoretically, contours means detecting curves, and edge detection means detecting edges. In the code above, I have done edge detection using Canny and curve detection using findContours(). The resulting images are:

Canny image

Contours image

So, as you can see, there is no difference! What, then, is the actual difference between these two? In the OpenCV tutorials, only the code is given. I found an explanation of what contours are, but it does not address this issue.
Edge detection just gives the points where the image intensity changes drastically; these may or may not form a closed shape. The main objective of contour detection is to find a closed shape and draw the boundary of the object.
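To make that concrete, here is a minimal sketch (not code from this answer) contrasting the two outputs on a synthetic image, assuming an OpenCV 2.x/3.x-style C++ API; the Canny thresholds are arbitrary placeholders. Canny hands back another image of edge pixels, while findContours hands back point sequences describing closed boundaries:

#include <iostream>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main()
{
    // Synthetic input: a filled white disc on a black background.
    cv::Mat img = cv::Mat::zeros(200, 200, CV_8UC1);
    cv::circle(img, cv::Point(100, 100), 60, cv::Scalar(255), -1);

    // Canny produces another image: edge pixels set, everything else zero.
    cv::Mat edges;
    cv::Canny(img, edges, 50, 150);
    std::cout << "edge pixels: " << cv::countNonZero(edges) << std::endl;

    // findContours produces point sequences describing closed boundaries.
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(edges.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    std::cout << "contours found: " << contours.size()
              << ", points in first contour: "
              << (contours.empty() ? 0 : contours[0].size()) << std::endl;
    return 0;
}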
What is an image contour? Image contouring is the process of identifying the structural outlines of objects in an image, which in turn can help us identify the shape of the object.
Edge detection is an image processing technique for finding the boundaries of objects within images. It works by detecting discontinuities in brightness. Edge detection is used for image segmentation and data extraction in areas such as image processing, computer vision, and machine vision.
Edge detection is the process of finding outlines in an image, whatever they look like. Line detection finds line segments (sometimes by extension, other geometric figures such as circular arcs).
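For comparison, here is a minimal sketch of line detection as described here, using cv::HoughLinesP on a Canny edge map; the input file name and all thresholds are placeholder values, not anything taken from this answer:

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main()
{
    cv::Mat grey = cv::imread("input.jpg", 0);   // hypothetical input, loaded as greyscale
    if (grey.empty()) return -1;

    cv::Mat edges;
    cv::Canny(grey, edges, 50, 150);             // edge pixels first

    std::vector<cv::Vec4i> lines;                // each entry: x1, y1, x2, y2
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 30, 10);

    cv::Mat drawing = cv::Mat::zeros(edges.size(), CV_8UC3);
    for (size_t i = 0; i < lines.size(); i++)
        cv::line(drawing, cv::Point(lines[i][0], lines[i][1]),
                 cv::Point(lines[i][2], lines[i][3]), cv::Scalar(0, 255, 0), 2);

    cv::imshow("Line segments", drawing);
    cv::waitKey(0);
    return 0;
}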
Edges are computed as points that are extrema of the image gradient in the direction of the gradient. If it helps, you can think of them as the min and max points of a 1D function. The point is, edge pixels are a local notion: they just point out a significant difference between neighbouring pixels.
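As an illustration of that local notion, here is a small sketch (not from this answer) that computes the image gradient with cv::Sobel; pixels with large gradient magnitude are the edge candidates. The input file name is a placeholder:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main()
{
    cv::Mat grey = cv::imread("input.jpg", 0);   // hypothetical greyscale input
    if (grey.empty()) return -1;

    // Horizontal and vertical derivatives in float precision.
    cv::Mat gx, gy;
    cv::Sobel(grey, gx, CV_32F, 1, 0);
    cv::Sobel(grey, gy, CV_32F, 0, 1);

    // Gradient magnitude: large values mark strong local intensity changes.
    cv::Mat mag;
    cv::magnitude(gx, gy, mag);
    cv::convertScaleAbs(mag, mag);               // back to 8-bit for display

    cv::imshow("Gradient magnitude", mag);
    cv::waitKey(0);
    return 0;
}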
Contours are often obtained from edges, but they are aimed at being object contours. Thus, they need to be closed curves. You can think of them as boundaries (some image-processing algorithms and libraries call them that). When they are obtained from edges, you need to connect the edges in order to obtain a closed contour.
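One common way to connect broken edges into closed boundaries before extracting contours is a morphological closing; the sketch below assumes that approach (the answer does not prescribe a specific method), and the kernel size and thresholds are placeholders:

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main()
{
    cv::Mat grey = cv::imread("input.jpg", 0);   // hypothetical greyscale input
    if (grey.empty()) return -1;

    cv::Mat edges;
    cv::Canny(grey, edges, 50, 150);

    // Dilate-then-erode bridges small gaps so boundaries become closed curves.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::Mat closed;
    cv::morphologyEx(edges, closed, cv::MORPH_CLOSE, kernel);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(closed, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return 0;
}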
The main difference between finding edges and finding contours is that edge detection outputs a new image: in this new (edge) image the edges are highlighted. There are many algorithms for detecting edges; see, for example, the Wikipedia article on edge detection and its "See also" section.
For example, the Sobel operator gives smooth, "foggy" results. In your particular case, the catch is that you are using the Canny edge detector. This one goes a few steps further than other detectors: it runs additional edge-refinement steps. The output of the Canny detector is therefore a binary image, with 1 px wide lines in place of edges.
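A quick way to check that claim on your own image is to count the Canny output pixels that are neither 0 nor 255; the sketch below (with placeholder file name and thresholds) should report zero:

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main()
{
    cv::Mat grey = cv::imread("input.jpg", 0);   // hypothetical greyscale input
    if (grey.empty()) return -1;

    cv::Mat edges;
    cv::Canny(grey, edges, 50, 150);

    // Count pixels that are neither 0 nor 255; for Canny output this is zero.
    cv::Mat notBinary = (edges > 0) & (edges < 255);
    std::cout << "non-binary pixels: " << cv::countNonZero(notBinary) << std::endl;
    return 0;
}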
The contours algorithm, on the other hand, processes an arbitrary binary image. So if you feed it a white filled square on a black background, after running the contours algorithm you would get a white empty square: just the borders.
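The filled-square example can be reproduced with a few lines; this is a sketch of that experiment, not code from the answer:

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main()
{
    // White filled square on a black background.
    cv::Mat img = cv::Mat::zeros(200, 200, CV_8UC1);
    cv::rectangle(img, cv::Point(50, 50), cv::Point(150, 150), cv::Scalar(255), -1);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(img.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Draw just the borders on a fresh black canvas.
    cv::Mat drawing = cv::Mat::zeros(img.size(), CV_8UC1);
    cv::drawContours(drawing, contours, -1, cv::Scalar(255), 1);

    cv::imshow("Filled input", img);
    cv::imshow("Contour output (empty square)", drawing);
    cv::waitKey(0);
    return 0;
}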
Another added bonus of contour detection is that it actually returns a set of points! That's great, because you can use these points for further processing.
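For instance, here is a sketch of what that further processing might look like, using the returned point sets to compute an area, a bounding box and a polygonal approximation for each contour (the epsilon value is a placeholder, and `contours` is assumed to come from a previous findContours call):

#include <iostream>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

void describeContours(const std::vector<std::vector<cv::Point> >& contours)
{
    for (size_t i = 0; i < contours.size(); i++)
    {
        double area = cv::contourArea(contours[i]);
        cv::Rect box = cv::boundingRect(contours[i]);

        // Simplify the curve; epsilon is a placeholder fraction of the perimeter.
        std::vector<cv::Point> approx;
        cv::approxPolyDP(contours[i], approx, 0.02 * cv::arcLength(contours[i], true), true);

        std::cout << "contour " << i << ": area=" << area
                  << " bbox=" << box.width << "x" << box.height
                  << " vertices=" << approx.size() << std::endl;
    }
}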
In your particular case, it is only a coincidence that both images match. It is not a rule; in your case it is because of the unique property of the Canny algorithm.
Contours can actually do a bit more than "just" detect edges. The algorithm does indeed find the edges of objects in the image, but it also puts them in a hierarchy. This means that you can request the outer borders of objects detected in your image. Such a thing would not be (directly) possible if you only checked for edges.
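Here is a sketch of how the hierarchy can be used to keep only the outer borders: either request them directly with RETR_EXTERNAL, or filter a RETR_TREE result by the parent index, as below (the helper name outerContours is made up for illustration):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

std::vector<std::vector<cv::Point> > outerContours(const cv::Mat& binary)
{
    std::vector<std::vector<cv::Point> > contours, outer;
    std::vector<cv::Vec4i> hierarchy;   // per contour: [next, previous, first child, parent]

    cv::findContours(binary.clone(), contours, hierarchy,
                     cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); i++)
        if (hierarchy[i][3] == -1)      // no parent: an outermost border
            outer.push_back(contours[i]);
    return outer;
}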
As can be read in the documentation, detecting contours is mostly used for object recognition, whereas the Canny edge detector is a more "global" operation. I wouldn't be surprised if the contour algorithm uses some sort of Canny edge detection.
The notion of contours is used as a tool to work on edge data. Not all edges are the same, but in many cases, e.g. objects with a unimodal color distribution (i.e. a single color), the edges are the actual contours (outline, shape).