I have this gray video stream:
The histogram of this image:
The image thresholded with:
threshold( image, image, 150, 255, CV_THRESH_BINARY );
I get:
which is what I expect.
When I do adaptive thresholding with:
adaptiveThreshold(image, image, 255, ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY, 15, -5);
I get:
which looks like edge detection and not thresholding. What I expected was black and white areas. So my question is: why does this look like edge detection rather than thresholding?
Thanks in advance.
Adaptive Threshold works like this:
The function transforms a grayscale image to a binary image according to the formulas:
THRESH_BINARY: dst(x,y) = maxValue if src(x,y) > T(x,y), otherwise 0
THRESH_BINARY_INV: dst(x,y) = 0 if src(x,y) > T(x,y), otherwise maxValue
where T(x,y) is a threshold calculated individually for each pixel.
Threshold works differently:
The function applies fixed-level thresholding to a single-channel array.
So adaptiveThreshold calculates a threshold pixel by pixel, whereas threshold calculates one for the whole image: threshold measures the whole image with one ruler, while adaptiveThreshold makes a new "ruler" for each pixel.
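To make that concrete, here is a minimal C++ sketch of my own (not the OP's code; "input.png", the block size and C are placeholder values) that roughly approximates what ADAPTIVE_THRESH_GAUSSIAN_C does internally: blur the image to get a per-pixel T(x,y), subtract C, then compare each pixel against its own threshold.

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);

    int blockSize = 15;  // neighborhood used for the local Gaussian mean
    double C = -5;       // same constant as in the question

    // T(x,y): Gaussian-weighted mean of the blockSize x blockSize neighborhood, minus C
    cv::Mat T;
    cv::GaussianBlur(gray, T, cv::Size(blockSize, blockSize), 0);
    T -= C;

    // THRESH_BINARY: dst is 255 where gray > T(x,y), 0 elsewhere
    cv::Mat dst = gray > T;

    cv::imshow("manual adaptive threshold", dst);
    cv::waitKey();
    return 0;
}

Up to rounding differences this behaves roughly like adaptiveThreshold with blockSize 15 and C = -5, and it makes clear why the output depends so strongly on how large the neighborhood is.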
I had the same issue doing adaptive thresholding for OCR purposes. (Sorry, this is Python, not C++.)
import sys
import cv  # old OpenCV (pre-cv2) Python bindings

img = cv.LoadImage(sys.argv[1])
bwsrc = cv.CreateImage(cv.GetSize(img), cv.IPL_DEPTH_8U, 1)
bwdst = cv.CreateImage(cv.GetSize(img), cv.IPL_DEPTH_8U, 1)
cv.CvtColor(img, bwsrc, cv.CV_BGR2GRAY)
# adaptive_method comes before thresholdType; 11 is the neighborhood (block) size
cv.AdaptiveThreshold(bwsrc, bwdst, 255.0, cv.CV_ADAPTIVE_THRESH_MEAN_C, cv.CV_THRESH_BINARY, 11)
cv.ShowImage("threshold", bwdst)
cv.WaitKey()
The last parameter is the size of the neighborhood used to calculate the threshold for each pixel. If your neighborhood is too small (mine was 3), it works like edge detection. Once I made it bigger, it worked as expected. Of course, the "correct" size depends on the resolution of your image and the size of the features you're looking at.
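To see the same effect in the question's C++ API, here is a hedged sketch: the block sizes 3 and 51 are only example values (blockSize must be odd and greater than 1), and "image" stands for the OP's single-channel grayscale frame.

// Too small a neighborhood: each pixel is compared only to its immediate
// surroundings, so mostly edges survive. A larger neighborhood gives solid
// black/white regions.
cv::Mat edges, regions;
adaptiveThreshold(image, edges, 255, ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY, 3, -5);
adaptiveThreshold(image, regions, 255, ADAPTIVE_THRESH_GAUSSIAN_C, CV_THRESH_BINARY, 51, -5);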