
Unreliable results with cv2.HoughCircles

I have a video with 5 oil droplets, and I am trying to use cv2.HoughCircles to find them.

This is my code:

import cv, cv2
import numpy as np

foreground1 = cv2.imread("foreground1.jpg")
vid = cv2.VideoCapture("NB14.avi")

cv2.namedWindow("video")
cv2.namedWindow("canny")
cv2.namedWindow("blur")

while True:
    ret, frame = vid.read()
    subtract1 = cv2.subtract( foreground1, frame)
    framegrey1 = cv2.cvtColor(subtract1, cv.CV_RGB2GRAY)
    blur = cv2.GaussianBlur(framegrey1, (0,0), 2)
    circles =  cv2.HoughCircles(blur, cv2.cv.CV_HOUGH_GRADIENT, 2, 10, np.array([]), 40, 80, 5, 100)
    if circles is not None:
        for c in circles[0]:
            cv2.circle(frame, (c[0], c[1]), c[2], (0, 255, 0), 2)
    edges = cv2.Canny( blur, 40, 80 )
    cv2.imshow("video", frame)
    cv2.imshow("canny", edges)
    cv2.imshow("blur", blur)
    key = cv2.waitKey(30)

I would say the Canny edge detector output looks very good, while the results from the Hough transform are very unstable: every frame gives different circles.

Example (screenshots from three frames):

frame1, frame2, frame3

I have been playing with the parameters, and honestly I don't know how to get more stable results.
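
For reference, this is the same call with the optional parameters named, so it is clearer which value controls what (same numbers as above, just written out, with the unused output array dropped):

circles = cv2.HoughCircles(blur, cv2.cv.CV_HOUGH_GRADIENT, 2, 10,
                           param1=40,    # high threshold of the internal Canny (low is param1/2)
                           param2=80,    # accumulator threshold: lower values report more circles
                           minRadius=5,
                           maxRadius=100)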

asked Jan 30 '13 by Dr Sokoban

1 Answer

Initially I thought there would be no overlapping of your oil droplets, but there is. So Hough might indeed be a good method to use here, though I've had better experience when combining it with RANSAC. I would suggest exploring that, but here I will provide something different.

First of all, I couldn't perform the background subtraction that you do, since I don't have your "foreground1.jpg" image (so the results can easily be improved further). I also didn't bother drawing circles, though you can do that; I simply draw the border of each object that I consider to be a circle.
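
If you do want actual circles drawn instead of contour outlines, one simple option (not part of the code below) is to fit a minimum enclosing circle to each accepted contour, roughly like this:

# `circles` here is the list of contours returned by find_circles() below
for c in circles:
    (x, y), radius = cv2.minEnclosingCircle(c)
    cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)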

So, first let us suppose there is no overlapping. Then finding the edges in your image (easy), binarizing the response of the edge detector with Otsu, filling holes, and finally measuring the circularity is enough. Now, if there are overlaps, we can combine the watershed transform with the distance transform to separate the droplets. The problem then is that you won't get truly circular objects; I didn't care much about that, but you can adjust for it.
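
To make the circularity measure concrete before the full code: it is the usual 4*pi*area / perimeter**2, which is 1 for a perfect circle and drops towards 0 for elongated shapes. A minimal sketch of that check (the helper name is mine; the 0.5 threshold is the one used further down):

import math
import cv2

def is_roughly_circular(contour, threshold=0.5):
    # circularity = 4*pi*area / perimeter^2; 1.0 for a perfect circle
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    if perimeter == 0:
        return False
    return (4 * math.pi * area) / (perimeter * perimeter) > threshold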

In the following code I also had to use scipy for labeling connected components (important for building the marker for the watershed), since OpenCV lacks that. The code is not exactly short, but it should be simple to understand. Also, given the full code as it stands, there is actually no need for the circularity check, because after the watershed segmentation only the objects you are after remain. Lastly, there is some simplistic tracking based on the rough distance to each object's center.

import sys
import cv2
import math
import numpy
from scipy.ndimage import label

pi_4 = 4*math.pi

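# Split touching droplets: threshold the distance transform of the droplet
# interiors to get one marker per droplet, add the border of the binary image
# as an extra (background) marker, run cv2.watershed, and keep only the
# droplet labels.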
def segment_on_dt(img):
    border = img - cv2.erode(img, None)

    dt = cv2.distanceTransform(255 - img, 2, 3)
    dt = ((dt - dt.min()) / (dt.max() - dt.min()) * 255).astype(numpy.uint8)
    _, dt = cv2.threshold(dt, 100, 255, cv2.THRESH_BINARY)

    lbl, ncc = label(dt)
    lbl[border == 255] = ncc + 1

    lbl = lbl.astype(numpy.int32)
    cv2.watershed(cv2.cvtColor(img, cv2.COLOR_GRAY2RGB), lbl)
    lbl[lbl < 1] = 0
    lbl[lbl > ncc] = 0

    lbl = lbl.astype(numpy.uint8)
    lbl = cv2.erode(lbl, None)
    lbl[lbl != 0] = 255
    return lbl


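# Find droplet-like blobs in a frame: morphological gradient for edges, Otsu
# binarization, flood fill from the corner so the background becomes white,
# watershed-based splitting, then an area and circularity filter on the
# resulting contours.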
def find_circles(frame):
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
    frame_gray = cv2.GaussianBlur(frame_gray, (5, 5), 2)

    edges = frame_gray - cv2.erode(frame_gray, None)
    _, bin_edge = cv2.threshold(edges, 0, 255, cv2.THRESH_OTSU)
    height, width = bin_edge.shape
    mask = numpy.zeros((height+2, width+2), dtype=numpy.uint8)
    cv2.floodFill(bin_edge, mask, (0, 0), 255)

    components = segment_on_dt(bin_edge)

    circles, obj_center = [], []
    contours, _ = cv2.findContours(components,
            cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        c = c.astype(numpy.int64) # XXX OpenCV bug.
        area = cv2.contourArea(c)
        if 100 < area < 3000:
            arclen = cv2.arcLength(c, True)
            circularity = (pi_4 * area) / (arclen * arclen)
            if circularity > 0.5: # XXX Yes, pretty low threshold.
                circles.append(c)
                box = cv2.boundingRect(c)
                obj_center.append((box[0] + (box[2] / 2), box[1] + (box[3] / 2)))

    return circles, obj_center

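# Very rough tracking: match each previously known object center to the
# nearest newly detected center, swapping labels when the ordering changes.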
def track_center(objcenter, newdata):
    for i in xrange(len(objcenter)):
        ostr, oc = objcenter[i]
        best = min((abs(c[0]-oc[0])**2+abs(c[1]-oc[1])**2, j)
                for j, c in enumerate(newdata))
        j = best[1]
        if i == j:
            objcenter[i] = (ostr, newdata[j])
        else:
            print "Swapping %s <-> %s" % ((i, objcenter[i]), (j, objcenter[j]))
            objcenter[i], objcenter[j] = objcenter[j], objcenter[i]


video = cv2.VideoCapture(sys.argv[1])

obj_center = None
while True:
    ret, frame = video.read()
    if not ret:
        break

    circles, new_center = find_circles(frame)
    if obj_center is None:
        obj_center = [(str(i + 1), c) for i, c in enumerate(new_center)]
    else:
        track_center(obj_center, new_center)

    for i in xrange(len(circles)):
        cv2.drawContours(frame, circles, i, (0, 255, 0))
        cstr, ccenter = obj_center[i]
        cv2.putText(frame, cstr, ccenter, cv2.FONT_HERSHEY_COMPLEX, 0.5,
                (255, 255, 255), 1, cv2.CV_AA)

    cv2.imshow("result", frame)
    cv2.waitKey(10)
    if len(circles) < 5:
        print "lost something"
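
The video path is taken from the command line (sys.argv[1]), so the script is run with the video file as its only argument. If you prefer to hard-code it, you can replace that line with the file name from your question, for example:

video = cv2.VideoCapture("NB14.avi")  # instead of sys.argv[1]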

This works for your entire video, and here are two samples:

(two sample result frames)

answered Oct 04 '22 by mmgp