I am trying to fit minimum bounding boxes to each of these "speckles" shown below. As part of the image processing pipeline, I use findContours to detect contours in my data, and then draw a minimum bounding box given an array of discovered contours.
The minimum bounding boxes are not very accurate: some features are clearly missed, whereas others fail to completely "encapsulate" a fully connected feature (and are instead segmented into several small minimum bounding boxes). I have played around with the retrieval modes (RETR_TREE shown below) and the contour approximation methods (CHAIN_APPROX_TC89_L1 shown below), but could not find anything I really liked. Can someone suggest a more robust strategy to capture these contours more accurately using OpenCV in Python?
import numpy as np
import cv2
# load image from series of frames
for x in range(1, 20):
    convolved = cv2.imread(str(x) + '.jpg')
    original = convolved.copy()

    # convert to grayscale
    gray = cv2.cvtColor(convolved, cv2.COLOR_BGR2GRAY)

    # find all contours in given frame, store in array
    contours, hierarchy = cv2.findContours(gray, cv2.RETR_TREE, cv2.CHAIN_APPROX_TC89_L1)
    boxArea = []

    # draw minimum bounding box around each discovered contour
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area > 2 and area < 100:
            rect = cv2.minAreaRect(cnt)
            box = cv2.cv.BoxPoints(rect)  # cv2.boxPoints(rect) in OpenCV >= 3
            box = np.int0(box)
            cv2.drawContours(original, [box], 0, (128, 255, 0), 1)
            boxArea.append(area)

    # save box-fitted image for this frame
    cv2.imwrite(str(x) + '_boxFitted.jpg', original)

cv2.waitKey(0)
Edit: Per Sturkman's suggestion, drawing all possible contours seemed to cover all visually detectable features.
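To make that concrete, here is a minimal, untested sketch of how that could look for a single frame, assuming OpenCV 2.x as in the code above; the threshold value (10) and the frame name ('1.jpg') are placeholders. The frame is binarised first, since findContours works best on a binary image, and passing -1 as the contour index draws every detected contour at once.

frame = cv2.imread('1.jpg')
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# binarise first; findContours expects a binary image
ret, binary = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# contour index -1 draws every detected contour
cv2.drawContours(frame, contours, -1, (128, 255, 0), 1)
cv2.imwrite('all_contours.jpg', frame)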
I know the question is about OpenCV, but since I am used to skimage, here are some ideas (which are certainly also available in OpenCV).
import numpy as np
from matplotlib import pyplot as plt
from skimage import measure
from scipy.ndimage import imread
from skimage import feature
%matplotlib inline
'''
Contour detection using a marching square algorithm.
http://scikit-image.org/docs/dev/auto_examples/plot_contours.html
Not quite sure if this is the best approach, since some centers are
biased, probably due to some interpolation issue.
'''
image = imread('irregular_blobs.jpg')
contours = measure.find_contours(image, 25,
                                 fully_connected='low',
                                 positive_orientation='high')
fig, ax = plt.subplots(ncols=1)
ax.imshow(image,cmap=plt.cm.gray)
for n, c in enumerate(contours):
    ax.plot(c[:, 1], c[:, 0], linewidth=0.5, color='r')
ax.set_ylim(0,250)
ax.set_xlim(0,250)
plt.savefig('skimage_contour.png',dpi=150)
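'''
Untested sketch: if minimum-area boxes are still wanted, each skimage
contour can be fed back into OpenCV's minAreaRect. find_contours returns
(row, col) coordinates, so they are swapped to (x, y) and cast to float32.
cv2.cv.BoxPoints is the OpenCV 2.x name; use cv2.boxPoints in OpenCV >= 3.
'''
import cv2
fig, ax = plt.subplots()
ax.imshow(image, cmap=plt.cm.gray)
for c in contours:
    pts = np.fliplr(c).astype(np.float32)    # (row, col) -> (x, y)
    rect = cv2.minAreaRect(pts)
    box = np.array(cv2.cv.BoxPoints(rect))
    box = np.vstack([box, box[:1]])          # repeat first corner to close the box
    ax.plot(box[:, 0], box[:, 1], color='lime', linewidth=0.5)
plt.savefig('skimage_minAreaRect.png', dpi=150)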
'''
Personally, I would start with some edge detection. For example,
Canny edge detection. Your Image is really nice and it should work.
'''
edges = feature.canny(image, sigma=1.5)
edges = np.asarray(edges)
# create a masked array in order to set the background transparent
m_edges = np.ma.masked_where(edges==0,edges)
fig,ax = plt.subplots()
ax.imshow(image,cmap=plt.cm.gray,alpha=0.25)
ax.imshow(m_edges,cmap=plt.cm.jet_r)
plt.savefig('skimage_canny_overlay.png',dpi=150)
In essence, there is no "best method". While edge detection, for example, locates the features very well, some structures remain open. Contour finding, on the other hand, yields closed structures, but the centers are biased. You have to play around with the parameters. If your image has a disturbing background, you can use dilation in order to subtract the background; here is some information on how to perform dilation. Sometimes closing operations are also useful.
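A minimal sketch of such a morphological cleanup with skimage; the binarisation threshold of 25 matches the contour level used above, and the structuring-element sizes are only guesses:

from skimage import morphology

binary = image > 25                                               # crude binarisation
closed = morphology.binary_closing(binary, morphology.disk(2))    # fill small gaps
dilated = morphology.binary_dilation(closed, morphology.disk(1))  # thicken thin structures
fig, ax = plt.subplots()
ax.imshow(dilated, cmap=plt.cm.gray)
plt.savefig('skimage_morphology.png', dpi=150)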
From your posted image, it seems that your threshold is too high or your background is too noisy. Lowering the threshold and/or applying dilation might help.
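If picking the level by hand is the problem, an automatic threshold such as Otsu's method is worth a try; again, this is only a sketch:

from skimage import filters

thresh = filters.threshold_otsu(image)   # data-driven threshold
binary = image > thresh
# contour the binary mask at the 0.5 level instead of a hand-picked value
contours = measure.find_contours(binary.astype(float), 0.5)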