I'm trying to crop an image to the boundaries of a contour. I found the following code in this answer:
import cv2
import numpy as np

mask = np.zeros_like(image)
cv2.drawContours(mask, [c], -1, 255, -1)
out = np.zeros_like(image)
out[mask == 255] = image[mask == 255]
(y, x) = np.where(mask == 255)
(topy, topx) = (np.min(y), np.min(x))
(bottomy, bottomx) = (np.max(y), np.max(x))
out = out[topy:bottomy + 1, topx:bottomx + 1]
crop_img = image[topy:bottomy + 1, topx:bottomx + 1]
cv2.imshow("cropped", crop_img)
where c is a contour.
I'm getting an error like:
Traceback (most recent call last):
File "detect_shapes.py", line 66, in <module>
(y, x) = np.where(mask == 255)
ValueError: too many values to unpack (expected 2)
How can I solve my issue?
I don't think this is related to my image, but here is my image:
The answer you are referring to loads the image in grayscale mode using image = cv2.imread('...', 0). Here, 0 refers to the cv2.IMREAD_GRAYSCALE flag. This is important because in that case the image has just one channel. If you load your image this way and run your code, it will work fine; I already tested it. Then (y, x) = np.where(mask == 255) won't raise any error, because the output of np.where(mask == 255) is a tuple of two NumPy arrays, since mask is a 2D array (check it with mask.shape).
But if you load your image as image = cv2.imread('...') and don't do something like image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) to convert it to grayscale, then np.where(mask == 255) returns a tuple of three NumPy arrays, because mask is a 3D array. This is why you are getting the above error.
Look at np.where(mask == 255) without the (y, x) unpacking. My guess is that it is a 3-element tuple: where produces one index array for each dimension of the input array. If mask is 3D (rows, cols, channels), then the where result is a 3-tuple.
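This difference can be checked with plain NumPy; the array shapes below are illustrative, not taken from the asker's image:

```python
import numpy as np

# 2D mask, as produced from a grayscale (single-channel) image
mask_2d = np.zeros((4, 4), dtype=np.uint8)
mask_2d[1:3, 1:3] = 255
print(len(np.where(mask_2d == 255)))  # 2 -> (y, x) unpacking works

# 3D mask, as produced from a BGR (three-channel) image
mask_3d = np.zeros((4, 4, 3), dtype=np.uint8)
mask_3d[1:3, 1:3, :] = 255
print(len(np.where(mask_3d == 255)))  # 3 -> (y, x) unpacking raises ValueError
```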