 

How to segment connected areas based on depth color in OpenCV

I have a depth picture like the one attached, which I need to segment into 8 blocks.

I have tried this thresholding method:

import cv2

img = cv2.imread(input_file)                      # colour copy used for drawing the boxes
img_gray = cv2.imread(input_file, cv2.IMREAD_GRAYSCALE)
ret, thresh = cv2.threshold(img_gray, 254, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3), (-1, -1))
img_open = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
cv2.imshow('abc', img_open)
ret1, thresh1 = cv2.threshold(img_open, 254, 255, cv2.THRESH_BINARY_INV)
contours, hierarchy = cv2.findContours(thresh1, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)

for i in range(len(contours)):
    if len(contours[i]) > 20:
        x, y, w, h = cv2.boundingRect(contours[i])
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        print((x, y), (x + w, y + h))

After the thresholding:

[thresholded image]

The end result is that some adjacent blocks merge into one large segment (see the result images), which is not what I hoped for. Are there any other ways to get around this?

asked Jul 21 '17 22:07 by user824624



2 Answers

I'll try to give you a sketch of an algorithm that separates the cars based on depth gradients. Alas, simply looking at the contours of large depth gradients does not separate the cars perfectly, so some "refinement" of the boundary contours is required. Once the contours are complete, a simple connected-component clustering is enough to separate the cars.

Here's my code (in MATLAB, but I'm quite certain it's not too hard to find the OpenCV equivalents):

img = imread('http://i.stack.imgur.com/8lJw8.png');  % read the image
depth = double(img(:,:,1));
depth(depth==255)=-100;  % make the background VERY distinct
[dy dx] = gradient(depth);  % compute depth gradients
bmsk = sqrt(dx.^2+dy.^2) > 5;  % consider only significant gradient
% using morphological operations to "complete" the contours around the cars
bmsk = bwmorph( bwmorph(bmsk, 'dilate', ones(7)), 'skel'); 

% once the contours are complete, use connected components
cars = bwlabel(~bmsk,4);  % segmentation mask
st = regionprops(cars, 'Area', 'BoundingBox');
% display the results
figure;
imshow(img);
hold all;
for ii=2:numel(st),  % ignore the first segment - it's the background
    if st(ii).Area>200, % ignore small regions as "noise"
        rectangle('Position',st(ii).BoundingBox, 'LineWidth', 3, 'EdgeColor', 'g');
    end;
end;

The output is:

[result image with bounding boxes]

And:

[second result image]

Not perfect, but brings you close enough.

Further reading:

  • bwmorph: to perform morphological operations.
  • bwlabel: to output a segmentation mask (labeling) of the connected components.
  • regionprops: compute statistics (e.g., area and bounding box) for image regions.

Come to think of it, the depth has such nice gradients that you can simply threshold the depth gradient and get nice connected components.
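In OpenCV/Python, rough equivalents of the steps above are Sobel gradients, morphological dilation, and connectedComponentsWithStats. The sketch below is my own translation, not the answerer's code: the filename, the gradient threshold of 40 (Sobel is unnormalized, so it does not match MATLAB's gradient() scale), and the plain dilation used in place of bwmorph's skeletonization are all assumptions that may need tuning.

import cv2
import numpy as np

img = cv2.imread('8lJw8.png')               # hypothetical local copy of the posted image
depth = img[:, :, 2].astype(np.float64)     # OpenCV loads BGR, so channel 2 is the red channel
depth[depth == 255] = -100                  # make the background VERY distinct

# depth gradients via Sobel; the scale differs from MATLAB's gradient()
dx = cv2.Sobel(depth, cv2.CV_64F, 1, 0, ksize=3)
dy = cv2.Sobel(depth, cv2.CV_64F, 0, 1, ksize=3)
bmsk = (np.sqrt(dx ** 2 + dy ** 2) > 40).astype(np.uint8)   # threshold is a guess, tune it

# "complete" the contours around the cars; a plain dilation stands in for
# MATLAB's dilate + skeletonize step
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
bmsk = cv2.dilate(bmsk, kernel)

# connected components on the complement of the boundary mask
num, labels, stats, _ = cv2.connectedComponentsWithStats(1 - bmsk, connectivity=4)
for i in range(2, num):                     # skip label 0 (the mask) and, as in the MATLAB loop, the image background
    x, y, w, h, area = stats[i]
    if area > 200:                          # ignore small regions as "noise"
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 3)

cv2.imshow('cars', img)
cv2.waitKey(0)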

answered Sep 28 '22 03:09 by Shai


Naive Approach (But it works)

Step 1: After reading the image in grayscale, threshold it to get the bottom cars.

ret1, car_thresh1 = cv2.threshold(cars, 191, 254, 0)

which gave me this: [carsBottom image]

Step 2: Subtract this image from the main image

car_thresh2 = car_thresh1 - cars

which gave me this: [subtracted image]

Step 3: Threshold the subtracted image

ret3, cars_thresh3 = cv2.threshold(car_thresh2, 58, 255, 0)

which gave me this: [carsTop image]

Then I simply did what you did to extract and draw contours in carsTop and carsBottom, and this is the result: [final result image]
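For completeness, a minimal end-to-end sketch of this approach, combined with the contour extraction from the question, might look like the following. The filename is hypothetical, and running the same contour loop on both masks is my reading of the answer rather than code the answerer posted.

import cv2

cars = cv2.imread('cars_depth.png', cv2.IMREAD_GRAYSCALE)   # hypothetical filename
img = cv2.imread('cars_depth.png')                          # colour copy for drawing

# Step 1: threshold to get the bottom cars
ret1, car_thresh1 = cv2.threshold(cars, 191, 254, 0)

# Step 2: subtract this result from the main image
car_thresh2 = car_thresh1 - cars

# Step 3: threshold the subtracted image to get the top cars
ret3, cars_thresh3 = cv2.threshold(car_thresh2, 58, 255, 0)

# extract and draw contours from both masks, as in the question
for mask in (car_thresh1, cars_thresh3):
    contours, hierarchy = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
    for c in contours:
        if len(c) > 20:
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imshow('result', img)
cv2.waitKey(0)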

answered Sep 28 '22 03:09 by Rick M.