
OpenCV - find blackboard edges on video and images

UPDATE

You can find all the images I have for testing on my GitHub here:

GitHub repository with sources

There are also two videos on which the detection should work as well.

ORIGINAL QUESTION

I tried to use OpenCV 4.x.x to find the edges of a blackboard (images below), but somehow I cannot get it to work. My code at the moment looks like this (Android with OpenCV and a live camera feed, where imgMat is a Mat from the camera feed):

    // convert color order from RGB to BGR (result still has 3 channels)
    Mat gray = new Mat();
    Imgproc.cvtColor(imgMat, gray, Imgproc.COLOR_RGB2BGR);

    // light blur to reduce noise before edge detection
    Mat blurred = new Mat();
    Imgproc.blur(gray, blurred, new org.opencv.core.Size(3, 3));

    // first Canny pass
    Mat canny = new Mat();
    Imgproc.Canny(blurred, canny, 80, 230);

    // dilate and close the edge map to join broken edges
    Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new org.opencv.core.Size(2, 2));
    Mat dilated = new Mat();
    Imgproc.morphologyEx(canny, dilated, Imgproc.MORPH_DILATE, kernel, new Point(0, 0), 10);
    Mat rectImage = new Mat();
    Imgproc.morphologyEx(dilated, rectImage, Imgproc.MORPH_CLOSE, kernel, new Point(0, 0), 5);

    // second Canny pass on the closed edge map
    Mat endproduct = new Mat();
    Imgproc.Canny(rectImage, endproduct, 120, 230);

    // find all contours
    List<MatOfPoint> contours = new ArrayList<>();
    Mat hierarchy = new Mat();
    Imgproc.findContours(endproduct, contours, hierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

    // keep the contour with the largest area
    double maxArea = 0;
    boolean hasContour = false;
    MatOfPoint2f biggestContour = new MatOfPoint2f();
    Iterator<MatOfPoint> each = contours.iterator();
    while (each.hasNext()) {
        MatOfPoint wrapper = each.next();
        double area = Imgproc.contourArea(wrapper);
        if (area > maxArea) {
            maxArea = area;
            biggestContour = new MatOfPoint2f(wrapper.toArray());
            hasContour = true;
        }
    }

    if (hasContour) {
        Mat output = imgMat.clone();

        // simplify the biggest contour and take its bounding rectangle
        MatOfPoint2f approx = new MatOfPoint2f();
        MatOfPoint poly = new MatOfPoint();

        Imgproc.approxPolyDP(biggestContour, approx, Imgproc.arcLength(biggestContour, true) * .02, true);
        approx.convertTo(poly, CvType.CV_32S);

        Rect rect = Imgproc.boundingRect(poly);

    }

Somehow I am not able to get it working, although the same code (written in Python) worked on my computer with a video. I take the bounding rectangle and draw it on my mobile screen, where it flickers around a lot and does not work properly.
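For reference, a minimal sketch of roughly the same pipeline in Python (this is not the exact script that ran on my computer; the video file name and the OpenCV 4.x findContours return convention are placeholders):

    import cv2

    cap = cv2.VideoCapture("blackboard.mp4")  # assumed file name
    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # roughly the same steps as the Java code above (grayscale conversion assumed)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blurred = cv2.blur(gray, (3, 3))
        canny = cv2.Canny(blurred, 80, 230)

        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
        dilated = cv2.morphologyEx(canny, cv2.MORPH_DILATE, kernel, iterations=10)
        closed = cv2.morphologyEx(dilated, cv2.MORPH_CLOSE, kernel, iterations=5)
        edges = cv2.Canny(closed, 120, 230)

        # OpenCV 4.x return convention
        contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            biggest = max(contours, key=cv2.contourArea)
            approx = cv2.approxPolyDP(biggest, 0.02 * cv2.arcLength(biggest, True), True)
            x, y, w, h = cv2.boundingRect(approx)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)

        cv2.imshow("frame", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()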

These are the images I tried the Python program on, and they worked:

big blackboard

big blackboard2

What am I doing wrong? I am not able to consistently detect the edges of the blackboard.

Additional information about the blackboard:

  • always rectangular
  • may have different lighting
  • the text should be ignored, only the main board should be detected
  • the outer blackboard should be ignored as well
  • only the contour for the main board should be shown/returned

Thanks for any advice or code!

Asked Mar 03 '21 by Lars


1 Answer

I used HSV because that's the easiest way to detect specific colors. I used an abundance test to automatically select the color threshold (so this will work for green or blue boards). However, this test will fail on white or black boards, since white and black count as all colors according to hue. Instead, in HSV, white and black are easiest to detect as very low saturation (white) or very low value (black).

I did a 3-way check (hue, white, black) and selected the mask that had the most pixels in it (I assume the board makes up the majority of the image). I'm not sure how well this will generalize, since we only have this one image, so it may or may not work for other boards.
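In isolation, that selection step looks something like this (a stripped-down sketch of what the full script below does; the hard-coded hue band here is just an example, the real script picks the hue range automatically):

import cv2
import numpy as np

hsv = cv2.cvtColor(cv2.imread("blackboard.jpg"), cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# white = very low saturation, black = very low value,
# colored board = a band around its hue
wmask = cv2.inRange(s, 0, 30)
bmask = cv2.inRange(v, 0, 30)
hmask = cv2.inRange(h, 50, 90)  # example band, roughly green

# majority assumption: keep whichever mask covers the most pixels
mask = max([wmask, bmask, hmask], key=np.count_nonzero)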

I used approxPolyDP to cut down on the number of points in the contour until I had 4 points and used that to draw the shape.
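That "simplify until fewer than 4 points, then step back" loop could also be pulled out into a helper if you prefer (a hypothetical refactor of the loop in the script below, not part of the original code):

import cv2

# hypothetical helper: grow epsilon until approxPolyDP drops below 4 points,
# then step back so the returned approximation has at least 4 points
def simplify_contour(con, step_size=0.01):
    num_points = 999999
    percent = step_size
    while num_points >= 4:
        epsilon = percent * cv2.arcLength(con, True)
        approx = cv2.approxPolyDP(con, epsilon, True)
        num_points = len(approx)
        percent += step_size
    percent -= step_size * 2
    epsilon = percent * cv2.arcLength(con, True)
    return cv2.approxPolyDP(con, epsilon, True)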


import cv2
import numpy as np

# get unique colors (to speed up search) and return the most abundant mask
def getAbundantColor(channel, margin):
    # get uniques
    unique_colors, counts = np.unique(channel, return_counts=True);

    # check for the most abundant color
    most = None;
    biggest_count = -1;
    for col in unique_colors:
        # count number of white pixels
        mask = cv2.inRange(channel, int(col - margin), int(col + margin));
        count = np.count_nonzero(mask);

        # if bigger, set new "most"
        if count > biggest_count:
            biggest_count = count;
            most = mask;
    return most, biggest_count;

# load image
img = cv2.imread("blackboard.jpg");

# it's huge, scale down so that we can see the whole thing
h, w = img.shape[:2];
scale = 0.25;
h = int(scale*h);
w = int(scale*w);
img = cv2.resize(img, (w,h));

# hsv
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV);
h,s,v = cv2.split(hsv);

# median blur to get rid of most of the text
h = cv2.medianBlur(h, 5);
s = cv2.medianBlur(s, 5);
v = cv2.medianBlur(v, 5);

# get most abundant color
color_margin = 30;
hmask, hcount = getAbundantColor(h, color_margin);

# detect white and black separately
light_margin = 30;
# white
wmask = cv2.inRange(s, 0, light_margin);
wcount = np.count_nonzero(wmask);

# black
bmask = cv2.inRange(v, 0, light_margin);
bcount = np.count_nonzero(bmask);

# check which is biggest
sorter = [[hcount, hmask], [wcount, wmask], [bcount, bmask]];
sorter.sort();
mask = sorter[-1][1];

# dilate and erode to close holes
kernel = np.ones((3,3), np.uint8);
mask = cv2.dilate(mask, kernel, iterations = 2);
mask = cv2.erode(mask, kernel, iterations = 4);
mask = cv2.dilate(mask, kernel, iterations = 2);

# get contours # OpenCV 3.4, in OpenCV 2* or 4* it returns (contours, _)
_, contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE);

# for each contour, approximate a simpler shape until we have 4 points
simplified = [];
for con in contours:
    # go until we have 4 points
    num_points = 999999;
    step_size = 0.01;
    percent = step_size;
    while num_points >= 4:
        # get number of points
        epsilon = percent * cv2.arcLength(con, True);
        approx = cv2.approxPolyDP(con, epsilon, True);
        num_points = len(approx);

        # increment
        percent += step_size;

    # step back and get the points
    # there could be more than 4 points if our step size misses it
    percent -= step_size * 2;
    epsilon = percent * cv2.arcLength(con, True);
    approx = cv2.approxPolyDP(con, epsilon, True);
    simplified.append(approx);
cv2.drawContours(img, simplified, -1, (0,0,200), 2);

# print out the number of points
for points in simplified:
    print("Num Points: " + str(len(points)));

# show image
cv2.imshow("Image", img);
cv2.imshow("Hue", h);
cv2.imshow("Mask", mask);
cv2.waitKey(0);

Edit: In order to accommodate the uncertainty about the board's color and appearance, I run on the assumption that the board itself makes up the majority of the picture. The lines involving the sorter look for the most abundant color in the image. If the white wall behind the board takes up more space in the image, then that is the color that gets selected for the mask.

There are other ways to try to select just the board, but it's really difficult to come up with a catch-all solution. The rest of the code should do its job the same if you can come up with some other way of masking the board. If you're willing to budge on the unknown-color assumption and provide the original pictures of the failing cases, then I can probably come up with an appropriate mask.
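For example, if the board really is the largest four-point contour, you could keep just that one instead of drawing everything in simplified (a small sketch reusing simplified and img from the script above; the "largest quad wins" rule is an assumption):

import cv2

# keep only the largest contour that was reduced to exactly 4 points
quads = [c for c in simplified if len(c) == 4]
if quads:
    board = max(quads, key=cv2.contourArea)
    cv2.drawContours(img, [board], -1, (0, 200, 0), 2)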

Answered Nov 03 '22 by Ian Chu