OpenCV: Convert floorplan image into data model

My plan is to extract information from a floor plan drawn on paper. I have already managed to detect 70-80% of the drawn doors:

Detecting doors in a floorplan

Now I want to create a data model of the walls. I have already managed to extract them, as you can see here:

[image: extracted walls]

From that I created the contours:

[image: extracted wall lines]

My idea now was to get the intersections of the lines from that image and create a data model from them. However, if I use the HoughLines algorithm, I get something like this:

[image: HoughLines result]

Does somebody have a different idea of how to get the intersections, or even another idea of how to build a model? That would be very nice.

PS: I am using JavaCV, but an algorithm in OpenCV would also be fine, as I could translate it.

asked Dec 06 '13 by Schnodderbalken


3 Answers

First, you can also use the Line Segment Detector (LSD) to detect lines: http://www.ipol.im/pub/art/2012/gjmr-lsd/
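
For reference, newer OpenCV builds (3.x and later) wrap this detector as cv::createLineSegmentDetector; availability varies by version (the implementation was dropped from some 4.x releases for license reasons and later restored), so treat this as a sketch:

#include <opencv2/imgproc.hpp>
#include <vector>

// Detect line segments in an 8-bit grayscale image using OpenCV's LSD wrapper.
std::vector<cv::Vec4f> detectSegments(const cv::Mat& gray)
{
    cv::Ptr<cv::LineSegmentDetector> lsd = cv::createLineSegmentDetector();
    std::vector<cv::Vec4f> segments;  // each entry is (x1, y1, x2, y2)
    lsd->detect(gray, segments);
    return segments;
}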

If I understand right, the problem is that you're getting several different short lines for every "real" line. You can take all the endpoints of the short lines and fit a single line through them using fitLine(): http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=fitline#fitline
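
A minimal sketch of that idea, assuming the short segments belonging to one wall line have already been grouped (the grouping itself is not shown, and the function name is mine):

#include <opencv2/imgproc.hpp>
#include <vector>

// Fit one line through the endpoints of several short collinear segments.
cv::Vec4f fitWallLine(const std::vector<cv::Vec4i>& shortLines)
{
    std::vector<cv::Point2f> endpoints;
    for (const cv::Vec4i& l : shortLines) {
        endpoints.push_back(cv::Point2f((float)l[0], (float)l[1]));
        endpoints.push_back(cv::Point2f((float)l[2], (float)l[3]));
    }
    cv::Vec4f line;  // (vx, vy, x0, y0): unit direction plus a point on the line
    cv::fitLine(endpoints, line, cv::DIST_L2, 0, 0.01, 0.01);  // CV_DIST_L2 in OpenCV 2.x
    return line;
}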

answered by GilLevi


It strikes me that what you really want is not necessarily walls, but rather rooms - which are incidentally bounded by walls.

Moreover, while your "wall" data looks rather noisy (there are lots of small sections that could be confused for tiny rooms), your "room" data isn't (there aren't many phantom walls in the middle of rooms).

Therefore, it may be beneficial to detect rooms (approximately axis-aligned rectangles whose interiors contain fewer white pixels than some threshold), and extrapolate walls by looking at the boundaries between nearby rooms.

I would implement this in three phases: first, try to detect a few principal axes from the output of HoughLines (I would first reach for a k-means clustering algorithm, and then massage the output to get perpendicular axes). Use this data to better align the image.
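
A hedged sketch of that first phase: cluster the segment orientations with cv::kmeans. Mapping each angle t to the point (cos 2t, sin 2t) makes directions that differ by 180 degrees coincide, so the angular wrap-around needs no special casing. The function name and the choice of K = 2 are my assumptions, not part of the answer itself:

#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// Cluster segment orientations into two dominant (roughly perpendicular) axes.
std::vector<float> principalAxes(const std::vector<cv::Vec4i>& lines)
{
    cv::Mat samples((int)lines.size(), 2, CV_32F);
    for (int i = 0; i < (int)lines.size(); i++) {
        float t = std::atan2((float)(lines[i][3] - lines[i][1]),
                             (float)(lines[i][2] - lines[i][0]));
        samples.at<float>(i, 0) = std::cos(2 * t);  // doubled angle removes the
        samples.at<float>(i, 1) = std::sin(2 * t);  // 180-degree ambiguity
    }
    cv::Mat labels, centers;
    cv::kmeans(samples, 2, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 20, 1e-3),
               5, cv::KMEANS_PP_CENTERS, centers);
    std::vector<float> axes;
    for (int k = 0; k < 2; k++)  // undo the angle doubling
        axes.push_back(std::atan2(centers.at<float>(k, 1),
                                  centers.at<float>(k, 0)) / 2.0f);
    return axes;
}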

Second, begin seeding small rectangles randomly about the image, in black areas. "Grow" these rectangles in all directions until each side hits a white pixel over a certain threshold, or they run into another rectangle. Continue seeding until a large percentage of the area of the image is covered.
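
Something like this minimal sketch could implement the growing step. It assumes a single-channel image with white (255) walls on a black background, and omits the collision check against already-grown rectangles for brevity; whiteFraction() is a hypothetical helper, not an OpenCV function:

#include <opencv2/core.hpp>

// Hypothetical helper: fraction of white pixels in a one-pixel-wide strip.
double whiteFraction(const cv::Mat& img, cv::Rect strip)
{
    strip &= cv::Rect(0, 0, img.cols, img.rows);  // clip to the image
    if (strip.area() == 0) return 1.0;            // treat the image border as wall
    return (double)cv::countNonZero(img(strip)) / strip.area();
}

// Grow a 1x1 rectangle at a black seed pixel until every side abuts a wall.
cv::Rect growRoom(const cv::Mat& img, cv::Point seed, double thresh = 0.5)
{
    cv::Rect r(seed.x, seed.y, 1, 1);
    bool grew = true;
    while (grew) {
        grew = false;
        // Extend each side by one pixel while the strip just beyond it is
        // mostly black; stop that side once the strip is mostly white.
        if (whiteFraction(img, cv::Rect(r.x - 1, r.y, 1, r.height)) < thresh) { r.x--; r.width++; grew = true; }
        if (whiteFraction(img, cv::Rect(r.x + r.width, r.y, 1, r.height)) < thresh) { r.width++; grew = true; }
        if (whiteFraction(img, cv::Rect(r.x, r.y - 1, r.width, 1)) < thresh) { r.y--; r.height++; grew = true; }
        if (whiteFraction(img, cv::Rect(r.x, r.y + r.height, r.width, 1)) < thresh) { r.height++; grew = true; }
    }
    return r;
}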

Third, find areas (also rectangles, hopefully) not covered by rectangles, and collapse them into lines (a sketch of the collapse step follows this list):

  • Treat the coordinates of the rectangles on the x and y axes independently, as collections of intervals.
  • Sort these coordinates and look for adjacent coordinates that form the upper bound of one rectangle and the lower bound of another.
  • Naively try pairing up the gaps found along each axis, and test the resulting candidate rectangles for intersection with the rooms. Discard intersecting rectangles.
  • Collapse these new rectangles into lines along their principal axis.
  • The points at the ends of the lines could then be joined when within some minimum distance (by extending the lines until they meet).
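
As a sketch of the collapse step referenced above: a thin gap rectangle becomes a wall segment along the centerline of its long axis (the function name is mine):

#include <opencv2/core.hpp>

// Collapse a thin gap rectangle into a wall segment along its long axis.
cv::Vec4i collapseToLine(const cv::Rect& gap)
{
    if (gap.width >= gap.height) {             // horizontal wall
        int y = gap.y + gap.height / 2;        // centerline
        return cv::Vec4i(gap.x, y, gap.x + gap.width, y);
    } else {                                   // vertical wall
        int x = gap.x + gap.width / 2;
        return cv::Vec4i(x, gap.y, x, gap.y + gap.height);
    }
}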

There are a few drawbacks to this approach:

  • It won't deal well with non-axis-aligned walls. Fortunately, you probably want these auto-aligned most of the time anyway.
  • It is likely to treat small doorways in walls as part of the wall - an accidental gap in the line drawing. These will have to be detected separately and added back to the reconstructed drawing.
  • It won't deal well with noisy data - but it looks like you've already done a marvelous job of de-noising the data with OpenCV!

I apologize for not including any code snippets - but I thought it more important to convey the idea rather than the details (please comment if you'd like me to expand on any of it). Also note that, while I played around with OpenCV a few years ago, I'm by no means an expert - so it may already have primitives to do some of this for you.

answered by Joshua Warner


Try dilating the lines from either the Hough transform image or the original contour image by 1 pixel. You can do this by drawing the lines with a thickness of 2 or 3 (if you used the Hough transform to get the lines), or you can dilate them manually using the code below.

// Grow every nonzero pixel into its 8-neighborhood (a one-pixel dilation).
// Assumes a single-channel 8-bit image; the one-pixel border is skipped.
void dilate_one(cv::Mat& grid) {
    cv::Size sz = grid.size();
    cv::Mat sc_copy = grid.clone();  // write to a copy so new pixels don't cascade

    for (int i = 1; i < sz.height - 1; i++) {
        for (int j = 1; j < sz.width - 1; j++) {
            if (grid.at<uchar>(i, j) != 0) {
                // set all 8 neighbors of the original pixel
                sc_copy.at<uchar>(i + 1, j)     = 255;
                sc_copy.at<uchar>(i - 1, j)     = 255;
                sc_copy.at<uchar>(i, j + 1)     = 255;
                sc_copy.at<uchar>(i, j - 1)     = 255;
                sc_copy.at<uchar>(i - 1, j - 1) = 255;
                sc_copy.at<uchar>(i + 1, j + 1) = 255;
                sc_copy.at<uchar>(i - 1, j + 1) = 255;
                sc_copy.at<uchar>(i + 1, j - 1) = 255;
            }
        }
    }
    grid = sc_copy;
}
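
For reference, OpenCV's built-in dilation does the same thing in one call: passing an empty kernel to cv::dilate selects the default 3x3 rectangular structuring element, i.e. the same 8-neighborhood growth.

cv::dilate(grid, grid, cv::Mat());  // empty kernel = default 3x3 rectangle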

After the Hough transform you have a set of vectors that represent your lines, each stored as a cv::Vec4i v.

This holds the endpoints of the line. The easiest solution would be to match the endpoints of each line and find those which are closest. You could use a simple L1 or L2 norm to calculate the distance.

p1 = cv::Point2i(v[0], v[1]) and p2 = cv::Point2i(v[2], v[3])

Points which are very close are likely intersections. The only problem is T-intersections, where there may not be an endpoint, but this doesn't seem to be a problem in your image.
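
A minimal sketch of this endpoint matching, assuming HoughLinesP-style cv::Vec4i output; the O(n^2) comparison is fine for the handful of lines in a floor plan, and the function name and threshold are mine:

#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// Treat endpoint pairs from different lines that lie within maxDist of each
// other as one intersection (approximated here by their midpoint).
std::vector<cv::Point2i> findIntersections(const std::vector<cv::Vec4i>& lines,
                                           double maxDist = 5.0)
{
    std::vector<cv::Point2i> corners;
    for (size_t i = 0; i < lines.size(); i++) {
        cv::Point2i pi[2] = { cv::Point2i(lines[i][0], lines[i][1]),
                              cv::Point2i(lines[i][2], lines[i][3]) };
        for (size_t j = i + 1; j < lines.size(); j++) {
            cv::Point2i pj[2] = { cv::Point2i(lines[j][0], lines[j][1]),
                                  cv::Point2i(lines[j][2], lines[j][3]) };
            for (const cv::Point2i& p : pi)
                for (const cv::Point2i& q : pj)
                    // L2 distance between the two endpoints
                    if (std::hypot(p.x - q.x, p.y - q.y) <= maxDist)
                        corners.push_back(cv::Point2i((p.x + q.x) / 2,
                                                      (p.y + q.y) / 2));
        }
    }
    return corners;
}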

answered by en4bz