I have input images which look like this:
I would like to segment the images so that I get approximated polygons that contain only horizontal and vertical lines.
My first approach was a Hough segmentation, but with it I was only able to create rectangular objects. This does not work for the second image.
Then I tried to use a decision tree: for each image I trained a decision tree with the x and y positions of all pixels as inputs and the black/white classification as the target. Then I used only the first n layers of this tree and ran a prediction for all pixels with this pruned tree. Sometimes this worked well, but sometimes it didn't. In particular, the required tree depth varies from picture to picture...
Maybe someone has an idea how to do this? Or is there already an algorithm available for this use case?
Thank you very much
Regards
Kevin
I get pretty reasonable results using a morphological "thinning" followed by an "erosion" to remove either horizontally or vertically oriented features. I am just doing it at the command-line with ImageMagick but you can use the Python bindings if you prefer.
So, horizontal features:
convert poly.png -threshold 50% -morphology Thinning:-1 Skeleton -morphology erode rectangle:3x1 im1h.png
And vertical features:
convert poly.png -threshold 50% -morphology Thinning:-1 Skeleton -morphology erode rectangle:1x3 im1v.png
And, using the other image:
convert poly2.png -threshold 50% -morphology Thinning:-1 Skeleton -morphology erode rectangle:1x3 result.png
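If you prefer to stay in Python, a rough equivalent of the same pipeline can be sketched with scikit-image (an assumption on my part; the Wand bindings mentioned above would call ImageMagick directly). Skeletonization stands in for `-morphology Thinning:-1 Skeleton`, and erosion with a 1x3 or 3x1 rectangle stands in for `erode rectangle:3x1` / `rectangle:1x3`. The synthetic plus-sign image here is just for illustration:

```python
import numpy as np
from skimage.morphology import skeletonize, erosion

# Synthetic binary image: a plus sign made of one horizontal
# and one vertical 1-pixel stroke (stand-in for poly.png after
# the 50% threshold step).
img = np.zeros((21, 21), dtype=bool)
img[10, 2:19] = True   # horizontal stroke in row 10
img[2:19, 10] = True   # vertical stroke in column 10

# Thin the shapes to a 1-pixel skeleton
# (analogous to -morphology Thinning:-1 Skeleton).
skel = skeletonize(img)

# Erode with a wide rectangle to keep only horizontally
# oriented features (analogous to erode rectangle:3x1) ...
horiz = erosion(skel, np.ones((1, 3), dtype=bool))

# ... and with a tall rectangle to keep only vertically
# oriented features (analogous to erode rectangle:1x3).
vert = erosion(skel, np.ones((3, 1), dtype=bool))
```

After the erosion, `horiz` contains only pixels of the horizontal stroke and `vert` only pixels of the vertical one, which you can then vectorize into your axis-aligned polygon edges.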