 

Remove black lines with minimal change to the whole image

I have images with polygons in them. Black lines run through these polygons. I need a way to remove these black lines while altering the polygons as little as possible. What I have tried so far:

Step 1) Parse the image from the top left corner to the bottom right corner (row by row).
Step 2) Loop through each pixel of a row.
Step 3) If you encounter a non-black pixel, store its color value in
        a variable (let's call it lastNonBlack).
Step 4) If you encounter a black pixel, overwrite its color value with lastNonBlack.
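The steps above can be sketched in Java roughly as follows. This is a minimal sketch, assuming pixels are given as plain ARGB ints in a 2D array and that pure opaque black (0xFF000000) is the line color; the class and method names are just illustrative.

```java
public class ScanlineFill {
    static final int BLACK = 0xFF000000; // assumed line color

    // Replaces every black pixel in each row with the last
    // non-black color seen to its left (Steps 1-4 above).
    static void removeBlackLines(int[][] pixels) {
        for (int[] row : pixels) {
            int lastNonBlack = 0; // 0 = no color seen yet in this row
            for (int x = 0; x < row.length; x++) {
                if (row[x] != BLACK) {
                    lastNonBlack = row[x];      // Step 3
                } else if (lastNonBlack != 0) {
                    row[x] = lastNonBlack;      // Step 4
                }
            }
        }
    }

    public static void main(String[] args) {
        int green = 0xFF00FF00;
        int[][] img = { { green, BLACK, green } };
        removeBlackLines(img);
        System.out.println(img[0][1] == green); // black pixel filled with green
    }
}
```

As described in the question, this fills black pixels purely from the left, which is exactly why it can split or extend polygons at their borders.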

And here is the problem with that algorithm: under some circumstances it splits a polygon (see first picture), or it extends the polygon with a line (see second picture).

[Image 1: the algorithm splits a polygon]

[Image 2: the algorithm extends a polygon with a line]

Then I tried another approach where I take the color of the pixel above instead, but that does not work either: now the "splits" and "extensions" are not horizontal but vertical.

PS: I use Java so a java-solution would be best but since this is an algorithm problem anyone is welcome :)

edit: The above pictures were constructed examples to show you the problems. My actual images look like this:

[Image: an actual input image]

edit2: I replaced the images with bigger ones that show the problem better

asked Mar 26 '15 by Selphiron

2 Answers

Alexandru is on the right track. What you want is something more like a "nearest neighbor" classifier. If you aren't familiar with this, it means that to decide what color pixel(x,y) should be, you look at the pixels around it and ask what values they have. Whatever the majority is, that is what pixel(x,y) should be.

As he said, make a structuring element and then do a nearest-neighbor classification. Here is an image with 3 examples:

[Image: examples of nearest-neighbor voting with pixels X, Y and Z]

Let's look at X. If we are at pixel X (lower right corner) and want to decide what color this pixel should be, we look at the pixels around it and take a small vote. Our structuring element here is a 7x7 neighborhood centered around pixel X. We count green=24, black=7, white=18; since the majority of the pixels are green, pixel X should be green.

So that works great. The next question is how big to make our structuring element. It should be proportional to the maximum width of the line; I think it should be 2*max_line_width + 1. The plus 1 makes it odd-sized, which reduces the probability of ties and prevents smearing. Why this size? Because it is larger than the line, a single line won't influence the pixel much; but it is small enough that the information is still relevant to the pixel. Let's look at some examples.

Pixel Y (upper right), with max line width = 1. What color should pixel Y be? We count green=8, black=5, white=12, so Y would be white. But that's incorrect; this is a common error when the size is too large. If we use a 3x3 neighborhood instead, we get green=3, black=3, white=3. You have to make a judgement call here somehow, but at least the pixel won't be incorrectly classified.

No matter what size you choose, though, there will always be problems at the edges and corners. Look at pixel Z: with 3x3 Z=black, with 5x5 Z=black, with 7x7 Z=black. So this method isn't perfect, but it works reasonably well.
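The voting scheme described above can be sketched in Java like this. It is a rough sketch, assuming ARGB int pixels and a square (2r+1)x(2r+1) window with r = max_line_width, clamped at the image borders; names are illustrative, and ties are broken arbitrarily by iteration order.

```java
import java.util.HashMap;
import java.util.Map;

public class MajorityVoteFilter {
    // Returns the most frequent color in the (2r+1)x(2r+1)
    // neighborhood centered on column cx, row cy.
    static int majorityColour(int[][] img, int cx, int cy, int r) {
        Map<Integer, Integer> votes = new HashMap<>();
        for (int y = Math.max(0, cy - r); y <= Math.min(img.length - 1, cy + r); y++)
            for (int x = Math.max(0, cx - r); x <= Math.min(img[0].length - 1, cx + r); x++)
                votes.merge(img[y][x], 1, Integer::sum); // count one vote per pixel
        int best = img[cy][cx], bestCount = -1;
        for (Map.Entry<Integer, Integer> e : votes.entrySet())
            if (e.getValue() > bestCount) {
                best = e.getKey();
                bestCount = e.getValue();
            }
        return best;
    }
}
```

In practice you would run this only on black pixels and write the results to a second image, so that already-filled pixels don't vote for later ones.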

Just to discuss another shape: Alexandru mentioned a T-shaped neighborhood.

[Image: T-shaped neighborhood example]

It's the same nearest-neighbor algorithm, just using a different neighborhood shape; as you can see in this example, the pixel would be black. But as we already saw, every method/shape has shortcomings. Good luck!

answered Oct 08 '22 by andrew


As algorithms are welcome, I'll show you how I would do it with ImageMagick which is installed on most Linux distros and available for OSX and Windows.

My algorithm would be to make a mask in which all black pixels are transparent, and then overlay that on top of a median-filtered copy of your original image. In the median-filtered image, the black pixels fall to the bottom of the sorted set of pixels at each point and are therefore never selected as the median, so only a nearby coloured pixel can become the new output pixel. The mask, with the black pixels made transparent, is then overlaid so that only the black pixels of your original image become transparent, and at those places the median-filtered image shows through. It is easier than it sounds...

Make black pixels transparent:

convert in.png -transparent black mask.png

[Image: mask with black pixels made transparent]

Generate filtered image of median of 7x7 neighbourhood

convert in.png -median 7x7 median.png

[Image: median-filtered image]

Overlay mask on top of median-filtered image, so filtered image only shows through at black pixels (which are now transparent)

convert median.png mask.png -composite result.png
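Since the question asks for Java, here is a rough sketch of the same idea without ImageMagick: compute the median of the window only for black pixels and copy everything else through. It assumes opaque ARGB int pixels, where fully opaque black (0xFF000000) sorts below every other opaque color as a signed int, so it is rarely picked as the median; all names are illustrative.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class MedianMask {
    static final int BLACK = 0xFF000000; // assumed line color

    // Replaces black pixels with the median of the surrounding
    // (2r+1)x(2r+1) window; non-black pixels pass through unchanged.
    static int[][] removeBlack(int[][] img, int r) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                if (img[y][x] != BLACK) { // equivalent of the transparent mask
                    out[y][x] = img[y][x];
                    continue;
                }
                List<Integer> window = new ArrayList<>();
                for (int j = Math.max(0, y - r); j <= Math.min(h - 1, y + r); j++)
                    for (int i = Math.max(0, x - r); i <= Math.min(w - 1, x + r); i++)
                        window.add(img[j][i]);
                Collections.sort(window);              // black sinks to the bottom
                out[y][x] = window.get(window.size() / 2); // take the median pixel
            }
        return out;
    }
}
```

A window of 7x7 (r = 3) corresponds to the `-median 7x7` step above; the radius only needs to be large enough that fewer than half the window pixels are black.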

[Image: final result]

answered Oct 08 '22 by Mark Setchell