I am attempting to read information from pictures of hard plastic ID cards. As a first step, I've been trying to process the pictures to make the text more computer-readable. The pictures are fairly clear, but they are tricky because they are light on one side and dark on the other. It seems like it should be possible to use this information to create a depth map, which could then be converted to a black and white image. Mainly, I'd like to know if there is some known algorithm (the simpler the better) I could implement. I'm currently doing the rest of the processing using Python and PIL, but any implementation I could adapt would be great.
A small example of the images I'm working with: http://i.imgur.com/GLzvj.png (the image imported in the code below).
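To make what I mean by a depth map a bit more concrete, here is a very rough sketch of the kind of thing I've been imagining with PIL and NumPy. The filename and the 0.5 threshold are placeholders, and I have no idea whether this crude "integrate the shading" idea is actually sound, which is why I'm asking about known algorithms.

    # Very rough sketch of the "depth map" idea (untested; "card.png" and the
    # 0.5 threshold are placeholders).  Assuming the light comes from one side,
    # brightness is roughly proportional to the surface slope, so summing the
    # mean-subtracted intensity along each row gives a crude relief/height map.
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("card.png").convert("L"), dtype=np.float32)

    shading = img - img.mean(axis=1, keepdims=True)  # remove each row's average brightness
    depth = np.cumsum(shading, axis=1)               # integrate the slope left to right

    # Normalise the relief to 0..1 and threshold it into a black and white image.
    depth -= depth.min()
    depth /= depth.max() + 1e-6
    bw = Image.fromarray(np.where(depth > 0.5, 255, 0).astype(np.uint8))
    bw.save("card_relief_bw.png")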
Example in Mathematica. If the result is satisfactory I could explain the procedure step by step.
Erosion[
 ColorNegate@
  Thinning@
   Dilation[
    DeleteSmallComponents[
     DeleteBorderComponents@
      ColorNegate@
       Binarize@Import["http://i.imgur.com/GLzvj.png"],
     150],
    8],
 8]
Edit

Step by step, starting with the original image: Binarize thresholds it to a black-and-white image; ColorNegate inverts it so the text is white on black; DeleteBorderComponents drops the components touching the image border and DeleteSmallComponents drops everything with fewer than 150 pixels, which removes most of the noise; Dilation with radius 8 thickens and merges the surviving components; Thinning reduces them to one-pixel skeletons; the second ColorNegate flips back to dark text on a white background; and the final Erosion with radius 8 grows the thin strokes back into readable characters.
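If you need to stay in Python, roughly the same chain of operations could be sketched with NumPy and scikit-image. This is an untested sketch: the function names are the closest equivalents I know of, and the 17x17 box footprint and the Otsu threshold are my guesses at what the Mathematica defaults do.

    # Untested scikit-image sketch of the same pipeline; the parameters mirror
    # the Mathematica call (min_size 150, radius-8 box for dilation/erosion).
    import numpy as np
    from skimage import filters, io, morphology, segmentation

    img = io.imread("GLzvj.png", as_gray=True)        # the example card image

    bw = img > filters.threshold_otsu(img)            # Binarize
    neg = ~bw                                         # ColorNegate: text becomes True
    neg = segmentation.clear_border(neg)              # DeleteBorderComponents
    neg = morphology.remove_small_objects(neg, 150)   # DeleteSmallComponents[..., 150]

    box = np.ones((17, 17), dtype=bool)               # radius-8 box footprint
    thick = morphology.binary_dilation(neg, box)      # Dilation[..., 8]
    skeleton = morphology.thin(thick)                 # Thinning
    dark = ~skeleton                                  # ColorNegate back: dark text on white
    result = morphology.binary_erosion(dark, box)     # Erosion[..., 8] regrows the strokes

    io.imsave("card_cleaned.png", result.astype(np.uint8) * 255)

The 150 and the 17x17 footprint are just the numbers carried over from the Mathematica code, so they would probably need tuning for images at a different scale.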