 

OCR of low-resolution text from screenshots

Tags: python, opencv, ocr

I'm writing an OCR application to read characters from a screenshot image. Currently, I'm focusing only on digits. I'm partially basing my approach on this blog post: http://blog.damiles.com/2008/11/basic-ocr-in-opencv/.

I can successfully extract each individual character using some clever thresholding. Where things get a bit tricky is matching the characters. Even with fixed font face and size, there are some variables such as background color and kerning that cause the same digit to appear in slightly different shapes. For example, the below image is segmented into 3 parts:

  1. Top: a target digit that I successfully extracted from a screenshot
  2. Middle: the template: a digit from my training set
  3. Bottom: the error (absolute difference) between the top and middle images

The parts have all been scaled (the distance between the two green horizontal lines represents one pixel).

[Image: the three parts stacked vertically - top (extracted digit), middle (template), bottom (absolute difference)]

You can see that despite both the top and middle images clearly representing a 2, the error between them is quite high. This causes false positives when matching other digits -- for example, it's not hard to see how a well-placed 7 can match the target digit in the image above better than the middle image can.

Currently, I'm handling this by having a heap of training images for each digit, and matching the target digit against those images, one-by-one. I tried taking the average image of the training set, but that doesn't resolve the problem (false positives on other digits).
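Roughly, the matching loop looks like this (a minimal sketch; numpy is used here for the comparison just for brevity, the real code goes through the old `import cv` wrappers):

```python
import numpy as np

def best_match(target, templates_by_digit):
    """templates_by_digit: dict mapping digit -> list of uint8 patches
    the same size as the target patch."""
    best_digit, best_error = None, float("inf")
    for digit, templates in templates_by_digit.items():
        for tpl in templates:
            # Mean absolute difference between the target and this template.
            error = np.abs(target.astype(np.int16) - tpl.astype(np.int16)).mean()
            if error < best_error:
                best_digit, best_error = digit, error
    return best_digit, best_error
```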

I'm a bit reluctant to perform matching using a shifted template (it'd be essentially the same as what I'm doing now). Is there a better way to compare the two images than simple absolute difference? I was thinking of maybe something like EMD (earth mover's distance, http://en.wikipedia.org/wiki/Earth_mover's_distance) in 2D: basically, I need a comparison method that isn't as sensitive to global shifting and small local changes (pixels next to a white pixel becoming white, or pixels next to a black pixel becoming black), but is sensitive to global changes (black pixels that are nowhere near white pixels becoming white, and vice versa).

Can anybody suggest a more effective matching method than absolute difference?

I'm doing all this in OpenCV using the C-style Python wrappers (import cv).

asked Jan 02 '12 by mpenkov




2 Answers

I would look into using Haar cascades. I've used them for face detection/head tracking, and it seems like you could build up a pretty good set of cascades with enough '2's, '3's, '4's, and so on.

http://alereimondo.no-ip.org/OpenCV/34

http://en.wikipedia.org/wiki/Haar-like_features
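For illustration, a minimal detection sketch assuming you have already trained a cascade for a single digit; the file name digit_2_cascade.xml is hypothetical, and this uses the newer cv2 bindings rather than the old `import cv` wrappers the question mentions:

```python
import cv2

# Load a cascade trained on examples of the digit '2' (hypothetical file).
cascade = cv2.CascadeClassifier("digit_2_cascade.xml")

# Load the screenshot as grayscale, as the cascade expects.
screenshot = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)

# Each hit is an (x, y, w, h) box where the cascade thinks a '2' appears.
hits = cascade.detectMultiScale(screenshot, scaleFactor=1.05, minNeighbors=3)
for (x, y, w, h) in hits:
    print("possible '2' at", x, y, w, h)
```

You would train one such cascade per digit and run them all over the screenshot, keeping the strongest (or non-overlapping) detections.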

answered Sep 28 '22 by rsaxvc


OCR on noisy images is not easy, so simple approaches do not work well.

So, I would recommend using HOG to extract features and an SVM to classify. HOG seems to be one of the most powerful ways to describe shapes.

The whole processing pipeline is implemented in OpenCV; however, I do not know the function names in the Python wrappers. You should be able to train with the latest haartraining.cpp - it actually supports more than Haar features: HOG and LBP as well.

And I think the latest code (from trunk) is much improved over the official release (2.3.1).

HOG usually needs just a fraction of the training data used by other recognition methods. However, if you want to classify shapes that are partially occluded (or missing), you should make sure to include some such shapes in training.
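A rough sketch of that HOG + SVM pipeline, assuming the newer cv2 bindings (cv2.HOGDescriptor and cv2.ml.SVM_create) rather than the old `import cv` wrappers, and with the 20x32 patch size and block/cell parameters chosen arbitrarily:

```python
import cv2
import numpy as np

# Fixed window size every segmented digit patch is resized to (width, height).
WIN_SIZE = (20, 32)
hog = cv2.HOGDescriptor(WIN_SIZE, (10, 16), (5, 8), (5, 8), 9)

def hog_features(img):
    """Resize a grayscale digit patch and return its flattened HOG descriptor."""
    patch = cv2.resize(img, WIN_SIZE)
    return hog.compute(patch).flatten()

def train(samples, labels):
    """samples: list of grayscale digit patches; labels: list of ints 0-9."""
    feats = np.array([hog_features(s) for s in samples], dtype=np.float32)
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_LINEAR)
    svm.train(feats, cv2.ml.ROW_SAMPLE, np.array(labels, dtype=np.int32))
    return svm

def classify(svm, img):
    """Return the predicted digit for a single grayscale patch."""
    feat = np.array([hog_features(img)], dtype=np.float32)
    return int(svm.predict(feat)[1][0][0])
```

Because HOG pools gradients over cells, small shifts and boundary changes in the binarized digit move the descriptor only slightly, which is exactly the kind of tolerance the question asks for.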

answered Sep 28 '22 by Sam