I am working on a project that needs accurate OCR results for images with rich backgrounds. To make my choice, I am comparing the results of two OCR engines (one of them is Tesseract). The point is that the results are strongly affected by the pre-processing step, especially image binarization. I extracted the binarized image produced by the other OCR engine and passed it to Tesseract, which improved Tesseract's results by 30-40%.
I have two questions, and your answers would be of much help to me:
1- What binarization algorithm does Tesseract use?
2- How can I get the binarized image that Tesseract produces internally?
Thanks in advance :)
I think I have found the answers to my questions:
1- The binarization algorithm used is Otsu thresholding. You can see it here in line 179.
2- To get the binarized image, call a method on the Tesseract API object:
PIX* thresholded = api->GetThresholdedImage(); // the caller owns the PIX and must free it with pixDestroy(&thresholded)
Otsu thresholding is a global filter. You can use a local filter to get better results. Look at Sauvola's binarization (see here) or Nick's (here). Both algorithms are improvements on Niblack's method. I used one of them to binarize my images for an OCR and I got better results. Good luck!