Detecting the National ID card and getting the details [closed]

I am trying to detect a National ID card of the type shown below and extract its details. For example, the signature should be found at the top right corner of the person's image, in this case "BC".

[Image: sample National ID card with the regions to extract marked in red]

I need to build this application for iPhone. I thought of using OpenCV for it, but how can I extract the marked details? Do I need to train the application with similar kinds of cards, or could OCR help?

Are there any implementations specific to mobile applications?

I have also gone through card.io, which detects credit card details. Does card.io detect details of other card types as well?

Update:

I have used Tesseract for text detection. Tesseract works well if the image contains text alone, so I cropped the red-marked regions and gave them as input to Tesseract; it works well with the MRZ part.

There is an iOS implementation of Tesseract, with which I have tested.
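As a side note, once Tesseract returns the MRZ text, the ICAO 9303 check digits give a cheap way to verify the OCR result: each character maps to a value (digits to themselves, A-Z to 10-35, the filler `<` to 0), values are multiplied by the repeating weights 7, 3, 1, summed, and taken modulo 10. A minimal sketch, independent of OpenCV and Tesseract:

```cpp
#include <cctype>
#include <string>

// Compute the ICAO 9303 check digit for an MRZ field:
// 0-9 -> 0-9, A-Z -> 10-35, '<' (filler) -> 0,
// weighted 7, 3, 1 repeating, summed modulo 10.
int mrzCheckDigit(const std::string& field)
{
    static const int weights[3] = {7, 3, 1};
    int sum = 0;
    for (std::size_t i = 0; i < field.size(); ++i)
    {
        char c = field[i];
        int value = 0;
        if (std::isdigit(static_cast<unsigned char>(c)))
            value = c - '0';
        else if (std::isupper(static_cast<unsigned char>(c)))
            value = c - 'A' + 10;
        // '<' and anything unexpected count as 0
        sum += value * weights[i % 3];
    }
    return sum % 10;
}
```

Comparing the computed digit against the one printed on the card catches most single-character OCR mistakes.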

What do I need to do?

Now I am trying to automate the text detection part. I am planning to automate the following items:

1) Cropping the face (I have done this using the Viola-Jones face detector).

2) Extracting the initials (in this example, "BC") from the photo.

3) Extracting/detecting the MRZ region from the ID card.

I am trying to do 2 and 3. Any ideas or code snippets would be great.
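For item 2, one low-tech option (an assumption, not a tested recipe) is to derive the initials region geometrically from the photo rectangle already located via the face detector, since the initials sit at its top-right corner. The fractions below are made-up placeholders to tune on real cards:

```cpp
// Simple axis-aligned rectangle; (x, y) is the top-left corner.
struct Box { int x, y, width, height; };

// Hypothetical layout rule: the initials ("BC" in the example) are
// overprinted near the top-right corner of the photo, so take a strip
// whose width and height are fixed fractions of the photo box.
Box initialsRegion(const Box& photo)
{
    Box r;
    r.width  = photo.width / 3;   // right third of the photo
    r.height = photo.height / 4;  // top quarter of the photo
    r.x = photo.x + photo.width - r.width;
    r.y = photo.y;
    return r;
}
```

The cropped strip can then be fed to Tesseract the same way as the MRZ crop.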

asked Jun 16 '14 by 2vision2


1 Answer

Assuming these IDs are prepared according to a standard template having specific widths, heights, offsets, spacing etc., you can try a template-based approach.

MRZ would be easy to detect. Once you detect it in the image, find the transformation that maps the MRZ in your template to it. When you know this transformation you can map any region on your template (for example, the photo of the individual) to the image and extract that region.

Below is a very simple program that follows the happy path. You will have to do more processing to locate the MRZ in general (for example, if there are perspective distortions or rotations). I prepared the template just by measuring the image, so it won't work for your case; I just wanted to convey the idea. The image was taken from Wikipedia.

    #include <opencv2/opencv.hpp>

    using namespace cv;
    using namespace std;

    Mat rgb = imread(INPUT_FILE);
    Mat gray;
    cvtColor(rgb, gray, CV_BGR2GRAY);

    Mat grad;
    Mat morphKernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
    morphologyEx(gray, grad, MORPH_GRADIENT, morphKernel);

    Mat bw;
    threshold(grad, bw, 0.0, 255.0, THRESH_BINARY | THRESH_OTSU);

    // connect horizontally oriented regions
    Mat connected;
    morphKernel = getStructuringElement(MORPH_RECT, Size(9, 1));
    morphologyEx(bw, connected, MORPH_CLOSE, morphKernel);

    // find contours
    Mat mask = Mat::zeros(bw.size(), CV_8UC1);
    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours(connected, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    vector<Rect> mrz;
    double r = 0;
    // filter contours
    for (int idx = 0; idx >= 0; idx = hierarchy[idx][0])
    {
        Rect rect = boundingRect(contours[idx]);
        // cast before dividing to avoid integer division
        r = rect.height ? (double)rect.width / rect.height : 0;
        if ((rect.width > connected.cols * .7) && /* filter from rect width */
            (r > 25) && /* filter from width:height ratio */
            (r < 36) /* filter from width:height ratio */
            )
        {
            mrz.push_back(rect);
            rectangle(rgb, rect, Scalar(0, 255, 0), 1);
        }
        else
        {
            rectangle(rgb, rect, Scalar(0, 0, 255), 1);
        }
    }
    if (2 == mrz.size())
    {
        // just assume we have found the two data strips in MRZ and combine them
        Rect max = mrz[0] | mrz[1]; // union of the two rects
        rectangle(rgb, max, Scalar(255, 0, 0), 2); // draw the MRZ

        vector<Point2f> mrzSrc;
        vector<Point2f> mrzDst;

        // MRZ region in our image
        mrzDst.push_back(Point2f((float)max.x, (float)max.y));
        mrzDst.push_back(Point2f((float)(max.x + max.width), (float)max.y));
        mrzDst.push_back(Point2f((float)(max.x + max.width), (float)(max.y + max.height)));
        mrzDst.push_back(Point2f((float)max.x, (float)(max.y + max.height)));

        // MRZ in our template
        mrzSrc.push_back(Point2f(0.23f, 9.3f));
        mrzSrc.push_back(Point2f(18.0f, 9.3f));
        mrzSrc.push_back(Point2f(18.0f, 10.9f));
        mrzSrc.push_back(Point2f(0.23f, 10.9f));

        // find the transformation
        Mat t = getPerspectiveTransform(mrzSrc, mrzDst);

        // photo region in our template
        vector<Point2f> photoSrc;
        photoSrc.push_back(Point2f(0.0f, 0.0f));
        photoSrc.push_back(Point2f(5.66f, 0.0f));
        photoSrc.push_back(Point2f(5.66f, 7.16f));
        photoSrc.push_back(Point2f(0.0f, 7.16f));

        // surname region in our template
        vector<Point2f> surnameSrc;
        surnameSrc.push_back(Point2f(6.4f, 0.7f));
        surnameSrc.push_back(Point2f(8.96f, 0.7f));
        surnameSrc.push_back(Point2f(8.96f, 1.2f));
        surnameSrc.push_back(Point2f(6.4f, 1.2f));

        vector<Point2f> photoDst(4);
        vector<Point2f> surnameDst(4);

        // map the regions from our template to image
        perspectiveTransform(photoSrc, photoDst, t);
        perspectiveTransform(surnameSrc, surnameDst, t);

        // draw the mapped regions
        for (int i = 0; i < 4; i++)
        {
            line(rgb, photoDst[i], photoDst[(i + 1) % 4], Scalar(0, 128, 255), 2);
        }
        for (int i = 0; i < 4; i++)
        {
            line(rgb, surnameDst[i], surnameDst[(i + 1) % 4], Scalar(0, 128, 255), 2);
        }
    }

Result: photo and surname regions in orange, MRZ in blue.

answered Sep 30 '22 by dhanushka