
Continued - Vehicle License Plate Detection

Continuing from this thread:

What are good algorithms for vehicle license plate detection?

I've developed my image manipulation techniques to emphasise the license plate as much as possible, and overall I'm happy with it. Here are two samples:

[sample image 1]

[sample image 2]

Now comes the most difficult part, actually detecting the license plate. I know there are a few edge detection methods, but my maths is quite poor so I'm unable to translate some of the complex formulas into code.

My idea so far is to loop through every pixel in the image (a for loop over the image width and height) and compare each pixel against a list of colours, checking whether the colours keep alternating between the white of the license plate and the black of the text. If they do, those pixels are copied into a new bitmap in memory, and an OCR scan is performed once the pattern stops being detected.
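
Roughly, in Python/NumPy terms, I picture something like the sketch below (the file name and the flip-count threshold are just placeholders, not a tested detector):

    import cv2
    import numpy as np

    # Placeholder file name; any greyscale image of the preprocessed plate area.
    img = cv2.imread("preprocessed_plate.png", cv2.IMREAD_GRAYSCALE)

    # Binarise: anything brighter than 128 is treated as "plate white".
    binary = (img > 128).astype(np.int8)

    # Count, per row, how often the value flips between 0 and 1.
    transitions = np.abs(np.diff(binary, axis=1)).sum(axis=1)

    # Rows with many flips probably cross the alternating black/white characters.
    candidate_rows = np.where(transitions > 10)[0]
    print(candidate_rows)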

I'd appreciate some input on this as it might be a flawed idea, too slow or intensive.

Thanks

Asked Jan 18 '11 by Ash



2 Answers

Your method of "see if the colors keep differentiating between the license plate white, and the black of the text" is basically searching for areas where the pixel intensity changes from black to white and vice-versa many times. Edge detection can accomplish essentially the same thing. However, implementing your own methods is still a good idea because you will learn a lot in the process. Heck, why not do both and compare the output of your method with that of some ready-made edge detection algorithm?
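
If you want such a ready-made baseline to compare against, a minimal sketch with OpenCV's Canny detector might look like this (the file name and thresholds are placeholders you would tune):

    import cv2

    # Baseline edge map to compare against a hand-rolled transition detector.
    gray = cv2.imread("plate_sample.png", cv2.IMREAD_GRAYSCALE)

    # The two thresholds control which gradient magnitudes count as weak/strong edges.
    edges = cv2.Canny(gray, 100, 200)

    cv2.imwrite("plate_edges.png", edges)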

At some point you will want to have a binary image, say with black pixels corresponding to the "not-a-character" label, and white pixels corresponding to the "is-a-character" label. Perhaps the simplest way to do that is to use a thresholding function. But that will only work well if the characters have already been emphasized in some way.

As someone mentioned in your other thread, you can do that using the black hat operator, which results in something like this:

[image after black hat operation]

If you threshold the image above with, say, Otsu's method (which automatically determines a global threshold level), you get this:

[thresholded image]
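
As a rough sketch of that black hat + Otsu combination (the kernel size and file names are assumptions to tune for your images):

    import cv2

    gray = cv2.imread("car.png", cv2.IMREAD_GRAYSCALE)

    # Black hat = closing(image) - image: it highlights dark regions (the
    # characters) sitting on a brighter background (the plate).
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 5))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)

    # Otsu picks the global threshold automatically from the histogram.
    _, binary = cv2.threshold(blackhat, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)

    cv2.imwrite("binary.png", binary)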

There are several ways to clean that image. For instance, you can find the connected components and throw away those that are too small, too big, too wide or too tall to be a character:

[image after filtering connected components]

Since the characters in your image are relatively large and fully connected, this method works well.
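
A sketch of that filtering step using OpenCV's connected-components statistics; the area and width/height bounds below are guesses that depend entirely on your image resolution:

    import cv2
    import numpy as np

    binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)

    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)

    cleaned = np.zeros_like(binary)
    for i in range(1, num):  # label 0 is the background
        x, y, w, h, area = stats[i]
        # Keep components that are roughly character-sized and taller than wide.
        if 100 < area < 5000 and 0.2 < w / float(h) < 1.0:
            cleaned[labels == i] = 255

    cv2.imwrite("cleaned.png", cleaned)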

Next, you could filter the remaining components based on the properties of their neighbors until you have the desired number of components (= number of characters). If you want to recognize the characters, you could then calculate features for each one and feed them to a classifier, which is usually built with supervised learning.
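
As a toy illustration of that last step (the data here is random placeholder data standing in for labelled character crops, and scikit-learn's k-nearest-neighbours is just one possible classifier):

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Random placeholder data standing in for labelled 20x20 character crops.
    train_images = np.random.rand(100, 20, 20)
    train_labels = np.random.choice(list("ABC123"), size=100)

    # Flatten each crop into a feature vector (one row per character).
    X = train_images.reshape(len(train_images), -1)
    clf = KNeighborsClassifier(n_neighbors=3).fit(X, train_labels)

    # Classify an unseen (here, random) character crop.
    new_crop = np.random.rand(1, 20 * 20)
    print(clf.predict(new_crop))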

All the steps above are just one way to do it, of course.

By the way, I generated the images above using OpenCV + Python, which is a great combination for computer vision.

Answered by carnieri


Colour, as good as it looks, will present quite a few challenges with shading and lighting conditions. It really depends how robust you want to make it, but real-world cases have to deal with such issues.

I have done research on road footage (see my profile page and look here for a sample) and have found that real-world road footage is extremely noisy in terms of lighting conditions: the colour of a yellow rear number plate can shift anywhere from brown to white.

Most algorithms use line detection and try to find a box with an aspect ratio within an acceptable range.

I suggest you do a literature review on the subject, but this was first achieved back in 1993 (if I remember correctly), so there will be thousands of articles.

This is quite a scientific domain, so a single algorithm will not solve it, and you will need numerous pre/post-processing steps.

In brief, my suggestion is to use a Hough transform to find lines, and then look for rectangles with an acceptable aspect ratio.
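
A hedged sketch of that direction with OpenCV (placeholder file name and thresholds, assuming OpenCV 4's findContours signature; OpenCV 3 returns three values):

    import cv2
    import numpy as np

    gray = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 100, 200)

    # Probabilistic Hough transform: each detected segment is (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=5)

    # Grouping raw segments into boxes takes more bookkeeping; a simpler
    # stand-in for the "box with an acceptable aspect ratio" idea is to take
    # contours of the edge map and filter their bounding boxes.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h == 0:
            continue
        aspect = w / float(h)
        # EU plates are roughly 5:1, US plates roughly 2:1; tune for your region.
        if 2.0 <= aspect <= 6.0 and w > 60:
            candidates.append((x, y, w, h))
    print(candidates)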

Harris feature detection could provide important edges, but if the car is light-coloured this will not work.

Answered by Aliostad