My goal is to create an enhanced image with a more readable license plate number from a given sequence of images with indistinguishable license plates on driving cars, such as the sequence below.
As you can see, the plate number is, for the most part, indistinguishable. I am looking into implementations for enhancing such as super-resolution of multiple frames (as I have researched in this paper: http://users.soe.ucsc.edu/~milanfar/publications/journal/SRfinal.pdf). I have some experience with OpenCV, and I am looking for help in what direction to take, or if super-resolution is really a viable option for this kind of problem.
Note that a shift between images larger than one pixel does not prevent sub-pixel accuracy; the image can shift, say, 3.3 pixels to the right.
Still, you'll need sub-pixel accuracy to start with in order to estimate the displacement between frames, something along the lines of:
// refine the detected corner locations to sub-pixel accuracy in both frames
cornerSubPix( imgA, cornersA, Size( win_size, win_size ), Size( -1, -1 ),
              TermCriteria( CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03 ) );
cornerSubPix( imgB, cornersB, Size( win_size, win_size ), Size( -1, -1 ),
              TermCriteria( CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03 ) );
[...]
// track the refined corners from imgA to imgB with pyramidal Lucas-Kanade
calcOpticalFlowPyrLK( imgA, imgB, cornersA, cornersB, features_found, feature_errors,
                      Size( win_size, win_size ), 5,
                      TermCriteria( CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.1 ), 0 );
You're in luck because your scene has no large changes in lighting (so PyrLK will be reasonably accurate) and its structure doesn't change much (because it is a short sequence). That means you can get the estimated movement vector from frame to frame from the central part of the scene (where the car is), by removing the outliers and averaging the remaining ones. Note that this approach would not work if the car were getting closer to you...
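The outlier-removal-and-averaging step could look like the sketch below. This is plain C++ rather than OpenCV, and the `Vec2`/`robustShift` names are illustrative; it assumes you have already collected the per-feature flow vectors (the differences `cornersB[i] - cornersA[i]` for the features PyrLK tracked successfully). It rejects vectors far from the component-wise median, then averages the inliers:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Estimate the global frame-to-frame shift from per-feature flow vectors:
// reject vectors far from the median, then average the remaining inliers.
// Assumes at least one inlier survives the rejection step.
Vec2 robustShift(const std::vector<Vec2>& flows, double tol = 1.0) {
    auto median = [](std::vector<double> v) {
        std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
        return v[v.size() / 2];
    };
    std::vector<double> xs, ys;
    for (const auto& f : flows) { xs.push_back(f.x); ys.push_back(f.y); }
    const double mx = median(xs), my = median(ys);

    double sx = 0, sy = 0;
    int n = 0;
    for (const auto& f : flows) {
        if (std::hypot(f.x - mx, f.y - my) <= tol) {  // inlier test
            sx += f.x; sy += f.y; ++n;
        }
    }
    return { sx / n, sy / n };
}
```

The median is a cheap robustness trick here; for stronger guarantees (e.g. a car moving against a static background, where two motion populations exist) a RANSAC-style estimate would be the usual choice.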
With that, the easiest super-resolution algorithm involves mapping every frame, with its individual displacement, onto a higher-resolution grid (e.g. 2x width and 2x height) and averaging the results. This tackles noise and gives you a very good impression of how good your assumptions are. You should run this against a model database (since you have a sequence database to test against, right?). If the approach is satisfactory, you can then pick sub-algorithms from the literature to deconvolve the point spread function, which in general amounts to mask filtering.
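This "shift-and-add" step can be sketched as follows. It is a minimal illustration, not a production implementation: images are plain 2-D `double` arrays, each low-res pixel is snapped to the nearest cell of the 2x grid at its shifted position, and overlapping samples are averaged. The `Image`/`shiftAndAdd` names and nearest-cell placement are assumptions of this sketch; real implementations interpolate and fill the cells no frame ever hits:

```cpp
#include <cmath>
#include <utility>
#include <vector>

using Image = std::vector<std::vector<double>>;

// Shift-and-add super-resolution: place each low-res pixel of every frame
// onto a 2x-resolution grid at its (sub-pixel) shifted position, snapping
// to the nearest high-res cell, then average the accumulated samples.
Image shiftAndAdd(const std::vector<Image>& frames,
                  const std::vector<std::pair<double, double>>& shifts) {
    const int h = (int)frames[0].size(), w = (int)frames[0][0].size();
    const int H = 2 * h, W = 2 * w;
    Image sum(H, std::vector<double>(W, 0.0));
    Image count(H, std::vector<double>(W, 0.0));

    for (size_t k = 0; k < frames.size(); ++k) {
        const double dx = shifts[k].first, dy = shifts[k].second;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                // low-res pixel (x,y) lands at (2*(x+dx), 2*(y+dy)) on the fine grid
                const int X = (int)std::lround(2.0 * (x + dx));
                const int Y = (int)std::lround(2.0 * (y + dy));
                if (X >= 0 && X < W && Y >= 0 && Y < H) {
                    sum[Y][X] += frames[k][y][x];
                    count[Y][X] += 1.0;
                }
            }
    }
    for (int Y = 0; Y < H; ++Y)
        for (int X = 0; X < W; ++X)
            if (count[Y][X] > 0) sum[Y][X] /= count[Y][X];
    return sum;  // cells no frame hit remain 0; interpolate them in practice
}
```

With sub-pixel shifts of roughly half a pixel between frames, the samples of different frames fall into different high-res cells, which is exactly where the extra resolution comes from.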