I have an application that uses a Haar cascade to detect eyes in images captured from a video camera. The method used is:
void CascadeClassifier::detectMultiScale(const Mat& image, vector<Rect>& objects, double scaleFactor=1.1, int minNeighbors=3, int flags=0, Size minSize=Size(), Size maxSize=Size())
This works quite well with the default values of scaleFactor, minNeighbors, and flags, but some people's eyes are still not detected, so I want to improve the accuracy of the eye detection. Training a custom cascade ("Cascade Classifier Training") looks like a good solution, but before going down that road: is it possible to improve detection accuracy just by adjusting the parameters of this method? Please explain the meaning of scaleFactor, minNeighbors, and flags in more detail, because the descriptions in the cascadeclassifier-detectmultiscale docs are not quite clear to me. Thank you.
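For context, here is roughly how my detection loop looks; the cascade file path, camera index, and display code below are simplified placeholders, not my exact setup:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::CascadeClassifier eyeCascade;
    if (!eyeCascade.load("haarcascade_eye.xml"))   // stock OpenCV eye cascade (placeholder path)
        return -1;

    cv::VideoCapture cap(0);                       // default camera
    cv::Mat frame, gray;
    std::vector<cv::Rect> eyes;

    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::equalizeHist(gray, gray);              // normalize contrast before detection

        // Default parameters: scaleFactor = 1.1, minNeighbors = 3, flags = 0
        eyeCascade.detectMultiScale(gray, eyes);

        for (size_t i = 0; i < eyes.size(); ++i)
            cv::rectangle(frame, eyes[i], cv::Scalar(0, 255, 0), 2);

        cv::imshow("eyes", frame);
        if (cv::waitKey(30) >= 0)
            break;
    }
    return 0;
}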
The scaleFactor parameter controls how much the search window is enlarged between successive passes over the image, i.e. how many different eye sizes the detector looks for. A value of 1.1 (a 10% step) is the usual choice for good detection. Raising it to 1.2 or 1.3 makes detection faster because fewer scales are checked, but eyes whose size falls between the checked scales can be missed, so accuracy goes down.
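As a quick illustration, here is a minimal sketch assuming an already-loaded classifier eyeCascade and a preprocessed grayscale frame gray (the helper name is just for illustration):

#include <opencv2/opencv.hpp>
#include <vector>

// scaleFactor = 1.05: the window grows in 5% steps, so more eye sizes are
// tested per frame; slower, but less likely to skip a detectable size.
std::vector<cv::Rect> detectEyesDense(cv::CascadeClassifier& eyeCascade, const cv::Mat& gray)
{
    std::vector<cv::Rect> eyes;
    eyeCascade.detectMultiScale(gray, eyes, 1.05);
    return eyes;
}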
minNeighbors tells the detector how many overlapping candidate rectangles a region needs before it is accepted as an eye, in other words how sure the detector must be. Normally this value is set to 3. Setting it higher makes the detections more reliable (fewer false positives), but weakly supported true detections may be rejected, so some eyes can be missed.
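For example, with the same assumed eyeCascade and gray as above (again, the function name is only illustrative):

#include <opencv2/opencv.hpp>
#include <vector>

// minNeighbors = 5: keep a candidate only if at least 5 overlapping detections
// agree on it; fewer false positives, but borderline eyes may be dropped.
std::vector<cv::Rect> detectEyesStrict(cv::CascadeClassifier& eyeCascade, const cv::Mat& gray)
{
    std::vector<cv::Rect> eyes;
    eyeCascade.detectMultiScale(gray, eyes, 1.1, 5);
    return eyes;
}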
The flags parameter sets specific search preferences, for example CASCADE_FIND_BIGGEST_OBJECT (return only the largest detection) or CASCADE_DO_CANNY_PRUNING (use edge density to skip regions that are unlikely to contain the object). The default is 0, meaning no special behaviour. Setting a flag mainly makes detection faster rather than more accurate.
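A sketch of a speed-oriented call, again assuming the same eyeCascade and gray; the 30x30 minimum size is just an example value, not a recommendation:

#include <opencv2/opencv.hpp>
#include <vector>

// CASCADE_FIND_BIGGEST_OBJECT stops after the largest candidate is found, and
// the minSize argument skips windows too small to be an eye at this distance.
std::vector<cv::Rect> detectLargestEye(cv::CascadeClassifier& eyeCascade, const cv::Mat& gray)
{
    std::vector<cv::Rect> eyes;
    eyeCascade.detectMultiScale(gray, eyes, 1.1, 3,
                                cv::CASCADE_FIND_BIGGEST_OBJECT,
                                cv::Size(30, 30));
    return eyes;
}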