Basically I'm trying to check whether a face in an image is upside down, using this library: https://github.com/takuya-takeuchi/FaceRecognitionDotNet. Take the example image below.
This is an image that is successfully detected by the FaceRecognition.Net library. The image is upside down. I have marked all face landmarks in the image with blue ellipses.
This is the approach I follow:
// Finding face parts
var faceparts = dparameters._FaceRecognition.FaceLandmark(dparameters.FCImage);
// Drawing ellipses over all points returned for the face parts
foreach (var facepart in faceparts) {
    foreach (var mypoint in facepart.Values) {
        foreach (var x in mypoint) {
            tempg.DrawEllipse(Pens.Blue, x.Point.X, x.Point.Y, 2, 2);
        }
    }
}
Now I check whether the image is rotated by comparing the maximum Y coordinates of the lip and eye points:
var temp = faceparts.FirstOrDefault();
temp.TryGetValue(FacePart.BottomLip, out IEnumerable<FacePoint> lippoints);
temp.TryGetValue(FacePart.LeftEye, out IEnumerable<FacePoint> eyepoints);
var lippoint = lippoints.Max(r => r.Point.Y);
var topeyepoint = eyepoints.Max(r => r.Point.Y);
// Declare the flag outside the branch so it remains in scope afterwards
bool isinverted = lippoint > topeyepoint;
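For illustration, the check above can be made slightly less fragile by averaging the Y coordinates of both eyes and of the bottom lip instead of taking the maximum of a single eye, so one stray landmark cannot flip the result. This is a sketch that reuses the FaceRecognitionDotNet types already shown (`FacePart`, `FacePoint`); it is not part of the library itself.

```csharp
// Sketch: average-based inversion test over one face's landmark dictionary.
// Assumes the FacePart/FacePoint types from FaceRecognitionDotNet used above.
static bool IsInverted(IDictionary<FacePart, IEnumerable<FacePoint>> landmarks)
{
    if (!landmarks.TryGetValue(FacePart.LeftEye, out var leftEye) ||
        !landmarks.TryGetValue(FacePart.RightEye, out var rightEye) ||
        !landmarks.TryGetValue(FacePart.BottomLip, out var bottomLip))
        return false; // not enough landmarks to decide

    var eyeY = leftEye.Concat(rightEye).Average(p => p.Point.Y);
    var lipY = bottomLip.Average(p => p.Point.Y);

    // Image Y grows downwards, so in an upright face the eyes
    // should have a smaller average Y than the bottom lip.
    return eyeY > lipY;
}
```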
The issue is that even when the image is not upside down, the eye coordinate is less than the lip coordinate. This is because a false face is detected, as outlined in the image. How can I get over this issue?
It looks like this library does not provide a confidence score for its results. Otherwise, I would suggest trying both the input and a flipped copy of it and keeping the one with the higher confidence before doing the "eyes over mouth" check.
So maybe what could help is:
face_locations = face_recognition.face_locations(image, number_of_times_to_upsample=0, model="cnn")
in the C# port it should be
_FaceRecognition.FaceLocations(image, 0, Model.Cnn)
That should give you a more accurate face bounding box, which you can then compare with the bounding box of the landmarks. If you do the same for a flipped copy of the image, you can "emulate" the confidence I mentioned earlier and assume the orientation where the boxes match better. Then you can determine the orientation with the "eyes over mouth" test.
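A minimal sketch of that "emulated confidence" idea: run the CNN detector on both the original and a vertically flipped copy, and prefer the orientation in which a face is actually found before applying the landmark test. `FaceLocations` follows the C# call shown above; `FlipVertically` is a hypothetical helper standing in for your own image-flipping code, not a library API.

```csharp
// Count CNN detections in the original orientation.
var uprightCount = dparameters._FaceRecognition
    .FaceLocations(dparameters.FCImage, 0, Model.Cnn).Count();

// FlipVertically is a hypothetical helper you would implement
// (e.g. via Bitmap.RotateFlip) to produce an upside-down copy.
var flippedImage = FlipVertically(dparameters.FCImage);
var flippedCount = dparameters._FaceRecognition
    .FaceLocations(flippedImage, 0, Model.Cnn).Count();

// If only the flipped copy yields a detection, the photo is most
// likely upside down; otherwise run the eyes-over-mouth test on
// whichever orientation produced a face.
bool probablyInverted = flippedCount > 0 && uprightCount == 0;
```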
For the Cnn model you need to train it yourself. Selection of the training dataset is of course very important. If you already performed the training, more or better training data might improve the accuracy.