Detecting if a Face is Upside down with Dlib.Net(FaceRecognition.Net)

Basically, I'm trying to check whether a face is upside down in an image using this library: https://github.com/takuya-takeuchi/FaceRecognitionDotNet. Take the example image below.

[Image: upside-down face with detected landmarks marked in blue]

This image is successfully detected by the FaceRecognition.Net library even though it is upside down. I have marked all detected face landmarks in the image with blue ellipses.

This is the approach I follow:

// Find the face landmarks for every detected face
var faceparts = dparameters._FaceRecognition.FaceLandmark(dparameters.FCImage);

// Draw an ellipse over every landmark point returned
foreach (var facepart in faceparts) {
  foreach (var mypoint in facepart.Values) {
    foreach (var x in mypoint) {
      tempg.DrawEllipse(Pens.Blue, x.Point.X, x.Point.Y, 2, 2);
    }
  }
}

Now I check whether the image is rotated by comparing the maximum Y coordinates of the lip points and the eye points:

var temp = faceparts.FirstOrDefault();

// Landmark points of the bottom lip and the left eye
IEnumerable<FacePoint> lippoints;
temp.TryGetValue(FacePart.BottomLip, out lippoints);

IEnumerable<FacePoint> eyepoints;
temp.TryGetValue(FacePart.LeftEye, out eyepoints);

// Compare the lowest (largest Y) lip point with the lowest eye point
var lippoint = lippoints.Max(r => r.Point.Y);
var topeyepoint = eyepoints.Max(r => r.Point.Y);
bool isinverted = lippoint > topeyepoint;

The issue is that even when the image is not upside down, the eye coordinate is less than the lip coordinate. This happens because a false face is detected, as outlined in the image. How can I get around this issue?


1 Answer

It looks like this library does not provide a confidence score for its results. Otherwise, I would suggest running detection on both the input and a flipped copy and keeping the result with the higher confidence before doing the "eyes over mouth" check.

So maybe what could help is:

  • using the CNN model; in the original Python library it is called with
face_locations = face_recognition.face_locations(image, number_of_times_to_upsample=0, model="cnn")

in the C# port it should be

_FaceRecognition.FaceLocations(image, 0, Model.Cnn)

That should give you a more accurate face bounding box, which you can then compare with the bounding box of the landmarks. If you do the same for a flipped copy of the image, you can "emulate" the confidence mentioned earlier and assume the orientation whose boxes match better. Then you can decide the orientation with the "eyes over mouth" test (see the sketch after this list).

  • as far as I noticed, the library does not ship pre-trained model data, so in order to use the Cnn model you need to train it yourself. The choice of training dataset is of course very important. If you have already performed the training, more or better training data might improve the accuracy.
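
For illustration, here is a minimal C# sketch of the "try both orientations" idea. It reuses the FaceRecognition instance and the FaceLocations/FaceLandmark calls already shown in the question and answer; the Bitmap flip, the LoadImage(Bitmap) overload, and the Left/Top/Right/Bottom properties on the returned face location are assumptions about the FaceRecognitionDotNet API, so check them against the version of the port you use.

using System;
using System.Drawing;
using System.Linq;
using FaceRecognitionDotNet;

static class OrientationCheck
{
    // Returns true when a flipped copy of the bitmap gives a better match between
    // the CNN face box and the landmark bounding box than the original does.
    public static bool LooksUpsideDown(FaceRecognition fr, Bitmap original)
    {
        // Flip a copy of the image by 180 degrees (System.Drawing).
        var flipped = (Bitmap)original.Clone();
        flipped.RotateFlip(RotateFlipType.Rotate180FlipNone);

        return MatchScore(fr, flipped) > MatchScore(fr, original);
    }

    // "Confidence" substitute: intersection-over-union between the CNN face
    // location and the bounding box of the detected landmarks (0 = no overlap).
    private static double MatchScore(FaceRecognition fr, Bitmap bitmap)
    {
        // Assumption: the port exposes a LoadImage(Bitmap) overload.
        using (var image = FaceRecognition.LoadImage(bitmap))
        {
            var locations = fr.FaceLocations(image, 0, Model.Cnn).ToArray();
            var landmarks = fr.FaceLandmark(image).ToArray();
            if (locations.Length == 0 || landmarks.Length == 0)
                return 0;

            var box = locations[0];

            // Bounding box of all landmark points of the first face.
            var points = landmarks[0].Values.SelectMany(p => p).Select(p => p.Point).ToArray();
            int left = points.Min(p => p.X), right = points.Max(p => p.X);
            int top = points.Min(p => p.Y), bottom = points.Max(p => p.Y);

            // Intersection-over-union of the landmark box and the CNN face box.
            int ix = Math.Max(0, Math.Min(right, box.Right) - Math.Max(left, box.Left));
            int iy = Math.Max(0, Math.Min(bottom, box.Bottom) - Math.Max(top, box.Top));
            double inter = (double)ix * iy;
            double landmarkArea = (double)(right - left) * (bottom - top);
            double boxArea = (double)(box.Right - box.Left) * (box.Bottom - box.Top);
            return inter / (landmarkArea + boxArea - inter);
        }
    }
}

Whichever orientation scores higher is the one to keep, and on that copy the "eyes over mouth" comparison from the question should behave as expected.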