When using Core Image face detection, I get CIFaceFeature objects that have eye and mouth position properties. When using AVFoundation with AVMetadataObjectTypeFace in metadataObjectTypes, I get AVMetadataFaceObject instances that only have yaw and roll angle properties.
Is there a way to get eye and mouth positions when using AVFoundation?
Thank you
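For reference, this is roughly how I get those positions from Core Image (a minimal sketch; error handling omitted):

```swift
import CoreImage

// Core Image route: CIDetector returns CIFaceFeature objects that carry
// eye and mouth positions in the image's coordinate space.
func logFaceFeatures(in image: CIImage) {
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    for case let face as CIFaceFeature in detector?.features(in: image) ?? [] {
        if face.hasLeftEyePosition  { print("left eye:",  face.leftEyePosition) }
        if face.hasRightEyePosition { print("right eye:", face.rightEyePosition) }
        if face.hasMouthPosition    { print("mouth:",     face.mouthPosition) }
    }
}
```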
Mouth and eye positions are not returned by AVFoundation's face detection. However, the positions of the eyes and mouth relative to the face bounding box do not vary much, so you can estimate them from the AVMetadataFaceObject bounds. For example, I have found that the eyes are nearly always positioned at:
I'm sure the same is true for the mouth.
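Here is a minimal sketch of that estimation in Swift. The ratios below are illustrative assumptions (not values provided by AVFoundation) and should be tuned against your own measurements:

```swift
import AVFoundation
import CoreGraphics

// Estimated feature positions, in the same normalized coordinate space
// as the metadata object's bounds.
struct EstimatedFaceFeatures {
    let leftEye: CGPoint
    let rightEye: CGPoint
    let mouth: CGPoint
}

// AVMetadataFaceObject only exposes the face bounds plus yaw/roll, so eye
// and mouth positions are *estimated* as fixed fractions of the bounding box.
func estimateFeatures(for face: AVMetadataFaceObject) -> EstimatedFaceFeatures {
    let b = face.bounds // normalized [0, 1] rect from AVCaptureMetadataOutput
    // Hypothetical ratios: eyes about a third of the way down the box,
    // mouth about three quarters of the way down.
    return EstimatedFaceFeatures(
        leftEye:  CGPoint(x: b.minX + 0.30 * b.width, y: b.minY + 0.35 * b.height),
        rightEye: CGPoint(x: b.minX + 0.70 * b.width, y: b.minY + 0.35 * b.height),
        mouth:    CGPoint(x: b.midX,                  y: b.minY + 0.75 * b.height)
    )
}
```

Note that the bounds are in the metadata output's normalized coordinate space; convert them with AVCaptureVideoPreviewLayer's transformedMetadataObject(for:) before drawing on screen.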