I am playing with the Vision framework and getting all landmark points with this code:
if let allFaceLandmarks = landmarks.allPoints {
    print(allFaceLandmarks)
}
But I can't find a mapping for these points, for example the index numbers for the right eye.
I'm looking for something like this, but for the Vision framework instead.
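(Side note for later readers: `VNFaceLandmarks2D` also exposes named regions such as `rightEye`, so you can often avoid mapping indices in `allPoints` at all. A minimal sketch, assuming `observation` is a `VNFaceObservation` you already got back from a `VNDetectFaceLandmarksRequest`:)

```swift
import Vision

// Prints the right-eye landmark points from a face observation.
// Assumes `observation` came from a VNDetectFaceLandmarksRequest.
func printRightEye(for observation: VNFaceObservation) {
    guard let landmarks = observation.landmarks,
          let rightEye = landmarks.rightEye else { return }
    // normalizedPoints are relative to the face bounding box,
    // with components in the range 0...1.
    for (index, point) in rightEye.normalizedPoints.enumerated() {
        print("rightEye[\(index)]: \(point)")
    }
}
```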
I have no idea why Apple doesn't provide a diagram of this; it seems like it would be very helpful information to include in the docs. In any case, I was able to read the allPoints property of the observation and draw the points out with their index numbers. I'm not entirely sure of the difference between nose and noseCrest; you can draw them out and see for yourself...
Here's a pic that hopefully helps!
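The drawing described above can be sketched roughly like this. This is only a sketch under assumptions: `image` is the UIImage that was analyzed, `observation` is the resulting `VNFaceObservation`, and point conversion uses `pointsInImage(imageSize:)`, which maps the normalized landmark points into image coordinates:

```swift
import UIKit
import Vision

// Draws every landmark point with its index onto a copy of `image`.
// Assumes `observation` came from a VNDetectFaceLandmarksRequest.
func annotate(image: UIImage, observation: VNFaceObservation) -> UIImage? {
    guard let allPoints = observation.landmarks?.allPoints else { return nil }
    // pointsInImage(imageSize:) converts normalized landmark points
    // into image coordinates (origin at the bottom-left).
    let points = allPoints.pointsInImage(imageSize: image.size)

    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(at: .zero)
        for (index, point) in points.enumerated() {
            // Flip y: Vision uses a bottom-left origin, UIKit top-left.
            let p = CGPoint(x: point.x, y: image.size.height - point.y)
            let label = "\(index)" as NSString
            label.draw(at: p, withAttributes: [
                .font: UIFont.systemFont(ofSize: 8),
                .foregroundColor: UIColor.red
            ])
        }
    }
}
```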
This post was super helpful for me, so I figured I would update it for iOS 13 (the original scope of the question is iOS 11). Starting with iOS 13, you will get a different set of points (VNDetectFaceLandmarksRequestRevision3) unless you explicitly request VNDetectFaceLandmarksRequestRevision2. The revision property is only available from iOS 12 onward, so you need something like:
let faceLandmarksRequest = VNDetectFaceLandmarksRequest(completionHandler: self.myFaceFunction)
if #available(iOS 12.0, *) {
    // Force revision 2 (68 points) even on iOS 13 or later,
    // where VNDetectFaceLandmarksRequestRevision3 is the default.
    faceLandmarksRequest.revision = VNDetectFaceLandmarksRequestRevision2
}
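For completeness, here is how a request like the one above might be run end to end; a minimal sketch, assuming `cgImage` is the CGImage you want to analyze:

```swift
import Vision

func detectLandmarks(in cgImage: CGImage) {
    let request = VNDetectFaceLandmarksRequest { request, error in
        guard let observations = request.results as? [VNFaceObservation] else { return }
        for face in observations {
            // pointCount is the number of points in the allPoints region.
            print("Found face with \(face.landmarks?.allPoints?.pointCount ?? 0) landmark points")
        }
    }
    if #available(iOS 12.0, *) {
        // Pin the 68-point model, as described above.
        request.revision = VNDetectFaceLandmarksRequestRevision2
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print("Landmark detection failed: \(error)")
    }
}
```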
When I was updating my app talkr to iOS 13, I couldn't find a reference image for the new points like the one in this post, so I thought I would generate one. I hope it helps someone!