
How is facial recognition built into Core ML vision framework

How exactly is facial recognition done in this framework? The docs state that it is part of the framework:

Face Detection and Recognition

However, it is not clear which classes/methods allow us to do so. The closest thing I've found is VNFaceObservation, which lacks significant detail.

Is it more of a manual process and we must include our own learned models someway? -- if so, how?

asked Jun 29 '17 by rambossa

People also ask

How ml is used in face recognition?

With ML Kit's face detection API, you can detect faces in an image, identify key facial features, and get the contours of detected faces. Note that the API detects faces; it does not recognize people.

Which algorithm is used in face recognition in ML?

The most common type of machine learning algorithm used for facial recognition is a deep learning Convolutional Neural Network (CNN). CNNs are a type of artificial neural network that are well-suited for image classification tasks.


1 Answer

The technical details of how the Vision framework works are unknown, although the WWDC video suggests it uses deep learning.

Here is some sample code to locate an eye in your image:

import Vision

let request = VNDetectFaceLandmarksRequest()
let handler = VNImageRequestHandler(cvPixelBuffer: buffer, orientation: orientation)
try! handler.perform([request])
guard let face = request.results?.first as? VNFaceObservation,
  let leftEye = face.landmarks?.leftEye else { return }

// Landmark points are normalized to the face bounding box,
// so convert them into image coordinates (flipping y)
let box = face.boundingBox
let points = (0..<leftEye.pointCount).map({ i -> CGPoint in
  let point = leftEye.point(at: i)
  let x = box.minX + box.width * CGFloat(point.x)
  let y = 1 - (box.minY + box.height * CGFloat(point.y))
  return CGPoint(x: x, y: y)
})

That will give you the set of landmark points that you can see drawn over the face in the WWDC video.

You might want to watch the WWDC video until they improve the docs. Otherwise, Xcode autocomplete is your best friend.

Core ML is a different thing. It's not specifically targeted at faces: you can use your own models and predict whatever you want. So if you have a face recognition model, go for it! The Vision framework supports Core ML models through VNCoreMLModel.
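As a rough sketch of that last point: you can wrap your own Core ML model in a VNCoreMLModel and run it through Vision much like the landmarks request above. Here `FaceClassifier` is a placeholder name for whatever model class Xcode generates from your .mlmodel file; the rest uses the real Vision API (VNCoreMLRequest, VNClassificationObservation).

```swift
import Vision
import CoreML

// Wrap your own Core ML model for use with Vision.
// FaceClassifier is a placeholder: substitute the class Xcode
// generates from your .mlmodel file.
let coreMLModel = try VNCoreMLModel(for: FaceClassifier().model)

// Build a Vision request that runs the model on an image
let request = VNCoreMLRequest(model: coreMLModel) { request, error in
    guard let results = request.results as? [VNClassificationObservation] else { return }
    // Each observation carries a label and a confidence score
    for observation in results.prefix(3) {
        print(observation.identifier, observation.confidence)
    }
}

// Run it with the same handler pattern as the face-landmarks request
let handler = VNImageRequestHandler(cvPixelBuffer: buffer, orientation: orientation)
do {
    try handler.perform([request])
} catch {
    print("Vision request failed: \(error)")
}
```

Vision takes care of scaling and cropping the pixel buffer to the model's expected input size, which is the main convenience over calling the Core ML model's prediction method directly.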

answered Oct 01 '22 by Guig