Swift 4: How to create a face map with the iOS 11 Vision framework from face landmark points

I am using the iOS 11 Vision framework to detect face landmark points in real time. I can get the landmark points and overlay the camera layer with a UIBezierPath drawn through them. However, I would like to produce something like the picture on the bottom right, whereas what I currently have looks like the picture on the left. I tried looping through the points and adding midpoints, but I don't know how to generate all of those triangles from the points. How would I go about generating the map on the right from the points on the left?

I'm not sure this is possible with only the points I have. I also have the points of the bounding box for the entire face, though I doubt that helps much. Lastly, if there is another framework, such as OpenCV, that would let me detect all the points I need, please let me know. Thanks!

[image: face_map]

Here is the code I've been using, from https://github.com/DroidsOnRoids/VisionFaceDetection:

func detectLandmarks(on image: CIImage) {
    try? faceLandmarksDetectionRequest.perform([faceLandmarks], on: image)
    if let landmarksResults = faceLandmarks.results as? [VNFaceObservation] {

        for observation in landmarksResults {

            DispatchQueue.main.async {
                if let boundingBox = self.faceLandmarks.inputFaceObservations?.first?.boundingBox {
                    let faceBoundingBox = boundingBox.scaled(to: self.view.bounds.size)
                    // different types of landmarks
                    let faceContour = observation.landmarks?.faceContour
                    self.convertPointsForFace(faceContour, faceBoundingBox)

                    let leftEye = observation.landmarks?.leftEye
                    self.convertPointsForFace(leftEye, faceBoundingBox)

                    let rightEye = observation.landmarks?.rightEye
                    self.convertPointsForFace(rightEye, faceBoundingBox)

                    let leftPupil = observation.landmarks?.leftPupil
                    self.convertPointsForFace(leftPupil, faceBoundingBox)

                    let rightPupil = observation.landmarks?.rightPupil
                    self.convertPointsForFace(rightPupil, faceBoundingBox)

                    let nose = observation.landmarks?.nose
                    self.convertPointsForFace(nose, faceBoundingBox)

                    let lips = observation.landmarks?.innerLips
                    self.convertPointsForFace(lips, faceBoundingBox)

                    let leftEyebrow = observation.landmarks?.leftEyebrow
                    self.convertPointsForFace(leftEyebrow, faceBoundingBox)

                    let rightEyebrow = observation.landmarks?.rightEyebrow
                    self.convertPointsForFace(rightEyebrow, faceBoundingBox)

                    let noseCrest = observation.landmarks?.noseCrest
                    self.convertPointsForFace(noseCrest, faceBoundingBox)

                    let outerLips = observation.landmarks?.outerLips
                    self.convertPointsForFace(outerLips, faceBoundingBox)
                }
            }
        }
    }

}

func convertPointsForFace(_ landmark: VNFaceLandmarkRegion2D?, _ boundingBox: CGRect) {
    if let points = landmark?.points, let count = landmark?.pointCount {
        let convertedPoints = convert(points, with: count)

        let faceLandmarkPoints = convertedPoints.map { (point: (x: CGFloat, y: CGFloat)) -> (x: CGFloat, y: CGFloat) in
            let pointX = point.x * boundingBox.width + boundingBox.origin.x
            let pointY = point.y * boundingBox.height + boundingBox.origin.y

            return (x: pointX, y: pointY)
        }

        DispatchQueue.main.async {
            self.draw(points: faceLandmarkPoints)
        }
    }
}


func draw(points: [(x: CGFloat, y: CGFloat)]) {
    let newLayer = CAShapeLayer()
    newLayer.strokeColor = UIColor.blue.cgColor
    newLayer.fillColor = UIColor.clear.cgColor // CAShapeLayer fills black by default, which would hide the face
    newLayer.lineWidth = 4.0

    let path = UIBezierPath()
    path.move(to: CGPoint(x: points[0].x, y: points[0].y))
    // Start from the second point: the original loop drew a zero-length
    // segment back to points[0] and never reached the last point.
    for point in points.dropFirst() {
        path.addLine(to: CGPoint(x: point.x, y: point.y))
    }
    path.close() // close the contour back to the first point
    newLayer.path = path.cgPath

    shapeLayer.addSublayer(newLayer)
}
Asked by Ali on Jul 09 '17


1 Answer

I did end up finding a solution that works. I used Delaunay triangulation via https://github.com/AlexLittlejohn/DelaunaySwift, modified to work with the points generated by the Vision framework's face landmark detection request. This is not easily explained in a code snippet, so I have linked my GitHub repo below, which shows my solution. Note that it does not get points from the forehead, as the Vision framework only detects landmarks from the eyebrows down.

https://github.com/ahashim1/Face
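The core idea can be sketched as: flatten all the converted landmark points into a single array, triangulate the whole cloud at once, and stroke each resulting triangle. The `Vertex`, `Triangle`, and `Delaunay().triangulate(_:)` names below follow DelaunaySwift's API as I understand it; treat them as assumptions and check the linked repos for the exact signatures.

```swift
import UIKit
// Assumes DelaunaySwift is available and exposes Vertex(x:y:),
// Triangle(vertex1/vertex2/vertex3), and Delaunay().triangulate(_:).
// Verify these names against the repo before use.

func drawTriangulation(of points: [(x: CGFloat, y: CGFloat)],
                       on shapeLayer: CAShapeLayer) {
    // Convert to DelaunaySwift's vertex type.
    let vertices = points.map { Vertex(x: Double($0.x), y: Double($0.y)) }

    // Triangulate all landmarks together; triangulating each region
    // (eyes, lips, contour, ...) separately would leave gaps between regions.
    let triangles = Delaunay().triangulate(vertices)

    let path = UIBezierPath()
    for triangle in triangles {
        path.move(to: CGPoint(x: triangle.vertex1.x, y: triangle.vertex1.y))
        path.addLine(to: CGPoint(x: triangle.vertex2.x, y: triangle.vertex2.y))
        path.addLine(to: CGPoint(x: triangle.vertex3.x, y: triangle.vertex3.y))
        path.close()
    }

    let meshLayer = CAShapeLayer()
    meshLayer.strokeColor = UIColor.blue.cgColor
    meshLayer.fillColor = UIColor.clear.cgColor // stroke the mesh only
    meshLayer.lineWidth = 1.0
    meshLayer.path = path.cgPath
    shapeLayer.addSublayer(meshLayer)
}
```

Delaunay triangulation maximizes the minimum angle of the triangles, which is why the mesh looks even rather than full of slivers; any other triangulation of the same points would also "work" but look worse.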

Answered by Ali on Nov 18 '22