
VNDetectFaceRectanglesRequest not detecting the face

I am using the Vision framework to process images. The function I am using runs without problems and does not return any error in the completion handler, but the result is empty.

This is my function:

 func recognizeImage() {
        let request = VNDetectFaceRectanglesRequest { (res: VNRequest, error: Error?) in
            print("Reuslt : \(res.accessibilityActivationPoint)")
        }

        if let cgContet = image.image.cgImage  {
            let handler = VNImageRequestHandler(cgImage: cgContet)
            try? handler.perform([request])
        }
    }

The output of the function is:

Reuslt : (0.0, 0.0)
Oleg Gordiichuk asked Oct 23 '25 19:10


2 Answers

If you want to detect faces and draw a rectangle on each, try this:

let request = VNDetectFaceRectanglesRequest { request, error in
    // Start from the image being analyzed (image_to_process is the source UIImage)
    var final_image = image_to_process

    if let results = request.results as? [VNFaceObservation] {
        print(results.count, "faces found")
        for face_obs in results {
            // Draw the current image into a new graphics context
            UIGraphicsBeginImageContextWithOptions(final_image.size, false, 1.0)
            final_image.draw(in: CGRect(x: 0, y: 0, width: final_image.size.width, height: final_image.size.height))

            // Convert the normalized, bottom-left-origin bounding box into image coordinates
            let rect = face_obs.boundingBox
            let tf = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -final_image.size.height)
            let ts = CGAffineTransform.identity.scaledBy(x: final_image.size.width, y: final_image.size.height)
            let converted_rect = rect.applying(ts).applying(tf)

            // Stroke the face rectangle on top of the image
            let c = UIGraphicsGetCurrentContext()!
            c.setStrokeColor(UIColor.red.cgColor)
            c.setLineWidth(0.01 * final_image.size.width)
            c.stroke(converted_rect)

            // Grab the composited image and reuse it for the next face
            let result = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()

            final_image = result!
        }
    }

    // Display the final image on the main thread
    DispatchQueue.main.async {
        self.image_view.image = final_image
    }
}


guard let ciimage = CIImage(image: image_to_process) else {
    fatalError("couldn't convert UIImage to CIImage")
}

// Run the request off the main thread
let handler = VNImageRequestHandler(ciImage: ciimage)
DispatchQueue.global(qos: .userInteractive).async {
    do {
        try handler.perform([request])
    } catch {
        print(error)
    }
}
Marie Dm answered Oct 26 '25 07:10


There's not quite enough info here to be sure, but probably...

Face recognition requires that the image orientation be known. (Because accurately figuring out which blobs of pixels are and aren't faces is a heck of a lot easier when you're looking for right-side-up faces only.)

CGImage doesn't know its own orientation, so you have to get that info separately and pass it to one of the VNImageRequestHandler initializers that takes an orientation.

Those initializers take an EXIF orientation value (aka CGImagePropertyOrientation). If you're starting from a UIImage, that enum's underlying numeric values don't match those of UIImageOrientation, so you'll need to convert them. There's a handy method for doing that in the sample code attached to the Vision session from WWDC17.
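Here's a minimal sketch of that idea, assuming you start from a UIImage (the recognizeImage(_:) function and the orientation-mapping extension below are illustrative, modeled on the conversion in the WWDC17 sample): it passes the orientation into the request handler and reads the detected faces out of request.results rather than an unrelated property of the request.

import UIKit
import Vision

extension CGImagePropertyOrientation {
    // Map UIImage orientations onto the EXIF-based values Vision expects
    init(_ uiOrientation: UIImage.Orientation) {
        switch uiOrientation {
        case .up: self = .up
        case .down: self = .down
        case .left: self = .left
        case .right: self = .right
        case .upMirrored: self = .upMirrored
        case .downMirrored: self = .downMirrored
        case .leftMirrored: self = .leftMirrored
        case .rightMirrored: self = .rightMirrored
        @unknown default: self = .up
        }
    }
}

func recognizeImage(_ uiImage: UIImage) {
    guard let cgImage = uiImage.cgImage else { return }

    let request = VNDetectFaceRectanglesRequest { request, error in
        // The detections arrive in request.results as VNFaceObservation values
        let faces = request.results as? [VNFaceObservation] ?? []
        print("Found \(faces.count) face(s)")
    }

    // Tell Vision which way is up so it can actually find the faces
    let handler = VNImageRequestHandler(cgImage: cgImage,
                                        orientation: CGImagePropertyOrientation(uiImage.imageOrientation),
                                        options: [:])
    try? handler.perform([request])
}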

rickster answered Oct 26 '25 09:10


