I am using Core Image on iOS to detect faces in an image. It works fine when there are multiple faces, but it doesn't work for a simple face such as this one:
The face is really big and obvious, but Core Image cannot detect it.
I'm thinking it may be because of my kCGImagePropertyOrientation setting. I set it to 5 simply because that's what an online tutorial did.
However, the images I am processing are user-uploaded, so I do not know the orientation of the face beforehand.
Is there a way to try all orientations?
What is the proper way of implementing Core Image face detection when the images are not known beforehand?
This is my code:
var imageOptions = [String: Any]()
imageOptions[CIDetectorImageOrientation] = 5 // hard-coded to 5 (leftMirrored), per the tutorial
imageOptions[CIDetectorSmile] = true
let image = CIImage(cgImage: imageView.image!.cgImage!)
let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
let faces = faceDetector?.features(in: image, options: imageOptions)
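To answer the "try all orientations" question directly: you can brute-force it by running the detector once per EXIF orientation value (1 through 8) and keeping the first non-empty result. A minimal sketch (the function name detectFacesTryingAllOrientations is illustrative, not an API):

import CoreImage

func detectFacesTryingAllOrientations(in image: CIImage) -> [CIFaceFeature] {
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    // EXIF orientation values run from 1 (up) to 8.
    for orientation in 1...8 {
        let options: [String: Any] = [CIDetectorImageOrientation: orientation]
        if let faces = detector?.features(in: image, options: options) as? [CIFaceFeature],
           !faces.isEmpty {
            return faces // detector found faces under this orientation hint
        }
    }
    return [] // no faces found under any orientation
}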
You can map the UIImage's imageOrientation to the corresponding EXIF value like this:
var imageOrientation = 1 // EXIF default: 1 means "up"
switch yourUIImage.imageOrientation { // yourUIImage: the user-uploaded UIImage
case .up:            imageOrientation = 1
case .down:          imageOrientation = 3
case .left:          imageOrientation = 8
case .right:         imageOrientation = 6
case .upMirrored:    imageOrientation = 2
case .downMirrored:  imageOrientation = 4
case .leftMirrored:  imageOrientation = 5
case .rightMirrored: imageOrientation = 7
}
imageOptions[CIDetectorImageOrientation] = imageOrientation
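Putting it together (a sketch; yourUIImage stands for the user-uploaded image, and the force unwrap assumes it is backed by a CGImage):

let ciImage = CIImage(cgImage: yourUIImage.cgImage!)
let detector = CIDetector(ofType: CIDetectorTypeFace,
                          context: nil,
                          options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
// Pass the orientation mapped above so the detector searches in the right frame.
let faces = detector?.features(in: ciImage, options: imageOptions) ?? []

Note that this mapping only helps when the UIImage actually carries orientation metadata; an upload whose EXIF data has been stripped reports .up, in which case the brute-force loop sketched earlier is the safer fallback.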