I'm trying to use the built-in iOS 5 face detection API. I'm using an instance of UIImagePickerController
to let the user take a photo, and then I'm passing the result to CIDetector
to detect facial features. Unfortunately, featuresInImage:
always returns an empty array.
Here's the code:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *picture = [info objectForKey:UIImagePickerControllerOriginalImage];

    // Pass the image's orientation along to Core Image.
    NSNumber *orientation = [NSNumber numberWithInt:[picture imageOrientation]];
    NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation
                                                             forKey:CIDetectorImageOrientation];
    CIImage *ciimage = [CIImage imageWithCGImage:[picture CGImage]
                                         options:imageOptions];

    NSDictionary *detectorOptions = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow
                                                                forKey:CIDetectorAccuracy];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:detectorOptions];
    NSArray *features = [detector featuresInImage:ciimage];
    NSLog(@"Feature size: %lu", (unsigned long)features.count);
}
This always returns 0 features. However, if I use a UIImage from a file bundled with the application, the face detection works great.
I'm using code from this Pragmatic Bookshelf article.
For what it's worth, I suspect the error is in how I convert the UIImage from the camera to a CIImage, but it could be anything.
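One thing I checked while debugging (a small diagnostic sketch; sample.png is just a stand-in for any bundled test asset): camera photos and bundled images report different orientations, which may be relevant.

    // Diagnostic: portrait camera photos are typically tagged UIImageOrientationRight (3),
    // while bundled PNGs are typically UIImageOrientationUp (0).
    UIImage *bundled = [UIImage imageNamed:@"sample.png"]; // stand-in for any bundled image
    NSLog(@"camera orientation: %ld", (long)picture.imageOrientation);
    NSLog(@"bundled orientation: %ld", (long)bundled.imageOrientation);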
@tonyc Your solution only works for one specific case, UIImageOrientationRight. The problem comes from the difference between the orientation values used by UIImage and those expected by CIDetectorImageOrientation. From the iOS docs:
CIDetectorImageOrientation
A key used to specify the display orientation of the image whose features you want to detect. This key is an NSNumber object with the same value as defined by the TIFF and EXIF specifications; values can range from 1 through 8. The value specifies where the origin (0,0) of the image is located. If not present, the default value is 1, which means the origin of the image is top, left. For details on the image origin specified by each value, see kCGImagePropertyOrientation.
Available in iOS 5.0 and later.
Declared in CIDetector.h.
So the problem is converting between these two orientation schemes. Here is what I did in my code; I tested it, and it works for all orientations:
// Map UIImageOrientation to the EXIF/TIFF orientation values (1–8)
// that CIDetectorImageOrientation expects (see kCGImagePropertyOrientation).
int exifOrientation;
switch (self.image.imageOrientation) {
    case UIImageOrientationUp:
        exifOrientation = 1;
        break;
    case UIImageOrientationDown:
        exifOrientation = 3;
        break;
    case UIImageOrientationLeft:
        exifOrientation = 8;
        break;
    case UIImageOrientationRight:
        exifOrientation = 6;
        break;
    case UIImageOrientationUpMirrored:
        exifOrientation = 2;
        break;
    case UIImageOrientationDownMirrored:
        exifOrientation = 4;
        break;
    case UIImageOrientationLeftMirrored:
        exifOrientation = 5;
        break;
    case UIImageOrientationRightMirrored:
        exifOrientation = 7;
        break;
    default:
        exifOrientation = 1; // top-left; unreachable, but avoids an uninitialized value
        break;
}
NSDictionary *detectorOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh }; // TODO: read the docs for more tune-ups
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions];
NSArray *features = [faceDetector featuresInImage:[CIImage imageWithCGImage:self.image.CGImage]
                                          options:@{ CIDetectorImageOrientation : @(exifOrientation) }];
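To sanity-check the detector output, you can walk the returned CIFaceFeature objects (a minimal sketch using CIFaceFeature's standard properties; note that Core Image coordinates have their origin at the bottom-left, so the bounds need flipping before drawing in UIKit):

    for (CIFaceFeature *face in features) {
        NSLog(@"face bounds: %@", NSStringFromCGRect(face.bounds));
        if (face.hasLeftEyePosition) {
            NSLog(@"left eye: %@", NSStringFromCGPoint(face.leftEyePosition));
        }
        if (face.hasRightEyePosition) {
            NSLog(@"right eye: %@", NSStringFromCGPoint(face.rightEyePosition));
        }
        if (face.hasMouthPosition) {
            NSLog(@"mouth: %@", NSStringFromCGPoint(face.mouthPosition));
        }
    }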
Sure enough, after spending a day stumped by this, I found a solution an hour after posting.
I eventually noticed that the face detection worked in landscape, but not in portrait.
It turns out I needed these options:
// 6 is the EXIF orientation for a portrait camera photo (UIImageOrientationRight).
NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6]
                                                         forKey:CIDetectorImageOrientation];
NSArray *features = [detector featuresInImage:ciimage options:imageOptions];
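Note that the hard-coded 6 only covers the camera's default portrait case; for other orientations you'd want the full mapping from the answer above. A sketch of a reusable helper wrapping that same mapping (the function name is mine):

    // Hypothetical helper: map UIImageOrientation to the EXIF value CIDetector expects.
    static int exifOrientationForUIImageOrientation(UIImageOrientation orientation) {
        switch (orientation) {
            case UIImageOrientationUp:            return 1;
            case UIImageOrientationDown:          return 3;
            case UIImageOrientationLeft:          return 8;
            case UIImageOrientationRight:         return 6;
            case UIImageOrientationUpMirrored:    return 2;
            case UIImageOrientationDownMirrored:  return 4;
            case UIImageOrientationLeftMirrored:  return 5;
            case UIImageOrientationRightMirrored: return 7;
        }
        return 1; // top-left default
    }

    NSDictionary *imageOptions = @{ CIDetectorImageOrientation :
                                        @(exifOrientationForUIImageOrientation(picture.imageOrientation)) };
    NSArray *features = [detector featuresInImage:ciimage options:imageOptions];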