Crop to face with face detection

Tags:

ios

I'm modifying Apple's SquareCam example face detection app so that it crops the face out before writing to the camera roll, instead of drawing the red square around the face. I'm using the same CGRect for the cropping that was used for drawing the red square. However, the behavior is different. In portrait mode, if the face is in the horizontal center of the screen, it crops the face as expected (the same place the red square would have been). If the face is off to the left or right, the crop always seems to be taken from the middle of the screen instead of from where the red square would have been.

Here is Apple's original code:

- (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features 
                                            inCGImage:(CGImageRef)backgroundImage 
                                      withOrientation:(UIDeviceOrientation)orientation 
                                          frontFacing:(BOOL)isFrontFacing
{
    CGImageRef returnImage = NULL;
    CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(backgroundImage), CGImageGetHeight(backgroundImage));
    CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
    CGContextClearRect(bitmapContext, backgroundImageRect);
    CGContextDrawImage(bitmapContext, backgroundImageRect, backgroundImage);
    CGFloat rotationDegrees = 0.;

    switch (orientation) {
        case UIDeviceOrientationPortrait:
            rotationDegrees = -90.;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            rotationDegrees = 90.;
            break;
        case UIDeviceOrientationLandscapeLeft:
            if (isFrontFacing) rotationDegrees = 180.;
            else rotationDegrees = 0.;
            break;
        case UIDeviceOrientationLandscapeRight:
            if (isFrontFacing) rotationDegrees = 0.;
            else rotationDegrees = 180.;
            break;
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
        default:
            break; // leave the layer in its last known orientation
    }
    UIImage *rotatedSquareImage = [square imageRotatedByDegrees:rotationDegrees];

    // features found by the face detector
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        NSLog(@"faceRect=%@", NSStringFromCGRect(faceRect));
        CGContextDrawImage(bitmapContext, faceRect, [rotatedSquareImage CGImage]);
    }
    returnImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease (bitmapContext);

    return returnImage;
}

and my replacement:

- (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features 
                                            inCGImage:(CGImageRef)backgroundImage 
                                      withOrientation:(UIDeviceOrientation)orientation 
                                          frontFacing:(BOOL)isFrontFacing
{
    CGImageRef returnImage = NULL;

    //I'm only taking pics with one face. This is just for testing
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        returnImage = CGImageCreateWithImageInRect(backgroundImage, faceRect);
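        // (Note: if features ever held more than one face, each pass would leak
        // the previous returnImage, since CGImageCreateWithImageInRect returns
        // a new image that is never released here.)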
    }

    return returnImage;
}

Update:

Based on Wain's input, I tried to make my code more like the original, but the result was the same:

- (NSArray *)extractFaceImages:(NSArray *)features
                   fromCGImage:(CGImageRef)sourceImage
               withOrientation:(UIDeviceOrientation)orientation
                   frontFacing:(BOOL)isFrontFacing
{
    NSMutableArray *faceImages = [[[NSMutableArray alloc] initWithCapacity:1] autorelease];

    CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(sourceImage), CGImageGetHeight(sourceImage));
    CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
    CGContextClearRect(bitmapContext, backgroundImageRect);
    CGContextDrawImage(bitmapContext, backgroundImageRect, sourceImage);
    CGFloat rotationDegrees = 0.;

    switch (orientation) {
        case UIDeviceOrientationPortrait:
            rotationDegrees = -90.;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            rotationDegrees = 90.;
            break;
        case UIDeviceOrientationLandscapeLeft:
            if (isFrontFacing) rotationDegrees = 180.;
            else rotationDegrees = 0.;
            break;
        case UIDeviceOrientationLandscapeRight:
            if (isFrontFacing) rotationDegrees = 0.;
            else rotationDegrees = 180.;
            break;
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
        default:
            break; // leave the layer in its last known orientation
    }
    // rotationDegrees is computed as in Apple's code, but nothing uses it here
    // because this version no longer draws the rotated square overlay.

    // Render the context once, then crop each detected face out of it.
    CGImageRef fullImage = CGBitmapContextCreateImage(bitmapContext);

    // features found by the face detector
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        NSLog(@"faceRect=%@", NSStringFromCGRect(faceRect));

        CGImageRef croppedFace = CGImageCreateWithImageInRect(fullImage, faceRect);
        UIImage *clippedFace = [UIImage imageWithCGImage:croppedFace];
        CGImageRelease(croppedFace); // UIImage retains its own reference
        [faceImages addObject:clippedFace];
    }

    CGImageRelease(fullImage);
    CGContextRelease(bitmapContext);

    return faceImages;
}

I took three pictures and logged faceRect, with these results:

Pic taken with the face positioned near the left edge of the device. The captured image completely misses the face, off to the right: faceRect={{972, 43.0312}, {673.312, 673.312}}

Pic taken with the face positioned in the middle of the device. The captured image is good: faceRect={{1060.59, 536.625}, {668.25, 668.25}}

Pic taken with the face positioned near the right edge of the device. The captured image completely misses the face, off to the left: faceRect={{982.125, 999.844}, {804.938, 804.938}}

So it appears that x and y are reversed. I'm holding the device in portrait, but faceRect seems to be based on a landscape orientation. However, I can't figure out which part of Apple's original code accounts for this. The orientation code in that method appears to only affect the red square overlay image itself.
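
As a sanity check on that theory, logging the raw image dimensions next to faceRect inside the feature loop should show a landscape-sized buffer even though the device is held in portrait (a minimal diagnostic sketch, not part of Apple's sample):

// Diagnostic only: if the image is wider than it is tall while the device
// is held in portrait, the detector's bounds are in the sensor's landscape
// coordinate space, not the screen's.
NSLog(@"image=%zux%zu faceRect=%@",
      CGImageGetWidth(sourceImage),
      CGImageGetHeight(sourceImage),
      NSStringFromCGRect(faceRect));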

asked Jun 24 '13 by ax123man




1 Answer

You should keep all of the original code and just add one line before the return (with a tweak to move the image generation inside the loop, since you are only cropping the first face):

returnImage = CGImageCreateWithImageInRect(returnImage, faceRect);

This allows the image to be rendered with the correct orientation, which means the face rect will be in the correct place.
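
For reference, here is a minimal sketch of the tail end of that method with the crop applied. It assumes everything above the face loop stays exactly as in Apple's code, drops the red-square drawing (since the goal is to crop rather than overlay), and handles the single-face case from the question:

    // Rendering via the bitmap context gives the image the correct
    // orientation, so faceRect lands in the right place.
    CGImageRef fullFrame = CGBitmapContextCreateImage(bitmapContext);

    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        returnImage = CGImageCreateWithImageInRect(fullFrame, faceRect);
        break; // single-face case, as in the question
    }

    CGImageRelease(fullFrame); // keep only the crop
    CGContextRelease(bitmapContext);

    return returnImage;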

answered Oct 02 '22 by Wain