 

Face detection with UIImageView not quite working, possibly due to scaling?

Once again, I'm close, but no banana.

I'm trying to follow some tutorials on face detection. I'm almost there with the following code, but I think I'm missing something in how the face rectangles are scaled and placed over the UIImageView.

My photos in the Photo Library are all different sizes (for some inexplicable reason), so my understanding is that the CIDetector finds the faces, I apply the CGAffineTransform and the scaling, and then place the rectangles over the UIImageView. But, as you can see from the image below, they're not being drawn in the correct place.

The UIImageView is 280x500 and its content mode is set to Scale to Fill.

Any help to work out what's happening would be great!

-(void)detectFaces {

    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *image = [CIImage imageWithCGImage:_imagePhotoChosen.image.CGImage options:nil];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

    // Core Image uses a bottom-left origin, so flip vertically to match
    // UIKit's top-left origin before using the face bounds as view frames.
    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -_imagePhotoChosen.image.size.height);

    NSArray *features = [detector featuresInImage:image];
    NSLog(@"I have found %lu faces", (unsigned long)features.count);

    for (CIFaceFeature *faceFeature in features)
    {
        // Face bounds in UIKit (top-left origin) image coordinates.
        const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);
        NSLog(@"I have the original frame as: %@", NSStringFromCGRect(faceRect));

        // Scale from image pixels down to the image view's frame (Scale to Fill).
        const CGFloat scaleWidth = _imagePhotoChosen.frame.size.width / _imagePhotoChosen.image.size.width;
        const CGFloat scaleHeight = _imagePhotoChosen.frame.size.height / _imagePhotoChosen.image.size.height;

        CGRect faceFrame = CGRectMake(faceRect.origin.x * scaleWidth, faceRect.origin.y * scaleHeight, faceRect.size.width * scaleWidth, faceRect.size.height * scaleHeight);

        UIView *faceView = [[UIView alloc] initWithFrame:faceFrame];
        NSLog(@"I have the bounds as: %@", NSStringFromCGRect(faceFrame));
        faceView.layer.borderColor = [[UIColor redColor] CGColor];
        faceView.layer.borderWidth = 1.0f;

        [self.view addSubview:faceView];
    }

}

-(void) imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info{

    _imagePhotoChosen.image = info[UIImagePickerControllerOriginalImage];
    //[_imagePhotoChosen sizeToFit];

    [self.view addSubview:_viewChosenPhoto];
    [picker dismissViewControllerAnimated:YES completion:nil];
    [self detectFaces];

}

I've left in the NSLog statements because I've been trying to work out whether the mathematics is wrong, but I can't see that it is! And I'm a Maths teacher too ... sigh ....
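For what it's worth, here's a quick sanity check of the flip-and-scale arithmetic with made-up numbers (a hypothetical 2000x3000 px photo in the 280x500 image view, with a face at (600, 1800, 400, 400) in Core Image's bottom-left-origin coordinates):

// Hypothetical face bounds and image size, for checking the maths only.
CGRect ciBounds = CGRectMake(600, 1800, 400, 400);
CGAffineTransform flip = CGAffineTransformMakeScale(1, -1);
flip = CGAffineTransformTranslate(flip, 0, -3000);
CGRect flipped = CGRectApplyAffineTransform(ciBounds, flip);
// flipped = (600, 800, 400, 400)  --  y' = 3000 - 1800 - 400
CGRect scaled = CGRectMake(flipped.origin.x * (280.0 / 2000.0),
                           flipped.origin.y * (500.0 / 3000.0),
                           flipped.size.width * (280.0 / 2000.0),
                           flipped.size.height * (500.0 / 3000.0));
// scaled ≈ (84, 133, 56, 67)  --  well inside the 280x500 view,
// so the flip-and-scale arithmetic itself seems to check out.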

Not quite right, is it?

Thanks again for anything you can do to point me in the right direction.

Update

In response to people wanting to know how I solved it ... it really was a silly mistake on my part.

I was adding the subviews to the view controller's main view, rather than to the UIImageView. Hence I removed the line:

[self.view addSubview:faceView];

And replaced it with:

[_imagePhotoChosen addSubview:faceView];

This allowed the frames to be placed in the correct position. The accepted solution provided me the clue! So the updated code (and I've moved on a little since then) becomes:

-(void)detectFaces:(UIImage *)selectedImage {

    _imagePhotoChosen.image = selectedImage;
    // UIImageView has user interaction disabled by default; enable it so the
    // tap gestures on the face views added below actually fire.
    _imagePhotoChosen.userInteractionEnabled = YES;

    CIImage *image = [CIImage imageWithCGImage:selectedImage.CGImage options:nil];

    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

    // Flip from Core Image's bottom-left origin to UIKit's top-left origin.
    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -selectedImage.size.height);

    NSArray *features = [detector featuresInImage:image];
    int i = 0;
    for (CIFaceFeature *faceFeature in features)
    {
        const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);

        // Scale from image pixels down to the image view's frame.
        const CGFloat scaleWidth = _imagePhotoChosen.frame.size.width / _imagePhotoChosen.image.size.width;
        const CGFloat scaleHeight = _imagePhotoChosen.frame.size.height / _imagePhotoChosen.image.size.height;

        CGRect faceFrame = CGRectMake(faceRect.origin.x * scaleWidth, faceRect.origin.y * scaleHeight, faceRect.size.width * scaleWidth, faceRect.size.height * scaleHeight);

        UIView *faceView = [[UIView alloc] initWithFrame:faceFrame];
        faceView.layer.borderColor = [[UIColor redColor] CGColor];
        faceView.layer.borderWidth = 1.0f;
        faceView.tag = i;    // remember which face this view represents

        UITapGestureRecognizer *selectPhotoTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(selectPhoto)];
        selectPhotoTap.numberOfTapsRequired = 1;
        selectPhotoTap.numberOfTouchesRequired = 1;
        [faceView addGestureRecognizer:selectPhotoTap];

        // Add to the image view, not self.view, so the frame is interpreted
        // in the image view's coordinate space.
        [_imagePhotoChosen addSubview:faceView];
        i++;
    }

}
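The selectPhoto handler itself isn't shown above. A minimal sketch of what it could look like, assuming the action is changed to @selector(selectPhoto:) so the recognizer is passed in and the tapped face can be identified from the view's tag:

- (void)selectPhoto:(UITapGestureRecognizer *)recognizer {

    // The tag was set to the face's index when the face view was created.
    NSLog(@"Tapped face number %ld", (long)recognizer.view.tag);
}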
asked Nov 01 '22 by Darren


1 Answer

Actually, what you did is essentially correct. Because the face views are added to self.view rather than to the image view, you just need to offset the frame by the image view's origin. Replace the faceFrame line with:

    CGRect faceFrame = CGRectMake(_imagePhotoChosen.frame.origin.x + faceRect.origin.x * scaleWidth,
                                  _imagePhotoChosen.frame.origin.y + faceRect.origin.y * scaleHeight,
                                  faceRect.size.width * scaleWidth,
                                  faceRect.size.height * scaleHeight);
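
Equivalently, a small sketch (assuming _imagePhotoChosen sits in the same view hierarchy as self.view and has no transform applied): keep the original faceFrame calculation and let UIKit do the coordinate conversion before adding the face view to self.view:

    CGRect faceFrame = CGRectMake(faceRect.origin.x * scaleWidth,
                                  faceRect.origin.y * scaleHeight,
                                  faceRect.size.width * scaleWidth,
                                  faceRect.size.height * scaleHeight);

    // Convert from the image view's coordinate space to self.view's,
    // since that is where the face views are added.
    faceFrame = [_imagePhotoChosen convertRect:faceFrame toView:self.view];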
answered Nov 11 '22 by santhu