
How to crop an image from AVCapture to a rect seen on the display

This is driving me crazy because I can't get it to work. I have the following scenario:

I'm using an AVCaptureSession and an AVCaptureVideoPreviewLayer to create my own camera interface. The interface shows a rectangle; beneath it, the AVCaptureVideoPreviewLayer fills the whole screen.

I want the captured image to be cropped so that the resulting image shows exactly the content seen inside the rect on the display.

My setup looks like this:

_session = [[AVCaptureSession alloc] init];
AVCaptureSession *session = _session;
session.sessionPreset = AVCaptureSessionPresetPhoto;

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (camera == nil) {
    [self showImagePicker];
    _isSetup = YES;
    return;
}

AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureVideoPreviewLayer.frame = self.liveCapturePlaceholderView.bounds;
[self.liveCapturePlaceholderView.layer addSublayer:captureVideoPreviewLayer];

NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (error) {
    HGAlertViewWrapper *av = [[HGAlertViewWrapper alloc] initWithTitle:kFailedConnectingToCameraAlertViewTitle
                                                               message:kFailedConnectingToCameraAlertViewMessage
                                                     cancelButtonTitle:kFailedConnectingToCameraAlertViewCancelButtonTitle
                                                     otherButtonTitles:@[kFailedConnectingToCameraAlertViewRetryButtonTitle]];
    [av showWithBlock:^(NSString *buttonTitle){
        if ([buttonTitle isEqualToString:kFailedConnectingToCameraAlertViewCancelButtonTitle]) {
            [self.delegate gloameCameraViewControllerDidCancel:self];
        }
        else {
            [self setupAVSession];
        }
    }];
}
[session addInput:input];

NSDictionary *options = @{ AVVideoCodecKey : AVVideoCodecJPEG };
_stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
[_stillImageOutput setOutputSettings:options];
[session addOutput:_stillImageOutput];

[session startRunning];
_isSetup = YES;

I'm capturing the image like this:

[_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                                completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
    if (error) {
        MWLogDebug(@"Error capturing image from camera. %@, %@", error, [error userInfo]);
        _capturePreviewLayer.connection.enabled = YES;
    }
    else
    {
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:imageData];

        CGRect cropRect = [self createCropRectForImage:image];
        UIImage *croppedImage; // = [self cropImage:image toRect:cropRect];
        UIGraphicsBeginImageContext(cropRect.size);
        [image drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)];
        croppedImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        self.capturedImage = croppedImage;

        [_session stopRunning];
    }
}];

In the createCropRectForImage: method I've tried various ways to calculate the rect to cut out of the image, but with no success so far.

- (CGRect)createCropRectForImage:(UIImage *)image
{
    CGPoint maskTopLeftCorner = CGPointMake(self.maskRectView.frame.origin.x, self.maskRectView.frame.origin.y);
    CGPoint maskBottomRightCorner = CGPointMake(self.maskRectView.frame.origin.x + self.maskRectView.frame.size.width, self.maskRectView.frame.origin.y + self.maskRectView.frame.size.height);

    CGPoint maskTopLeftCornerInLayerCoords = [_capturePreviewLayer convertPoint:maskTopLeftCorner fromLayer:self.maskRectView.layer.superlayer];
    CGPoint maskBottomRightCornerInLayerCoords = [_capturePreviewLayer convertPoint:maskBottomRightCorner fromLayer:self.maskRectView.layer.superlayer];

    CGPoint maskTopLeftCornerInDeviceCoords = [_capturePreviewLayer captureDevicePointOfInterestForPoint:maskTopLeftCornerInLayerCoords];
    CGPoint maskBottomRightCornerInDeviceCoords = [_capturePreviewLayer captureDevicePointOfInterestForPoint:maskBottomRightCornerInLayerCoords];

    float x = maskTopLeftCornerInDeviceCoords.x * image.size.width;
    float y = (1 - maskTopLeftCornerInDeviceCoords.y) * image.size.height;
    float width = fabsf(maskTopLeftCornerInDeviceCoords.x - maskBottomRightCornerInDeviceCoords.x) * image.size.width;
    float height = fabsf(maskTopLeftCornerInDeviceCoords.y - maskBottomRightCornerInDeviceCoords.y) * image.size.height;

    return CGRectMake(x, y, width, height);
}

That is my current version, but it doesn't even get the proportions right. Could someone please help me?

I have also tried using this method to crop my image:

- (UIImage *)cropImage:(UIImage *)originalImage toRect:(CGRect)rect
{
    CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage], rect);

    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
    CGContextRef bitmap = CGBitmapContextCreate(NULL, rect.size.width, rect.size.height, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);

    if (originalImage.imageOrientation == UIImageOrientationLeft) {
        CGContextRotateCTM(bitmap, radians(90));
        CGContextTranslateCTM(bitmap, 0, -rect.size.height);
    } else if (originalImage.imageOrientation == UIImageOrientationRight) {
        CGContextRotateCTM(bitmap, radians(-90));
        CGContextTranslateCTM(bitmap, -rect.size.width, 0);
    } else if (originalImage.imageOrientation == UIImageOrientationUp) {
        // NOTHING
    } else if (originalImage.imageOrientation == UIImageOrientationDown) {
        CGContextTranslateCTM(bitmap, rect.size.width, rect.size.height);
        CGContextRotateCTM(bitmap, radians(-180.));
    }

    CGContextDrawImage(bitmap, CGRectMake(0, 0, rect.size.width, rect.size.height), imageRef);
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);

    UIImage *resultImage = [UIImage imageWithCGImage:ref];
    CGImageRelease(imageRef);
    CGContextRelease(bitmap);
    CGImageRelease(ref);

    return resultImage;
}

Does anybody have the 'right combination' of methods to make this work? :)

Tobi asked Apr 11 '13


2 Answers

In Swift 3:

private func cropToPreviewLayer(originalImage: UIImage) -> UIImage {
    let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: previewLayer.bounds)
    var cgImage = originalImage.cgImage!
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let cropRect = CGRect(x: outputRect.origin.x * width, y: outputRect.origin.y * height, width: outputRect.size.width * width, height: outputRect.size.height * height)

    cgImage = cgImage.cropping(to: cropRect)!
    let croppedUIImage = UIImage(cgImage: cgImage, scale: 1.0, orientation: originalImage.imageOrientation)

    return croppedUIImage
}
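As a rough illustration of how this might be wired up (a sketch, not part of the original answer), assume the function lives in a view controller that owns previewLayer, receives JPEG data from the capture callback, and displays the result in a hypothetical imageView outlet:

import AVFoundation
import UIKit

// Hypothetical call site: `imageView` and this method name are assumptions,
// `previewLayer` is the AVCaptureVideoPreviewLayer used by cropToPreviewLayer above.
func didCaptureStillImage(jpegData: Data) {
    guard let fullImage = UIImage(data: jpegData) else { return }
    // Crop the full-resolution photo down to the region the preview layer was showing.
    let croppedImage = cropToPreviewLayer(originalImage: fullImage)
    imageView.image = croppedImage
}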
Patrick Montalto answered Sep 28 '22


I've solved this problem by using the metadataOutputRectOfInterestForRect: method (the Objective-C counterpart of the metadataOutputRectConverted(fromLayerRect:) call used in the Swift answer above).

It works with any orientation.

[_stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                                completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
    if (error)
    {
        [_delegate cameraView:self error:@"Take picture failed"];
    }
    else
    {
        NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *takenImage = [UIImage imageWithData:jpegData];

        CGRect outputRect = [_previewLayer metadataOutputRectOfInterestForRect:_previewLayer.bounds];
        CGImageRef takenCGImage = takenImage.CGImage;
        size_t width = CGImageGetWidth(takenCGImage);
        size_t height = CGImageGetHeight(takenCGImage);
        CGRect cropRect = CGRectMake(outputRect.origin.x * width, outputRect.origin.y * height, outputRect.size.width * width, outputRect.size.height * height);

        CGImageRef cropCGImage = CGImageCreateWithImageInRect(takenCGImage, cropRect);
        takenImage = [UIImage imageWithCGImage:cropCGImage scale:1 orientation:takenImage.imageOrientation];
        CGImageRelease(cropCGImage);
    }
}];

The resulting takenImage still depends on its imageOrientation. You can strip the orientation information for further image processing:

UIGraphicsBeginImageContext(takenImage.size);
[takenImage drawAtPoint:CGPointZero];
takenImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
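If you use the Swift function from the first answer, the same normalization step can be sketched like this (my addition, not part of the original answer); it simply redraws the image so its pixel data ends up in the .up orientation:

import UIKit

// Sketch: re-render the image so the returned UIImage has imageOrientation == .up.
func normalized(_ image: UIImage) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    image.draw(at: .zero)
    let result = UIGraphicsGetImageFromCurrentImageContext() ?? image
    UIGraphicsEndImageContext()
    return result
}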
UnknownStack answered Sep 28 '22