Perspective Transform + Crop in iOS with OpenCV

I'm trying to implement a cropping and perspective-correction feature in an upcoming app. Whilst doing research, I came across:

Executing cv::warpPerspective for a fake deskewing on a set of cv::Point

http://sudokugrab.blogspot.ch/2009/07/how-does-it-all-work.html

So I decided to try implementing this feature with OpenCV; the framework is readily available, so installation was fast. However, I'm not getting the results I hoped for (the second picture shows the result):

Original Photo & cropping box

Cropped Photo, bad result

I've translated all the code to work with Xcode and triple-checked the coordinates. Can you tell me what is wrong with my code? For the sake of completeness, I've also included the UIImage -> Mat conversion and its reversal:

- (void)confirmedImage
{
    if ([_adjustRect frameEdited]) {

        cv::Mat src = [self cvMatFromUIImage:_sourceImage];

        // My original coordinates
        // 4-------3
        // |       |
        // |       |
        // |       |
        // 1-------2

        CGFloat scaleFactor = [_sourceImageView contentScale];
        CGPoint p1 = [_adjustRect coordinatesForPoint:4 withScaleFactor:scaleFactor];
        CGPoint p2 = [_adjustRect coordinatesForPoint:3 withScaleFactor:scaleFactor];
        CGPoint p3 = [_adjustRect coordinatesForPoint:1 withScaleFactor:scaleFactor];
        CGPoint p4 = [_adjustRect coordinatesForPoint:2 withScaleFactor:scaleFactor];

        std::vector<cv::Point2f> c1;
        c1.push_back(cv::Point2f(p1.x, p1.y));
        c1.push_back(cv::Point2f(p2.x, p2.y));
        c1.push_back(cv::Point2f(p3.x, p3.y));
        c1.push_back(cv::Point2f(p4.x, p4.y));

        cv::RotatedRect box = minAreaRect(cv::Mat(c1));
        cv::Point2f pts[4];
        box.points(pts);

        cv::Point2f src_vertices[3];
        src_vertices[0] = pts[0];
        src_vertices[1] = pts[1];
        src_vertices[2] = pts[3];

        cv::Point2f dst_vertices[4];
        dst_vertices[0].x = 0;
        dst_vertices[0].y = 0;

        dst_vertices[1].x = box.boundingRect().width - 1;
        dst_vertices[1].y = 0;

        dst_vertices[2].x = 0;
        dst_vertices[2].y = box.boundingRect().height - 1;

        dst_vertices[3].x = box.boundingRect().width - 1;
        dst_vertices[3].y = box.boundingRect().height - 1;

        cv::Mat warpAffineMatrix = getAffineTransform(src_vertices, dst_vertices);

        cv::Mat rotated;
        cv::Size size(box.boundingRect().width, box.boundingRect().height);
        warpAffine(src, rotated, warpAffineMatrix, size, cv::INTER_LINEAR, cv::BORDER_CONSTANT);

        [_sourceImageView setNeedsDisplay];
        [_sourceImageView setImage:[self UIImageFromCVMat:rotated]];
        [_sourceImageView setContentMode:UIViewContentModeScaleAspectFit];

        rotated.release();
        src.release();
    }
}

- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
    CGColorSpaceRef colorSpace;
    if ( cvMat.elemSize() == 1 ) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    }
    else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData( (__bridge CFDataRef)data );
    CGImageRef imageRef = CGImageCreate( cvMat.cols, cvMat.rows, 8, 8 * cvMat.elemSize(), cvMat.step[0], colorSpace, kCGImageAlphaNone|kCGBitmapByteOrderDefault, provider, NULL, false, kCGRenderingIntentDefault );
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease( imageRef );
    CGDataProviderRelease( provider );
    CGColorSpaceRelease( colorSpace );
    return finalImage;
}

- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace( image.CGImage );
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat cvMat( rows, cols, CV_8UC4 );
    CGContextRef contextRef = CGBitmapContextCreate( cvMat.data, cols, rows, 8, cvMat.step[0], colorSpace, kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault );
    CGContextDrawImage( contextRef, CGRectMake(0, 0, rows, cols), image.CGImage );
    CGContextRelease( contextRef );
    CGColorSpaceRelease( colorSpace );
    return cvMat;
}

Is this the correct approach to my problem? Do you have any sample code that could help me out?

Thank you for reading my question!

UPDATE:

I've since open-sourced my UIImagePickerController replacement here: https://github.com/mmackh/MAImagePickerController-of-InstaPDF. It includes the adjustable cropping view, filters, and perspective correction.

asked Nov 07 '12 by mmackh


2 Answers

So after a few days of trying to solve it, I came up with a solution (ignore the blue dots in the second image):

Original image, with adjusted frame

Cropped frame

As promised, here's a complete copy of the code:

- (void)confirmedImage
{
    cv::Mat originalRot = [self cvMatFromUIImage:_sourceImage];
    cv::Mat original;

    // Undo the 90° rotation introduced by cvMatFromUIImage below
    // (see the note after this method): transpose, then flip around
    // the vertical axis, i.e. rotate 90° clockwise.
    cv::transpose(originalRot, original);
    originalRot.release();
    cv::flip(original, original, 1);

    CGFloat scaleFactor = [_sourceImageView contentScale];

    CGPoint ptBottomLeft = [_adjustRect coordinatesForPoint:1 withScaleFactor:scaleFactor];
    CGPoint ptBottomRight = [_adjustRect coordinatesForPoint:2 withScaleFactor:scaleFactor];
    CGPoint ptTopRight = [_adjustRect coordinatesForPoint:3 withScaleFactor:scaleFactor];
    CGPoint ptTopLeft = [_adjustRect coordinatesForPoint:4 withScaleFactor:scaleFactor];

    CGFloat w1 = sqrt( pow(ptBottomRight.x - ptBottomLeft.x, 2) + pow(ptBottomRight.y - ptBottomLeft.y, 2));
    CGFloat w2 = sqrt( pow(ptTopRight.x - ptTopLeft.x, 2) + pow(ptTopRight.y - ptTopLeft.y, 2));

    CGFloat h1 = sqrt( pow(ptTopRight.x - ptBottomRight.x, 2) + pow(ptTopRight.y - ptBottomRight.y, 2));
    CGFloat h2 = sqrt( pow(ptTopLeft.x - ptBottomLeft.x, 2) + pow(ptTopLeft.y - ptBottomLeft.y, 2));

    // Use the longer of each pair of opposite edges as the output size.
    CGFloat maxWidth = (w1 > w2) ? w1 : w2;
    CGFloat maxHeight = (h1 > h2) ? h1 : h2;

    cv::Point2f src[4], dst[4];
    src[0].x = ptTopLeft.x;
    src[0].y = ptTopLeft.y;
    src[1].x = ptTopRight.x;
    src[1].y = ptTopRight.y;
    src[2].x = ptBottomRight.x;
    src[2].y = ptBottomRight.y;
    src[3].x = ptBottomLeft.x;
    src[3].y = ptBottomLeft.y;

    dst[0].x = 0;
    dst[0].y = 0;
    dst[1].x = maxWidth - 1;
    dst[1].y = 0;
    dst[2].x = maxWidth - 1;
    dst[2].y = maxHeight - 1;
    dst[3].x = 0;
    dst[3].y = maxHeight - 1;

    cv::Mat undistorted; // warpPerspective allocates and sizes the output itself
    cv::warpPerspective(original, undistorted, cv::getPerspectiveTransform(src, dst), cv::Size(maxWidth, maxHeight));

    UIImage *newImage = [self UIImageFromCVMat:undistorted];

    undistorted.release();
    original.release();

    [_sourceImageView setNeedsDisplay];
    [_sourceImageView setImage:newImage];
    [_sourceImageView setContentMode:UIViewContentModeScaleAspectFit];

}
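
A note on the transpose/flip pair at the top: transposing a Mat and then flipping it around the vertical axis (flipCode 1) is equivalent to rotating it 90° clockwise, which undoes the width/height swap in cvMatFromUIImage below. As a self-contained sketch (rotate90CW is my own helper name, not part of the original code):

    // Sketch: rotate a Mat 90 degrees clockwise.
    static cv::Mat rotate90CW(const cv::Mat &input)
    {
        cv::Mat output;
        cv::transpose(input, output); // mirror across the main diagonal
        cv::flip(output, output, 1);  // flipCode 1 = flip around the y-axis
        return output;
    }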

- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;

    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    CGImageRef imageRef = CGImageCreate(cvMat.cols,                                     // Width
                                        cvMat.rows,                                     // Height
                                        8,                                              // Bits per component
                                        8 * cvMat.elemSize(),                           // Bits per pixel
                                        cvMat.step[0],                                  // Bytes per row
                                        colorSpace,                                     // Colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault,  // Bitmap info flags
                                        provider,                                       // CGDataProviderRef
                                        NULL,                                           // Decode
                                        false,                                          // Should interpolate
                                        kCGRenderingIntentDefault);                     // Intent

    UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return image;
}

- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
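    // Width and height are intentionally swapped here; the transpose +
    // flip at the top of confirmedImage rotates the Mat back upright.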
    CGFloat cols = image.size.height;
    CGFloat rows = image.size.width;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to backing data
                                                    cols,                       // Width of bitmap
                                                    rows,                       // Height of bitmap
                                                    8,                          // Bits per component
                                                    cvMat.step[0],              // Bytes per row
                                                    colorSpace,                 // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}
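
For what it's worth, w1/w2 and h1/h2 in confirmedImage are simply the Euclidean lengths of the dragged quad's edges, used to pick the output size. Factored into a small helper for clarity (edgeLength is hypothetical naming, not from the original code):

    // Hypothetical helper: Euclidean distance between two corner points,
    // used to choose the output width/height for warpPerspective.
    static CGFloat edgeLength(CGPoint a, CGPoint b)
    {
        return sqrt(pow(b.x - a.x, 2) + pow(b.y - a.y, 2));
    }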

Hope it helps you + happy coding!

answered Sep 22 '22 by mmackh


I think the point correspondence passed to getAffineTransform is incorrect.

Check the point coordinates that box.points(pts); actually outputs: minAreaRect returns the corners in its own order, not in the order you pushed them into the vector, so your source and destination points no longer correspond.

Why not just use p1, p2, p3 and p4 directly to calculate the transformation?
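
A minimal sketch of that suggestion, reusing src and the p1…p4 variables from the question (where p1 is the top-left, p2 the top-right, p4 the bottom-right and p3 the bottom-left corner); outW/outH are placeholder output dimensions, e.g. the quad's edge lengths:

    // Sketch: feed the four user-chosen corners straight into
    // getPerspectiveTransform instead of going through minAreaRect.
    float outW = 600, outH = 800; // placeholder values for illustration
    cv::Point2f srcQuad[4] = {
        cv::Point2f(p1.x, p1.y), // top-left
        cv::Point2f(p2.x, p2.y), // top-right
        cv::Point2f(p4.x, p4.y), // bottom-right
        cv::Point2f(p3.x, p3.y)  // bottom-left
    };
    cv::Point2f dstQuad[4] = {
        cv::Point2f(0, 0),
        cv::Point2f(outW - 1, 0),
        cv::Point2f(outW - 1, outH - 1),
        cv::Point2f(0, outH - 1)
    };
    cv::Mat warped;
    cv::warpPerspective(src, warped, cv::getPerspectiveTransform(srcQuad, dstQuad), cv::Size(outW, outH));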

answered Sep 23 '22 by luhb