I've been working on this all day and have looked at lots of questions here on SO and on Google, but so far I can't come up with anything quite right.
I have taken a photo on an iPad running iOS 5.1.1 and cropped it using the Photos app. I then get a reference to it from the assets library and am getting the full resolution image which is un-cropped.
I've found that the cropping information is contained in the AdjustmentXMP key of the metadata on my ALAssetRepresentation object.
So I crop the photo using the XMP info and here is what I get:
Original Photo (1,936 x 2,592):
Properly Cropped Photo, as seen in the Photos App (1,420 x 1,938):
Photo Cropped With Code Below (also 1,420 x 1,938, but cropped roughly 200 pixels too far to the right):
This is the XMP data from the photo:
<x:xmpmeta xmlns:x="adobe:ns:meta/" x:xmptk="XMP Core 4.4.0">
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<rdf:Description rdf:about=""
xmlns:aas="http://ns.apple.com/adjustment-settings/1.0/">
<aas:AffineA>1</aas:AffineA>
<aas:AffineB>0</aas:AffineB>
<aas:AffineC>0</aas:AffineC>
<aas:AffineD>1</aas:AffineD>
<aas:AffineX>-331</aas:AffineX>
<aas:AffineY>-161</aas:AffineY>
<aas:CropX>0</aas:CropX>
<aas:CropY>0</aas:CropY>
<aas:CropW>1938</aas:CropW>
<aas:CropH>1420</aas:CropH>
</rdf:Description>
</rdf:RDF>
</x:xmpmeta>
Here is the code that I am using to crop the photo:
ALAssetRepresentation *rep = // Get asset representation
CGImageRef defaultImage = [rep fullResolutionImage];
// Values obtained from XMP data above:
CGRect cropBox = CGRectMake(0, 0, 1938, 1420);
CGAffineTransform transform = CGAffineTransformMake(1, 0, 0, 1, 331, 161);
// Apply the Affine Transform to the crop box:
CGRect transformedCropBox = CGRectApplyAffineTransform(cropBox, transform);
// Create a new cropped image:
CGImageRef croppedImage = CGImageCreateWithImageInRect(defaultImage, transformedCropBox);
// Create the UIImage:
UIImage *image = [UIImage imageWithCGImage:croppedImage scale:[rep scale] orientation:[rep orientation]];
CGImageRelease(croppedImage);
I've reproduced the problem with multiple images. If I just use the fullScreenImage, it displays perfectly, but I need the full-resolution image.
This is a tricky one! There is apparently no documentation for this XMP data, so we'll have to guess at how to interpret it. There are a number of choices to make, and getting it wrong can lead to subtly wrong results.
TL;DR: In theory your code looks correct, but in practice it's giving the wrong result, and there's a fairly obvious adjustment we can try.
Image files may contain additional metadata specifying whether (and how) the raw data of the image should be rotated and/or flipped when displayed. UIImage expresses this with its imageOrientation property, and ALAssetRepresentation is similar. However, CGImages are just bitmaps, with no orientation stored in them. -[ALAssetRepresentation fullResolutionImage] gives you a CGImage in the original orientation, with no adjustments applied.
In your case, the orientation is 3, meaning ALAssetOrientationRight or UIImageOrientationRight. The viewing software (for instance, UIImage) looks at this value, sees that the image is oriented 90° to the right (clockwise), and rotates it 90° to the left (counterclockwise) before displaying it. Or, to say it another way, the CGImage is rotated 90° clockwise from the image you're looking at on your screen.
(To verify this, get the width and height of the CGImage using CGImageGetWidth() and CGImageGetHeight(). You should find that the CGImage is 2592 wide and 1936 high. This is rotated 90° from the ALAssetRepresentation, whose dimensions should be 1936 wide by 2592 high. You could also create a UIImage from the CGImage using the normal orientation, UIImageOrientationUp, write the UIImage to a file, and see what it looks like.)
The values in the XMP dictionary appear to be relative to the CGImage's orientation. For instance, the crop rect is wider than it is tall, the X translation is greater than the Y translation, and so on. Makes sense.
We also have to decide what coordinate system the XMP values are supposed to be in. Most likely it's one of these two:

- "Flipped": the origin at the top-left corner of the image, with Y increasing downward. CGImageCreateWithImageInRect() interprets its rect argument this way.
- Cartesian: the origin at the bottom-left corner, with Y increasing upward, as in most Core Graphics drawing.

Let's assume that "flipped" is correct, since it's generally more convenient. Your code is already trying to do it that way, anyway.
The dictionary contains an affine transform and a crop rect. Let's guess that they should be interpreted in this order: first apply the affine transform (here, a translation by (-331, -161)) to the full-resolution image, then take the crop rect out of the transformed image. If we try this by hand, the numbers seem to work out. Here's a rough diagram, with the crop rect in translucent purple:
We don't actually have to follow those exact steps, in terms of calling CG, but we should act as if we had.
We just want to call CGImageCreateWithImageInRect(), and it's pretty obvious how to compute the appropriate crop rect, (331, 161, 1938, 1420). Your code appears to do this correctly.
If we crop the image to that rect and then create a UIImage from it (specifying the correct orientation, UIImageOrientationRight), we should get the correct result.
But the results are wrong! What you get is as if we did the operations in a Cartesian coordinate system:
Alternatively, it's as if the image were rotated in the opposite direction, UIImageOrientationLeft, but we kept the same crop rect:
That's all very odd, and I don't understand what went wrong, although I'd love to.
But a fix seems fairly straightforward: just flip the crop rect vertically within the image. After computing it as above:

// Flip transformedCropBox vertically within the image:
transformedCropBox.origin.y = CGImageGetHeight(defaultImage) - CGRectGetMaxY(transformedCropBox);
Does that work? (For this case, and for images with other orientations?)