The problem of image decompression has been discussed at length on Stack Overflow, but up to this question there was no mention of kCGImageSourceShouldCacheImmediately, an option introduced in iOS 7 that, in theory, takes care of this problem. From the headers:
Specifies whether image decoding and caching should happen at image creation time.
In Objc.io #7 Peter Steinberger suggested this approach:
+ (UIImage *)decompressedImageWithData:(NSData *)data
{
    CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
    CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, 0, (__bridge CFDictionaryRef)@{(id)kCGImageSourceShouldCacheImmediately: @YES});
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CFRelease(source);
    return image;
}
Libraries like AFNetworking and SDWebImage still do image decompression with the CGContextDrawImage method. From SDWebImage:
+ (UIImage *)decodedImageWithImage:(UIImage *)image {
    if (image.images) {
        // Do not decode animated images
        return image;
    }

    CGImageRef imageRef = image.CGImage;
    CGSize imageSize = CGSizeMake(CGImageGetWidth(imageRef), CGImageGetHeight(imageRef));
    CGRect imageRect = (CGRect){.origin = CGPointZero, .size = imageSize};

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);

    int infoMask = (bitmapInfo & kCGBitmapAlphaInfoMask);
    BOOL anyNonAlpha = (infoMask == kCGImageAlphaNone ||
                        infoMask == kCGImageAlphaNoneSkipFirst ||
                        infoMask == kCGImageAlphaNoneSkipLast);

    // CGBitmapContextCreate doesn't support kCGImageAlphaNone with RGB.
    // https://developer.apple.com/library/mac/#qa/qa1037/_index.html
    if (infoMask == kCGImageAlphaNone && CGColorSpaceGetNumberOfComponents(colorSpace) > 1) {
        // Unset the old alpha info.
        bitmapInfo &= ~kCGBitmapAlphaInfoMask;

        // Set noneSkipFirst.
        bitmapInfo |= kCGImageAlphaNoneSkipFirst;
    }
    // Some PNGs tell us they have alpha but only 3 components. Odd.
    else if (!anyNonAlpha && CGColorSpaceGetNumberOfComponents(colorSpace) == 3) {
        // Unset the old alpha info.
        bitmapInfo &= ~kCGBitmapAlphaInfoMask;
        bitmapInfo |= kCGImageAlphaPremultipliedFirst;
    }

    // It calculates the bytes-per-row based on the bitsPerComponent and width arguments.
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 imageSize.width,
                                                 imageSize.height,
                                                 CGImageGetBitsPerComponent(imageRef),
                                                 0,
                                                 colorSpace,
                                                 bitmapInfo);
    CGColorSpaceRelease(colorSpace);

    // If failed, return undecompressed image
    if (!context) return image;

    CGContextDrawImage(context, imageRect, imageRef);
    CGImageRef decompressedImageRef = CGBitmapContextCreateImage(context);
    CGContextRelease(context);

    UIImage *decompressedImage = [UIImage imageWithCGImage:decompressedImageRef scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(decompressedImageRef);
    return decompressedImage;
}
My question is: should we move to the kCGImageSourceShouldCacheImmediately approach in iOS 7?
There are a few problems with the implementation as far as I can tell.
This "new method" requires some kind of rendering on the main thread. You can load the image and set the should-cache-immediately flag, but that merely schedules an operation on the main thread to do the processing. This causes stuttering when you scroll table views and collection views. For me it stutters more than the old way of decompressing on a background dispatch queue.
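For reference, the "old way with dispatch queues" means forcing the decode on a background queue before the image ever reaches the main thread. A minimal sketch, assuming a helper that exposes the decodedImageWithImage: method shown in the question (the cell and data variables here are hypothetical):

    // Decompress on a background queue, deliver the finished bitmap on the main queue.
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        UIImage *raw = [UIImage imageWithData:data];
        // decodedImageWithImage: is the CGContextDrawImage-based method quoted above.
        UIImage *decoded = [ImageDecoder decodedImageWithImage:raw];
        dispatch_async(dispatch_get_main_queue(), ^{
            cell.imageView.image = decoded; // no decode work left for the main thread
        });
    });

With this pattern you know exactly when the bitmap is ready, which is the guarantee the answer argues kCGImageSourceShouldCacheImmediately does not give you.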
If you're using your own memory buffers instead of files, you'll need to create data providers that copy the data, because the data providers expect the memory buffers to stay around. That sounds obvious, but the flags in this function lead you to believe you can do this:
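(The code snippet the answer refers to here appears to be missing from the post. The following is a hedged reconstruction of the pattern being described: wrap a raw buffer in a no-copy data provider, ask for immediate caching, and release everything on the assumption that the decode already happened. The buffer and bufferLength variables are placeholders.)

    // Pattern the flag *suggests* is safe (it is not):
    // no-copy provider, "immediate" caching, then release everything.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
    CGImageSourceRef source = CGImageSourceCreateWithDataProvider(provider, NULL);
    CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, 0,
        (__bridge CFDictionaryRef)@{(id)kCGImageSourceShouldCacheImmediately: @YES});
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CFRelease(source);
    CGDataProviderRelease(provider);
    // The buffer now looks free to reuse -- but the decode has only been scheduled.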
It doesn't do that though, because it waits for the main thread, where the decompression actually takes place. The system thinks everything is OK because it still holds references to all the intermediary objects you released. You released those objects believing the image had been decompressed IMMEDIATELY, as the flag's name implies. If you then threw away the memory buffer as well, and that buffer was passed in a no-copy sense, you end up with garbage. Likewise, if the buffer was reused (as in my case) to load another image, you also get garbage.
You actually have no way of knowing when this image is going to be decompressed and ready for use.
TL;DR = "Consider kCGImageSourceShouldCacheImmediately to mean when convenient to the OS"
When you do it the "old way", you know exactly what will be available and when. Because nothing is deferred, you can also avoid some copying. I don't think this API is doing anything magical anyway; I suspect it just holds on to the memory buffer and then does things the "old way" under the hood.
So there is basically no free lunch here. Looking at the stack trace from the crash that happened when I reused my memory buffer after I thought everything was signed off, I can see calls out to CA::Transaction, CA::Layer, CA::Render, and from there into ImageProviderCopy... all the way down to JPEGParseJPEGInfo (where it crashed accessing my buffer).
This means kCGImageSourceShouldCacheImmediately does nothing except set a flag telling the image to decompress on the main thread as soon as possible after creation, not actually IMMEDIATELY in the sense the name suggests. It would have done the exact same thing if you had handed the image to a scroll view and let it draw. If you're lucky, there are some spare cycles between scroll events and this improves things, but basically the name promises a lot more than the flag delivers.