In a background thread, my application needs to read images from disk, downscale them to the screen size (1024x768 or 2048x1536), and save them back to disk. The originals are mostly from the Camera Roll, but some of them may be larger (e.g. 3000x3000).
Later, in a different thread, these images will frequently be downscaled again to sizes around 500x500 and saved to disk.
This leads me to wonder: what is the most efficient way to do this on iOS, performance- and memory-wise? I have used two different APIs:
1. CGImageSource and CGImageSourceCreateThumbnailAtIndex from ImageIO;
2. CGBitmapContext, saving the result to disk with CGImageDestination.
Both worked for me, but I'm wondering whether they differ in performance and memory usage. And whether there are better options, of course.
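For reference, here is roughly what the first approach looks like with the plain C ImageIO API. This is a sketch, not code from the question: the function name, the hard-coded JPEG output type, and the option choices are my own.

```c
#include <ImageIO/ImageIO.h>
#include <CoreFoundation/CoreFoundation.h>

// Let ImageIO decode straight to a bounded size, then write the result
// back out. maxPixelSize caps the longest side of the output.
static bool DownscaleWithImageIO (CFURLRef srcUrl, CFURLRef dstUrl, int maxPixelSize)
{
    CGImageSourceRef source = CGImageSourceCreateWithURL (srcUrl, NULL);
    if (source == NULL)
        return false;

    CFNumberRef maxSize = CFNumberCreate (NULL, kCFNumberIntType, &maxPixelSize);
    const void *keys[] = {
        kCGImageSourceCreateThumbnailFromImageAlways, // don't settle for an embedded thumbnail
        kCGImageSourceCreateThumbnailWithTransform,   // honour EXIF orientation
        kCGImageSourceThumbnailMaxPixelSize
    };
    const void *values[] = { kCFBooleanTrue, kCFBooleanTrue, maxSize };
    CFDictionaryRef options = CFDictionaryCreate (NULL, keys, values, 3,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    CGImageRef scaled = CGImageSourceCreateThumbnailAtIndex (source, 0, options);
    CFRelease (options);
    CFRelease (maxSize);
    CFRelease (source);
    if (scaled == NULL)
        return false;

    CGImageDestinationRef dest = CGImageDestinationCreateWithURL (dstUrl, CFSTR ("public.jpeg"), 1, NULL);
    bool ok = false;
    if (dest != NULL) {
        CGImageDestinationAddImage (dest, scaled, NULL);
        ok = CGImageDestinationFinalize (dest);
        CFRelease (dest);
    }
    CGImageRelease (scaled);
    return ok;
}
```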
While I can't definitely say it will help, I think it's worth trying to push the work to the GPU. You can either do that yourself, by rendering a textured quad at the given size, or by using GPUImage and its resizing capabilities. While it has some texture-size limitations on older devices, it should perform much better than a CPU-based solution.
With libjpeg-turbo you can use the scale_num and scale_denom fields of jpeg_decompress_struct, and it will decode only the needed blocks of the image. That gave me about 250 ms of decoding + scaling time on a background thread on a 4S, going from a 3264x2448 original (from the camera, with the image data already in memory) down to the iPhone's display resolution. I guess that's OK for an image this large, but it's still not great.
(And yes, it is memory efficient: you can decode and store the image almost line by line.)
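The scaled decode described above looks roughly like this with the libjpeg API. This is a sketch with no error handling; the function name and the 1/4 ratio are illustrative choices of mine.

```c
#include <stdio.h>
#include <jpeglib.h>

// Decode a JPEG at a reduced scale. libjpeg-turbo only processes the
// DCT data it needs for the requested ratio, so both CPU time and peak
// memory drop compared to decoding at full size and resizing afterwards.
static void ScaledDecode (const char *path)
{
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    FILE *f = fopen (path, "rb");

    cinfo.err = jpeg_std_error (&jerr);
    jpeg_create_decompress (&cinfo);
    jpeg_stdio_src (&cinfo, f);
    jpeg_read_header (&cinfo, TRUE);

    // Ask for 1/4 of the original dimensions before starting decompression.
    cinfo.scale_num = 1;
    cinfo.scale_denom = 4;
    jpeg_start_decompress (&cinfo);

    // output_width/output_height now reflect the scaled size; reading one
    // scanline at a time keeps only a single row buffer live.
    JSAMPARRAY row = (*cinfo.mem->alloc_sarray)
        ((j_common_ptr) &cinfo, JPOOL_IMAGE,
         cinfo.output_width * cinfo.output_components, 1);
    while (cinfo.output_scanline < cinfo.output_height) {
        jpeg_read_scanlines (&cinfo, row, 1);
        // ... hand the row to your encoder or bitmap here ...
    }

    jpeg_finish_decompress (&cinfo);
    jpeg_destroy_decompress (&cinfo);
    fclose (f);
}
```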
What you said on twitter does not match your question.
If you are having memory spikes, look at Instruments to figure out what is consuming the memory. Just the data alone for your high resolution image is 10 megs, and your resulting images are going to be about 750k, if they contain no alpha channel.
The first issue is keeping the memory usage low. To do that, make sure that all of the images you load are disposed of as soon as you are done with them; that ensures the underlying C/Objective-C API releases the memory immediately, instead of waiting for the GC to run. Something like:
using (var img = UIImage.FromFile ("...")) {
    using (var scaled = Scaler (img)) {
        scaled.Save (...);
    }
}
As for the scaling, there are a variety of ways of scaling the images. The simplest way is to create a context, then draw on it, and then get the image out of the context. This is how MonoTouch's UIImage.Scale method is implemented:
public UIImage Scale (SizeF newSize)
{
    UIGraphics.BeginImageContext (newSize);
    Draw (new RectangleF (0, 0, newSize.Width, newSize.Height));
    var scaledImage = UIGraphics.GetImageFromCurrentImageContext ();
    UIGraphics.EndImageContext ();
    return scaledImage;
}
The performance will be governed by the context features that you enable. For example, higher-quality scaling requires changing the interpolation quality:
context.InterpolationQuality = CGInterpolationQuality.High;
The other option is to run your scaling not on the CPU but on the GPU. To do that, you can use the CoreImage API with the CIAffineTransform filter.
As to which one is faster, that is something left for someone else to benchmark:
CGImage Scale (string file)
{
    using (var uiimage = UIImage.FromFile (file)) {
        // CIImage.FromCGImage expects a CGImage, not a UIImage
        var ciimage = CIImage.FromCGImage (uiimage.CGImage);
        // Create an affine transform that halves the image's size
        var transform = CGAffineTransform.MakeScale (0.5f, 0.5f);
        var affineTransform = new CIAffineTransform () {
            Image = ciimage,
            Transform = transform
        };
        var output = affineTransform.OutputImage;
        var context = CIContext.FromOptions (null);
        return context.CreateCGImage (output, output.Extent);
    }
}
If either of the two is more efficient, it will be the former.
When you create a CGImageSource you create just what the name says: some sort of opaque thing from which an image can be obtained. In your case it will be a reference to a thing on disk. When you ask ImageIO to create a thumbnail, you explicitly tell it "do as much work as you need in order to output this many pixels".
Conversely, if you draw to a CGBitmapContext then at some point you explicitly bring the whole image into memory.
So the second approach definitely has the whole image in memory at once at some point, while the former needn't necessarily (in practice there will no doubt be some guesswork within ImageIO as to the best way to proceed). So across all possible implementations of the OS, either the former will be advantageous or there will be no difference between the two.
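For comparison, the CGBitmapContext route looks roughly like this in plain C (a sketch; the function name and the RGBA pixel format are my own choices). The call to CGContextDrawImage is the point at which the whole source image must be decoded:

```c
#include <CoreGraphics/CoreGraphics.h>

// Downscale by drawing the (fully decoded) source image into a smaller
// bitmap context, then snapshotting that context as a new CGImage.
static CGImageRef DownscaleWithBitmapContext (CGImageRef source, size_t width, size_t height)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB ();
    CGContextRef ctx = CGBitmapContextCreate (NULL, width, height, 8, 0, colorSpace,
        kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease (colorSpace);
    if (ctx == NULL)
        return NULL;

    CGContextSetInterpolationQuality (ctx, kCGInterpolationHigh);
    // Here the whole source image is decoded and held in memory,
    // in addition to the destination bitmap.
    CGContextDrawImage (ctx, CGRectMake (0, 0, width, height), source);

    CGImageRef scaled = CGBitmapContextCreateImage (ctx);
    CGContextRelease (ctx);
    return scaled;
}
```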