
The memory-efficient way of using Core Image on iOS?

I'm using Core Image filters in my app. Everything works fine on my iPhone 5 running iOS 7, but when I test on an iPhone 4S, which has only 512 MB of memory in total, the app crashes.

Here's the situation: I have two images taken from the camera, each with a resolution of 2448x3264. On my iPhone 5, the whole process peaks at about 150 MB according to Instruments.

[Screenshot: Instruments memory usage on iPhone 5]

However, when I run the same code on the iPhone 4S, Instruments gives me low-memory warnings the whole time, even though the reported memory use is quite low (around 8 MB). Here's the screenshot:

[Screenshot: Instruments memory usage on iPhone 4S]

And here's the code. Basically, I load two images from my app's Documents folder and apply two filters in a row:

    CIImage *foreground = [[CIImage alloc] initWithContentsOfURL:foregroundURL];
    CIImage *background = [[CIImage alloc] initWithContentsOfURL:backgroundURL];
    CIFilter *softLightBlendFilter = [CIFilter filterWithName:@"CISoftLightBlendMode"];
    [softLightBlendFilter setDefaults];
    [softLightBlendFilter setValue:foreground forKey:kCIInputImageKey];
    [softLightBlendFilter setValue:background forKey:kCIInputBackgroundImageKey];

    foreground = [softLightBlendFilter outputImage];
    background = nil;
    softLightBlendFilter = nil;

    CIFilter *gammaAdjustFilter = [CIFilter filterWithName:@"CIGammaAdjust"];
    [gammaAdjustFilter setDefaults];
    [gammaAdjustFilter setValue:foreground forKey:kCIInputImageKey];
    [gammaAdjustFilter setValue:[NSNumber numberWithFloat:value] forKey:@"inputPower"];
    foreground = [gammaAdjustFilter valueForKey:kCIOutputImageKey];

    gammaAdjustFilter = nil;

    CIContext *context = [CIContext contextWithOptions:nil];
    CGRect extent = [foreground extent];
    CGImageRef cgImage = [context createCGImage:foreground fromRect:extent];

    UIImage *image = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:imgOrientation];
    CFRelease(cgImage);
    foreground = nil;

    return image;

The app crashed at this line: CGImageRef cgImage = [context createCGImage:foreground fromRect:extent];

Is there any more memory-efficient way of handling this situation, or what am I doing wrong here?

Big thanks!

asked Dec 24 '13 by Void Main


1 Answer

Short version:

While it seems trivial in concept, this is actually a pretty memory-intensive task for the device in question.

Long version:

Consider this: 2 images × 4 bytes per pixel (8 bits per RGBA channel) × 2448 × 3264 ≈ 64 MB. Then Core Image will require another ~32 MB for the output of the filter operation, and getting that from a CIContext into a CGImage is likely going to consume another 32 MB. I would expect the UIImage copy to share the CGImage's memory representation, at least by mapping the image through the VM with copy-on-write, although you may get dinged for the double usage anyway: even though it doesn't consume "real" memory, it still counts against mapped pages.

So at a bare minimum, you're using 128 MB (plus any other memory your app happens to use). That's a considerable amount of RAM for a device like the 4S, which only has 512 MB to begin with. In my experience, that puts you right at the outer edge of what's possible: I would expect it to work at least some of the time, but it doesn't surprise me that you're getting memory warnings and memory-pressure kills. You'll want to make sure that the CIContext and all the input images are deallocated/disposed of as soon as possible after making the CGImage, and before making the UIImage from the CGImage.

In general, this could be made easier by scaling down the image size.
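For instance, here's a minimal, untested sketch of shrinking a source image with the built-in CILanczosScaleTransform filter before blending. It assumes a CIImage variable named foreground as in your code, and the 0.5 scale factor is just an example value:

    CIFilter *scaleFilter = [CIFilter filterWithName:@"CILanczosScaleTransform"];
    [scaleFilter setValue:foreground forKey:kCIInputImageKey];
    [scaleFilter setValue:@0.5 forKey:kCIInputScaleKey];        // halve width and height
    [scaleFilter setValue:@1.0 forKey:kCIInputAspectRatioKey];  // keep the aspect ratio
    foreground = [scaleFilter outputImage];
    // ...do the same for the background, then apply the blend and gamma filters as before.

Because Core Image evaluates its filter graph lazily, scaling at the front of the chain means every later filter and the final render only ever touch a quarter of the original pixels.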

Without testing, and assuming ARC, I present the following as a potential improvement:

- (UIImage*)imageWithForeground: (NSURL*)foregroundURL background: (NSURL*)backgroundURL orientation:(UIImageOrientation)orientation value: (float)value
{
    CIImage* holder = nil;
    @autoreleasepool
    {
        CIImage *foreground = [[CIImage alloc] initWithContentsOfURL:foregroundURL];
        CIImage *background = [[CIImage alloc] initWithContentsOfURL:backgroundURL];
        CIFilter *softLightBlendFilter = [CIFilter filterWithName:@"CISoftLightBlendMode"];
        [softLightBlendFilter setDefaults];
        [softLightBlendFilter setValue:foreground forKey:kCIInputImageKey];
        [softLightBlendFilter setValue:background forKey:kCIInputBackgroundImageKey];

        holder = [softLightBlendFilter outputImage];
        // This is probably the peak usage moment -- I expect both source images as well as the output to be in memory.
    }
    //  At this point, I expect the two source images to be flushed, leaving the one output image
    @autoreleasepool
    {
        CIFilter *gammaAdjustFilter = [CIFilter filterWithName:@"CIGammaAdjust"];
        [gammaAdjustFilter setDefaults];
        [gammaAdjustFilter setValue:holder forKey:kCIInputImageKey];
        [gammaAdjustFilter setValue:[NSNumber numberWithFloat:value] forKey:@"inputPower"];
        holder = [gammaAdjustFilter outputImage];
        // At this point, I expect us to have two images in memory, input and output
    }
    // Here we should be back down to just one image in memory
    CGImageRef cgImage = NULL;

    @autoreleasepool
    {
        CIContext *context = [CIContext contextWithOptions:nil];
        CGRect extent = [holder extent];
        cgImage = [context createCGImage: holder fromRect:extent];
        // One would hope that CG and CI would be sharing memory via VM, but they probably aren't. So we probably have two images in memory at this point too
    }
    // Now I expect all the CIImages to have gone away, and for us to have one image in memory (just the CGImage)
    UIImage *image = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:orientation];
    // I expect UIImage to almost certainly be sharing the image data with the CGImageRef via VM, but even if it's not, we only have two images in memory
    CFRelease(cgImage);
    // Now we should have only one image in memory, the one we're returning.
    return image;
}

As indicated in the comments, the high-water mark is going to be the operation that takes two input images and creates one output image; that will always require three images to be in memory, no matter what. To get the high-water mark down any further, you'd have to process the images in sections/tiles or scale them down to a smaller size.
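If you do go the tiled route, one option is to render the final CIImage in horizontal strips and draw each strip into a single full-size bitmap context. Here's a rough, untested sketch of what that might look like; the helper name, the 512-point strip height, and the bitmap settings are my own choices, not anything from your original code:

    static CGImageRef CreateCGImageInStrips(CIContext *context, CIImage *image)
    {
        CGRect extent = [image extent];
        size_t width  = (size_t)CGRectGetWidth(extent);
        size_t height = (size_t)CGRectGetHeight(extent);

        // One full-size destination bitmap that we fill strip by strip.
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                                    (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);

        const CGFloat stripHeight = 512.0;
        for (CGFloat y = 0; y < height; y += stripHeight)
        {
            @autoreleasepool
            {
                CGFloat thisHeight = MIN(stripHeight, height - y);
                CGRect stripRect = CGRectMake(extent.origin.x, extent.origin.y + y,
                                              width, thisHeight);
                // Render only this strip of the CIImage...
                CGImageRef strip = [context createCGImage:image fromRect:stripRect];
                // ...and composite it into the big bitmap. Core Image and
                // CGBitmapContext both use a bottom-left origin, so the strip
                // can be drawn at the same vertical offset.
                CGContextDrawImage(bitmap, CGRectMake(0, y, width, thisHeight), strip);
                CGImageRelease(strip);
            }
        }

        CGImageRef result = CGBitmapContextCreateImage(bitmap);
        CGContextRelease(bitmap);
        return result; // caller owns this, per the Create rule
    }

Note that the destination bitmap itself is still full size (~32 MB here), so this mainly caps the size of Core Image's intermediate buffers during each render rather than eliminating the big allocation entirely.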

answered Oct 29 '22 by ipmcc