
Converting CIImage to CGImage too slow

I need to convert a CIImage to a CGImage. This is the code I am currently using:

CGImageRef img = [myContext createCGImage:ciImage fromRect:[ciImage extent]];

But this line of code is very slow for images of a regular size. Far too slow for me to use it properly. Is there another, faster method to convert these CIImages to CGImages?

In the end I use Core Graphics to draw the CGImages into a CGContext. If there is a way to draw CIImages directly into a CGContext, I think that should also be faster. Is this possible?
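For reference, this is roughly what my current flow looks like end to end (a minimal sketch; myContext is the CIContext from above, the method name and destContext are just placeholders):

- (void)drawFilteredImage:(CIImage *)ciImage intoContext:(CGContextRef)destContext
{
    // This is the slow call: Core Image evaluates the whole filter chain here
    // and hands back a bitmap-backed CGImage.
    CGImageRef img = [myContext createCGImage:ciImage fromRect:[ciImage extent]];

    // Core Graphics then draws that bitmap into the destination context.
    CGContextDrawImage(destContext, [ciImage extent], img);
    CGImageRelease(img);
}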

thanks

asked Jan 29 '15 by Laurens


2 Answers

From the CIImage documentation:

Although a CIImage object has image data associated with it, it is not an image. You can think of a CIImage object as an image “recipe.” A CIImage object has all the information necessary to produce an image, but Core Image doesn’t actually render an image until it is told to do so. This “lazy evaluation” method allows Core Image to operate as efficiently as possible.

This means that any filtering you may have applied (or rather, requested to apply) to your CIImage won't actually occur until you render the image, which in your case happens when you create the CGImageRef. That's probably the reason you're experiencing the slowdown.
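To make the "recipe" behaviour concrete, here's a small illustration (a sketch only; the filter and variable names are arbitrary and not from your code):

CIImage *input = [CIImage imageWithCGImage:someCGImage];

// Essentially free: this only extends the recipe, no pixels are touched yet.
CIImage *blurred = [input imageByApplyingFilter:@"CIGaussianBlur"
                            withInputParameters:@{kCIInputRadiusKey : @8.0}];

// All of the actual GPU work is deferred until the image is rendered, i.e. here.
CGImageRef rendered = [context createCGImage:blurred fromRect:[input extent]];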

That said, filtering a CIImage is generally a very fast process, so given your image size you should clarify what you mean by slow.

Finally, to make sure the bottleneck is really where you think it is, you should profile your application using Instruments.
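If you just want a quick number before reaching for Instruments, you can also time the render call directly (my own suggestion, not a substitute for a real profile; CACurrentMediaTime comes from QuartzCore):

CFTimeInterval start = CACurrentMediaTime();
CGImageRef img = [myContext createCGImage:ciImage fromRect:[ciImage extent]];
NSLog(@"createCGImage took %.1f ms", (CACurrentMediaTime() - start) * 1000.0);
CGImageRelease(img);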

Edit

I've just tried building a sample project that filters an image stream coming from the device camera (960x540) on an iPad 3 by applying the CIColorMap filter, and I'm getting over 60fps (~16ms per frame).

Depending on your application, you'll get better performance by reusing the CIContext, the color map CIFilter, and the inputGradientImage (if it doesn't change over time), updating only the inputImage on every iteration.

For example, you would call prepareFrameFiltering once and then repeatedly call applyFilterToFrame: on every frame you want to process.

@property (nonatomic, strong) CIContext *context;
@property (nonatomic, strong) CIFilter *colorMapFilter;

- (void)prepareFrameFiltering {
    self.context = [CIContext contextWithOptions:nil];
    CIImage *colorMap = [CIImage imageWithCGImage:[UIImage imageNamed:@"gradient.jpg"].CGImage];
    self.colorMapFilter = [CIFilter filterWithName:@"CIColorMap"];
    [self.colorMapFilter setValue:colorMap forKey:@"inputGradientImage"];
}

- (void)applyFilterToFrame:(CIImage *)ciFrame {    
    // Feed the new frame into the reused filter chain.
    [self.colorMapFilter setValue:ciFrame forKey:@"inputImage"];
    CIImage *ciImageResult = [self.colorMapFilter valueForKey:@"outputImage"];

    // The actual rendering (GPU work + read-back) happens here.
    CGImageRef ref = [self.context createCGImage:ciImageResult fromRect:ciFrame.extent];

    // do whatever you need

    CGImageRelease(ref);
}
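
A possible call site, assuming your frames come from an AVCaptureVideoDataOutput (the capture-session wiring itself isn't shown here and is only a sketch):

// Requires AVFoundation / CoreMedia.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    [self applyFilterToFrame:frame];
}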
answered Nov 11 '22 by Tomas Camin


The fastest way to render a CIImage is to render into an EAGLContext.

Rendering a CIImage into a CGContext is not as performant because Core Image works on the GPU while Core Graphics is a CPU renderer: Core Image must read its results back from the GPU to the CPU before CG can draw them.

Note: This read-back will happen when createCGImage: is called if the image is smaller than 4K. It will happen when CGContextDrawImage is called if the image is larger.
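
A minimal sketch of what that GPU-only path can look like with a GLKViewController (class and property names here are illustrative, not an exact recipe):

@import GLKit;
@import CoreImage;

@interface PreviewViewController : GLKViewController
@property (nonatomic, strong) CIContext *ciContext;
@property (nonatomic, strong) CIImage *currentFrame; // updated elsewhere, e.g. per camera frame
@end

@implementation PreviewViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    ((GLKView *)self.view).context = eaglContext;
    // A CIContext bound to the GL context keeps rendering on the GPU.
    self.ciContext = [CIContext contextWithEAGLContext:eaglContext];
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    if (!self.currentFrame) return;
    // No CPU read-back: Core Image draws straight into the GL drawable.
    [self.ciContext drawImage:self.currentFrame
                       inRect:CGRectMake(0, 0, view.drawableWidth, view.drawableHeight)
                     fromRect:[self.currentFrame extent]];
}

@end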

answered Nov 11 '22 by David Hayward