I'm creating an app that requires real-time application of filters to images. Converting the UIImage to a CIImage and applying the filters are both extremely fast operations, but it takes too long to convert the resulting CIImage back to a CGImageRef and display the image (about 1/5 of a second, which is a lot when editing needs to be real-time).
The image is about 2500 × 2500 pixels, which is most likely part of the problem.
Currently, I'm using:
let image: CIImage //CIImage with applied filters
let eagl = EAGLContext(API: EAGLRenderingAPI.OpenGLES2)
let context = CIContext(EAGLContext: eagl, options: [kCIContextWorkingColorSpace : NSNull()])
//this line takes too long for real-time processing
let cg: CGImage = context.createCGImage(image, fromRect: image.extent)
I've looked into using CIContext.drawImage():
context.drawImage(image, inRect: destinationRect, fromRect: image.extent)
Yet I can't find any solid documentation on exactly how this is done, or whether it would be any faster.
Is there any faster way to display a CIImage to the screen (either in a UIImageView, or directly on a CALayer)? I would like to avoid decreasing the image quality too much, because that may be noticeable to the user.
It may be worth considering Metal and displaying with an MTKView.
You'll need a Metal device, which can be created with MTLCreateSystemDefaultDevice(). That's used to create a command queue and a Core Image context. Both of these objects are persistent and quite expensive to instantiate, so they should ideally be created once:
lazy var commandQueue: MTLCommandQueue =
{
    return self.device!.newCommandQueue()
}()

lazy var ciContext: CIContext =
{
    return CIContext(MTLDevice: self.device!)
}()
You'll also need a color space:
let colorSpace = CGColorSpaceCreateDeviceRGB()!
When it comes to rendering a CIImage, you'll need to create a short-lived command buffer:
let commandBuffer = commandQueue.commandBuffer()
You'll want to render your CIImage (let's call it image) to the currentDrawable?.texture of an MTKView. If that's bound to targetTexture, the rendering syntax is:
ciContext.render(image,
    toMTLTexture: targetTexture,
    commandBuffer: commandBuffer,
    bounds: image.extent,
    colorSpace: colorSpace)

commandBuffer.presentDrawable(currentDrawable!)
commandBuffer.commit()
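Putting those pieces together, here's a minimal sketch of how they might sit inside an MTKView subclass (the class name MetalImageView is just a placeholder of mine; framebufferOnly is set to false so Core Image can render into the drawable's texture, and paused/enableSetNeedsDisplay make the view draw on demand rather than at a fixed frame rate):

import UIKit
import MetalKit
import CoreImage

class MetalImageView: MTKView {
    //setting a new image triggers a redraw
    var image: CIImage? {
        didSet { setNeedsDisplay() }
    }

    lazy var commandQueue: MTLCommandQueue = {
        return self.device!.newCommandQueue()
    }()

    lazy var ciContext: CIContext = {
        return CIContext(MTLDevice: self.device!)
    }()

    let colorSpace = CGColorSpaceCreateDeviceRGB()!

    override init(frame frameRect: CGRect, device: MTLDevice?) {
        super.init(frame: frameRect, device: device ?? MTLCreateSystemDefaultDevice())
        //Core Image writes straight into the drawable's texture
        framebufferOnly = false
        //only redraw when setNeedsDisplay() is called
        paused = true
        enableSetNeedsDisplay = true
    }

    required init(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func drawRect(rect: CGRect) {
        guard let image = image,
            targetTexture = currentDrawable?.texture else { return }

        let commandBuffer = commandQueue.commandBuffer()

        ciContext.render(image,
            toMTLTexture: targetTexture,
            commandBuffer: commandBuffer,
            bounds: image.extent,
            colorSpace: colorSpace)

        commandBuffer.presentDrawable(currentDrawable!)
        commandBuffer.commit()
    }
}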
I have a working version here.
Hope that helps!
Simon
I ended up using the context.drawImage(image, inRect: destinationRect, fromRect: image.extent) method. Here's the image view class that I created:
import Foundation
//GLKit must be linked and imported
import GLKit

class CIImageView: GLKView {
    var image: CIImage?
    var ciContext: CIContext?

    //initialize with the frame, and CIImage to be displayed
    //(or nil, if the image will be set using .setRenderImage)
    init(frame: CGRect, image: CIImage?) {
        super.init(frame: frame, context: EAGLContext(API: EAGLRenderingAPI.OpenGLES2))
        self.image = image
        //Set the current context to the EAGLContext created in the super.init call
        EAGLContext.setCurrentContext(self.context)
        //create a CIContext from the EAGLContext
        self.ciContext = CIContext(EAGLContext: self.context)
    }

    //for usage in Storyboards
    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        self.context = EAGLContext(API: EAGLRenderingAPI.OpenGLES2)
        EAGLContext.setCurrentContext(self.context)
        self.ciContext = CIContext(EAGLContext: self.context)
    }

    //set the current image to image
    func setRenderImage(image: CIImage) {
        self.image = image
        //tell the processor that the view needs to be redrawn using drawRect()
        self.setNeedsDisplay()
    }

    //called automatically when the view is drawn
    override func drawRect(rect: CGRect) {
        //unwrap the current CIImage
        if let image = self.image {
            //multiply the frame by the screen's scale (ratio of points : pixels),
            //because the following .drawImage() call uses pixels, not points
            let scale = UIScreen.mainScreen().scale
            let newFrame = CGRectMake(rect.minX, rect.minY, rect.width * scale, rect.height * scale)
            //draw the image
            self.ciContext?.drawImage(
                image,
                inRect: newFrame,
                fromRect: image.extent
            )
        }
    }
}
Then, to use it, simply:
let myFrame: CGRect //frame in self.view where the image should be displayed
let myImage: CIImage //CIImage with applied filters
let imageView: CIImageView = CIImageView(frame: myFrame, image: myImage)
self.view.addSubview(imageView)
Resizing the UIImage to the screen size before converting it to a CIImage also helps; it speeds things up a lot for high-quality images. Just make sure to use the full-size image when actually saving it.
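A rough sketch of that resizing step (the helper name resizeForPreview and the variable fullSizeImage are my own; the target size here is simply the screen bounds):

//downscale a UIImage so the live-preview pipeline works on fewer pixels
func resizeForPreview(image: UIImage, maxSize: CGSize) -> UIImage {
    //scale down (never up) so the image fits within maxSize
    let scale = min(maxSize.width / image.size.width, maxSize.height / image.size.height, 1)
    let newSize = CGSize(width: image.size.width * scale, height: image.size.height * scale)
    UIGraphicsBeginImageContextWithOptions(newSize, false, 0)
    image.drawInRect(CGRect(origin: CGPointZero, size: newSize))
    let resized = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return resized
}

let fullSizeImage: UIImage //the full-resolution photo
//use the downscaled copy for live filtering...
let previewImage = CIImage(image: resizeForPreview(fullSizeImage, maxSize: UIScreen.mainScreen().bounds.size))!
//...but re-apply the filters to fullSizeImage when saving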
That's it! Then, to update the image in the view:
imageView.setRenderImage(newCIImage)
//note that imageView.image = newCIImage won't work because
//the view won't be redrawn
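For example, in a live-editing flow you might re-apply the filter and push the result to the view whenever a control changes. A sketch, assuming a view controller with a slider and a CISepiaTone filter (all of these names are illustrative):

//properties on the view controller
let filter = CIFilter(name: "CISepiaTone")!
var inputImage: CIImage!    //the (downscaled) CIImage, set when the photo is loaded
var imageView: CIImageView! //the CIImageView added to the view hierarchy above

@IBAction func sliderChanged(sender: UISlider) {
    filter.setValue(inputImage, forKey: kCIInputImageKey)
    filter.setValue(sender.value, forKey: kCIInputIntensityKey)
    if let output = filter.outputImage {
        imageView.setRenderImage(output)
    }
}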