 

Most efficient way to draw part of an image in iOS

Given a UIImage and a CGRect, what is the most efficient way (in memory and time) to draw the part of the image corresponding to the CGRect, without scaling?

For reference, this is how I currently do it:

    - (void)drawRect:(CGRect)rect {
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGRect frameRect = CGRectMake(frameOrigin.x + rect.origin.x,
                                      frameOrigin.y + rect.origin.y,
                                      rect.size.width,
                                      rect.size.height);
        CGImageRef imageRef = CGImageCreateWithImageInRect(image_.CGImage, frameRect);
        CGContextTranslateCTM(context, 0, rect.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextDrawImage(context, rect, imageRef);
        CGImageRelease(imageRef);
    }

Unfortunately this seems extremely slow with medium-sized images and a high setNeedsDisplay frequency. Playing with UIImageView's frame and clipsToBounds produces better results (with less flexibility).

asked Nov 07 '11 by hpique


2 Answers

I guess you are doing this to display part of an image on screen, since you mentioned UIImageView. Optimization problems always need to be defined specifically.


Trust Apple for Regular UI stuff

Actually, UIImageView with clipsToBounds is one of the fastest and simplest ways to achieve your goal if your goal is just clipping a rectangular region of an image (not too big). Also, you don't need to send the setNeedsDisplay message.

Or you can try putting the UIImageView inside an empty UIView and setting clipping on the container view. With this technique you can transform the image freely in 2D (scaling, rotation, translation) by setting its transform property.
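A minimal sketch of that container-view approach (the variable names and crop rect here are illustrative, not from the question):

```objc
// Container view clips; the image view inside is just offset so the
// desired sub-rect of the image lands within the container's bounds.
UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0, 0,
                        cropRect.size.width, cropRect.size.height)];
container.clipsToBounds = YES;  // everything outside the bounds is clipped

UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
imageView.frame = CGRectMake(-cropRect.origin.x, -cropRect.origin.y,
                             image.size.width, image.size.height);
[container addSubview:imageView];

// The image can still be transformed freely in 2D, e.g. rotated:
imageView.transform = CGAffineTransformMakeRotation(M_PI_4);
```

To show a different part of the image later, just move the image view's frame; no new image is created and no drawRect: is involved.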

If you need 3D transformation, you can still use CALayer with the masksToBounds property, but using CALayer directly usually gives only a small, negligible performance gain.

Anyway, you need to know all of the low-level details to use them properly for optimization.


Why is that one of the fastest ways?

UIView is just a thin layer on top of CALayer, which is implemented on top of OpenGL, which is virtually a direct interface to the GPU. This means UIKit is accelerated by the GPU.

So if you use them properly (that is, within their designed limitations), they will perform as well as a plain OpenGL implementation. If you use just a few images, you'll get acceptable performance with a UIView implementation because it gets full acceleration from the underlying OpenGL (which means GPU acceleration).

However, if you need extreme optimization for hundreds of animated sprites with finely tuned pixel shaders, as in a game app, you should use OpenGL directly, because CALayer lacks many options for optimization at lower levels. For plain UI work, though, it's incredibly hard to beat Apple.


Why is your method slower than UIImageView?

What you need to understand is GPU acceleration. On all recent computers, fast graphics performance is achieved only with the GPU. The question, then, is whether the method you're using is implemented on top of the GPU or not.

IMO, CGImage drawing methods are not implemented with the GPU. I think I read a mention of this in Apple's documentation, but I can't remember where, so I'm not certain. In any case, I believe CGImage is implemented on the CPU because:

  1. Its API looks as if it was designed for the CPU, with features such as a bitmap editing interface and text drawing. These don't map well onto a GPU interface.
  2. The bitmap context interface allows direct memory access. That means its backing storage is located in CPU memory. This may be somewhat different on unified memory architectures (and with the Metal API), but in any case the initial design intention of CGImage must have been for the CPU.
  3. Many other recently released Apple APIs mention GPU acceleration explicitly, which implies their older APIs were not accelerated. When there's no special mention, it's usually done on the CPU by default.

So it appears to be done on the CPU, and graphics operations performed on the CPU are a lot slower than on the GPU.

Simply clipping an image and compositing image layers are very simple and cheap operations for a GPU (compared to a CPU), so you can expect the UIKit library to take advantage of this, because the whole of UIKit is implemented on top of OpenGL.

  • Here's another thread about whether the CoreGraphics on iOS is using OpenGL or not: iOS: is Core Graphics implemented on top of OpenGL?

About Limitations

Because optimization is a kind of micro-management, specific numbers and small facts matter a lot. What is "medium size"? OpenGL on iOS usually limits the maximum texture size to 1024x1024 pixels (possibly larger in recent releases). If your image is larger than this, it will not work, or performance will degrade greatly (I think UIImageView is optimized for images within these limits).

If you need to display huge images with clipping, you have to use another optimization such as CATiledLayer, and that's a totally different story.
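For reference, hooking a view up to CATiledLayer is mostly a matter of overriding +layerClass; the sketch below shows the skeleton (the class name is illustrative, and details like tile size tuning are omitted):

```objc
// Sketch: a UIView subclass backed by CATiledLayer, for huge images.
@interface TiledImageView : UIView
@end

@implementation TiledImageView

+ (Class)layerClass {
    // UIKit backs this view with a tiled layer instead of a plain CALayer.
    return [CATiledLayer class];
}

- (void)drawRect:(CGRect)rect {
    // With a tiled layer, drawRect: is called once per visible tile,
    // potentially on background threads; draw only the requested rect here.
}

@end
```

The key difference from a normal view is that only the tiles that are actually on screen get drawn, so memory use stays bounded regardless of the image size.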

And don't go with OpenGL unless you want to understand every detail of OpenGL. It requires a full understanding of low-level graphics and at least 100 times more code.


About the Future

Though it's not very likely to happen, CGImage stuff (or anything else) doesn't have to stay stuck on the CPU forever. Don't forget to check the base technology of the API you're using. GPU stuff is a very different monster from the CPU, so API writers usually mention it explicitly and clearly.

answered Sep 30 '22 by eonil


It would ultimately be faster, with a lot less image creation from sprite atlases, if you could set not only the image of a UIImageView but also the top-left offset to display within that UIImage. Maybe this is possible.
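This is in fact possible at the layer level: CALayer's contentsRect property selects a sub-rectangle of the layer's contents in normalized (0-1) coordinates, without creating a new image. A sketch, with illustrative sprite-atlas names:

```objc
// Display only a sub-rect of an image by setting the layer's contentsRect.
// contentsRect is in unit coordinates relative to the full image,
// so divide the pixel rect by the atlas dimensions.
UIImageView *spriteView = [[UIImageView alloc] initWithImage:atlasImage];
CGSize atlasSize = atlasImage.size;
spriteView.layer.contentsRect =
    CGRectMake(spriteRect.origin.x / atlasSize.width,
               spriteRect.origin.y / atlasSize.height,
               spriteRect.size.width / atlasSize.width,
               spriteRect.size.height / atlasSize.height);
```

Changing contentsRect later swaps the visible sprite without any drawing or image allocation.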

Meanwhile, I created these useful functions in a utility class that I use in my apps. They create a UIImage from part of another UIImage, with options to rotate, scale, and flip, specified with the standard UIImageOrientation values.

My app creates a lot of UIImages during initialization, and this necessarily takes time. But some images aren't needed until a certain tab is selected. To give the appearance of a quicker load, I can create them in a separate thread spawned at startup and just wait until it's done if that tab is selected.

    + (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture {
        return [ChordCalcController imageByCropping:imageToCrop toRect:aperture withOrientation:UIImageOrientationUp];
    }

    // Draw a full image into a crop-sized area and offset to produce a cropped, rotated image
    + (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture withOrientation:(UIImageOrientation)orientation {

        // convert y coordinate to origin bottom-left
        CGFloat orgY = aperture.origin.y + aperture.size.height - imageToCrop.size.height,
                orgX = -aperture.origin.x,
                scaleX = 1.0,
                scaleY = 1.0,
                rot = 0.0;
        CGSize size;

        switch (orientation) {
            case UIImageOrientationRight:
            case UIImageOrientationRightMirrored:
            case UIImageOrientationLeft:
            case UIImageOrientationLeftMirrored:
                size = CGSizeMake(aperture.size.height, aperture.size.width);
                break;
            case UIImageOrientationDown:
            case UIImageOrientationDownMirrored:
            case UIImageOrientationUp:
            case UIImageOrientationUpMirrored:
                size = aperture.size;
                break;
            default:
                assert(NO);
                return nil;
        }

        switch (orientation) {
            case UIImageOrientationRight:
                rot = 1.0 * M_PI / 2.0;
                orgY -= aperture.size.height;
                break;
            case UIImageOrientationRightMirrored:
                rot = 1.0 * M_PI / 2.0;
                scaleY = -1.0;
                break;
            case UIImageOrientationDown:
                scaleX = scaleY = -1.0;
                orgX -= aperture.size.width;
                orgY -= aperture.size.height;
                break;
            case UIImageOrientationDownMirrored:
                orgY -= aperture.size.height;
                scaleY = -1.0;
                break;
            case UIImageOrientationLeft:
                rot = 3.0 * M_PI / 2.0;
                orgX -= aperture.size.height;
                break;
            case UIImageOrientationLeftMirrored:
                rot = 3.0 * M_PI / 2.0;
                orgY -= aperture.size.height;
                orgX -= aperture.size.width;
                scaleY = -1.0;
                break;
            case UIImageOrientationUp:
                break;
            case UIImageOrientationUpMirrored:
                orgX -= aperture.size.width;
                scaleX = -1.0;
                break;
        }

        // set the draw rect to pan the image to the right spot
        CGRect drawRect = CGRectMake(orgX, orgY, imageToCrop.size.width, imageToCrop.size.height);

        // create a context for the new image
        UIGraphicsBeginImageContextWithOptions(size, NO, imageToCrop.scale);
        CGContextRef gc = UIGraphicsGetCurrentContext();

        // apply rotation and scaling
        CGContextRotateCTM(gc, rot);
        CGContextScaleCTM(gc, scaleX, scaleY);

        // draw the image to our clipped context using the offset rect
        CGContextDrawImage(gc, drawRect, imageToCrop.CGImage);

        // pull the image from our cropped context
        UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();

        // pop the context to get back to the default
        UIGraphicsEndImageContext();

        // Note: this is autoreleased
        return cropped;
    }
answered Sep 30 '22 by Scott Lahteine