I have always used Core Graphics and Core Animation, and I understand how each of them works on its own, but not the edge cases where one has to talk to the other. I also understand that UIViews are a nice wrapper around CALayer, where the CALayer does all the heavy lifting of rendering and the UIView adds the touch-based responsiveness.
But all the questions I have seen so far attack the problem from one side or the other, not the interplay between them, especially between Core Graphics and CALayer.
Anyway, my question is ...
My understanding is that a CALayer wraps the Core Graphics methods to draw itself, but does so once and can live with that snapshot of itself until it is invalidated. But how do these drawing methods interplay with the sublayers of that layer? Are they exclusive?
For example, what happens when I have a UIView that has subviews and I override the drawRect: method? How does that affect the drawing of its sublayers?
Is it even a good idea to intermix the two inside the same function?
Also, I'm asking only about iOS; I understand that the Mac is a different beast (and also has those fancy CIFilters, bastards!).
Here are some related excerpts I researched beforehand:
Working with UIViews happens on the main thread, which means it is using CPU power. CALayer: layers, on the other hand, have a simpler hierarchy. That means they are faster to resolve and quicker to draw on the screen. There is no responder-chain overhead, unlike with views.
Core Animation is supposed to be the graphics system of the framework, but there is also Core Graphics. Core Graphics is entirely done on the CPU, and cannot be performed on the GPU.
An object that manages image-based content and allows you to perform animations on that content.
Overview. Core Animation provides high frame rates and smooth animations without burdening the CPU and slowing down your app. Most of the work required to draw each frame of an animation is done for you.
I'll try to answer your question at a conceptual, 20,000ft level. I will try to disclaim my points where I'm over-generalizing, but I'll attempt to hit the common case.
Perhaps the easiest way to think about it is this: in the GPU's memory you have textures which, for the purposes of this discussion, are bitmap images. A CALayer might have a texture associated with it, or it might not. These cases correspond, respectively, to a layer with a -drawRect: method and a layer that exists solely to contain sublayers. Conceptually, each layer that has a texture associated with it has a different texture all its own (there are some details and optimizations that make this not strictly/universally true, but in the general, abstract case it can help to think of it this way). With that in mind, a superlayer's -drawRect: method has no effect on any of its sublayers' -drawRect: methods, and (again, in the general case) a sublayer's -drawRect: method has no effect on its superlayer's -drawRect: method. Each draws into its own texture (also called a "backing store"), and then, based on the layer tree and the associated geometries and transforms, the GPU composites all of these textures together into what you see on the screen. When one of the layers is invalidated, directly or indirectly (via -setNeedsDisplayInRect:), then when Core Animation goes to display the next frame on screen, the invalid layers are redrawn by virtue of having their -drawRect: methods called. That updates the associated texture, and once all the invalid layers' textures are updated, the GPU composites them, generating the final bitmap that you see on-screen.
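Here is a minimal Swift sketch of the two layer kinds described above. The class name CircleLayer is hypothetical, and I'm using CALayer's draw(in:) override (the Swift counterpart of the -drawInContext:/-drawRect:-style drawing the answer refers to) as the hook that gives a layer its own backing store:

```swift
import UIKit

// A layer whose content comes from Core Graphics drawing: overriding
// draw(in:) gives this layer its own backing store (texture).
class CircleLayer: CALayer {
    override func draw(in ctx: CGContext) {
        ctx.setFillColor(UIColor.systemRed.cgColor)
        ctx.fillEllipse(in: bounds.insetBy(dx: 4, dy: 4))
    }
}

// A container layer that never draws itself; it exists only to hold and
// transform sublayers, so it needs no texture of its own.
let container = CALayer()
container.frame = CGRect(x: 0, y: 0, width: 200, height: 200)

let circle = CircleLayer()
circle.frame = CGRect(x: 20, y: 20, width: 100, height: 100)
circle.setNeedsDisplay()   // ask Core Animation to call draw(in:) and fill the texture
container.addSublayer(circle)

// Later: invalidating only the circle redraws only its texture; the container
// and any sibling layers keep their backing stores and are simply re-composited.
circle.setNeedsDisplay()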
So to answer your first question: in the general case, no, there is no interplay between the -drawRect: methods of distinct CALayers.
As to your second question: For the purposes of this discussion you can think of UIViews as being the same as CALayers. The interrelationship with respect to drawing and textures is largely unchanged from that of non-UIView CALayers.
To your third question: Is it a good idea to intermix UIViews and CALayers? Every UIView has a CALayer backing it (all views in UIKit are layer-backed, which is not normally the case on OS X.) So at some level, they're "intermixed" whether you want them to be or not. It is perfectly fine to add CALayer sublayers to the layer that backs a UIView, although that layer will not have all the added functionality that UIView brings to the party. If that layer's purpose is just to generate an image for display, then that's fine. If you want the sub-layer/view to be a first-class participant in touch handling, or to be positioned/sized using Auto Layout, then it will need to be a UIView. It's probably best to think of a UIView as a CALayer with a bunch of extra functionality added to it. If you need that functionality, use a UIView. If you don't, use a CALayer. A sketch of such a mix follows below.
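As a concrete (hypothetical) example of that mix, assuming a view class BadgeView and an asset named "badge" that are not from the original question:

```swift
import UIKit

// A view that draws its own content in draw(_:) (backing its own layer's
// texture) and also hosts a plain CALayer sublayer whose content is an image.
class BadgeView: UIView {
    private let badgeLayer = CALayer()

    override init(frame: CGRect) {
        super.init(frame: frame)
        // The sublayer only displays content; it takes no part in touch
        // handling or Auto Layout, exactly as noted above.
        badgeLayer.contents = UIImage(named: "badge")?.cgImage
        badgeLayer.frame = CGRect(x: 8, y: 8, width: 24, height: 24)
        layer.addSublayer(badgeLayer)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // The view's own Core Graphics drawing; it does not affect the sublayer.
    override func draw(_ rect: CGRect) {
        UIColor.systemBlue.setFill()
        UIBezierPath(roundedRect: bounds, cornerRadius: 12).fill()
    }
}
```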
In sum: CoreGraphics is an API for drawing 2D graphics. One common use of CG (but not the only one) is to draw bitmap images. CoreAnimation is an API providing an organized framework for manipulating bitmaps on-screen. You could meaningfully use CoreAnimation without ever calling a CoreGraphics drawing primitive, for example, if all your textures were backed by images that were compiled into your application at build time.
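For instance, this sketch uses Core Animation with no Core Graphics drawing at all; "logo" is a placeholder name for an image asset assumed to be compiled into the app:

```swift
import UIKit

// The layer's texture comes straight from a bundled image, and Core Animation
// moves it around without any -drawRect:-style code.
let logoLayer = CALayer()
logoLayer.contents = UIImage(named: "logo")?.cgImage
logoLayer.frame = CGRect(x: 0, y: 0, width: 120, height: 120)

let slide = CABasicAnimation(keyPath: "position.x")
slide.fromValue = 60
slide.toValue = 260
slide.duration = 1.0
logoLayer.add(slide, forKey: "slide")
```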
Hopefully this is helpful. Leave comments if you need clarification, and I'll edit to oblige.