From the docs:
"Layers are lightweight objects (CALayer) that, though similar to views, are actually model objects assigned to views."
To me, "lightweight" rules out any heavy bitmap for the content. I believed a CALayer is the "real" thing, while a UIView is just a wrapper around it. Every view has three CALayers in different trees (model, presentation, render). So are there not three bitmaps, but only one?
The way I understand it, a CALayer is a representation of a Quartz drawing surface. You can't really think of it in terms of bitmaps, but rather of a container that encapsulates the current state of a drawing context, including its contents, transformation, shadows, and so on. It is, basically, as close as you can get to the GPU while remaining within Cocoa, but it's not the same as representing a bitmap—rather, it represents all the information necessary to reproduce its contents as you instruct it to. So, for example, if you draw a line on it, the layer internally could simply pass the coordinates of the line to the GPU and let the latter draw it, without having to worry about the pixels needed to render it.
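As a concrete illustration of that last point (this is my own sketch, not from the question; CAShapeLayer and the someView container are illustrative names), you can hand a layer the geometry of a line rather than pixels, and let the render server stroke it:

```objc
#import <QuartzCore/QuartzCore.h>

// The layer stores the path itself; the render server strokes it when the
// layer is composited, rather than being handed a pre-drawn bitmap.
CAShapeLayer *lineLayer = [CAShapeLayer layer];
UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:CGPointMake(10.0f, 10.0f)];
[path addLineToPoint:CGPointMake(100.0f, 100.0f)];
lineLayer.path = path.CGPath;
lineLayer.strokeColor = [UIColor blackColor].CGColor;
lineLayer.fillColor = nil;          // a bare line has nothing to fill
lineLayer.lineWidth = 2.0f;
[someView.layer addSublayer:lineLayer];   // someView is a placeholder view
```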
Compared to a UIView, a layer is "lightweight" in the sense that it concerns itself exclusively with display: it doesn't deal with responding to events, handling touches, and so on.
The reason for having both a model and a presentation layer is that the latter represents the current state of the layer, taking any in-flight animations into account. That's why, for example, the documentation recommends that you do hit testing against the presentation layer.
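A minimal sketch of that recommendation (movingView stands in for whatever view is being animated; the name is mine, not from the docs):

```objc
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // The point is in self.view's coordinates, which is also the coordinate
    // space of movingView.layer's superlayer, as -hitTest: expects.
    CGPoint point = [[touches anyObject] locationInView:self.view];

    // Test against the presentation layer so in-flight animations are taken
    // into account, instead of the layer's model (destination) values.
    if ([[self.movingView.layer presentationLayer] hitTest:point] != nil) {
        // The touch landed on the layer where it is currently drawn.
    }
}
```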
The term "lightweight" in reference to a CALayer comes from that piece of documentation originating on the Mac. As Joe points out, an NSView is a fairly complex UI element when compared to the iPhone's UIView. You can animate dozens of UIViews around the screen on even a resource-constrained mobile device, but NSViews put a lot more strain on the system as you start adding many of them to the screen. This is one of the things gained by the fresh start of UIKit over AppKit, because UIKit has had Core Animation from the beginning, and Apple had a chance to learn from what worked and what didn't in AppKit.
In comparison, a CALayer adds very little to the underlying GPU-based bitmapped rectangular texture it draws into, so it doesn't add much overhead. On the iPhone this isn't very different from a UIView, because there a UIView is just a lightweight wrapper around a CALayer.
I'm going to disagree with Count Chocula on this, and say that a CALayer does appear to wrap a bitmapped texture on the GPU. Yes, you can specify custom Quartz drawing to make up the layer's content, but that drawing only takes place when necessary. Once the content in a layer is drawn, it does not need to be redrawn for the layer to be moved or otherwise animated around. If you apply a transform to a layer, you'll see it get pixelated as you zoom in, a sign that it is not dealing with vector graphics.
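A small sketch of what I mean (the LineLayer subclass and containerView are just illustrative names): the Quartz drawing below runs once when the layer is asked to display, producing cached contents; the transform applied afterwards only scales that cached bitmap, which is why it pixelates.

```objc
@interface LineLayer : CALayer
@end

@implementation LineLayer
// Called only when the layer's contents are invalidated, not on every frame.
- (void)drawInContext:(CGContextRef)ctx {
    CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
    CGContextSetLineWidth(ctx, 2.0f);
    CGContextMoveToPoint(ctx, 0.0f, 0.0f);
    CGContextAddLineToPoint(ctx, CGRectGetMaxX(self.bounds), CGRectGetMaxY(self.bounds));
    CGContextStrokePath(ctx);
}
@end

// Elsewhere:
LineLayer *layer = [LineLayer layer];
layer.frame = CGRectMake(0.0f, 0.0f, 100.0f, 100.0f);
[containerView.layer addSublayer:layer];
[layer setNeedsDisplay];   // triggers -drawInContext: once

// The cached bitmap is scaled on the GPU; -drawInContext: is not called again,
// so the enlarged line looks pixelated.
layer.transform = CATransform3DMakeScale(4.0f, 4.0f, 1.0f);
```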
Additionally, with the Core Plot framework (and in my own applications), we had to override the normal drawing process of CALayers because the normal -renderInContext: approach did not work well for PDFs. If you use this to render a layer and its sublayers into a PDF, you'll find that the layers are represented by raster bitmaps in the final PDF, not the vector elements they should be. Only by using a different rendering path were we able to get the right output for our PDFs.
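For illustration, this is roughly the stock path I'm describing (someView is a placeholder; the actual Core Plot code differs):

```objc
// Render a view's layer tree into a PDF context via -renderInContext:.
// The layers come out as raster bitmaps in the resulting PDF, not vectors.
NSMutableData *pdfData = [NSMutableData data];
UIGraphicsBeginPDFContextToData(pdfData, someView.bounds, nil);
UIGraphicsBeginPDFPage();
[someView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIGraphicsEndPDFContext();
```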
I've yet to play with the new shouldRasterize and rasterizationScale properties in iOS 3.2 to see if they change the way this is handled.
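For reference, using them looks something like this (a hedged sketch; I haven't verified how it interacts with PDF output):

```objc
// Ask Core Animation to cache the layer's composited output as a bitmap,
// sized by rasterizationScale relative to the layer's coordinate space.
layer.shouldRasterize = YES;
layer.rasterizationScale = [UIScreen mainScreen].scale;
```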
In fact, you'll find that CALayers (and UIViews with their backing layers) do consume a lot of memory once you take their bitmapped contents into account. What "lightweight" measures is how much they add on top of those contents, which is very little. You might not see the memory usage in an instrument like Object Allocations, but watch Memory Monitor as you add large layers to your application and you'll see memory spikes in either your application or SpringBoard (which owns the Core Animation server).
When it comes to the presentation layer vs. the model one, the bitmap is not duplicated between them. There should only be the one bitmapped texture being displayed to the screen at a given moment. The different layers merely track the properties and animations at any given moment, so very little information is stored in each.