My problem is this: I have a simple block image that I want to tile into a grid. I have created a two-dimensional array of CGPoints (stored as strings) in my UIView. Then I used
blackImage = [UIImage imageNamed:@"black.png"];
and then
- (void)drawRect:(CGRect)rect {
    for (int j = 0; j <= 11; j++) {
        for (int i = 0; i <= 7; i++) {
            [blackImage drawAtPoint:CGPointFromString([[self.points objectAtIndex:j] objectAtIndex:i])];
        }
    }
    [whiteImage drawAtPoint:CGPointMake((CGFloat)(floor((touchX + 0.001) / 40) * 40),
                                        (CGFloat)(floor((touchY + 0.001) / 40) * 40))];
    // [whiteImage drawAtPoint:CGPointMake(240.0, 320.0)];
}
where touchX and touchY are the coordinates at which the user touched the screen. So I'm basically displaying a different image at the grid cell where the user touched.
First of all, the problem is that the whole screen is redrawn every time drawRect: is called. I want to be able to save the state (by changing an array of BOOLs?) and let the user drag across the screen to change multiple images. However, I'm unable to use the drawAtPoint: method on the UIImage outside of drawRect: (it throws an error).
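One way to save the state is to keep the toggles in a plain C array of BOOLs, flip the touched cell in touchesMoved:, and invalidate only that cell; drawRect: then reads the array rather than drawing from the touch handler. A sketch, where the gridState ivar and the kCell/kRows/kCols constants are illustrative, not from the original code:

```objc
// Assumed: an 8x12 grid of 40-point cells, and a view ivar
// BOOL gridState[kRows][kCols] holding which cells are "painted".
static const int kCols = 8, kRows = 12;
static const CGFloat kCell = 40.0;

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    int col = (int)floorf(p.x / kCell);
    int row = (int)floorf(p.y / kCell);
    if (col < 0 || col >= kCols || row < 0 || row >= kRows) return;
    if (!gridState[row][col]) {
        gridState[row][col] = YES;  // remember the toggle for later redraws
        // Invalidate only the one cell that changed, not the whole view.
        [self setNeedsDisplayInRect:CGRectMake(col * kCell, row * kCell,
                                               kCell, kCell)];
    }
}
```

Because the state lives in the array, dragging can toggle many cells while each drawRect: pass only repaints the freshly dirtied ones.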
Anyhow, I started looking into the Core Animation and Quartz docs, and got really confused. I'm not sure if I should be creating a separate UIView for each brick (seems excessive), or a grid of CALayers or CGLayers... I think what I want to do is pretty simple... but I don't really understand layers, or the difference between a CALayer and a CGLayer. Apparently "all UIViews are layer-backed"; what does that mean?
I'm kind of lost on how to implement this. I basically want to swap images in the grid so that the user can essentially 'paint' across the screen and toggle the images.
I've tried implementing this using setNeedsDisplayInRect:, passing in the CGRect under the user's finger. However, this still calls drawRect: from touchesMoved:, which does give me the effect I want but is way too slow on the iPhone. Is there a more efficient way to accomplish the same effect using CALayers? I'm new to Core Animation and don't really understand how to use a layer to solve this problem.
I think in your case UIKit should handle this fine and give good performance. The issue looks to be that you are forcing the whole screen to redraw in response to every drawRect: message. The rect that is passed in specifies the "dirty area" that you need to redraw; any other drawing is just wasted time. Similarly, the rect you pass to your view's setNeedsDisplayInRect: is the rect you specifically want to invalidate. Calculate a rect around the touch and invalidate that. UIKit will (potentially) union multiple dirty rects and send a single drawRect: message.
I would suggest you perform the following optimisations:
1. Invalidate only the rect around the touched cell, via setNeedsDisplayInRect:.
2. Inspect the rect passed to drawRect: (in the debugger) and, if it is less than the full screen, write code to make sure you only repaint images within the dirty rect.
3. If you need to display bitmaps and 1 & 2 don't get the performance you need, you will have to resort to lower-level technology. There are lots of examples around, including this question on SO.
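Repainting only within the dirty rect can be done by intersecting each cell with the rect that drawRect: is handed. A sketch, assuming 40-point cells and a hypothetical gridState BOOL array holding the painted state (neither is from the answer itself):

```objc
- (void)drawRect:(CGRect)rect {
    // `rect` is the dirty area; skip every cell that doesn't touch it.
    for (int row = 0; row < kRows; row++) {
        for (int col = 0; col < kCols; col++) {
            CGRect cell = CGRectMake(col * kCell, row * kCell, kCell, kCell);
            if (!CGRectIntersectsRect(cell, rect))
                continue;  // cell is clean, no need to repaint it
            UIImage *img = gridState[row][col] ? whiteImage : blackImage;
            [img drawAtPoint:cell.origin];
        }
    }
}
```

When only one 40x40 cell is invalidated, this draws a single image per pass instead of all 96.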
To invalidate only a portion of your view, call setNeedsDisplayInRect: with just the area that has changed. If multiple rects have changed, you can call it multiple times with different rects.
The other method of tackling this is to create multiple CALayers and move/hide them.
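A sketch of that CALayer approach: give each grid cell its own layer and toggle a cell by swapping the layer's contents between two CGImages. The cellLayers ivar and method names here are illustrative, not from the answer:

```objc
#import <QuartzCore/QuartzCore.h>

// Assumed ivar: CALayer *cellLayers[kRows][kCols];
- (void)buildGrid {
    for (int row = 0; row < kRows; row++) {
        for (int col = 0; col < kCols; col++) {
            CALayer *cell = [CALayer layer];
            cell.frame = CGRectMake(col * kCell, row * kCell, kCell, kCell);
            cell.contents = (id)blackImage.CGImage;  // start as a black brick
            [self.layer addSublayer:cell];
            cellLayers[row][col] = cell;
        }
    }
}

- (void)paintCellAtRow:(int)row col:(int)col {
    // Swapping `contents` is cheap: no drawRect: runs, Core Animation
    // just recomposites the one layer.
    cellLayers[row][col].contents = (id)whiteImage.CGImage;
}
```

This trades a bit of setup code for never re-rasterizing anything while the user drags, which is usually much faster than repeated drawRect: passes.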
Also, you should never call drawRect: directly; use setNeedsDisplay instead. drawRect: will be called by UIKit whenever an area of a view is invalid and needs painting.