Painfully slow software vectors, particularly CoreGraphics vs. OpenGL

I'm working on an iOS app that requires drawing Bézier curves in real time in response to the user's input. At first, I decided to try using CoreGraphics, which has a fantastic vector drawing API. However, I quickly discovered that performance was painfully, excruciatingly slow, to the point where the framerate started dropping severely with just ONE curve on my retina iPad. (Admittedly, this was a quick test with inefficient code. For example, the curve was getting redrawn every frame. But surely today's computers are fast enough to handle drawing a simple curve every 1/60th of a second, right?!)

After this experiment, I switched to OpenGL and the MonkVG library, and I couldn't be happier. I can now render HUNDREDS of curves simultaneously without any framerate drop, with only a minimal impact on fidelity (for my use case).

  1. Is it possible that I misused CoreGraphics somehow (to the point where it was several orders of magnitude slower than the OpenGL solution), or is performance really that terrible? My hunch is that the problem lies with CoreGraphics, based on the number of Stack Overflow/forum questions and answers regarding CG performance. (I've seen several people state that CG isn't meant to go in a run loop, and that it should only be used for infrequent rendering.) Why is this the case, technically speaking?
  2. If CoreGraphics really is that slow, how on earth does Safari work so smoothly? I was under the impression that Safari isn't hardware-accelerated, and yet it has to display hundreds (if not thousands) of vector characters simultaneously without dropping any frames.
  3. More generally, how do applications with heavy vector use (browsers, Illustrator, etc.) stay so fast without hardware acceleration? (As I understand it, many browsers and graphics suites now come with a hardware acceleration option, but it's often not turned on by default.)
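(Editorial aside on question 3: one widely used answer is caching — rasterize expensive vector content such as glyphs once, then reuse the bitmap on later frames instead of re-rasterizing. The sketch below is a hypothetical, minimal glyph cache in C; the names `GlyphCache` and `rasterize_glyph` are illustrative, not any real API.)

```c
#include <stdbool.h>
#include <string.h>

#define CACHE_SIZE 128
#define BITMAP_BYTES 64

typedef struct {
    int codepoint;                    /* which glyph this slot holds */
    bool valid;
    unsigned char bitmap[BITMAP_BYTES];
} GlyphSlot;

typedef struct {
    GlyphSlot slots[CACHE_SIZE];
    int rasterize_calls;              /* how often we paid the slow path */
} GlyphCache;

/* Stand-in for the expensive CPU rasterizer. */
static void rasterize_glyph(int codepoint, unsigned char *out) {
    memset(out, (unsigned char)(codepoint & 0xFF), BITMAP_BYTES);
}

/* Return a bitmap for the glyph, rasterizing only on a cache miss.
   Repeated characters hit the cache and cost only a lookup. */
const unsigned char *glyph_bitmap(GlyphCache *cache, int codepoint) {
    GlyphSlot *slot = &cache->slots[codepoint % CACHE_SIZE];
    if (!slot->valid || slot->codepoint != codepoint) {
        cache->rasterize_calls++;
        rasterize_glyph(codepoint, slot->bitmap);
        slot->codepoint = codepoint;
        slot->valid = true;
    }
    return slot->bitmap;
}
```

With a scheme like this, a page of text pays the rasterization cost roughly once per distinct glyph, not once per character per frame.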

UPDATE:

I have written a quick test app to more accurately measure performance. Below is the code for my custom CALayer subclass.

With NUM_PATHS set to 5 and NUM_POINTS set to 15 (5 curve segments per path), the code runs at 20fps in non-retina mode and 6fps in retina mode on my iPad 3. The profiler lists CGContextDrawPath as having 96% of the CPU time. Yes — obviously, I can optimize by limiting my redraw rect, but what if I really, truly needed full-screen vector animation at 60fps?

OpenGL eats this test for breakfast. How is it possible for vector drawing to be so incredibly slow?
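(For a sense of scale, a back-of-the-envelope sketch: the resolution below is the iPad 3's retina display; the per-pixel cost remark is a rough assumption, not a measurement.)

```c
/* Pixel throughput required for full-screen redraw at a given frame
   rate. At 2048x1536 and 60 fps this is ~189M pixels/s; even at a
   handful of CPU cycles per antialiased, alpha-blended pixel, a
   software rasterizer on a ~1 GHz mobile core is saturated, while a
   GPU fills pixels in parallel. */
long required_pixels_per_second(long width, long height, long fps) {
    return width * height * fps;
}
```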

#import "CGTLayer.h"

// NUM_PATHS, NUM_POINTS and the ivars points, paths, displayLink,
// initialized, previousTime, and frameTimer are declared in CGTLayer.h.

@implementation CGTLayer

- (id) init
{
    self = [super init];
    if (self)
    {
        self.backgroundColor = [[UIColor grayColor] CGColor];
        displayLink = [[CADisplayLink displayLinkWithTarget:self selector:@selector(updatePoints:)] retain];
        [displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
        initialized = false;

        previousTime = 0;
        frameTimer = 0;
    }
    return self;
}

- (void) updatePoints:(CADisplayLink*)displayLink
{
    for (int i = 0; i < NUM_PATHS; i++)
    {
        for (int j = 0; j < NUM_POINTS; j++)
        {
            points[i][j] = CGPointMake(arc4random()%768, arc4random()%1024);
        }
    }

    for (int i = 0; i < NUM_PATHS; i++)
    {
        if (initialized)
        {
            CGPathRelease(paths[i]);
        }

        paths[i] = CGPathCreateMutable();

        CGPathMoveToPoint(paths[i], &CGAffineTransformIdentity, points[i][0].x, points[i][0].y);

        for (int j = 0; j < NUM_POINTS; j += 3)
        {
            CGPathAddCurveToPoint(paths[i], &CGAffineTransformIdentity,
                                  points[i][j].x,   points[i][j].y,
                                  points[i][j+1].x, points[i][j+1].y,
                                  points[i][j+2].x, points[i][j+2].y);
        }
    }

    [self setNeedsDisplay];

    initialized = YES;

    double time = CACurrentMediaTime();

    if (frameTimer % 30 == 0)
    {
        NSLog(@"FPS: %f\n", 1.0f/(time-previousTime));
    }

    previousTime = time;
    frameTimer += 1;
}

- (void)drawInContext:(CGContextRef)ctx
{
//    self.contentsScale = [[UIScreen mainScreen] scale];

    if (initialized)
    {
        CGContextSetLineWidth(ctx, 10);

        for (int i = 0; i < NUM_PATHS; i++)
        {
            UIColor* randomColor = [UIColor colorWithRed:(arc4random() % RAND_MAX) / ((float)RAND_MAX)
                                                   green:(arc4random() % RAND_MAX) / ((float)RAND_MAX)
                                                    blue:(arc4random() % RAND_MAX) / ((float)RAND_MAX)
                                                   alpha:1];
            CGContextSetStrokeColorWithColor(ctx, randomColor.CGColor);

            CGContextAddPath(ctx, paths[i]);
            CGContextStrokePath(ctx);
        }
    }
}

@end
asked Mar 12 '13 by Archagon
1 Answer

You really shouldn't compare Core Graphics drawing with OpenGL; you're comparing completely different features built for very different purposes.

In terms of image quality, Core Graphics and Quartz are going to be far superior to OpenGL, with less effort. The Core Graphics framework is designed for optimal appearance: naturally antialiased lines and curves, and the polish associated with Apple's UIs. But that image quality comes at a price: rendering speed.

OpenGL, on the other hand, is designed with speed as the priority; high-performance, fast drawing is hard to beat with OpenGL. But that speed comes at a cost: it is much harder to get smooth, polished graphics. There are many different strategies for doing something as "simple" as antialiasing in OpenGL, a problem that Quartz/Core Graphics handles much more easily for you.
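(As a rough illustration of the per-pixel work a CPU rasterizer takes on: analytic edge antialiasing computes a coverage value from each pixel's distance to the shape edge and uses it as alpha. The sketch below is a simplified model of that general technique, not Quartz's actual algorithm.)

```c
/* Coverage of a pixel whose center sits at `signed_dist` pixels from a
   straight shape edge (negative = inside the shape). A one-pixel-wide
   linear ramp: fully inside -> 1.0, fully outside -> 0.0, on the edge
   -> 0.5. The result becomes the pixel's alpha when blending. */
double edge_coverage(double signed_dist) {
    double c = 0.5 - signed_dist;
    if (c < 0.0) c = 0.0;
    if (c > 1.0) c = 1.0;
    return c;
}
```

Doing this (or something costlier) for every pixel a stroke touches, every frame, on the CPU is a large part of why high-quality 2D rasterization is slow; on the GPU, multisampling spreads equivalent work across parallel hardware.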

answered Sep 22 '22 by johnbakers