
iOS Games and Run-Loop Management

First, my question: How do you manage your iOS Run-Loop?

Next, my reason: I've been researching this with a variety of prototypes (very early-stage development) and have found a number of perplexing issues.

  • First, input issues and the run loop led me to try the following:
    • when using the most recommended system (CADisplayLink), I noted that certain touch inputs are dropped once the CPU load causes the buffer flip (presentRenderBuffer) to have to wait a frame. This occurs only on the device and not in the simulator (annoyingly). It seems to be related to the wait for vsync blocking the main thread and the way the app run loop processes touch input and eats messages
    • when using the next most recommended system (NSTimer), I noted that certain touch inputs are dropped once the CPU load reaches a certain point in the simulator but not on the device (also annoyingly). NSTimer also gives much lower precision on when my updates fire
    • when using the least recommended system (running the run loop on its own thread, managed internally with a high-precision timer built from mach_absolute_time), all my touch input problems go away. However, my ASSERT code now traps in the wrong thread, and only if I usleep following the software interrupt. (My assert code is similar to http://iphone.m20.nl/wp/?p=1.) I really like having my assert code trap immediately at the line that caused the problem, so this solution is not really workable for me: it's harder to debug. (A sketch of this kind of threaded loop follows this list.)
  • Second, lost time:
    • while investigating the system, I found that regardless of framerate (bizarrely, but I suppose statistically it still makes sense with vsync) I'm waiting approximately 22% of the time on the vsync. I've confirmed this by moving glFlush/glFinish around and by playing with how often I make the presentRenderBuffer calls. This is key time that I'd love to spend processing AI, etc., rather than simply stalling on a blocking GL call. The only way I can think of around this would involve moving rendering onto its own thread, but I'm not sure it's warranted to start re-architecting for multi-threading on a single-processor device.
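
For reference, here is a minimal sketch of the third approach described above (a loop on its own thread, timed with mach_absolute_time). The gameIsRunning/gameUpdate/gameRender functions and the 60 Hz step are placeholders I'm assuming, not code from the question:

    #include <mach/mach_time.h>
    #include <pthread.h>
    #include <stdint.h>
    #include <unistd.h>

    // placeholders assumed to be supplied by the game code
    extern int  gameIsRunning(void);
    extern void gameUpdate(double dtSeconds);
    extern void gameRender(void);

    static void *GameLoopThread(void *unused)
    {
        mach_timebase_info_data_t timebase;
        mach_timebase_info(&timebase);

        const uint64_t frameNs = 16666667; // ~60 Hz target step
        uint64_t last = mach_absolute_time();

        while (gameIsRunning())
        {
            uint64_t now = mach_absolute_time();
            // convert mach ticks to nanoseconds
            uint64_t elapsedNs = (now - last) * timebase.numer / timebase.denom;

            if (elapsedNs >= frameNs)
            {
                last = now;
                gameUpdate(elapsedNs * 1e-9);
                gameRender();
            }
            else
            {
                // sleep off most of the remaining time instead of spinning
                usleep((useconds_t)((frameNs - elapsedNs) / 1000));
            }
        }
        return NULL;
    }

    // started once from the main thread, e.g. in applicationDidFinishLaunching:
    //     pthread_t thread;
    //     pthread_create(&thread, NULL, GameLoopThread, NULL);

Touch events still arrive on the main thread, so they have to be handed across to this loop, and, as noted above, asserts raised from this thread will trap here rather than on the main thread.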

So has anyone found a magic bullet around these issues? Does anyone have a killer run-loop architecture that's kick-ass on this platform? At the moment it looks like I have to pick the lesser of the evils.

asked Jan 27 '11 by Mark


1 Answer

For my own iOS projects, I use the classic approach (create a window .nib, create a class inheriting EAGLView, add EAGLView to a view in a view controller which is placed in its own .nib).
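
Roughly, and only as an illustration (this assumes an EAGLView subclass that implements initWithFrame:; the class and method names below are mine, not from a specific template), the same wiring done in code instead of the .nib would look like:

    // GameViewController.m -- illustrative wiring, pre-ARC
    - (void)loadView
    {
        CGRect bounds = [[UIScreen mainScreen] bounds];
        self.view = [[[UIView alloc] initWithFrame:bounds] autorelease];

        // EAGLView stands in for your EAGLView-derived class
        EAGLView *glView = [[[EAGLView alloc] initWithFrame:bounds] autorelease];
        [self.view addSubview:glView];
    }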

At work, I took a slightly different approach, inspired by SDL, which you can inspect in our open-sourced library, APRIL. The main goal of APRIL is to support as many platforms as possible while retaining simplicity (window and input management only), being clear about licensing issues, and remaining free to use. Our developers want to write apps on one platform (Windows, Mac or Linux, according to taste), and then the code is handed over to me to adapt for the other platforms.

In the approach we use in APRIL, you don't create any .nibs; instead, when calling UIApplicationMain, you specify the delegate class as its fourth argument. The main game code remains absolutely the same for each platform, and only platform-specific stuff is #ifdef'd into the code or abstracted into a helper library.
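
As a rough sketch (the delegate class name below is my assumption, not necessarily APRIL's actual class name), the entry point then looks something like this:

    // main.m -- minimal sketch; "AprilAppDelegate" is an assumed name
    #import <UIKit/UIKit.h>

    int main(int argc, char *argv[])
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        // the fourth argument names the app delegate class, so no .nib is needed
        int retVal = UIApplicationMain(argc, argv, nil, @"AprilAppDelegate");
        [pool release];
        return retVal;
    }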

In the app delegate you create the view controller and the window:

    - (void)applicationDidFinishLaunching:(UIApplication *)application
    {
        // create a window.
        // early creation so Default.png can be displayed while we're waiting for
        // game initialization
        window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];

        // viewcontroller will automatically add imageview
        viewController = [[AprilViewController alloc] initWithWindow:window];
        [viewController loadView];

        // set window color
        [window setBackgroundColor:[UIColor blackColor]];

        // display the window
        [window makeKeyAndVisible];

        // thanks to Kyle Poole for this trick
        // also used in latest SDL
        // quote:
        // KP: using a selector gets around the "failed to launch application in time"
        //     if the startup code takes too long
        //     This is easy to see if running with Valgrind
        [self performSelector:@selector(runMain:) withObject:nil afterDelay:0.2f];
    }

Notice how we delay launching by 0.2 seconds? That's why I mention the image view above: during those 0.2 seconds we would otherwise have a blank screen displayed immediately after Default.png. The extra delay is introduced before control is transferred to runMain:, which releases control to the main app:

    - (void)runMain:(id)sender
    {
        // thanks to Kyle Poole for this trick
        char *argv[] = {"april_ios"};
        int status = april_RealMain(1, argv); //gArgc, gArgv);
    #pragma unused(status)
    }

So now control is never transferred back to UIApplication's actual main loop. You then create your own main loop:

    void iOSWindow::enterMainLoop()
    {
        while (mRunning)
        {
            // parse UIKit events
            doEvents();
            handleDisplayAndUpdate();
        }
    }

    void iOSWindow::doEvents()
    {
        SInt32 result;
        do
        {
            result = CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, TRUE);
        } while (result == kCFRunLoopRunHandledSource);
    }

(On a side note, the view controller is, of course, used to simplify rotating the UI to match the device orientation.)
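
For example (this is just the standard pre-iOS 6 rotation callback, not code from APRIL; the landscape-only choice is illustrative):

    // in the view controller -- illustrative; pre-iOS 6 rotation API
    - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)orientation
    {
        return UIInterfaceOrientationIsLandscape(orientation);
    }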

Both of these approaches use CADisplayLink if it is supported by the OS. I have not noticed any issues with either method, although my private projects are primarily accelerometer-based. I suspect the APRIL approach might make some of your problems go away, too.
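
For completeness, the usual runtime check for CADisplayLink with an NSTimer fallback looks roughly like this; onFrame: is a placeholder frame callback, not something from either project:

    // illustrative setup; onFrame: is a placeholder selector
    if (NSClassFromString(@"CADisplayLink") != nil)
    {
        CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                          selector:@selector(onFrame:)];
        link.frameInterval = 1; // fire on every vsync
        [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
    }
    else
    {
        [NSTimer scheduledTimerWithTimeInterval:(1.0 / 60.0)
                                         target:self
                                       selector:@selector(onFrame:)
                                       userInfo:nil
                                        repeats:YES];
    }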

answered Oct 03 '22 by Ivan Vučica