How can I track down memory peaks? (That's peaks with a p, not an l.)

I've got a kiosk app which essentially shows a bunch of slides with various bits of information on them. I initially began coding this over a year ago, when I was just starting out with Objective-C and iOS development. My code style is much cleaner now than it was then, and I'm much more experienced, so I've decided to rewrite the app from scratch.

I ran my app with the Allocations instrument to see what the memory usage was. Considering that this is a kiosk app, everything needs to run smoothly, without leaks. (Of course all apps need to run without leaks, but a kiosk app makes this an even more important goal.) I saw some interesting results, so I ran the old version of the code as well.

Running the older version of the code, I see a pretty much even run at about 1.15 megabytes of memory usage. Everything seems to be allocated and deallocated as necessary. In my new implementation, however, I'm seeing something a little different. Memory usage keeps jumping in little "plateaus", and then eventually seems to peak out at about 1.47 megabytes of usage. Here's what the new Allocations report looks like after running for over 10 hours:

[Screenshot: Allocations report after running for over 10 hours]

I'm concerned for several reasons:

  1. The odd pattern at the beginning of the run.
  2. Memory usage seems to peak at 1.47 megabytes, but running the app overnight shows that it actually keeps using more and more memory over time. That can't be a good thing.

There are several notable differences between the old project and the new one.

  • The old project uses plists as a backing store (I manually read from and write to a plist file). The new project uses Core Data.

  • The new project uses a library, called on each "slide", that the old project didn't have. I'd be more concerned about this library, except that I wrote it and went through it to make sure I was releasing everything, using autorelease only where manual releases were impossible.

  • Both projects use a factory class to create the slides. In the old project, the factory class was a singleton. I thought that making it a normal class would help with the memory issues, since the singleton was never released (and hence its properties were never released). In the new project, the factory class is released, so I'm not sure why it would still be holding on to all that memory (if that's even what's causing the problem). See the sketch after this list.

  • The old project makes use of string constants in various places. The new code uses a massive enum for the same thing. (The new code in general uses more constants.)
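
To illustrate what I mean about the factory, here's a simplified, hypothetical sketch under manual reference counting (these names are stand-ins, not my actual code):

    #import <Foundation/Foundation.h>

    // Simplified stand-in for the slide factory (manual reference counting).
    @interface SlideFactory : NSObject
    @property (nonatomic, retain) NSMutableArray *slides; // objects the factory holds on to
    @end

    @implementation SlideFactory
    @synthesize slides;

    - (id)init {
        if ((self = [super init])) {
            slides = [[NSMutableArray alloc] init];
        }
        return self;
    }

    - (void)dealloc {
        [slides release]; // in the old singleton version this was never reached,
        [super dealloc];  // so the factory's properties lived for the life of the app
    }
    @end

In the new project the factory is created, used, and released per slide, so -dealloc should run and the slides array should be released along with it.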

What can I do to track down memory peaks? All of the memory does get cleaned up when the application discards whatever it's using, but it doesn't seem to discard anything until the app terminates.

I'd be grateful if anyone would help point me in the right direction.

Edit:

It looks like the peaking is being caused by calls to the KosherCocoa library. If anyone wouldn't mind taking a look at it and telling me what I'm doing wrong there as far as memory management goes, I'd really appreciate it.

asked Aug 01 '11 by Moshe

1 Answer

What can I do to track down memory peaks? The memory is all being cleaned up by the application when it discards whatever it's using, but it doesn't seem to be discarding things.

This is a classic case of "abandoned objects" or "usage accretion". That is, you have an application that, as a normal part of usage, builds up an object graph in memory as it runs. The objects aren't leaked, because they are still connected to the live object graph. More likely than not, the objects are part of some kind of cache (most often a write-only cache) or of a mechanism that tracks historical state (an undo stack is a potential source of accretion).
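
For example (the class and names here are purely illustrative, not taken from your project), this is the kind of pattern that accretes memory without ever leaking:

    #import <Foundation/Foundation.h>

    // Purely illustrative: a history object that records every slide it is shown
    // and never trims the record. Nothing leaks (the array still owns every entry),
    // but memory grows for as long as the app runs.
    @interface SlideHistory : NSObject {
        NSMutableArray *shownSlides;
    }
    - (void)recordSlide:(id)slide;
    @end

    @implementation SlideHistory

    - (id)init {
        if ((self = [super init])) {
            shownSlides = [[NSMutableArray alloc] init];
        }
        return self;
    }

    - (void)recordSlide:(id)slide {
        [shownSlides addObject:slide]; // grows without bound; every entry is "abandoned"
    }

    - (void)dealloc {
        [shownSlides release];
        [super dealloc];
    }

    @end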

To fix it, you need to make sure your object graph is pruned appropriately as your app runs. Caches should generally use a least-recently-used [LRU] pruning algorithm that limits the cache size. If a cache key ever goes invalid, that data should be pruned, too.
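
As a rough sketch (again with illustrative names; NSCache with a countLimit set is a simpler alternative, although its eviction policy is not strictly LRU), a bounded cache might prune itself like this:

    #import <Foundation/Foundation.h>

    // Illustrative bounded cache: keys are kept in least-recently-used order,
    // and the oldest entries are evicted once the cache exceeds a fixed size.
    static const NSUInteger kMaxCacheEntries = 20;

    @interface BoundedCache : NSObject {
        NSMutableDictionary *cache;   // key -> cached object
        NSMutableArray      *lruKeys; // oldest key first, newest key last
    }
    - (void)setObject:(id)object forKey:(id)key;
    - (id)objectForKey:(id)key;
    @end

    @implementation BoundedCache

    - (id)init {
        if ((self = [super init])) {
            cache   = [[NSMutableDictionary alloc] init];
            lruKeys = [[NSMutableArray alloc] init];
        }
        return self;
    }

    - (void)setObject:(id)object forKey:(id)key {
        [lruKeys removeObject:key]; // if the key is already present, refresh its position
        [lruKeys addObject:key];
        [cache setObject:object forKey:key];

        while ([lruKeys count] > kMaxCacheEntries) {
            id oldest = [lruKeys objectAtIndex:0];
            [cache removeObjectForKey:oldest]; // prune the least recently used entry
            [lruKeys removeObjectAtIndex:0];
        }
    }

    - (id)objectForKey:(id)key {
        id object = [cache objectForKey:key];
        if (object) {
            [lruKeys removeObject:key]; // mark the key as most recently used
            [lruKeys addObject:key];
        }
        return object;
    }

    - (void)dealloc {
        [cache release];
        [lruKeys release];
        [super dealloc];
    }

    @end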

For historical information, pruning the history is critical. So is making sure that the historical data contains an absolutely minimal representation of that historical state.

Use Heapshot analysis -- it was created to help track down exactly these kinds of problems.

I wrote a detailed "how to" guide: When is a Leak not a Leak?

answered by bbum