So, I was reading this: http://www.ibm.com/developerworks/java/library/j-jtp09275/index.html which says, "Public service announcement: Object pooling is now a serious performance loss for all but the most heavyweight of objects, and even then it is tricky to get right without introducing concurrency bottlenecks," and took it at face value. The article talks about generational GC, deallocation, thread-local allocation and escape analysis.
However, I just had a little voice in my head ask me, "But is this true of the garbage collector implementation in Android?" and I don't know the answer. I wouldn't even know how to go about finding the answer.
I do remember the GC running less often in my Android apps when I implemented pooling for small, frequently used objects, though. I'm not sure that means a faster app. Also, GC ran more often without pooling (according to logcat), so I assumed Android's implementation of the GC loses to pooling. But that assumption has very little backing, because I didn't notice any significant performance difference with or without pooling.
So.. Anyone here know if pooling is more efficient than Android's GC for small objects used often?
Anyone here know if pooling is more efficient than Android's GC for small objects used often?
That depends on how you measure "efficient", "small", and "often".
Object pooling is used in several places within Android itself, such as:

- the whole Adapter framework for AdapterView (ListView and kin) is designed around object pools, this time for relatively heavyweight objects (e.g., a ListView row can easily be tens of KB)
- SensorEvent objects are recycled, this time for lightweight objects used potentially dozens of times per second
- AttributeSet objects are recycled, as part of View inflation

and so on.
Some of that was based on early versions of Dalvik in early versions of Android, when we were aiming at under 100MHz CPUs with a purely interpreted language and a fairly naïve GC engine.
However, even today, object pooling has one big advantage beyond immediate performance: heap fragmentation.
Java's GC engine is a compacting garbage collector, meaning that contiguous blocks of free heap space are combined into much larger blocks. Dalvik's GC engine is a non-compacting garbage collector, meaning that a block that you allocate will never become part of a larger block. This is where many developers get screwed with bitmap management -- the OutOfMemoryError they get is not because the heap is out of room, but because the heap has no contiguous block big enough for the desired allocation, due to heap fragmentation.
Object pools avoid heap fragmentation, simply by preventing the pooled objects from getting garbage collected again and not allocating new objects for the pool very often (only if the pool needs to grow due to too much simultaneous use).
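A minimal sketch of that idea in plain Java (the class and method names here are illustrative, not an Android API): the pool holds strong references to its free objects so they are never collected, pre-allocates up front, and only allocates new objects when simultaneous demand exceeds the pool's size.

```java
import java.util.ArrayDeque;
import java.util.function.Supplier;

// Illustrative object pool, not an Android API. Pooled objects are held
// strongly, so they are never garbage collected, and new allocations only
// happen when the pool runs dry.
final class ObjectPool<T> {
    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    ObjectPool(Supplier<T> factory, int preallocate) {
        this.factory = factory;
        // Allocating everything up front keeps allocation (and therefore
        // fragmentation) out of the steady-state path.
        for (int i = 0; i < preallocate; i++) {
            free.push(factory.get());
        }
    }

    T obtain() {
        // Grow only under too much simultaneous use.
        return free.isEmpty() ? factory.get() : free.pop();
    }

    void recycle(T obj) {
        free.push(obj);
    }
}
```

Callers obtain an object, use it, reset its state, and recycle it; the same instances cycle through the pool instead of churning the heap.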
Game developers have long used object pooling in Android, stemming from back when Android's garbage collection was non-concurrent, "stopping the world" when GC was conducted. Now, most Android devices use a concurrent garbage collector, which eases the pain here some.
So, object pooling is definitely still a relevant technique. Mostly, though, I'd view it as something to employ as a reaction to a detected problem (e.g., Traceview showing too much time in GC, Traceview showing too much time in object constructors, MAT showing that you have plenty of free heap yet you still get OutOfMemoryErrors). The exception would be game development -- game developers probably have their own heuristics for when pooling is still needed on modern Android devices.
There is a fallacy in your reasoning. The GC running more frequently does not indicate some sort of diminished performance. Those more frequent GC runs could also be much faster and shorter lived than the less frequent ones that have to muddle through the object pool.
That said, I did some research and here are some thoughts... A couple years ago, my mobile phone had a single core. Running the GC meant switching from the activity to the GC thread. Even with concurrent GC and multiple cores (modern devices have 2-5 afaik), there could be slight pauses.
Pre-allocating everything that the user might need for the next sequence of interactions is suggested as a good idea for games. Essentially, this follows the mantra of real-time applications, which worry less about overall throughput than about consistent, measurable performance during the user-facing portion of the application.
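As a sketch of that pre-allocation pattern (the `ParticleBank` name and fields are hypothetical, not from any game engine): everything a frame might need is allocated once before the interaction starts, so the hot loop performs zero allocations and never triggers GC.

```java
// Hypothetical pre-allocation sketch for a game: every Particle the
// render loop might need is created up front, so the loop itself never
// allocates and therefore never provokes a garbage collection.
final class ParticleBank {
    static final class Particle {
        float x, y, dx, dy;
        boolean active;
    }

    private final Particle[] particles;

    ParticleBank(int capacity) {
        particles = new Particle[capacity];
        for (int i = 0; i < capacity; i++) {
            particles[i] = new Particle(); // all allocation happens here
        }
    }

    // Reuse an inactive particle, or return null if the bank is exhausted;
    // the caller drops the effect rather than allocating mid-frame.
    Particle spawn() {
        for (Particle p : particles) {
            if (!p.active) {
                p.active = true;
                return p;
            }
        }
        return null;
    }

    void despawn(Particle p) {
        p.active = false;
    }
}
```

Sizing the bank is the real-time trade-off: a fixed capacity bounds both memory and worst-case frame time, at the cost of dropping effects under peak load.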
http://developer.android.com/training/articles/perf-tips.html