
Understanding Memory Performance Counters

[Update - Sep 30, 2010]

Since I have studied this and related topics in some depth, I'll share the tips I gathered from my own experience and from the suggestions in the answers here:

1) Use a memory profiler (try CLR Profiler, to start with) to find the routines that consume the most memory, then fine-tune them: reuse big arrays, and keep references to objects to a minimum.

2) If possible, allocate small objects (less than 85 KB for .NET 2.0) and use memory pools where you can, to avoid high CPU usage by the garbage collector.

3) If you add references to an object, you're responsible for removing the same number of references. You'll have peace of mind, and the code will probably work better.

4) If nothing works and you are still clueless, use the elimination method (comment out/skip code) to find out what is consuming the most memory.

Using memory performance counters inside your code might also help you.
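
As a minimal sketch of that last tip (assuming Windows and the standard ".NET CLR Memory" counter category), the counters can be sampled from inside the app like this. Note the instance-name lookup is a simplification that breaks if several instances of the same process run at once:

```csharp
using System;
using System.Diagnostics;

class MemoryCounterSnapshot
{
    static void Main()
    {
        // The counter instance name is normally the process name without extension.
        string instance = Process.GetCurrentProcess().ProcessName;

        using (var bytesInAllHeaps = new PerformanceCounter(
                   ".NET CLR Memory", "# Bytes in all Heaps", instance, readOnly: true))
        using (var lohSize = new PerformanceCounter(
                   ".NET CLR Memory", "Large Object Heap size", instance, readOnly: true))
        {
            // Log these periodically (or after key operations) to track trends.
            Console.WriteLine("Bytes in all Heaps: {0:N0}", bytesInAllHeaps.NextValue());
            Console.WriteLine("LOH size:           {0:N0}", lohSize.NextValue());
        }
    }
}
```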

Hope these help!


[Original question]

Hi!

I'm working in C#, and my issue is an out-of-memory exception.

I read an excellent article on LOH here -> http://www.simple-talk.com/dotnet/.net-framework/the-dangers-of-the-large-object-heap/

Awesome read!

And, http://dotnetdebug.net/2005/06/30/perfmon-your-debugging-buddy/

My issue:

I am facing an out-of-memory issue in an enterprise-level desktop application. I have tried to read up on memory profiling and performance counters (and dabbled in WinDbg a little!) but am still clueless about the basics.

I tried CLR profiler to analyze the memory usage. It was helpful in:

  1. Showing me who allocated huge chunks of memory

  2. Showing which data types used the most memory

But both CLR Profiler and the performance counters (since they share the same data) failed to explain:

  1. How to interpret the numbers collected after each run of the app - how do I tell whether there is any improvement?

  2. How do I compare the performance data between runs - is a lower/higher value of a particular counter good or bad?


What I need:

I am looking for tips on:

  1. How to free (yes, really) managed objects (like arrays and big strings) - but without making GC.Collect calls, if possible. I have to handle byte arrays of around 500 KB (an unavoidable size :-( ) every now and then.

  2. How to compact memory if fragmentation occurs - it seems the .NET GC is not doing this effectively and is causing OOM.

  3. Also, what exactly is the 85 KB limit for the LOH? Is it the size of the object or the overall size of the array? This is not very clear to me.

  4. Which memory counters can tell whether code changes are actually reducing the chances of OOM?

Tips I already know

  1. Set managed objects to null - marking them garbage - so that the garbage collector can collect them. Strangely, after setting a string[] object to null, # Bytes in all Heaps shot up!

  2. Avoid creating objects/arrays > 85 KB - but this is not in my control, so there could be a lot of LOH allocations.


Memory Leaks Indicators:

# bytes in all Heaps increasing
Gen 2 Heap Size increasing
# GC handles increasing
# of Pinned Objects increasing
# total committed Bytes increasing
# total reserved Bytes increasing
Large Object Heap increasing

My situation:

  • I have a 4 GB, 32-bit machine with Windows Server 2003 SP2 on it.
  • I understand that a 32-bit application can use at most 2 GB of address space (not physical RAM) by default
  • Increasing the Virtual Memory (pagefile) size has no effect in this scenario.

As it's an OOM issue, I am focusing only on memory-related counters.

Please advise! I really need some help, as I'm stuck because of a lack of good documentation!

Nayan, asked Sep 21 '10


3 Answers

Nayan, here are the answers to your questions, and a couple of additional pieces of advice.

  1. You cannot free them; you can only make them easier for the GC to collect. It seems you already know the way: the key is reducing the number of references to the object.
  2. Fragmentation is another thing you cannot control directly. But several factors can influence it:
    • LOH external fragmentation is less dangerous than Gen2 external fragmentation, 'cause LOH is not compacted. The free slots of LOH can be reused instead.
    • If the 500 KB byte arrays you are referring to are used as IO buffers (e.g. passed to some socket-based API or unmanaged code), there is a high chance they will get pinned. A pinned object cannot be moved by the GC, and pinned objects are one of the most frequent causes of heap fragmentation.
  3. 85 KB is the limit for an object's size. But remember, a System.Array instance is an object too, so all your 500 KB byte[] arrays end up on the LOH.
  4. All the counters in your post can give a hint about changes in memory consumption, but in your case I would pick BIAH (# Bytes in all Heaps) and Large Object Heap size as the primary indicators. BIAH shows the total size of all managed heaps (Gen1 + Gen2 + LOH, to be precise - no Gen0, but who cares about Gen0, right? :) ), and the LOH is the heap where all the large byte[] arrays live.
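
To illustrate the point about references, here is a small sketch showing that once the last reference to a large array is dropped, a full collection can reclaim it (GC.KeepAlive just keeps the array reachable through the first measurement; the exact number of bytes reclaimed will vary):

```csharp
using System;

class ReferenceDropDemo
{
    static void Main()
    {
        // 500 KB array: larger than 85 KB, so it is allocated on the LOH.
        byte[] big = new byte[500 * 1024];

        long before = GC.GetTotalMemory(forceFullCollection: true);
        GC.KeepAlive(big);   // keep the array alive through the first measurement

        big = null;          // drop the last reference
        long after = GC.GetTotalMemory(forceFullCollection: true);

        Console.WriteLine("Reclaimed roughly {0:N0} bytes", before - after);
    }
}
```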

Advice:

  • Something that has already been proposed: pre-allocate and pool your buffers.

  • A different approach, which can be effective if you can use a collection instead of a contiguous array of bytes (this is not the case if the buffers are used in IO): implement a custom collection that is internally composed of many smaller arrays. This is similar to std::deque from the C++ STL. Since each individual array is smaller than 85 KB, the whole collection stays out of the LOH. The advantage of this approach: the LOH is only collected when a full GC happens. If the byte[] arrays in your application are not long-lived, and (were they smaller) would be collected in Gen0 or Gen1, this makes memory management much easier for the GC, since a Gen2 collection is much more heavyweight.

  • Advice on the testing and monitoring approach: in my experience, GC behavior, memory footprint and other memory-related metrics need to be monitored for quite a long time to yield valid and stable data. So each time you change something in the code, run a long enough test while monitoring the memory performance counters to see the impact of the change.

  • I would also recommend taking a look at the % Time in GC counter, as it can be a good indicator of the effectiveness of memory management. The larger this value, the more time your application spends on GC routines instead of processing requests from users or doing other 'useful' operations. I cannot say what absolute values of this counter indicate an issue, but I can share my experience for your reference: for the application I work on, we usually treat % Time in GC higher than 20% as an issue.
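
The chunked-collection advice above might look roughly like this sketch (ChunkedBuffer is a hypothetical name; a real implementation would also need read/remove operations and integration with whatever consumes the data):

```csharp
using System;
using System.Collections.Generic;

// A deque-like byte container built from small chunks, so that no single
// backing array crosses the 85 KB LOH threshold.
class ChunkedBuffer
{
    private const int ChunkSize = 64 * 1024;   // well under the 85 KB limit
    private readonly List<byte[]> _chunks = new List<byte[]>();
    public long Length { get; private set; }

    public void Append(byte[] data, int offset, int count)
    {
        while (count > 0)
        {
            int posInChunk = (int)(Length % ChunkSize);
            if (posInChunk == 0)
                _chunks.Add(new byte[ChunkSize]);  // grow by one small chunk
            int toCopy = Math.Min(count, ChunkSize - posInChunk);
            Buffer.BlockCopy(data, offset, _chunks[_chunks.Count - 1], posInChunk, toCopy);
            offset += toCopy;
            count -= toCopy;
            Length += toCopy;
        }
    }

    // Random access across chunk boundaries.
    public byte this[long index]
    {
        get { return _chunks[(int)(index / ChunkSize)][(int)(index % ChunkSize)]; }
    }
}
```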

Also, it would be useful if you shared some values of the memory-related perf counters of your application: Private Bytes and Working Set of the process, BIAH, Total committed Bytes, LOH size, Gen0/Gen1/Gen2 size, # of Gen0/Gen1/Gen2 collections, and % Time in GC. This would help in understanding your issue better.

Alexey Nedilko, answered Oct 07 '22


You could try pooling and managing the large objects yourself. For example, if you often need arrays of around 500 KB and the number of arrays alive at once is well understood, you could simply never deallocate them; that way, if you only need, say, 10 of them at a time, you incur a fixed 5 MB memory overhead instead of troublesome long-term fragmentation.
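
A sketch of such a pool (LargeBufferPool is a hypothetical helper; with ten 500 KB buffers this is the fixed ~5 MB overhead mentioned above):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical fixed-size pool: allocate the large buffers once up front and
// reuse them, trading a fixed memory overhead for zero LOH churn (and so no
// new fragmentation over the lifetime of the app).
class LargeBufferPool
{
    private readonly Stack<byte[]> _free = new Stack<byte[]>();
    private readonly object _lock = new object();

    public LargeBufferPool(int count, int bufferSize)
    {
        for (int i = 0; i < count; i++)
            _free.Push(new byte[bufferSize]);
    }

    public byte[] Rent()
    {
        lock (_lock)
        {
            if (_free.Count == 0)
                throw new InvalidOperationException("Pool exhausted - size it for peak usage.");
            return _free.Pop();
        }
    }

    public void Return(byte[] buffer)
    {
        Array.Clear(buffer, 0, buffer.Length);  // scrub before reuse (optional)
        lock (_lock) _free.Push(buffer);
    }
}
```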

As for your three questions:

  1. This is just not possible. Only the garbage collector decides when to finalize managed objects and release their memory. That's part of what makes them managed objects.

  2. This is possible if you manage your own heap in unsafe code and bypass the large object heap entirely. You will end up doing a lot of work and suffering a lot of inconvenience if you go down this road. I doubt that it's worth it for you.

  3. It's the size of the object, not the number of elements in the array.

Remember, fragmentation only happens when objects are freed, not when they're allocated. If fragmentation is indeed your problem, reusing the large objects will help. Focus on creating less garbage (especially large garbage) over the lifetime of the app instead of trying to deal with the nuts and bolts of the GC implementation directly.

blucz, answered Oct 07 '22


Another indicator is watching Private Bytes vs. # Bytes in all Heaps. If Private Bytes increases faster than # Bytes in all Heaps, you have an unmanaged memory leak. If # Bytes in all Heaps increases faster than Private Bytes, it is a managed leak.
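
Assuming Windows perf counters are available, that comparison can be automated along these lines (the one-minute sample interval is an arbitrary choice; tune it for your app):

```csharp
using System;
using System.Diagnostics;

class LeakKindProbe
{
    static void Main()
    {
        string instance = Process.GetCurrentProcess().ProcessName;

        using (var privateBytes = new PerformanceCounter(
                   "Process", "Private Bytes", instance, readOnly: true))
        using (var heapBytes = new PerformanceCounter(
                   ".NET CLR Memory", "# Bytes in all Heaps", instance, readOnly: true))
        {
            float p1 = privateBytes.NextValue();
            float h1 = heapBytes.NextValue();

            System.Threading.Thread.Sleep(TimeSpan.FromMinutes(1));

            float pGrowth = privateBytes.NextValue() - p1;
            float hGrowth = heapBytes.NextValue() - h1;

            // Unmanaged allocations show up in Private Bytes but not in the
            // managed heaps; managed growth shows up in both.
            Console.WriteLine(pGrowth > hGrowth
                ? "Private Bytes growing faster: suspect an unmanaged leak"
                : "Heap bytes growing at least as fast: suspect a managed leak");
        }
    }
}
```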

To correct something that @Alexey Nedilko said:

"LOH external fragmentation is less dangerous than Gen2 external fragmentation, 'cause LOH is not compacted. The free slots of LOH can be reused instead."

is absolutely incorrect. Gen2 is compacted, which means there is never free space left after a collection. The LOH is NOT compacted (as he correctly mentions), and yes, free slots are reused. BUT if no free slot is contiguous and large enough to fit the requested allocation, the segment size is increased - and it can continue to grow and grow. So you can end up with gaps in the LOH that are never filled. This is a common cause of OOMs, and I've seen it in many memory dumps I've analyzed.

Though there are now methods in the GC API (as of .NET 4.5.1) that can be called to programmatically compact the LOH, I strongly recommend avoiding them if app performance is a concern. Performing this operation at runtime is extremely expensive and can hurt your app's performance significantly. The default implementation of the GC was designed to be performant, which is why this step was omitted in the first place. IMO, if you find that you have to call this because of LOH fragmentation, you are doing something wrong in your app - and it can be improved with pooling techniques, splitting arrays, and other memory allocation tricks instead. If the app is an offline app or some batch process where performance isn't a big deal, maybe it's not so bad, but I'd use it sparingly at best.
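
For completeness, the .NET 4.5.1 API in question looks like this; again, use it sparingly:

```csharp
using System;
using System.Runtime;

class LohCompaction
{
    static void Main()
    {
        // The flag applies to the next full, blocking collection only,
        // after which it resets itself to Default.
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;

        GC.Collect();   // expensive: the LOH is compacted during this collection
    }
}
```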

Good visual examples of how this can happen are The Dangers of the Large Object Heap (linked in the question) and Large Object Heap Uncovered, by Maoni (GC team lead on the CLR).

Dave Black, answered Oct 07 '22