I have a .NET app that seems to have a memory leak (or several). The .NET service starts out around 100 MB of memory, but under load it hits around 400-500 MB. Most of my classes don't have unmanaged resources, and the ones that do already implement IDisposable. So my question is: would slapping IDisposable on my classes help?
The 400-500 MB isn't itself concerning. The concern is that there are 8 different services, each built using SharpArch, NServiceBus, Windsor, and NHibernate. My feeling is that something in one of these is causing a problem. What worries me is that the total memory of all the services is around 3.2 to 3.6 GB out of 4 GB. It is not throwing OutOfMemory exceptions yet, but I'd like to head this off at the pass. Also, I've used dotTrace, which gives me some information; I'm just not sure how to act on that information.
Memory leaks don't cause physical or permanent damage. Being a software issue, they slow down the application or even your whole system. However, a program taking up a lot of RAM doesn't always mean its memory is leaking somewhere; the program may genuinely need that much space.
To detect a memory leak, use a memory profiler. dotMemory, SciTech Memory Profiler, and ANTS Memory Profiler are the most popular .NET memory profilers. If you have Visual Studio Enterprise, you can also use its built-in ("free") profiler. Memory profilers all work in the same way: take a snapshot, exercise the suspect code, take another snapshot, and compare what grew.
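You can approximate that snapshot-and-compare workflow by hand before reaching for a profiler. Here's a minimal sketch; RunSuspectWorkload is a hypothetical stand-in for whatever code path you suspect of leaking:

    using System;

    class SnapshotDiff
    {
        static void Main()
        {
            // Same idea a profiler automates: measure, exercise the
            // suspect code path, measure again, compare.
            long before = GC.GetTotalMemory(true); // true = force full collection first

            object suspect = RunSuspectWorkload();

            long after = GC.GetTotalMemory(true);
            Console.WriteLine("Managed heap grew by {0} KB", (after - before) / 1024);
            GC.KeepAlive(suspect); // keep the result rooted until after measuring
        }

        // Hypothetical placeholder for the code under suspicion.
        static object RunSuspectWorkload() { return new byte[256 * 1024]; }
    }

If the delta keeps climbing across repeated runs of the same workload, something is holding on to objects; a real profiler will then tell you which roots are responsible.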
My first concern would be to ensure that you are measuring something relevant. "Memory" can mean a lot of different things. There is an enormous difference between running out of virtual memory space and running out of RAM. There is an enormous difference between a performance problem caused by thrashing the page file and a performance problem caused by creating too much GC pressure.
If you don't understand what the relationships are between RAM, virtual memory, working set and the page file then start by doing some reading until you understand all that stuff. The way you phrased the question leads me to suspect that you believe that virtual memory and RAM are the same thing. They certainly are not.
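To see the distinction concretely, here is a small sketch that prints the different counters for the current process; each measures something different, which is exactly why "memory" is ambiguous:

    using System;
    using System.Diagnostics;

    class MemoryCounters
    {
        static void Main()
        {
            Process p = Process.GetCurrentProcess();
            // Pages of this process currently resident in RAM.
            Console.WriteLine("Working set:    {0} MB", p.WorkingSet64 / (1024 * 1024));
            // Committed virtual memory private to this process.
            Console.WriteLine("Private bytes:  {0} MB", p.PrivateMemorySize64 / (1024 * 1024));
            // Total reserved virtual address space (the 2 GB limit applies here).
            Console.WriteLine("Virtual size:   {0} MB", p.VirtualMemorySize64 / (1024 * 1024));
            // Only the GC-tracked managed heap; excludes unmanaged allocations.
            Console.WriteLine("Managed heap:   {0} MB", GC.GetTotalMemory(false) / (1024 * 1024));
        }
    }

Task Manager's default "Memory" column is (roughly) the working set, which is why it is a poor way to reason about how close a process is to an out-of-memory condition.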
I suspect that the arithmetic you are doing is: this machine has 4 GB of RAM; my services are consuming 3.5 GB of it; therefore I am about to run out of memory and must act now.

That syllogism is completely invalid. It's the same syllogism as: my fridge holds 300 liters; it already contains 290 liters of food; therefore I can't store any more food --
when in fact you have an entire warehouse-sized cold storage facility next door. Remember, RAM is just a convenient fast way to store stuff near where you need it, like your fridge. If you have more stuff that needs to be stored, who cares if you run out of room locally? You can always pop next door and put the stuff you use less frequently in long term deep freeze -- the page file. That's less convenient, but nothing melts.
You get an "out of memory" exception when a process runs out of virtual address space, not when all the RAM in the system is consumed. When all the RAM in the system is consumed, you don't get an error, you get crap performance because the operating system is spending all of its time running stuff back and forth from disk.
So, anyway, start by understanding what you are measuring and how memory in Windows works. What you should actually be looking for is:
Is any process in danger of using more than two billion bytes of virtual memory on a 32-bit system? A process only gets 2 GB of user-addressable virtual memory (not RAM, remember: virtual memory has nothing to do with RAM; that's why it's called "virtual" -- it isn't hardware) on Win32; you'll get an OOM if you try to use more.
Is any process in danger of attempting to allocate a huge block of virtual memory such that there is no contiguous block of that size free? Are you likely to be allocating ten million bytes of data in a single array, for example? Again, OOM (illustrated in the sketch after this list).
Is the working set -- that is, the virtual memory pages of a process that are required to be in RAM for performance reasons -- of all processes smaller than the amount of RAM available? If not, then soon you'll get thrashing, but not an OOM.
Is your page file big enough to handle the virtual memory pages that could be paged out to disk if RAM starts to get short?
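As a concrete illustration of the contiguous-block point above: a single array needs one unbroken run of virtual address space, so in a fragmented 32-bit process an allocation like this can fail long before total usage reaches the 2 GB limit (the 500 MB figure is arbitrary):

    using System;

    class ContiguousAllocation
    {
        static void Main()
        {
            try
            {
                // One array = one contiguous run of address space.
                byte[] block = new byte[500 * 1024 * 1024];
                Console.WriteLine("Allocated {0} MB contiguously.", block.Length / (1024 * 1024));
            }
            catch (OutOfMemoryException)
            {
                // Thrown when no single free region is big enough,
                // regardless of how much total memory remains.
                Console.WriteLine("OOM: no contiguous free block of that size.");
            }
        }
    }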
So far none of this has anything to do with .NET. Once you've actually determined that there is a real problem -- there might not be -- then start investigating based on what the real problem is. Use a memory profiler to examine what the memory allocator and garbage collector are doing. See if there are huge blocks in the large object heap, or unexpectedly big graphs of live objects that cannot be collected, or what. But use good engineering principles: understand the system, use tools to investigate the actual empirical performance, experiment with changes and carefully measure their results. Don't just start randomly slapping magic IDisposable interfaces on a few classes and hope that doing so makes the problem -- if there is one -- go away.
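On the large object heap point: objects of 85,000 bytes or more are allocated directly on the LOH, which is not compacted by default (at least on the .NET versions of this era) and therefore fragments. A quick way to see the threshold:

    using System;

    class LohThreshold
    {
        static void Main()
        {
            byte[] small = new byte[84000]; // small object heap
            byte[] large = new byte[85000]; // large object heap
            // LOH objects are reported as generation 2 immediately.
            Console.WriteLine(GC.GetGeneration(small)); // 0
            Console.WriteLine(GC.GetGeneration(large)); // 2
        }
    }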
If all of the classes which have unmanaged resources implement IDisposable and are properly disposed of (via using or try/finally), then adding further IDisposable implementations won't help anything.
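For reference, a minimal sketch of the pattern, using Marshal.AllocHGlobal as a stand-in unmanaged resource. The key point is that IDisposable is just a convention: nothing is freed unless Dispose actually runs.

    using System;
    using System.Runtime.InteropServices;

    sealed class NativeBuffer : IDisposable
    {
        private IntPtr _ptr = Marshal.AllocHGlobal(1024); // unmanaged allocation
        private bool _disposed;

        public void Dispose()
        {
            if (_disposed) return;
            Marshal.FreeHGlobal(_ptr);
            _ptr = IntPtr.Zero;
            _disposed = true;
            GC.SuppressFinalize(this); // the safety net below is no longer needed
        }

        ~NativeBuffer() { Dispose(); } // safety net if the caller forgets
    }

    class Demo
    {
        static void Main()
        {
            using (NativeBuffer buf = new NativeBuffer())
            {
                // use buf; Dispose runs even if an exception is thrown
            }
        }
    }

Adding IDisposable to classes that hold only managed state frees nothing; the GC already handles those once they become unreachable.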
The first problem is that you don't know why you're leaking. Managed applications typically leak for one of the following reasons:

1. Unmanaged resources that are never freed (missing or unreached Dispose calls).
2. Managed objects that should be dead but are kept alive by rooted references (statics, events, caches, and so on), so the GC can never collect them.

Given the information in your question it's almost certainly #2 that is causing the problem. You'll need a profiler or WinDbg to tell you what the actual leak is and which rooted objects are causing it.
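A typical instance of #2, sketched below: a static event roots every subscriber, so the GC can never collect them even though nothing else uses them (Bus and Subscriber are hypothetical names):

    using System;

    static class Bus
    {
        // Static event: its invocation list is a GC root for the
        // lifetime of the process.
        public static event EventHandler MessageReceived;

        public static void Raise()
        {
            EventHandler h = MessageReceived;
            if (h != null) h(null, EventArgs.Empty);
        }
    }

    class Subscriber
    {
        private readonly byte[] _payload = new byte[1024 * 1024]; // 1 MB each

        public Subscriber()
        {
            Bus.MessageReceived += OnMessage;
        }

        private void OnMessage(object sender, EventArgs e)
        {
            Console.WriteLine(_payload.Length); // pretend to do work
        }

        // Without this, every Subscriber ever created stays reachable
        // through the event's invocation list.
        public void Unsubscribe()
        {
            Bus.MessageReceived -= OnMessage;
        }
    }

In a memory profiler this shows up as a retention path from the static event to the leaked instances, which is exactly the kind of root chain you want the tool to surface.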
Here is a great article by Rico to get you started.