While talking to a colleague about a particular group of apps using up nearly 1.5 GB of memory on startup, he pointed me to a very good link on .NET production debugging.
The part that has me puzzled is ...
For example, if you allocate 1 MB of memory to a single block, the large object heap expands to 1 MB in size. When you free this object, the large object heap does not decommit the virtual memory, so the heap stays at 1 MB in size. If you allocate another 500-KB block later, the new block is allocated within the 1 MB block of memory belonging to the large object heap. During the process lifetime, the large object heap always grows to hold all the large block allocations currently referenced, but never shrinks when objects are released, even if a garbage collection occurs. Figure 2.4 on the next page shows an example of a large object heap.
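To make that concrete, here is a rough way to watch the effect from inside a process. This is only a sketch: the class name and the use of Process.PrivateMemorySize64 are my own choices for illustration, private bytes include far more than the LOH, and the exact numbers depend on the runtime version and GC mode.

using System;
using System.Diagnostics;

class LohCommitProbe
{
    // Rough probe only: treat the deltas as indicative, not exact.
    static long PrivateMB()
    {
        return Process.GetCurrentProcess().PrivateMemorySize64 / (1024 * 1024);
    }

    static void Main()
    {
        Console.WriteLine("Baseline:            {0} MB", PrivateMB());

        var big = new byte[1024 * 1024];        // > 85,000 bytes, so it goes on the LOH
        Console.WriteLine("After 1 MB alloc:    {0} MB", PrivateMB());

        big = null;
        GC.Collect();                           // the object is gone, but the LOH segment stays committed
        Console.WriteLine("After release + GC:  {0} MB", PrivateMB());

        var smaller = new byte[500 * 1024];     // should be satisfied from the existing LOH space
        Console.WriteLine("After 500 KB alloc:  {0} MB", PrivateMB());
        GC.KeepAlive(smaller);
    }
}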
Now let's say we have a fictional app that creates a flurry of large objects (> 85 KB), so the large object heap grows to, say, 200 MB. Now let's say we have 10 such app instances running, so that's 2,000 MB allocated. Is this memory really never given back to the OS until the process shuts down? (That is what I understood.)
Are there any gaps in my understanding? How do we get back the unused memory in the various LOHs so that we don't create the perfect storm of OutOfMemoryExceptions?
Update: From Marc's response, I wanted to clarify that the LOH objects are not referenced - the large objects are use-and-throw - yet the heap doesn't shrink even though it is relatively empty after the initial surge.
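For later readers: on .NET Framework 4.5.1+ and .NET Core, you can ask the GC to compact the LOH during the next full collection, which is the closest thing to reclaiming that space on demand. A minimal sketch; whether memory is actually decommitted afterwards still depends on the GC.

using System;
using System.Runtime;

class LohCompactionSketch
{
    static void Main()
    {
        // Request a one-time LOH compaction on the next blocking gen-2 collection.
        // Older runtimes only sweep the LOH (building free lists); they never compact it.
        GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();
    }
}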
Update #2: Just including a code snippet (exaggerated, but it gets the point across, I think). I see an OutOfMemoryException around the time the virtual memory hits the 1.5 GB mark on my machine (1.7 GB on another). From Eric L.'s blog post, 'process memory can be visualized as a massive file on disk' - so this result is unexpected. The machines in this instance had GBs of free space on the HDD. Does the PageFile.sys OS file (or its related settings) impose any restrictions?
using System;
using System.Collections.Generic;
using System.Threading;

class Program
{
    static float _megaBytes;
    static readonly int BYTES_IN_MB = 1024 * 1024;

    static void BigBite()
    {
        try
        {
            var list = new List<byte[]>();
            int i = 1;
            for (int x = 0; x < 1500; x++)
            {
                // Each array is just over 1 MB, so every allocation lands on the LOH.
                var memory = new byte[BYTES_IN_MB + i];
                _megaBytes += memory.Length / BYTES_IN_MB;
                list.Add(memory); // keep a reference so nothing can be collected
                Console.WriteLine("Allocation #{0} : {1}MB now", i++, _megaBytes);
            }
        }
        catch (Exception e)
        {
            Console.WriteLine("Boom! {0}", e); // I put a breakpoint here to check the console
            throw;
        }
    }

    static void Main(string[] args)
    {
        BigBite();
        Console.WriteLine("Check VM now!");
        Console.ReadLine();

        _megaBytes = 0;
        ThreadPool.QueueUserWorkItem(delegate { BigBite(); });
        ThreadPool.QueueUserWorkItem(delegate { BigBite(); });
        Console.ReadLine(); // will blow before it reaches here
    }
}
Newly allocated objects form a new generation of objects and are implicitly generation 0 collections. However, if they are large objects, they go on the large object heap (LOH), which is sometimes referred to as generation 3. Generation 3 is a physical generation that's logically collected as part of generation 2.
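A quick way to see the "generation 3 is logically part of generation 2" point for yourself (a small sketch; the LOH threshold is 85,000 bytes, so the 100 KB array below is comfortably over it):

using System;

class LohGenerationCheck
{
    static void Main()
    {
        var small = new byte[1024];         // ordinary small-object-heap allocation
        var large = new byte[100 * 1024];   // over the 85,000-byte threshold, so it goes to the LOH

        Console.WriteLine(GC.GetGeneration(small));  // typically prints 0
        Console.WriteLine(GC.GetGeneration(large));  // prints 2 - LOH objects are reported as gen 2
    }
}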
When control passes the end of a method, all the variables allocated on the stack for that call are cleared. In other words, value-type locals such as ints are deallocated from the stack in LIFO fashion.
Whenever we create a reference-type object, it is allocated on the heap. Garbage collection runs over the heap to free the memory used by objects that no longer have any references.
Key differences between stack and heap memory: the stack holds local variables only, while heap objects can be reached from anywhere via references. Stack allocations can't be resized, whereas heap allocations can. Stack memory is allocated as a contiguous block, whereas heap memory can be allocated in any order.
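A small illustration of the difference (the names here are mine, just for the example):

using System;

class StackVsHeapExample
{
    static void Main()
    {
        int counter = 42;            // value-type local: lives on the stack, popped when Main returns
        var buffer = new byte[256];  // reference type: the array object lives on the heap

        Console.WriteLine("{0} {1}", counter, buffer.Length);
    }   // counter is gone here; buffer becomes garbage once unreachable and is reclaimed by the GC
}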
There is a clarification I would like to make first. Assuming you are running as a 32-bit app, the virtual address (VA) space available to your process is only 2 GB (3 GB if you enable the large-address-aware switch). So even a huge page file doesn't matter for a 32-bit process; it only matters if you run 64-bit, where you have a huge address space.
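As a quick sanity check before digging further, you can confirm from inside the process whether it is actually running 32-bit. A sketch using standard BCL members (Environment.Is64BitProcess requires .NET 4.0 or later):

using System;

class BitnessCheck
{
    static void Main()
    {
        // A 32-bit (x86 / "Prefer 32-bit") process tops out around 2 GB of VA space
        // regardless of how large the page file is.
        Console.WriteLine("64-bit process: {0} (IntPtr.Size = {1})",
                          Environment.Is64BitProcess, IntPtr.Size);
        Console.WriteLine("64-bit OS:      {0}", Environment.Is64BitOperatingSystem);
    }
}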
So you will get the memory back in one of these cases. The fact that you are hitting an OOM before reaching the 2 GB limit could mean you have VA fragmentation. VA fragmentation occurs when you don't have a contiguous range of VA space to satisfy a new allocation - for example, you ask for an 8 KB segment and you don't have 2 consecutive free pages in your VA space (assuming a 4 KB page size).
You can use the !vamap debugger extension in Debugging Tools for Windows to validate this.
Hope this helps. Thanks!