Note: I am aware of the question "Memory management in memory intensive application"; however, that question appears to be about applications that make frequent memory allocations, whereas my question is about applications intentionally designed to consume as much physical memory as is safe.
I have a server application that uses large amounts of memory for caching and other optimisations (think SQL Server). The application runs on a dedicated machine, so it can (and should) consume as much memory as it is able to in order to improve throughput and response times, without worrying about impacting other applications on the system.
The trouble is that if memory usage is underestimated, or if load increases, it's possible to end up with nasty failures as memory allocations fail. In that situation the obvious thing to do is to free up memory to prevent the failure, at the expense of performance.
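For illustration, here is a minimal sketch of the kind of self-limiting cache I have in mind, assuming .NET's System.Runtime.Caching.MemoryCache (available as a NuGet package on .NET Core); the cache name, the 60% limit and the polling interval are just placeholders, not a recommendation:

```csharp
using System;
using System.Collections.Specialized;
using System.Runtime.Caching;

class ServerCache
{
    // MemoryCache can be told to keep itself under a fraction of physical
    // memory; its polling thread evicts entries once the limit is exceeded.
    private static readonly MemoryCache Cache = new MemoryCache(
        "serverCache",                                  // placeholder name
        new NameValueCollection
        {
            { "physicalMemoryLimitPercentage", "60" },  // placeholder limit
            { "pollingInterval", "00:00:10" }           // how often to re-check
        });

    public static void Put(string key, object value)
    {
        Cache.Set(key, value, new CacheItemPolicy
        {
            SlidingExpiration = TimeSpan.FromMinutes(30)
        });
    }

    public static object Get(string key) => Cache.Get(key);
}
```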
Some assumptions:

- The core objective is to prevent failures as a result of using too much memory, while at the same time using as much memory as possible.
- I'm a C# developer; however, my hope is that the basic concepts for any such application are the same regardless of the language.

My question is: how should I handle memory allocations in such an application?
In Linux, memory usage (as a percentage of physical memory) is roughly divided into the following levels:

- 0 - 30%: no swapping
- 30 - 60%: swap dirty pages only
- 60 - 90%: also swap clean pages, based on an LRU policy
- above 90%: invoke the OOM (out of memory) killer and kill the process consuming the most memory
check this - http://linux-mm.org/OOM_Killer
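As a rough way to see where your machine sits relative to those levels, you can compute the usage percentage from /proc/meminfo. A minimal Linux-only sketch (this assumes the MemAvailable field is present, which it is on kernels from roughly 3.14 onward):

```csharp
using System;
using System.IO;
using System.Linq;

static class LinuxMemInfo
{
    // Machine-wide memory usage as a percentage, derived from /proc/meminfo.
    public static double UsedPercentage()
    {
        // Each line looks like "MemTotal:       16315244 kB".
        var fields = File.ReadLines("/proc/meminfo")
            .Select(line => line.Split(':'))
            .ToDictionary(
                parts => parts[0].Trim(),
                parts => long.Parse(parts[1].Trim().Split(' ')[0])); // values in kB

        long total = fields["MemTotal"];
        long available = fields["MemAvailable"];
        return 100.0 * (total - available) / total;
    }
}
```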
I think Windows might have a similar policy, so you can check the memory stats and make sure you never get to the max threshold.
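From managed code, a cross-platform way to do that check is GC.GetGCMemoryInfo (available in .NET Core 3.0 and later). A small sketch; the 85% threshold is an arbitrary placeholder you would tune for your workload:

```csharp
using System;

static class MemoryGuard
{
    // Placeholder threshold: treat anything above 85% of the memory the GC
    // considers available as "too close to the limit".
    const double MaxLoadFraction = 0.85;

    public static bool NearMemoryLimit()
    {
        GCMemoryInfo info = GC.GetGCMemoryInfo();
        double load = (double)info.MemoryLoadBytes / info.TotalAvailableMemoryBytes;
        return load >= MaxLoadFraction;
    }
}
```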
One way to stop consuming more memory is to have the allocating code go to sleep for a while, giving the memory cleanup threads more time to run.
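A sketch of that idea, again using GC.GetGCMemoryInfo; the 250 ms sleep and the non-blocking GC.Collect call are illustrative choices, not a prescription:

```csharp
using System;
using System.Threading;

static class AdmissionControl
{
    // If machine-wide memory load is already past the GC's own high-load
    // threshold, pause before adding anything else to the cache so that
    // eviction and GC threads get a chance to free memory first.
    public static void AddWhenSafe(Action addToCache)
    {
        GCMemoryInfo info = GC.GetGCMemoryInfo();
        while (info.MemoryLoadBytes >= info.HighMemoryLoadThresholdBytes)
        {
            GC.Collect(2, GCCollectionMode.Optimized, blocking: false); // nudge cleanup
            Thread.Sleep(TimeSpan.FromMilliseconds(250));               // back off briefly
            info = GC.GetGCMemoryInfo();
        }

        addToCache();
    }
}
```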