I am working with a team developing a Java GUI application running on a 1GB Linux target system.
We have a problem where the memory used by our Java process grows indefinitely, until Linux finally kills it.
Our heap memory is healthy and stable (we have profiled the heap extensively). We also used MemoryMXBean to monitor the application's non-heap memory usage, since we believed the problem might lie there. However, what we see is that the reported heap size + reported non-heap size stays stable.
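For reference, the monitoring amounts to roughly the following (a minimal sketch; the class name, interval and output format are illustrative, not our actual code):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryLogger {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();
            // used/committed in MB; compare the sum against the RES column of top
            System.out.printf("heap used=%dMB committed=%dMB | non-heap used=%dMB committed=%dMB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20,
                    nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);
            Thread.sleep(60000);
        }
    }
}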
Here is an example of how the numbers might look when running the application on our target system with 1 GB RAM (heap and non-heap reported by MemoryMXBean; total memory used by the Java process monitored with Linux's top command, resident memory):
At startup:
After 1 day:
After 2 days:
The numbers above are just a "cleaner" representation of how our system performs, but they are fairly accurate and close to reality. As you can see, the trend is clear. After a couple of weeks running the application, the Linux system starts having problems due to running out of system memory. Things start slowing down. After a few more hours the Java process is killed.
After months of profiling and trying to make sense of this, we are still at a loss. I feel it is hard to find information about this problem, as most discussions end up explaining the heap or the other non-heap memory pools (like Metaspace, etc.).
My questions are as follows:
If you break it down, what does the memory used by a Java process include, in addition to the heap and non-heap memory pools?
Which other potential sources are there for memory leaks? (native code? JVM overhead?) Which ones are, in general, the most likely culprits?
How can one monitor / profile this memory? Everything outside the heap + non heap is currently somewhat of a black box for us.
Any help would be greatly appreciated.
Heap and non-heap memory: the JVM memory consists of the following segments: heap memory, which is the storage for Java objects; non-heap memory, which is used by Java to store loaded classes and other metadata; and the JVM code itself, JVM internal structures, loaded profiler agent code and data, etc.
What you have specified via the -Xmx switch limits the memory consumed by your application heap. But besides the memory consumed by your application, the JVM itself also needs some elbow room. The need for it derives from several different reasons, garbage collection being one of them.
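As a quick illustration of that distinction, the cap set via -Xmx is what the in-process Runtime API reports, while top's RES column measures the whole process (illustrative snippet; the class name is made up):

public class HeapCap {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() roughly corresponds to -Xmx; the resident size reported
        // by top will be larger because of non-heap and native allocations.
        System.out.println("max heap (approx -Xmx): " + (rt.maxMemory() >> 20) + " MB");
        System.out.println("heap committed:         " + (rt.totalMemory() >> 20) + " MB");
        System.out.println("heap used (approx):     " + ((rt.totalMemory() - rt.freeMemory()) >> 20) + " MB");
    }
}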
Java heap space is used by the Java runtime to allocate memory to objects and JRE classes. Whenever we create an object, it is always created in the heap space. Garbage collection runs on the heap memory to free the memory used by objects that no longer have any references.
I'll try to partially answer your question.
The basic strategy I try to stick to in such situations is to monitor the max/used/peak values of every available memory pool, plus open files, sockets, buffer pools, number of threads, etc. There might be a leak of socket connections/open files/threads that you are missing.
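A rough sketch of such a periodic snapshot using the standard platform MXBeans (class name and output format are illustrative; the open-file-descriptor count is Linux/Unix specific):

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class ResourceSnapshot {
    public static void main(String[] args) {
        // Per-pool used/peak values (Eden, Old Gen, Metaspace, Code Cache, ...)
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage();
            if (usage == null) continue;
            System.out.printf("pool %-25s used=%dKB peak=%dKB%n",
                    pool.getName(), usage.getUsed() >> 10, pool.getPeakUsage().getUsed() >> 10);
        }
        // Direct and mapped buffer pools (NIO direct-memory leaks show up here)
        for (BufferPoolMXBean buf : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("buffer %-8s count=%d used=%dKB%n",
                    buf.getName(), buf.getCount(), buf.getMemoryUsed() >> 10);
        }
        // Thread count (each thread costs a native stack outside the heap)
        System.out.println("threads = " + ManagementFactory.getThreadMXBean().getThreadCount());
        // Open file descriptors (requires the com.sun.management cast, Unix only)
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            System.out.println("open fds = " + ((UnixOperatingSystemMXBean) os).getOpenFileDescriptorCount());
        }
    }
}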
In your case it looks like you really do have a native memory leak, which is quite nasty and hard to find.
You may try to profile the memory. Take a look at the GC roots and find out which of them are JNI global references. That may help you figure out which classes are not being collected. For example, this is a common problem in AWT, which may require explicit component disposal.
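For illustration, the AWT case usually boils down to disposing windows explicitly instead of just dropping the last reference (minimal sketch):

import java.awt.Frame;

public class DialogCleanup {
    public static void main(String[] args) {
        Frame frame = new Frame("temporary frame");
        frame.setSize(200, 100);
        frame.setVisible(true);
        // ... use the frame ...
        // Dropping the reference is not enough: the native peer (and the
        // references behind it) is only released once dispose() is called.
        frame.dispose();
    }
}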
To inspect the JVM's internal memory usage (which does not belong to the heap/off-heap memory pools), -XX:NativeMemoryTracking is very handy (note that it has to be enabled at JVM startup, e.g. with -XX:NativeMemoryTracking=summary). It allows you to track thread stack sizes, GC/compiler overheads and much more. The greatest thing about it is that you can create a baseline at any point in time and then track memory diffs against that baseline:
# jcmd <pid> VM.native_memory baseline
# jcmd <pid> VM.native_memory summary.diff scale=MB
Total: reserved=664624KB -20610KB, committed=254344KB -20610KB
...
You can also use JMX: the com.sun.management:type=DiagnosticCommand MBean exposes a vmNativeMemory operation that generates the same reports.
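For example, something along these lines should work from inside the process (a sketch, assuming the DiagnosticCommand MBean exposes VM.native_memory as a vmNativeMemory operation taking a String[] of arguments, which is how HotSpot registers its diagnostic commands):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class NmtOverJmx {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.sun.management:type=DiagnosticCommand");
        // Equivalent of "jcmd <pid> VM.native_memory summary.diff scale=MB";
        // the JVM must have been started with -XX:NativeMemoryTracking=summary
        String report = (String) server.invoke(
                name,
                "vmNativeMemory",
                new Object[] { new String[] { "summary.diff", "scale=MB" } },
                new String[] { String[].class.getName() });
        System.out.println(report);
    }
}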
And you can go deeper and inspect pmap -x <pid> and/or the procfs content.
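If you want the OS-level number next to the MXBean numbers, one simple Linux-only option is to read the resident set size from procfs inside the process (sketch; assumes /proc is available):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RssReader {
    // Returns the resident set size of this process in kB, or -1 if unavailable.
    static long residentSetSizeKb() throws IOException {
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith("VmRSS:")) {
                // Line looks like: "VmRSS:      123456 kB"
                String[] parts = line.trim().split("\\s+");
                return Long.parseLong(parts[1]);
            }
        }
        return -1;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("VmRSS = " + residentSetSizeKb() + " kB");
    }
}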