Java process's memory grows indefinitely, but MemoryMXBean reports stable heap and non-heap size

I am working with a team developing a Java GUI application running on a 1GB Linux target system.

We have a problem where the memory used by our Java process grows indefinitely, until Linux (the OOM killer) finally kills the process.

Our heap memory is healthy and stable (we have profiled it extensively). We also used MemoryMXBean to monitor the application's non-heap memory usage, since we believed the problem might lie there. However, the reported heap size plus the reported non-heap size stays stable.
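
For reference, a minimal sketch of the kind of MemoryMXBean poll we use (the class name is only for illustration); the committed sizes are the numbers reported below:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryPoll {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        // committed = memory currently guaranteed to the JVM for this pool
        System.out.printf("heap committed=%d MB, non-heap committed=%d MB%n",
                heap.getCommitted() / (1024 * 1024),
                nonHeap.getCommitted() / (1024 * 1024));
    }
}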

Here is an example of how the numbers might look when running the application on our target system with 1 GB RAM. Heap and non-heap are as reported by MemoryMXBean; the total memory used by the Java process is the resident memory reported by Linux's top command:

At startup:

  • 200 MB heap committed
  • 40 MB non-heap committed
  • 320 MB used by java process

After 1 day:

  • 200 MB heap committed
  • 40 MB non-heap committed
  • 360 MB used by java process

After 2 days:

  • 200 MB heap committed
  • 40 MB non-heap committed
  • 400 MB used by java process

The numbers above are just a "cleaner" representation of how our system behaves, but they are close to reality. As you can see, the trend is clear. After a couple of weeks of running the application, the Linux system starts having problems because it is running out of memory, and everything slows down; after a few more hours the Java process is killed.

After months of profiling and trying to make sense of this, we are still at a loss. I find it hard to locate information about this problem, as most discussions end up explaining the heap or the other non-heap memory pools (Metaspace etc.).

My questions are as follows:

  1. If you break it down, what does the memory used by a Java process include, in addition to the heap and non-heap memory pools?

  2. What other potential sources of memory leaks are there (native code? JVM overhead?), and which are, in general, the most likely culprits?

  3. How can one monitor/profile this memory? Everything outside the heap and the non-heap pools is currently somewhat of a black box for us.

Any help would be greatly appreciated.

asked Aug 24 '16 by Serenic




1 Answer

I'll try to partially answer your questions.

The basic strategy I try to stick to in such situations is to monitor the max/used/peak values for each available memory pool, open files, sockets, buffer pools, number of threads, etc. There might be a leak of socket connections/open files/threads that you are missing.
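
For illustration, a minimal sketch of such a poll using the standard platform MXBeans (the class name is hypothetical; the open-file-descriptor count needs the Unix-specific com.sun.management bean):

import java.lang.management.*;
import com.sun.management.UnixOperatingSystemMXBean;

public class PoolWatch {
    public static void main(String[] args) {
        // Max/used/peak for every memory pool (heap and non-heap).
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            System.out.printf("%s: used=%d committed=%d max=%d peak=%d%n",
                    pool.getName(), u.getUsed(), u.getCommitted(), u.getMax(),
                    pool.getPeakUsage().getUsed());
        }
        // Direct and mapped NIO buffer pools: native memory outside the heap.
        for (BufferPoolMXBean b : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("buffer pool %s: used=%d capacity=%d%n",
                    b.getName(), b.getMemoryUsed(), b.getTotalCapacity());
        }
        // Every live thread costs a native stack outside the heap.
        System.out.println("threads: "
                + ManagementFactory.getThreadMXBean().getThreadCount());
        // Open file descriptors (Linux/Unix only).
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            System.out.println("open fds: "
                    + ((UnixOperatingSystemMXBean) os).getOpenFileDescriptorCount());
        }
    }
}

Run this periodically (or log it from a scheduled task) and any pool or resource count that grows without bound will stand out.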

In your case it really looks like you have a native memory leak, which is quite nasty and hard to find.

You may also try to profile the memory. Take a look at the GC roots and find out which of them are JNI global references. That may help you discover which classes are not being collected. For example, this is a common problem in AWT, which may require explicit component disposal.
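
For instance, a minimal AWT sketch (not from the original answer) showing why hiding a window is not enough:

import java.awt.Frame;

public class DisposeDemo {
    public static void main(String[] args) {
        Frame frame = new Frame("demo");
        frame.setSize(300, 200);
        frame.setVisible(true);
        // Hiding the window leaves its native peer alive, reachable
        // through a JNI global reference:
        frame.setVisible(false);
        // dispose() releases the native screen resources:
        frame.dispose();
    }
}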

To inspect the JVM's internal memory usage (which does not belong to the heap/off-heap pools), -XX:NativeMemoryTracking is very handy; note that it must be enabled when the JVM starts, e.g. with -XX:NativeMemoryTracking=summary. It allows you to track thread stack sizes, GC/compiler overhead and much more. The greatest thing about it is that you can create a baseline at any point in time and then track memory diffs against that baseline:

# jcmd <pid> VM.native_memory baseline
# jcmd <pid> VM.native_memory summary.diff scale=MB

Total:  reserved=664624KB  -20610KB, committed=254344KB -20610KB
...

You can also use the JMX com.sun.management:type=DiagnosticCommand MBean's vmNativeMemory operation to generate these reports programmatically.
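
A minimal sketch of that call, assuming the JVM was started with NMT enabled (the operation takes jcmd-style arguments as a String[] and returns the report as a String; the class name is hypothetical):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class NmtReport {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.sun.management:type=DiagnosticCommand");
        // Equivalent to: jcmd <pid> VM.native_memory summary
        String report = (String) server.invoke(name, "vmNativeMemory",
                new Object[]{ new String[]{ "summary" } },
                new String[]{ String[].class.getName() });
        System.out.println(report);
    }
}

The same invoke works over a remote JMX connection, which is handy when you want to collect these reports from a long-running target box.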

And you can go deeper and inspect the output of pmap -x <pid> and/or the process's procfs entries.
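
For example, a Linux-only sketch that reads the process's own procfs status and prints the same resident-set figure that top reports:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RssPoll {
    public static void main(String[] args) throws IOException {
        // VmRSS is the resident set size top shows; VmSize is virtual size.
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith("VmRSS") || line.startsWith("VmSize")) {
                System.out.println(line);
            }
        }
    }
}

Logging this alongside the MXBean numbers lets you see exactly how much of the process's growth falls outside what the JVM itself accounts for.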

answered Oct 12 '22 by vsminkov