We have the problem that the non-heap memory of our JEE (Java 8) webapp keeps growing, so we have to restart it every third day (as you can see in the screenshot of non-heap and heap memory).
I have already tried to find out what fills up that non-heap memory, but I couldn't find any tool to create a non-heap dump. Do you have any idea how I could investigate this to find out which elements keep growing?
java -version:
java version "1.8.0_102"
Java(TM) SE Runtime Environment (build 1.8.0_102-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.102-b14, mixed mode)
Tomcat version:
Apache Tomcat Version 7.0.59
The Java Virtual Machine has memory other than the heap, referred to as non-heap memory. It is created at JVM startup and holds the method area: per-class structures such as the runtime constant pool, field and method data, and the code for methods and constructors, as well as interned Strings. It also covers loaded classes and other metadata, JVM internal structures, and loaded profiler agent code and data.
To check the overall memory usage of the app, jvisualvm is one of the memory analysis tools for Java used to analyze the runtime behavior of a Java application. It traces a running Java program, checking its memory and CPU consumption, and it can also create a heap dump to analyze the objects on the heap.
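If you prefer to trigger the dump from inside the application, here is a minimal, HotSpot-specific sketch of my own using the HotSpotDiagnosticMXBean; the class name and the output path are just examples, and the result is equivalent to jvisualvm's "Heap Dump" button:

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    // HotSpot-specific: write a heap dump to a file, equivalent to jvisualvm's
    // "Heap Dump" button. The output path below is only an example.
    public class HeapDumper {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean diagnostics =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            diagnostics.dumpHeap("/tmp/app-heap.hprof", true); // true = live objects only
        }
    }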
Non-heap memory usage, as provided by MemoryPoolMXBean, counts the following memory pools: Metaspace, Compressed Class Space, and Code Cache.
In other words, the standard non-heap memory statistics include the space occupied by compiled methods and loaded classes. Most likely, the increasing non-heap memory usage indicates a class loader leak.
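As a quick way to see which of these pools is growing, here is a minimal sketch (my own illustration, using only the standard java.lang.management API) that prints the current usage of each non-heap pool:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryType;

    // Print the current usage of every non-heap memory pool (e.g. Metaspace,
    // Compressed Class Space, Code Cache) so you can see which one is growing.
    public class NonHeapPools {
        public static void main(String[] args) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getType() == MemoryType.NON_HEAP) {
                    System.out.printf("%-25s used=%,d bytes committed=%,d bytes%n",
                            pool.getName(),
                            pool.getUsage().getUsed(),
                            pool.getUsage().getCommitted());
                }
            }
        }
    }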
Use
jmap -clstats PID
to dump class loader statistics, and
jcmd PID GC.class_stats
to print detailed information about the memory usage of each loaded class. The latter requires -XX:+UnlockDiagnosticVMOptions.
As @apangin points out, it looks like you are using more Metaspace over time. This usually means you are loading more classes. I would record which classes are being loaded and which methods are being compiled, and try to limit how much of this is done in production on a continuous basis. It is possible you have a library which generates code continuously but doesn't clean it up; looking at which classes are being created could give you a hint as to which one.
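One simple way to record this, as a sketch of my own using the standard ClassLoadingMXBean (you can also start the JVM with -verbose:class to log each class as it is loaded and unloaded):

    import java.lang.management.ClassLoadingMXBean;
    import java.lang.management.ManagementFactory;

    // Periodically log class-loading counts; a steadily rising loaded-class count
    // with few unloads is a strong hint of a class loader leak.
    public class ClassLoadMonitor {
        public static void main(String[] args) throws InterruptedException {
            ClassLoadingMXBean classLoading = ManagementFactory.getClassLoadingMXBean();
            while (true) {
                System.out.printf("loaded=%d totalLoaded=%d unloaded=%d%n",
                        classLoading.getLoadedClassCount(),
                        classLoading.getTotalLoadedClassCount(),
                        classLoading.getUnloadedClassCount());
                Thread.sleep(60_000); // sample once a minute
            }
        }
    }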
For native non-heap memory:
You can look at the memory mapping on Linux with /proc/{pid}/maps
This will let you know how much virtual memory is being used.
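If you want to check that from inside the JVM, here is a rough, Linux-only sketch of my own that sums the sizes of all regions listed in /proc/self/maps (the class name is just an example):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Linux-only: sum the sizes of all regions in /proc/self/maps to get the
    // total mapped virtual memory of the current JVM process.
    public class VirtualMemorySize {
        public static void main(String[] args) throws IOException {
            long totalBytes = 0;
            for (String line : Files.readAllLines(Paths.get("/proc/self/maps"))) {
                // Each line starts with an address range, e.g. "7f3a1c000000-7f3a1c021000 rw-p ..."
                String[] range = line.split("\\s+")[0].split("-");
                totalBytes += Long.parseUnsignedLong(range[1], 16)
                            - Long.parseUnsignedLong(range[0], 16);
            }
            System.out.printf("Total mapped virtual memory: %,d MB%n", totalBytes / (1024 * 1024));
        }
    }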
You need to determine what this growth is due to, for example an increase in direct (off-heap) memory, memory-mapped files, or native memory allocated by a library.
From looking at your graphs, you could reduce your heap, increase your maximum direct memory, and extend the restart interval to a week or more, but a better solution would be to fix the underlying cause.
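To check the direct-memory part specifically, here is a small sketch of my own using the standard BufferPoolMXBean, which reports the "direct" (off-heap buffers) and "mapped" (memory-mapped files) pools:

    import java.lang.management.BufferPoolMXBean;
    import java.lang.management.ManagementFactory;

    // Print the "direct" (off-heap buffers) and "mapped" (memory-mapped files)
    // buffer pools to see whether either of them is the part that keeps growing.
    public class DirectMemoryUsage {
        public static void main(String[] args) {
            for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
                System.out.printf("%-8s count=%d used=%,d bytes capacity=%,d bytes%n",
                        pool.getName(), pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
            }
        }
    }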