I have a Java application running on Java 8 inside a Docker container. The process starts a Jetty 9 server and deploys a web application. The following JVM options are passed: -Xms768m -Xmx768m.
Recently I noticed that the process consumes a lot of memory:
$ ps aux 1
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
app 1 0.1 48.9 5268992 2989492 ? Ssl Sep23 4:47 java -server ...
$ pmap -x 1
Address Kbytes RSS Dirty Mode Mapping
...
total kB 5280504 2994384 2980776
$ jcmd 1 VM.native_memory summary
1:
Native Memory Tracking:
Total: reserved=1378791KB, committed=1049931KB
- Java Heap (reserved=786432KB, committed=786432KB)
(mmap: reserved=786432KB, committed=786432KB)
- Class (reserved=220113KB, committed=101073KB)
(classes #17246)
(malloc=7121KB #25927)
(mmap: reserved=212992KB, committed=93952KB)
- Thread (reserved=47684KB, committed=47684KB)
(thread #47)
(stack: reserved=47288KB, committed=47288KB)
(malloc=150KB #236)
(arena=246KB #92)
- Code (reserved=257980KB, committed=48160KB)
(malloc=8380KB #11150)
(mmap: reserved=249600KB, committed=39780KB)
- GC (reserved=34513KB, committed=34513KB)
(malloc=5777KB #280)
(mmap: reserved=28736KB, committed=28736KB)
- Compiler (reserved=276KB, committed=276KB)
(malloc=146KB #398)
(arena=131KB #3)
- Internal (reserved=8247KB, committed=8247KB)
(malloc=8215KB #20172)
(mmap: reserved=32KB, committed=32KB)
- Symbol (reserved=19338KB, committed=19338KB)
(malloc=16805KB #184025)
(arena=2533KB #1)
- Native Memory Tracking (reserved=4019KB, committed=4019KB)
(malloc=186KB #2933)
(tracking overhead=3833KB)
- Arena Chunk (reserved=187KB, committed=187KB)
(malloc=187KB)
As you can see, there is a huge difference between the RSS (2.8 GB) and what is actually shown by the VM native memory statistics (1.0 GB committed, 1.3 GB reserved): roughly 2,994,384 KB − 1,049,931 KB ≈ 1.9 GB of resident memory is not accounted for by NMT at all.
Why is there such a huge difference? I understand that RSS also includes memory mapped for shared libraries, but after analysing the pmap verbose output I realized that it is not a shared-library issue; the memory is consumed by mappings that pmap labels as [ anon ]. Why does the JVM allocate so many anonymous memory blocks?
I searched and found the following topic: Why does a JVM report more committed memory than the linux process resident set size? However, the case described there is different, because RSS there is smaller than what the JVM statistics report. I have the opposite situation and can't figure out the reason.
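One cheap cross-check is to ask the JVM, from inside the container, how much heap, non-heap and direct/mapped buffer memory it accounts for itself and compare that with RSS. This is only a sketch (the class name is made up), using the standard java.lang.management beans available on Java 8:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.util.List;

public class JvmMemoryReport {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();
        System.out.printf("Heap     used=%dMB committed=%dMB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20);
        System.out.printf("Non-heap used=%dMB committed=%dMB%n",
                nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);

        // Direct and mapped ByteBuffers live outside the heap and show up as anonymous mappings.
        List<BufferPoolMXBean> pools = ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            System.out.printf("Buffer pool %-8s count=%d used=%dMB%n",
                    pool.getName(), pool.getCount(), pool.getMemoryUsed() >> 20);
        }
    }
}

In my case these numbers stay far below the observed RSS, which is what points at native allocations made outside the JVM's own bookkeeping.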
In 99% of cases this is completely normal behaviour for the JVM. What you have specified via the -Xmx switch limits the memory consumed by your application heap, but besides the heap, the JVM itself also needs some elbow room: class metadata, thread stacks, the JIT code cache, GC bookkeeping and so on (exactly the categories listed in the NMT summary above).
One way to get the sample output above is to run jcmd <pid> VM.native_memory summary. To get a more detailed view of native memory usage, start the JVM with the command-line option -XX:NativeMemoryTracking=detail. This records the call sites inside the JVM that allocate the most memory; note that it does not see native memory allocated outside the JVM itself, for example by JNI libraries.
In computing, resident set size (RSS) is the portion of memory occupied by a process that is held in main memory (RAM). The rest of the occupied memory exists in the swap space or file system, either because some parts of the occupied memory were paged out, or because some parts of the executable were never loaded.
Java is also a very high-level, object-oriented language, which means that while the application code itself is much easier to maintain, every object that is instantiated carries additional overhead (object headers, alignment, references) on top of the raw data it stores.
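As a rough illustration of that overhead (the numbers assume a 64-bit HotSpot JVM with compressed oops; this is only a sketch, not code from the question), boxing one million ints roughly quintuples the footprint:

public class BoxingOverhead {
    public static void main(String[] args) {
        // ~4 MB of payload: each int is 4 bytes, stored inline in the array.
        int[] primitives = new int[1_000_000];

        // ~4 MB of references (4 bytes each with compressed oops) ...
        Integer[] boxed = new Integer[1_000_000];
        for (int i = 0; i < boxed.length; i++) {
            // ... plus ~16 MB of Integer objects: 12-byte header + 4-byte value,
            // padded to 16 bytes each (values above 127 are not taken from the Integer cache).
            boxed[i] = i + 128;
        }
        System.out.println(primitives.length + " primitives vs " + boxed.length + " boxed values");
    }
}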
After a deep analysis following this article: https://gdstechnology.blog.gov.uk/2015/12/11/using-jemalloc-to-get-to-the-bottom-of-a-memory-leak/ we found out that the problem is related to native memory allocated by java.util.zip.Inflater.
We still need to find out what calls java.util.zip.Inflater.inflateBytes and look for possible solutions.
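For what it's worth, a very common way Inflater ends up holding native memory is when it (or a stream wrapping it, such as GZIPInputStream) is never end()'d or closed, so the underlying zlib buffers are only freed at finalization. A minimal sketch of the safe pattern (the helper method is illustrative, not taken from our code):

import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

public class InflaterExample {
    // Each Inflater owns a native zlib stream; that memory lives outside the Java heap,
    // is not reported by NMT, and shows up in pmap as anonymous mappings.
    static byte[] inflate(byte[] compressed, int uncompressedLength) throws DataFormatException {
        Inflater inflater = new Inflater();
        try {
            inflater.setInput(compressed);
            byte[] result = new byte[uncompressedLength];
            int written = 0;
            while (!inflater.finished()) {
                int n = inflater.inflate(result, written, uncompressedLength - written);
                if (n == 0) {
                    break; // needs more input or a preset dictionary; good enough for a sketch
                }
                written += n;
            }
            return result;
        } finally {
            inflater.end(); // release the native buffers immediately instead of waiting for GC
        }
    }
}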
I was facing a similar issue with one of our Apache Spark jobs, where we submitted our application as a fat jar. After analyzing thread dumps we figured out that Hibernate was the culprit: we loaded the Hibernate classes on startup of the application, and that path used java.util.zip.Inflater.inflateBytes to read the Hibernate class files. This overshot our native resident memory usage by almost 1.5 GB. Here is the bug raised against Hibernate for this issue:
https://hibernate.atlassian.net/browse/HHH-10938?attachmentOrder=desc. The patch suggested in the comments worked for us. Hope this helps.
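For reference, classpath/entity scanning over a fat jar goes through this kind of code path: every compressed entry is inflated through native zlib buffers, which is why closing the per-entry streams (and the jar itself) promptly matters. A generic sketch, assuming the scan goes through java.util.jar.JarFile; this is not the actual Hibernate code:

import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarScanSketch {
    // Every compressed entry read from a jar is decompressed through a native zlib Inflater.
    // try-with-resources ensures the per-entry streams and the JarFile (and the inflaters
    // they borrow) are released promptly instead of lingering as anonymous native memory.
    static void scan(String fatJarPath) throws IOException {
        try (JarFile jar = new JarFile(fatJarPath)) {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                if (!entry.getName().endsWith(".class")) {
                    continue;
                }
                try (InputStream in = jar.getInputStream(entry)) {
                    byte[] buf = new byte[8192];
                    while (in.read(buf) != -1) {
                        // inspect the class file bytes here
                    }
                }
            }
        }
    }
}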