I am investigating out-of-memory issues with a Java application running in a Docker container orchestrated by Mesos Marathon.
Initial snapshot:
2MEMUSER +--JIT: 318,789,520 bytes / 778 allocations
2MEMUSER | |
3MEMUSER | +--JIT Code Cache: 268,435,456 bytes / 1 allocation
2MEMUSER | |
3MEMUSER | +--JIT Data Cache: 16,777,728 bytes / 8 allocations
2MEMUSER | |
3MEMUSER | +--Other: 33,576,336 bytes / 769 allocations
After 1 hour:
2MEMUSER +--JIT: 525,843,728 bytes / 8046 allocations
2MEMUSER | |
3MEMUSER | +--JIT Code Cache: 268,435,456 bytes / 1 allocation
2MEMUSER | |
3MEMUSER | +--JIT Data Cache: 62,916,480 bytes / 30 allocations
2MEMUSER | |
3MEMUSER | +--Other: 194,491,792 bytes / 8015 allocations
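The snapshots above are the MEMUSER tree from the NATIVEMEMINFO section of an IBM javacore. A small sketch of extracting just the JIT subtree so two snapshots can be diffed (the file name pattern and layout are assumptions based on typical IBM javacores; in practice the file comes from the JVM itself, not the `cat` used here for illustration):

```shell
# In production the JVM writes the javacore itself:
#   kill -3 "$JAVA_PID"   # IBM JRE emits javacore.<date>.<time>.<pid>.txt
# For illustration, fake a minimal NATIVEMEMINFO fragment instead:
cat > javacore.sample.txt <<'EOF'
2MEMUSER       +--JIT: 525,843,728 bytes / 8046 allocations
3MEMUSER       |  +--JIT Code Cache: 268,435,456 bytes / 1 allocation
3MEMUSER       |  +--JIT Data Cache: 62,916,480 bytes / 30 allocations
3MEMUSER       |  +--Other: 194,491,792 bytes / 8015 allocations
EOF

# Pull out just the JIT subtree; diffing two such extracts taken an hour
# apart shows which sub-allocator is growing.
grep 'MEMUSER' javacore.sample.txt | grep -E 'JIT|Other'
```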
I wanted to know if a core dump with the Eclipse Memory Analyzer Tool (MAT) might shed light on what is in this "Other" space.
We have tried to limit JIT memory usage with the options suggested in this discussion:

-Xjit:disableCodeCacheConsolidation
-Xcodecachetotal128m

but can't seem to get the arguments to work.
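One possible reason JVM arguments "don't work" under Marathon/Docker is shell quoting: if the whole option string reaches the JVM as a single argv entry, it is seen as one malformed option. A hedged sketch (the `JAVA_OPTS` variable name and the option values are taken from the question, not from a verified working setup):

```shell
JAVA_OPTS="-Xcodecachetotal128m -Xjit:disableCodeCacheConsolidation"

# Wrong: "$JAVA_OPTS" expands to ONE argument containing a space.
#   java "$JAVA_OPTS" -jar app.jar
# Right: unquoted expansion splits it into separate arguments.
#   java $JAVA_OPTS -jar app.jar

# Demonstrate the split without needing an IBM JRE on hand:
set -- $JAVA_OPTS
echo "$# arguments: $1 | $2"
# prints: 2 arguments: -Xcodecachetotal128m | -Xjit:disableCodeCacheConsolidation
```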
We are using IBM JRE 1.8.0 Linux amd64-64 (build 8.0.5.17 - pxa6480sr5fp17-20180627_01(SR5 FP17))
Can anyone share tools or experience for troubleshooting JIT native memory consumption?
You could have a memory leak in "metaspace": off-heap memory that the JVM uses to hold (for example) JIT-compiled classes and other class metadata.
A couple of common causes of metaspace leaks are:

- classloader leaks, e.g. from hot-loading of classes;
- Proxy classes / objects or similar.

There are JVM options that can limit the size of metaspace; e.g. -XX:MaxMetaspaceSize=256m.
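A sketch of how that cap and some basic diagnostics would be wired up on HotSpot (the `app.jar` name and `<pid>` are placeholders; this applies to Oracle/OpenJDK, not the IBM JRE):

```shell
# HotSpot-side sketch: cap metaspace so a leak fails fast with
# "OutOfMemoryError: Metaspace" instead of growing without bound,
# and log class loading to see which classes keep being defined.
#   java -XX:MaxMetaspaceSize=256m -verbose:class -jar app.jar
#
# On newer HotSpot JDKs, jcmd can break metaspace usage down per classloader:
#   jcmd <pid> VM.metaspace
```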
Here's a Q&A on diagnosing metaspace leaks:
I just noticed that you are using an IBM JRE rather than an Oracle / OpenJDK one, so the above is not directly applicable. The root problem could well be the same, though: leakage via classloaders / hot-loading, or via Proxy classes.