We run a Java application in a Docker container with `-Xmx256m -XX:NativeMemoryTracking=summary` and a container memory limit of `768m`. Over time the container's memory usage keeps growing until it reaches that `768m` limit, resulting in Docker killing and restarting the container. On the JVM side we don't notice anything special:
```
docker stats: 492.8MiB / 768MiB   64.17%

[ec2-user@ip-10-180-28-222 ~]$ docker exec 34d7 jcmd 1 VM.native_memory summary
1:

Native Memory Tracking:

Total: reserved=1731355KB, committed=472227KB
- Java Heap (reserved=262144KB, committed=262144KB)
            (mmap: reserved=262144KB, committed=262144KB)
- Class (reserved=1131805KB, committed=92829KB)
        (classes #16224)
        (malloc=7453KB #20996)
        (mmap: reserved=1124352KB, committed=85376KB)
- Thread (reserved=29932KB, committed=29932KB)
         (thread #30)
         (stack: reserved=29772KB, committed=29772KB)
         (malloc=94KB #151)
         (arena=66KB #55)
- Code (reserved=255659KB, committed=35507KB)
       (malloc=6059KB #9814)
       (mmap: reserved=249600KB, committed=29448KB)
- GC (reserved=15369KB, committed=15369KB)
     (malloc=5785KB #547)
     (mmap: reserved=9584KB, committed=9584KB)
- Compiler (reserved=190KB, committed=190KB)
           (malloc=59KB #858)
           (arena=131KB #6)
- Internal (reserved=7849KB, committed=7849KB)
           (malloc=7817KB #18468)
           (mmap: reserved=32KB, committed=32KB)
- Symbol (reserved=20018KB, committed=20018KB)
         (malloc=17325KB #175818)
         (arena=2693KB #1)
- Native Memory Tracking (reserved=3558KB, committed=3558KB)
                         (malloc=10KB #120)
                         (tracking overhead=3548KB)
- Arena Chunk (reserved=4830KB, committed=4830KB)
              (malloc=4830KB)
```
After running for about 20 hours:
```
docker stats: 649.6MiB / 768MiB   84.59%

[ec2-user@ip-10-180-28-222 ~]$ docker exec 34d7 jcmd 1 VM.native_memory summary
1:

Native Memory Tracking:

Total: reserved=1741020KB, committed=510928KB
- Java Heap (reserved=262144KB, committed=262144KB)
            (mmap: reserved=262144KB, committed=262144KB)
- Class (reserved=1138319KB, committed=100495KB)
        (classes #16390)
        (malloc=7823KB #30851)
        (mmap: reserved=1130496KB, committed=92672KB)
- Thread (reserved=30996KB, committed=30996KB)
         (thread #31)
         (stack: reserved=30800KB, committed=30800KB)
         (malloc=97KB #156)
         (arena=99KB #57)
- Code (reserved=261330KB, committed=69062KB)
       (malloc=11730KB #16047)
       (mmap: reserved=249600KB, committed=57332KB)
- GC (reserved=15363KB, committed=15363KB)
     (malloc=5779KB #334)
     (mmap: reserved=9584KB, committed=9584KB)
- Compiler (reserved=223KB, committed=223KB)
           (malloc=92KB #1246)
           (arena=131KB #6)
- Internal (reserved=8358KB, committed=8358KB)
           (malloc=8326KB #18561)
           (mmap: reserved=32KB, committed=32KB)
- Symbol (reserved=20253KB, committed=20253KB)
         (malloc=17527KB #177997)
         (arena=2725KB #1)
- Native Memory Tracking (reserved=3846KB, committed=3846KB)
                         (malloc=10KB #127)
                         (tracking overhead=3836KB)
- Arena Chunk (reserved=188KB, committed=188KB)
              (malloc=188KB)
```
What we know so far after observing it for 20 hours:
- The memory reported by `docker stats` grew from 492.8MiB to 649.6MiB.
- The total committed memory reported by the JVM's native memory tracking grew from 472227KB to 510928KB.
- We cannot explain the difference of 649.6MiB - 510928KB (510928KB is roughly 499MiB, so a gap of about 150MiB), nor why it keeps growing.

So I don't know what else I can debug on the JVM side. I understand that Java needs more than just the heap (hence the native memory tracking), but there is still a gap of around 150m between what the JVM native memory tracker reports and what docker stats is seeing. How can I gain more insight into where that memory is going?
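For reference, the comparison above was done by hand from two `summary` snapshots; NMT can also report the delta directly. A minimal sketch, reusing the container id `34d7` from the output above:

```sh
# Record a baseline of the current native memory usage inside the container...
docker exec 34d7 jcmd 1 VM.native_memory baseline
# ...and, some hours later, print the per-category change relative to that baseline.
docker exec 34d7 jcmd 1 VM.native_memory summary.diff
```

(This still only covers memory that NMT tracks, so it does not by itself explain the gap versus `docker stats`.)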
Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory, or soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine.
Garbage collection. As you might recall, Java is a garbage-collected language. For the garbage collector to know which objects are eligible for collection, it needs to keep track of the object graphs. That internal bookkeeping is one part of the memory used outside the heap.
To limit the maximum memory usage of a container, add the `--memory` option (or its shortcut `-m`) to the `docker run` command and specify how much memory you want to dedicate to that container. The `--memory` parameter is a hard limit: Docker will kill the container if it tries to use more than that amount.
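A minimal sketch of the two kinds of limit (the image name `my-java-app` and the sizes are placeholders, not recommendations):

```sh
# Hard limit: the container is killed if it uses more than 768 MiB.
# Soft limit: under host memory pressure the kernel tries to keep it below 512 MiB.
docker run -d -m 768m --memory-reservation=512m my-java-app
```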
JVM reported memory is not everything.
JVM’s memory footprint
So, what contributes to the JVM memory footprint? Most of us who have run a Java application know how to set the maximum heap space. But there's actually a lot more contributing to the memory footprint:
- Native JRE code and libraries
- Perm gen / metaspace
- JIT-compiled code (the code cache)
- JNI allocations
- NIO direct buffers
- Thread stacks
This is a lot to keep in mind when setting memory limits for Docker containers, and setting the container memory limit to just the maximum heap space might not be sufficient…
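As a rough sketch, most of these areas can be capped explicitly with standard HotSpot flags; the sizes and the jar name below are arbitrary illustrations, not tuning advice:

```sh
# Rough mapping of the areas above to the flags that bound them:
#   heap            -> -Xmx
#   metaspace       -> -XX:MaxMetaspaceSize
#   JIT code cache  -> -XX:ReservedCodeCacheSize
#   thread stacks   -> -Xss (per thread)
#   NIO buffers     -> -XX:MaxDirectMemorySize
java -Xmx256m \
     -XX:MaxMetaspaceSize=128m \
     -XX:ReservedCodeCacheSize=64m \
     -Xss512k \
     -XX:MaxDirectMemorySize=64m \
     -XX:NativeMemoryTracking=summary \
     -jar app.jar
```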
JVM and CPUs
Let’s take a short look at how the JVM adjusts to the number of processors/cores available on the node it is running on. There are actually a number of parameters which, by default, are initialised based on the core count.
- # of JIT compiler threads
- # of garbage collection threads
- # of threads in the common fork-join pool …
So if the JVM is running on a 32-core node (and nobody overrode the defaults), it will spawn 32 garbage collection threads, 32 JIT compiler threads, … (source)
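A hedged sketch of how to rein this in: `--cpus` is the Docker-side limit, and the JVM flags below override the core-count-based defaults on a JVM recent enough to support them (the value 2 and the image name are placeholders):

```sh
# Restrict the container to 2 CPUs and make the JVM size its thread pools
# for 2 processors instead of the host's full core count.
docker run -m 768m --cpus=2 my-java-app \
  java -XX:ActiveProcessorCount=2 \
       -XX:ParallelGCThreads=2 \
       -XX:CICompilerCount=2 \
       -Djava.util.concurrent.ForkJoinPool.common.parallelism=2 \
       -jar app.jar
```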
To prevent this problem, you should use `-XX:+UseContainerSupport` (available since 8u191 and enabled by default since Java 10) and probably `-XX:MaxRAMPercentage=90.0` or less, depending on your observed total memory usage. For more see this and that.
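A sketch of what that looks like in practice (the image name is a placeholder; 90% follows the suggestion above, and a lower value leaves more room for the non-heap areas listed earlier):

```sh
# The JVM reads the cgroup limit (768 MiB here) and sizes its heap as a
# percentage of it, rather than as a fraction of the host's physical RAM.
docker run -m 768m my-java-app \
  java -XX:+UseContainerSupport \
       -XX:MaxRAMPercentage=90.0 \
       -jar app.jar
```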
I strongly recommend the JavaZone talk "Nobody puts Java in a container" by Ken Sipe.