I am running a number of jobs on a computing cluster, and they are killed when they exceed a requested resource limit - one of these limits is virtual memory size.
In my java startup command I use -Xmx8000m
to request a maximum heap size of 8GB. I have not yet seen my program's real memory usage go above 4GB, but wanted to be on the safe side.
However, when I use the top command I am seeing a virtual memory size for my java process of 12GB - which is right at the limit of the requested virtual memory space. I can't increase my requested VM size as the jobs are already submitted and the more I ask for the longer they take to be scheduled.
Does Java consistently request more VM space than is specified? Is this a constant amount, a constant percentage, or random? Can the heap space grow above a) the requested VM size (8GB) or b) the allocated VM size (12GB)?
Edit: Using jre-1.7.0-openjdk on Linux
What you have specified via the -Xmx switch limits the memory consumed by your application heap. But besides the memory consumed by your application, the JVM itself also needs some elbow room.
java - JVM exceeds maximum memory defined with -Xmx - Stack Overflow
This article gives a good analysis of the problem: Why does my Java process consume more memory than Xmx And its author offers this approximate formula:
Max memory = [-Xmx] + [-XX:MaxPermSize] + number_of_threads * [-Xss]
The need for this extra room comes from several sources:
- Garbage collection
- JIT optimization
- Off-heap allocations
- JNI code
- Metaspace
But be careful, as the exact overhead may depend on both the platform and the JVM vendor/version.
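To make the formula concrete, here is a sketch with hypothetical values. Only the -Xmx8000m figure comes from the question above; the permgen size, thread count, and stack size are assumptions for illustration:

```shell
# Worked example of the approximate formula above (hypothetical values):
XMX_MB=8000      # -Xmx8000m, from the question
MAXPERM_MB=256   # assumed -XX:MaxPermSize=256m (Java 7 still has PermGen)
THREADS=100      # assumed thread count
XSS_MB=1         # typical 64-bit default -Xss of 1MB
echo "Estimated max memory: $((XMX_MB + MAXPERM_MB + THREADS * XSS_MB)) MB"
# -> Estimated max memory: 8356 MB
```

Note this still comes in well under the 12GB virtual size reported by top, which is a hint that something beyond the JVM's own overhead is at work.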
This could be due to the change in malloc behavior in glibc 2.10+, where malloc now creates per-thread memory pools (arenas). The arena size on 64-bit is 64MB. After using 8 arenas on 64-bit, malloc sets the number of arenas to be number_of_cpus * 8. So if you are using a machine with many processor cores, the virtual size is set to a large amount very quickly, even though the actual memory used (resident size) is much smaller.
Since you are seeing top show 12GB virtual size, you are probably using a 64-bit machine with 24 cores or HW threads, giving 24 * 8 * 64MB = 12GB. The amount of virtual memory allocated varies with number of cores, and the amount will change depending on the number of cores on the machine your job gets sent to run on, so this check is not meaningful.
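If the arenas are the culprit, glibc's MALLOC_ARENA_MAX environment variable can cap the arena count before the JVM starts. A minimal sketch (the cap value of 4 is an arbitrary example, and myapp.jar is a placeholder for your own jar):

```shell
# On 64-bit glibc 2.10+, arenas are 64MB each and the count defaults to
# 8 * number_of_cpus, so a 24-core machine can reserve:
echo "$((24 * 8 * 64)) MB"   # -> 12288 MB, i.e. 12GB of virtual space

# Cap the arena count before launching the JVM to shrink the reported
# virtual size (resident memory is largely unaffected):
export MALLOC_ARENA_MAX=4
# java -Xmx8000m -jar myapp.jar   # myapp.jar is a placeholder
```

This only reduces reserved virtual address space; it does not change how much memory the process actually touches.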
If you are using Hadoop or YARN and get this warning, set yarn.nodemanager.vmem-check-enabled to false in yarn-site.xml.
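For reference, the corresponding yarn-site.xml fragment would look roughly like this (a sketch, assuming an otherwise stock YARN configuration):

```xml
<!-- yarn-site.xml: disable the NodeManager's virtual-memory check -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```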
References:
- See #6 on this page: http://blog.cloudera.com/blog/2014/04/apache-hadoop-yarn-avoiding-6-time-consuming-gotchas/
- It links to a more in-depth discussion here: https://www.ibm.com/developerworks/community/blogs/kevgrig/entry/linux_glibc_2_10_rhel_6_malloc_may_show_excessive_virtual_memory_usage
Note this is already partially answered in this Stack Overflow question: Container is running beyond memory limits