On a 64-bit Windows machine with 12GB of RAM and 33GB of Virtual Memory (per Task Manager), I'm able to run Java (1.6.0_03-b05) with an impossible -Xmx setting of 3.5TB but it fails with 35TB. What's the logic behind when it works and when it fails? The error at 35TB seems to imply that it's trying to reserve space at startup. Why would it do that for -Xmx (as opposed to -Xms)?
    C:\temp>java -Xmx3500g ostest
    os.arch=amd64
    13781729280 Bytes RAM

    C:\temp>java -Xmx35000g ostest
    Error occurred during initialization of VM
    Could not reserve enough space for object heap
    Could not create the Java virtual machine.
On Solaris (4GB RAM, Java 1.5.0_16), I gave up probing for the limit after the JVM still accepted -Xmx of 1 PB. I don't understand the logic for when it errors out on the -Xmx setting.
    devsun1.mgo:/export/home/mgo> java -d64 -Xmx1000000g ostest
    os.arch=sparcv9
    4294967296 Bytes RAM
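The ostest program itself is not shown in the question; the following is a hypothetical reconstruction, assuming it simply prints os.arch and the total physical RAM reported by the Sun-specific com.sun.management.OperatingSystemMXBean:

    import java.lang.management.ManagementFactory;

    // Hypothetical reconstruction of the "ostest" program used above: it appears
    // to print the CPU architecture and the total physical RAM of the machine.
    public class OsTest {
        public static void main(String[] args) {
            System.out.println("os.arch=" + System.getProperty("os.arch"));

            // On Sun/Oracle JVMs the OS bean can be cast to the com.sun.management
            // variant, which exposes the physical memory size.
            com.sun.management.OperatingSystemMXBean os =
                    (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
            System.out.println(os.getTotalPhysicalMemorySize() + " Bytes RAM");
        }
    }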
What you specify via the -Xmx switch limits only the memory consumed by your application's heap. Besides the memory consumed by your application, the JVM itself also needs some elbow room, for several different reasons, garbage collection among them.
If your -Xmx (maximum heap) is larger than the available memory (physical plus virtual), you will get a runtime failure if and only if your JVM process actually tries to use more memory than the machine has.
For example, with -Xmx256m the heap starts at its default initial size and grows to a maximum of 256 MB. If you exceed the limit set by the -Xmx option, the JVM throws an OutOfMemoryError.
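As an illustration (my own sketch, not part of the original answer), a tiny program that keeps all of its allocations reachable will hit the -Xmx ceiling and throw java.lang.OutOfMemoryError: Java heap space:

    import java.util.ArrayList;
    import java.util.List;

    // Run with e.g.: java -Xmx256m HeapCeiling
    // Keeps 1 MB blocks reachable so the GC cannot reclaim them; once the live
    // set exceeds the -Xmx limit, the JVM throws OutOfMemoryError.
    public class HeapCeiling {
        public static void main(String[] args) {
            List<byte[]> blocks = new ArrayList<byte[]>();
            try {
                while (true) {
                    blocks.add(new byte[1024 * 1024]); // 1 MB per iteration
                }
            } catch (OutOfMemoryError e) {
                int allocated = blocks.size();
                blocks = null; // release the blocks so the report below can allocate
                System.out.println("OutOfMemoryError after roughly " + allocated + " MB");
            }
        }
    }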
Just to reinforce Pascal's answer: be very careful on Windows when specifying a high maximum memory size. I was working on a server project that required as much physical memory as possible, but once you go over physical RAM, "abysmal performance" is not a good description of what happens; "hung machine" might be better.
What happens (at least, this is my evaluation of it after days of examining logs and re-running tests) is: Windows runs out of RAM and asks all applications to free up what they can. When it asks Java, Java kicks off a GC. The GC touches all of memory, causing anything that has been swapped out to be swapped back in. This in turn causes Windows to run out of memory, so Windows again asks all applications to free up what they can... (recurse indefinitely)
This may not ACTUALLY be what is going on, but the fact that Java GC touches Very Old Memory at times makes it incompatible with paging.
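The mechanism described above can be reproduced, at least in outline, with a sketch like the following (my own illustration, not from the original answer): give the JVM an -Xmx larger than physical RAM, fill most of the heap with reachable data, and force full collections. Each collection walks the entire live set and pulls swapped-out heap pages back into RAM. Be warned that running it can make the machine unresponsive.

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of the paging/GC interaction described above. Run with an -Xmx
    // larger than physical RAM, e.g.: java -Xmx16g GcPagingDemo on a machine
    // with less RAM than that. WARNING: this will drive the machine into
    // heavy swapping.
    public class GcPagingDemo {
        public static void main(String[] args) {
            List<byte[]> liveSet = new ArrayList<byte[]>();
            // Fill half the configured heap with reachable data so it cannot
            // be collected.
            long target = Runtime.getRuntime().maxMemory() / 2;
            for (long filled = 0; filled < target; filled += 8 * 1024 * 1024) {
                liveSet.add(new byte[8 * 1024 * 1024]);
            }
            // Each explicit full GC touches the entire live set, forcing any
            // swapped-out pages back into RAM -- the behavior described above.
            for (int i = 0; i < 10; i++) {
                long start = System.currentTimeMillis();
                System.gc();
                System.out.println("Full GC " + i + " took "
                        + (System.currentTimeMillis() - start) + " ms");
            }
        }
    }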
According to this thread on Sun's Java forums (the OP there has 16GB of physical memory):
You could specify -Xmx20g, but if the total of the memory needed by all the processes on your machine ever exceeds the physical memory on your machine, you are likely to end up paging. Some applications can survive running in paged memory, but the JVM isn't one of them. Your code might run okay, but, for example, garbage collections will be abysmally slow.
UPDATE: I googled a bit further and, according to the Frequently Asked Questions About the Java HotSpot VM, more precisely the entry "How large a heap can I create using a 64-bit VM?":
How large a heap can I create using a 64-bit VM?
On 64-bit VMs, you have 64 bits of addressability to work with resulting in a maximum Java heap size limited only by the amount of physical memory and swap space your system provides.
See also Why can't I get a larger heap with the 32-bit JVM?
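One simple way to see what heap ceiling the JVM actually accepted (a sketch of my own, not from the FAQ) is to print Runtime.getRuntime().maxMemory() right after startup:

    // Prints the heap ceiling the running JVM actually granted, e.g.:
    //   java -Xmx3500g MaxHeapCheck
    public class MaxHeapCheck {
        public static void main(String[] args) {
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.println("Max heap: " + maxBytes + " bytes ("
                    + (maxBytes / (1024 * 1024)) + " MB)");
        }
    }

Running it with the same -Xmx values as above lets you compare what was requested against what the VM reports.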
I don't know why you are able to start a JVM with a heap larger than 45GB (your 12GB of RAM plus 33GB of virtual memory). This is a bit confusing...
At least with the Sun 64-bit VM 1.6.0_17 for Windows, ObjectStartArray::initialize allocates 1 byte for every 512 bytes of heap at VM startup. Starting the VM with a 35TB heap therefore makes it try to allocate roughly 70GB immediately, which fails on your system.
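A quick back-of-the-envelope check (my own arithmetic based on the 1-byte-per-512-bytes ratio above, not part of the original answer) is consistent with both observations in the question: roughly 70GB of bookkeeping for -Xmx35000g, which exceeds the 45GB of RAM plus virtual memory, but only about 7GB for -Xmx3500g, which fits:

    // Back-of-the-envelope check of the 1-byte-per-512-bytes side table
    // described above.
    public class StartArrayCost {
        public static void main(String[] args) {
            long gib = 1024L * 1024 * 1024;
            long small = 3500L * gib;   // -Xmx3500g  (works on the 45 GB machine)
            long large = 35000L * gib;  // -Xmx35000g (fails)
            System.out.printf("-Xmx3500g  needs ~%.1f GB reserved at startup%n", small / 512 / 1e9);
            System.out.printf("-Xmx35000g needs ~%.1f GB reserved at startup%n", large / 512 / 1e9);
        }
    }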
The 32-bit VM from Sun (and, I suppose, the 64-bit VM as well) does not take available physical memory into account when validating the maximum heap; it is limited only by the addressable memory (2GB on Windows and Linux, 4GB on Solaris) or by failing to allocate enough memory at startup for the management area.
If you think about it, checking the sanity of the max heap value against available physical memory does not make much sense. X GB of physical memory does not mean that X GB is available to the VM when required; it may just as well be in use by other processes, so the VM needs a way to cope with more heap being required than the OS can supply anyway. If the VM is not broken, OutOfMemoryErrors are thrown when memory cannot be allocated from the OS, just as when the max heap size has been reached.