It is not possible to increase the maximum size of Java's heap after the VM has started. What are the technical reasons for this? Do the garbage collection algorithms depend on having a fixed amount of memory to work with? Or is it for security reasons, to prevent a Java application from DOS'ing other applications on the system by consuming all available memory?
The default maximum and initial (startup) heap sizes are not fixed constants: they depend on the JVM version, the platform, and the amount of physical memory, so figures like a 256 MB maximum or a 1.5 GB startup heap only apply to particular configurations. Whatever sizes you set with -Xms and -Xmx must fit within the memory that your operating system and JVM version allow a single process. For example, on a Windows system with a 32-bit JVM, a process can address at most about 2 GB, which caps the maximum heap size well below that in practice.
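If you are not sure what your particular JVM is actually using, you can ask it directly. A minimal sketch (the class name is arbitrary) that prints the limits the running VM has settled on; launch it with and without -Xmx to see the flag take effect:

    // Prints the heap limits the running JVM is actually using.
    public class HeapLimits {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.println("max heap (bytes):       " + rt.maxMemory());   // roughly what -Xmx resolves to
            System.out.println("committed heap (bytes): " + rt.totalMemory()); // heap currently obtained from the OS
            System.out.println("free in committed:      " + rt.freeMemory());
        }
    }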
A larger heap leads to less frequent but longer and more disruptive garbage collections when using the traditional Oracle (HotSpot) collectors, so bigger is not automatically better. The exact behavior depends on the JVM implementation and the collector in use.
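A rough way to see this trade-off yourself is to churn through short-lived allocations and then read the collection counters the JVM exposes. The sketch below (class name and allocation sizes are just illustrative) can be run once with a small -Xmx and once with a large one to compare:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    // Allocates a lot of short-lived garbage, then reports how many collections
    // ran and how much total time they took. With a larger heap you should see
    // fewer collections, each doing more work (details vary by collector).
    public class GcTradeoffDemo {
        public static void main(String[] args) {
            long sink = 0;
            for (int i = 0; i < 5_000_000; i++) {
                byte[] garbage = new byte[256];
                sink += garbage.length; // keep the allocation from being optimized away
            }
            System.out.println("bytes allocated (approx): " + sink);
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                        + " collections, " + gc.getCollectionTime() + " ms total");
            }
        }
    }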
A 64-bit JVM installed on Solaris runs in 32-bit mode if you specify neither -d32 nor -d64, and in that mode it will not accept a maximum heap size of 4 GB, hence the "invalid heap size" error. You can resolve this by running the Solaris JVM with the -d64 option.
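For example (the jar name is just a placeholder), forcing the 64-bit data model lets the 4 GB heap be accepted:

    java -d64 -Xmx4g -jar yourapp.jar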
In Sun's JVM, last I knew, the entire heap must be allocated in a contiguous address space. I imagine that for large heap values, it's pretty hard to add to your address space after startup while ensuring it stays contiguous. You probably need to get it at startup, or not at all. Thus, it is fixed.
Even if it isn't all used immediately, the address space for the entire heap is reserved at startup. If it cannot reserve a large enough contiguous block of address space for the value of -Xmx that you pass it, it will fail to start. This is why it's tough to allocate heaps larger than about 1.4 GB on 32-bit Windows: it's hard to find a contiguous block of address space that big, since some DLLs like to load at particular addresses, fragmenting the address space. This isn't really an issue once you go 64-bit, since there is so much more address space.
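Because the reservation is made up front, the failure shows up at launch rather than later. On a 32-bit Windows JVM, asking for more than the largest contiguous free block typically fails with a message along these lines (the heap size and class name here are just placeholders):

    > java -Xmx1800m SomeApp
    Error occurred during initialization of VM
    Could not reserve enough space for object heap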
This is almost certainly for performance reasons. I could not find a terrific link detailing this further, but here is a pretty good quote from Peter Kessler (full link - be sure to read the comments) that I found when searching. I believe he works on the JVM at Sun.
The reason we need a contiguous memory region for the heap is that we have a bunch of side data structures that are indexed by (scaled) offsets from the start of the heap. For example, we track object reference updates with a "card mark array" that has one byte for each 512 bytes of heap. When we store a reference in the heap we have to mark the corresponding byte in the card mark array. We right shift the destination address of the store and use that to index the card mark array. Fun addressing arithmetic games you can't do in Java that you get to (have to :-) play in C++.
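To make the arithmetic in that quote concrete, here is an illustrative sketch of the same card-index computation. The real barrier lives in HotSpot's C++ code and works on raw pointers, so addresses are modeled as plain longs here, and the base address, heap size, and names are all made up:

    // Illustration of the card-marking index computation described above.
    // One byte of the card table covers 512 bytes of heap, so the card index
    // is just the store address's offset from the heap base, shifted right.
    public class CardMarkSketch {
        static final int CARD_SHIFT = 9; // 2^9 = 512 heap bytes per card byte

        public static void main(String[] args) {
            long heapStart = 0x20000000L;            // hypothetical base of the contiguous heap
            long heapSize  = 64L * 1024 * 1024;      // hypothetical 64 MB heap
            byte[] cardTable = new byte[(int) (heapSize >> CARD_SHIFT)];

            long storeAddress = heapStart + 123_456; // hypothetical address of a reference store
            int cardIndex = (int) ((storeAddress - heapStart) >> CARD_SHIFT);
            cardTable[cardIndex] = 1;                // "dirty" the card covering that address

            System.out.println("card " + cardIndex + " marked dirty");
        }
    }

This only works because the heap is one contiguous range starting at heapStart; if the heap were split into separate regions, every reference store would first have to figure out which region the address belongs to before it could index anything.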
This was in 2004 - I'm not sure what's changed since then, but I am pretty sure it still holds. If you use a tool like Process Explorer, you can see that the virtual size (add the virtual size and private size memory columns) of the Java application includes the total heap size (plus other required space, no doubt) from the point of startup, even though the memory 'used' by the process will be nowhere near that until the heap starts to fill up...
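The same distinction is visible from inside the process without any tools: Runtime.totalMemory() (the committed heap) starts small and grows as the heap fills, while Runtime.maxMemory() (the reserved ceiling) never moves after startup. A minimal sketch, with arbitrary allocation sizes:

    import java.util.ArrayList;
    import java.util.List;

    // Watches the committed heap grow toward the fixed maximum as data is retained.
    public class ReservedVsCommitted {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            List<byte[]> retained = new ArrayList<>();
            for (int i = 0; i < 10; i++) {
                retained.add(new byte[4 * 1024 * 1024]); // hold on to 4 MB per step; shrink if -Xmx is tiny
                System.out.println("committed: " + rt.totalMemory()
                        + " / reserved max: " + rt.maxMemory()); // the max never changes
            }
        }
    }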