I had a call from a customer last weekend telling me their Java program stopped responding while importing data. The data is a simple Excel workbook with four worksheets; all data is read from the columns and added to a database.
So I started to investigate and had some strange results.
First run
In this case I hit the same issue the customer was reporting: the program stopped responding partway through the import. As far as I can tell, this happens because heap usage reaches the max heap size (red circle).
Second run
So I decided to increase the initial heap size and max heap size by adding the following to my JNLP file: initial-heap-size="512m" max-heap-size="1024m". When I tested the import again it seemed to work, but I noticed that much more memory was used than in the first two cases:
Is setting initial-heap-size="512m" max-heap-size="1024m" a valid solution for this issue?

The default Java heap size is 1280 MB, which supports fewer than 75,000 pairs. Each additional 256 MB of heap supports roughly 25,000 more pairs. You can keep increasing the Java heap size until you reach the maximum heap size.
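For reference, those attributes belong on the j2se element inside the resources section of the JNLP file; initial-heap-size corresponds to -Xms and max-heap-size to -Xmx. A minimal sketch, where the codebase, jar name, and main class are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" codebase="http://example.com/app" href="import.jnlp">
  <information>
    <title>Data Import</title>
    <vendor>Example</vendor>
  </information>
  <resources>
    <!-- initial-heap-size maps to -Xms, max-heap-size to -Xmx -->
    <j2se version="1.6+" initial-heap-size="512m" max-heap-size="1024m"/>
    <jar href="import.jar"/>
  </resources>
  <application-desc main-class="com.example.Import"/>
</jnlp>
```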
You can improve performance by increasing your heap size or using a different garbage collector. In general, for long-running server applications, use the J2SE throughput collector on machines with multiple processors (-XX:+AggressiveHeap) and as large a heap as fits in your machine's free memory.
A larger heap causes less frequent but longer, more disruptive garbage collections with the traditional Oracle collection algorithm, so bigger is not always better. The exact behavior depends on the JVM implementation.
Preferably the heap should be at least twice the size of the total amount of live objects, or large enough that the JVM spends less time garbage collecting than running Java code. To let the heap grow and shrink depending on the amount of free memory in your system, set -Xms lower than -Xmx.
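To see which limits the JVM actually picked up, you can query Runtime at startup; a minimal sketch (the class name is my own):

```java
// Prints the heap limits of the running JVM. maxMemory() reflects -Xmx,
// totalMemory() the currently committed heap, freeMemory() the unused
// portion of the committed heap.
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxMb = rt.maxMemory() / (1024 * 1024);
        long totalMb = rt.totalMemory() / (1024 * 1024);
        long freeMb = rt.freeMemory() / (1024 * 1024);
        System.out.println("max=" + maxMb + "MB total=" + totalMb
                + "MB free=" + freeMb + "MB");
    }
}
```

Running this inside the import process (or logging it periodically) shows whether the JNLP heap settings were applied and how close the app gets to the ceiling.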
Well, the obvious difference is that your first runs are done with a 64-bit server VM and the latter ones with a 32-bit client VM.
Supposedly the client VM is optimized for desktop-style applications and the server VM for server workloads. I don't have a full list of the differences (or whether they really matter in actual use cases). Don't quote me on this, but I believe the client VM avoids GC more than the server VM does, because GC was at one point perceived as slow. That would explain why the client VM uses a lot more memory instead of aggressively releasing it.
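You can check which VM flavour a run is using from inside the program via standard system properties; a small sketch (sun.arch.data.model is HotSpot-specific and may be absent on other JVMs):

```java
// Reports the VM name and, where available, its bitness.
public class VmInfo {
    public static void main(String[] args) {
        String vmName = System.getProperty("java.vm.name");      // e.g. "... Server VM" or "... Client VM"
        String dataModel = System.getProperty("sun.arch.data.model"); // "32" or "64" on HotSpot, may be null elsewhere
        boolean serverVm = vmName != null && vmName.contains("Server");
        System.out.println(vmName + " (data model: " + dataModel
                + ", server VM: " + serverVm + ")");
    }
}
```

Logging this at startup would quickly confirm whether the customer's machine launched a 32-bit client VM rather than the 64-bit server VM you tested with.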
Fiddling with the GC parameters is sometimes the correct thing to do, and in this case it's pretty much the only thing you can do, unless you identify any significant memory leaks. It's not a bad idea to be familiar with what makes the GC tick in any case though.