On Mac OS X 10.5.8 I have a Java program that runs at 100% CPU for a very long time -- several days or more (it's a model checker analyzing a concurrent program, so that's more or less expected). However, its virtual memory size, as shown in OS X's Activity Monitor, becomes enormous after a day or so: right now it's 16 GB and growing. Physical memory usage is roughly stable at about 1.1 GB.
I would like to know: is the 16 GB (and growing) virtual memory size a sign of a problem that could be slowing my program down?
I start the program with "java -Xmx1024m -ea"
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-9M3326)
Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)
Thanks to everyone for their suggestions. I will try the profiling suggestions given in some of the answers and come back (it may take a while because of the multi-day run times).
In answer to some of the points below, the model checker does almost no I/O (only print statements, depending on the debug settings). In the mode I'm using it has no GUI. I am not the primary author of the model checker (though I have worked on some of its internals), but I do not believe that it makes any use of JNI.[<--- edit: this is wrong, details below] It does not do any memory mapping. Also, I am not asking Oracle/Sun's JVM to create lots of threads (see below for an explanation).
The extra virtual memory has not caused the model checker to die, but judging by the frequency of its printed output, it gradually runs more and more slowly as the virtual memory usage increases. (Perhaps that is just due to more and more garbage collection, though.) I plan to try it on a Windows machine on Monday to see if the same problem happens there.
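One way to test the "more and more garbage collection" hypothesis from inside the process, rather than guessing from the output rate, is to query the standard management beans. This is a sketch I am adding for illustration (the class name `GcStats` is my own, not part of the original program); it prints cumulative GC counts and heap vs. non-heap usage, which helps separate heap pressure from native memory growth:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class GcStats {
    public static void main(String[] args) {
        long count = 0, millis = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            count += gc.getCollectionCount();   // collections since JVM start (-1 if unsupported)
            millis += gc.getCollectionTime();   // total time spent collecting, in ms
        }
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        // Heap is capped by -Xmx; steady non-heap or native growth would point elsewhere.
        System.out.println("GC collections: " + count + " (" + millis + " ms)");
        System.out.println("Heap used (MB): " + (mem.getHeapMemoryUsage().getUsed() >> 20));
        System.out.println("Non-heap used (MB): " + (mem.getNonHeapMemoryUsage().getUsed() >> 20));
    }
}
```

If the GC time climbs sharply while heap usage stays near the 1024 MB cap, the slowdown is likely collection overhead; if both stay flat while the OS-reported virtual size grows, the growth is outside the Java heap.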
A little extra explanation: The model checker I'm running (JPF) is itself a nearly complete JVM (written entirely in Java) that runs under Oracle/Sun's JVM. Of course, as a virtual machine, JPF is highly specialized to support model checking.
It's a bit counterintuitive, but this means that even though the program I'm model checking is designed to be multithreaded, as far as Sun's JVM is concerned there is only a single thread: the one running JPF. JPF emulates the threads my program needs as part of its model checking process.
I believe that Stephen C has pinpointed the problem; Roland Illig gave me the tools to verify it. I was wrong about the use of JNI. JPF itself doesn't use JNI, but it allows plugins and JNI was used by one of the configured plugins. Fortunately there are equivalent plugins I can use that are pure Java. Preliminary use of one of them shows no growth in virtual memory over the last few hours. Thanks to everyone for their help.
I suspect that it is a leak too. But it can't be a leak of 'normal' memory because the -Xmx1024m option is capping the normal heap. Likewise, it won't be a leak of 'permgen' heap, because the default maximum size of permgen is small.
So I suspect it is one of the following:
You are leaking threads; i.e. threads are being created but are not terminating. They might not be active, but each thread has a stack segment (256 KB to 1 MB by default, depending on the platform) that is not allocated in the regular heap.
You are leaking memory-mapped files or direct buffers. These are backed by memory segments that the OS allocates outside of the regular heap. (@bestsss suggests that you look for leaked ZIP file handles, which I think would be a sub-case of this.)
You are using some JNI / JNA code that is leaking malloc'ed memory, or similar.
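The first suspect, at least, can be checked cheaply from inside the JVM. This is a sketch of my own (the class name `ThreadLeakCheck` is hypothetical, not from the original post) using `ThreadMXBean`, which is available on Java 5+ and so works on the 1.6.0_24 JVM above:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadLeakCheck {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // A live-thread count that only ever grows suggests a thread leak:
        // each leaked thread pins a native stack segment outside the -Xmx heap.
        System.out.println("Live threads:    " + threads.getThreadCount());
        System.out.println("Peak threads:    " + threads.getPeakThreadCount());
        System.out.println("Started so far:  " + threads.getTotalStartedThreadCount());
    }
}
```

Logging these numbers periodically over a multi-day run would show quickly whether threads (and hence native stack segments) are accumulating; a flat count would push suspicion toward mapped files or JNI-allocated memory instead.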
In any case, a memory profiler is likely to isolate the problem, or at least eliminate some of the possibilities.
A JVM memory leak is also a possibility, but it is unwise to start suspecting the JVM until you have definitively eliminated possible causes in your own code and libraries / applications that you are using.