I have been trying to find a memory leak in my application for a week now without any success. I took a heap dump and used jhat to inspect it and trace down the leak.
Is this the best approach? What is the best way to track down a memory leak from a heap dump?
I'd appreciate your help.
VM used : java version "1.6.0_25" Java(TM) SE Runtime Environment (build 1.6.0_25-b06) Java HotSpot(TM) 64-Bit Server VM (build 20.0-b11, mixed mode)
JVM Options : -Xmx1600m -XX:+UseParallelGC -XX:MaxPermSize=256m -Xms1600m -XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -verbose:gc -Xloggc:/tmp/gc.log
OOME stack trace: I couldn't get one. The kernel killed the process with an out-of-memory error.
GC Log : The last few lines
48587.245: [GC [PSYoungGen: 407168K->37504K(476160K)] 506729K->137065K(1568448K), 3.0673560 secs] [Times: user=3.53 sys=0.00, real=3.07 secs]
50318.617: [GC [PSYoungGen: 444224K->37536K(476416K)] 543785K->175177K(1568704K), 3.6635990 secs] [Times: user=3.70 sys=0.00, real=3.67 secs]
50453.841: [GC [PSYoungGen: 70092K->2912K(476672K)] 207734K->178513K(1568960K), 1.0164250 secs] [Times: user=1.29 sys=0.00, real=1.02 secs]
50454.858: [Full GC (System) [PSYoungGen: 2912K->0K(476672K)] [PSOldGen: 175601K->137776K(1092288K)] 178513K->137776K(1568960K) [PSPermGen: 60627K->60627K(74368K)], 2.0082140 secs] [Times: user=2.09 sys=0.00, real=2.01 secs]
52186.496: [GC [PSYoungGen: 407104K->37312K(444416K)] 544880K->175088K(1536704K), 3.3705440 secs] [Times: user=3.93 sys=0.00, real=3.37 secs]
53919.975: [GC [PSYoungGen: 444416K->37536K(476608K)] 582192K->213032K(1568896K), 3.4242980 secs] [Times: user=4.09 sys=0.00, real=3.42 secs]
54056.872: [GC [PSYoungGen: 70113K->2880K(476480K)] 245609K->216320K(1568768K), 0.9691980 secs] [Times: user=1.19 sys=0.00, real=0.97 secs]
54057.842: [Full GC (System) [PSYoungGen: 2880K->0K(476480K)] [PSOldGen: 213440K->99561K(1092288K)] 216320K->99561K(1568768K) [PSPermGen: 60628K->60628K(72320K)], 2.2203320 secs] [Times: user=2.23 sys=0.01, real=2.22 secs]
55796.688: [GC [PSYoungGen: 406976K->37504K(476160K)] 506537K->137065K(1568448K), 3.2680080 secs]
Update: Upon checking the kernel log messages, it was the oom-killer. But why is the system killing the process? Isn't it because the process is eating up a lot of system memory?
A heap dump is a snapshot of all the objects in the JVM's memory at a given moment. It is very useful for troubleshooting memory-leak problems and optimizing memory usage in Java applications. Heap dumps are usually stored in binary .hprof files.
Setting up the application for heap analysis: the Sun HotSpot JVM can be instructed to dump its heap state to a file (in the standard .hprof format) when it runs out of memory. To enable this feature, add -XX:+HeapDumpOnOutOfMemoryError to the JVM startup options.
Heap dumps contain a snapshot of all the live objects that are being used by a running Java™ application on the Java heap. You can obtain detailed information for each object instance, such as the address, type, class name, or size, and whether the instance has references to other objects.
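Besides dumping automatically on OutOfMemoryError, a dump can also be captured on demand, for example programmatically via the HotSpot diagnostic MXBean. A minimal sketch, assuming a HotSpot-based JVM (the class name and output path are arbitrary choices for illustration):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void dump(String filePath, boolean liveOnly) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // HotSpot-specific diagnostic bean; available on Sun/Oracle JVMs
        // such as the 1.6.0_25 build mentioned above
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // liveOnly = true dumps only live objects (forces a GC first),
        // which keeps the file smaller and the analysis cleaner
        bean.dumpHeap(filePath, liveOnly);
    }

    public static void main(String[] args) throws Exception {
        dump("heap.hprof", true);
        System.out.println("wrote heap.hprof");
    }
}
```

The resulting .hprof file can then be opened in jhat, VisualVM, or Eclipse MAT just like a dump produced by -XX:+HeapDumpOnOutOfMemoryError.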
The general question about Java memory leaks is a duplicate of this, that, etc. Still, here are a few thoughts:
Start by taking a few heap snapshots as described in the answer linked above.
Then, if you know the whole application well, you can eyeball the instance counts and find which type has too many instances sticking around. For example, if you know that a class is a singleton, yet you see 100 instances of that class in memory, then that's a sure sign that something funny is going on there. Alternatively you can compare the snapshots to find which types of objects are growing in number over time; the key here is that you're looking for relative growth over some usage period.
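As a concrete (hypothetical) example of the kind of pattern such a snapshot comparison exposes: a registry backed by a static collection that is added to on every request but never cleaned up. Successive heap snapshots would show the registered objects' instance count growing with each usage period:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakyRegistry {
    // Static collection: reachable from a GC root, so every listener
    // added here stays alive for the lifetime of the JVM
    private static final List<Object> LISTENERS = new ArrayList<Object>();

    public static void register(Object listener) {
        LISTENERS.add(listener);
        // Bug: there is no matching unregister() call anywhere, so the
        // instance count climbs in every successive heap snapshot
    }

    public static int count() {
        return LISTENERS.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            register(new Object()); // simulates one "request"
        }
        System.out.println(count() + " listeners retained");
    }
}
```

In a snapshot diff, this shows up as an ever-growing count for the listener type and for the backing ArrayList's Object[] array.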
Once you know what's leaking, you trace back through the references to find the root reference that cannot be collected.
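The "root reference" idea can be illustrated with weak references: an object reachable from a static field survives garbage collection, while an object that is only weakly reachable can be reclaimed. A small sketch (note that System.gc() is only a hint to the JVM, hence the retry loop):

```java
import java.lang.ref.WeakReference;

public class RootDemo {
    // A static field is reachable from a GC root: whatever it references
    // can never be collected until the field is cleared
    static Object pinned = new Object();

    public static void main(String[] args) throws InterruptedException {
        WeakReference<Object> pinnedRef = new WeakReference<Object>(pinned);
        WeakReference<Object> freeRef = new WeakReference<Object>(new Object());

        // System.gc() is only a hint, so ask a few times
        for (int i = 0; i < 10 && freeRef.get() != null; i++) {
            System.gc();
            Thread.sleep(50);
        }

        System.out.println("rooted object survived: " + (pinnedRef.get() != null));
        System.out.println("unreferenced object survived: " + (freeRef.get() != null));
    }
}
```

Tools like jhat ("exclude weak refs" in the reference-chain view) and Eclipse MAT ("path to GC roots") automate exactly this trace from a leaked instance back to the static field, thread, or cache that pins it.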
Finally, remember that it's possible that you see an OutOfMemoryError not because you're leaking memory, but rather because some part of your heap is too small for the application. To check whether this is the case, compare the amount of memory still in use after each full GC with the configured maximum heap size; if the live set keeps approaching the maximum, the heap is genuinely too small.
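One way to watch heap headroom from inside the JVM (complementing the GC log) is the standard MemoryMXBean. A minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapHeadroom {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        long usedMb = heap.getUsed() / (1024 * 1024);
        long maxMb  = heap.getMax()  / (1024 * 1024);
        // If 'used' stays near 'max' even right after full GCs, raise -Xmx;
        // if it only creeps upward over long usage periods, suspect a leak instead
        System.out.println("heap used " + usedMb + "M of " + maxMb + "M max");
    }
}
```

In the GC log above, for instance, the full GCs bring the heap down to roughly 100-140 MB out of the 1600 MB configured, so plenty of headroom remains after collection.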
Update: I'm not sure what "kernel killed the process with out of memory error" in your latest update means, but I think you're saying that the Linux out-of-memory (OOM) killer was invoked. Was that the case? This problem is completely separate from a Java OutOfMemoryError. For more details about what's happening, take a look at the links from the page I just linked to, including this and that. But the solution to your problem is simple: use less memory on the server in question. You could lower the min and max heap sizes of the Java process, but you need to be sure that you won't trigger real Java OutOfMemoryErrors. Can you move some processes elsewhere? Can you correlate the OOM killer with the startup of a specific process?