The JVM is started with the -XX:+HeapDumpOnOutOfMemoryError parameter, but it is not creating a heap dump on OutOfMemoryError. Doesn't Java create a heap dump when a native allocation fails?
Following is the log:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1184288 bytes for Chunk::new
# An error report file with more information is saved as:
# D:\product\bin\hs_err_pid5876.log
java.lang.OutOfMemoryError
--EDIT--
The max heap size is set to 4 GB, the system has 16 GB of RAM, and when the JVM ran out of memory it was using more than 11 GB (as shown by Windows Task Manager).
From the discussion with @alain.janinm, I think I can conclude that the JVM didn't even have enough memory left to generate a heap dump.
So, is it possible that attempting to create the heap dump caused the JVM to use that much system memory?
According to the error, a java.lang.OutOfMemoryError
has been thrown. The rest of the log indicates that the failure happened in the native heap, i.e. the allocation failure was detected in a JNI or native method rather than in Java VM code (from Troubleshooting memory leaks).
That is probably why no heap dump was created. According to the -XX:+HeapDumpOnOutOfMemoryError
documentation:
The -XX:+HeapDumpOnOutOfMemoryError command-line option tells the HotSpot VM to generate a heap dump when an allocation from the Java heap or the permanent generation cannot be satisfied.
Because the allocation failed in the native heap and not in the Java heap, no dump was created.
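As a workaround when the flag cannot fire, a heap dump can also be requested programmatically through the HotSpot diagnostic MXBean (HotSpot-specific API, available on OpenJDK/Oracle JVMs). A minimal sketch; the class name ManualDump and the output path manual.hprof are arbitrary choices for illustration:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class ManualDump {
    public static void main(String[] args) throws Exception {
        // Obtain the HotSpot diagnostic bean for this running JVM.
        HotSpotDiagnosticMXBean diag =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Write a dump of live objects to the given file.
        // The target file must not already exist, or dumpHeap throws an IOException.
        diag.dumpHeap("manual.hprof", true);
        System.out.println("heap dump written to manual.hprof");
    }
}
```

Calling this from a monitoring thread (for example, when free memory drops below a threshold) can capture a dump before the process dies of a native allocation failure, since it does not depend on the OutOfMemoryError originating in the Java heap.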