Currently I am trying to resolve a Java memory issue: My Java application keeps using more and more memory and eventually it gets killed by the Linux OOM killer.
There is probably a native memory leak, because after inspecting the JVM with VisualVM both the metaspace and the heap look OK.
Using the top command I can see that the memory used by the JVM keeps on increasing.
The first graphic in this article:
Example #1
is a perfect match for what I am seeing in my own application.
So I tried using jemalloc to find the leak, as described in various articles. Here I run into a problem: when I use the jeprof command, and later the top command inside jeprof itself, it does show the functions that use the most memory, but they appear only as hexadecimal addresses, so I must be missing some symbols. I do not know which packages I need for that.
I already found this link: Link #1
And installed debug symbols with this command: debuginfo-install java-1.8.0-openjdk
I tried to work through simple steps first (the exact commands I use are sketched after this list):
Get jemalloc to work with a simple application, such as w. Next, get it to work with java -version. So far so good; I can also get PDFs from jemalloc with a perfect overview.
Next, get it to work with java -jar simpletest.jar << here I am missing symbols. For example, if I do not close a GZIPInputStream in that test, it does not show up in the jemalloc results.
Next, get it to work with java -jar myapplication.jar << here I am missing symbols as well.
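For reference, this is roughly how I run those tests under jemalloc; the library path and the MALLOC_CONF values are from my own setup and may well differ on other systems:

# Preload jemalloc (the exact path/version of the library differs per system)
export LD_PRELOAD=/usr/lib64/libjemalloc.so.1
# Enable profiling and periodic heap profile dumps (these values are what I use, not canonical)
export MALLOC_CONF=prof:true,lg_prof_interval:30,lg_prof_sample:17
java -jar simpletest.jar
# Afterwards, turn the dumps into a PDF overview
jeprof --show_bytes --pdf $(which java) jeprof.*.heap > simpletest-profile.pdf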
So my question is basically: what packages do I need in order to get jemalloc to display all symbol names, so I can debug applications such as:
// Needs: java.io.FileInputStream, java.io.InputStream, java.util.zip.GZIPInputStream
public void test1() {
    InputStream fileInputStream = null;
    GZIPInputStream gzipInputStream = null;
    try {
        fileInputStream = new FileInputStream("test.zip");
        gzipInputStream = new GZIPInputStream(fileInputStream);
        int data = gzipInputStream.read();
        while (data != -1) {
            // do something with data
            data = gzipInputStream.read();
        }
    } catch (Exception ex) {
        // ignored for this test
    } finally {
        // Disabled to see whether jemalloc can detect the leak
        /*try {
            if (gzipInputStream != null) {
                gzipInputStream.close();
            }
            if (fileInputStream != null) {
                fileInputStream.close();
            }
            gzipInputStream = null;
            fileInputStream = null;
        } catch (IOException e) {
            e.printStackTrace();
        }*/
    }
}
Using the following software:
Articles found:
Article #1
Article #2
Article #3
Article #4
Native memory leaks are associated with continuously growing memory utilization outside the Java heap, such as allocations made by JNI code, drivers, or even the JVM itself.
Replacing the allocator (with jemalloc or tcmalloc, for instance) to profile memory usage may provide a hint about the source of a native memory leak, but it is limited to the native code symbols available in the libraries loaded into the JVM.
To get Java classes/methods in the stack traces, it is required to generate a mapping file associating native code memory locations with their origin. The only tool at the time of writing is https://github.com/jvm-profiling-tools/perf-map-agent
To get more than just "interpreter" names in the stacks, the code concerned has to be JIT-compiled, so forcing that with -XX:CompileThreshold=1 on the JVM command line is interesting (except in production, IMO).
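A minimal sketch of how that could look (the perf-map-agent checkout path is just an example, and -XX:+PreserveFramePointer is usually needed as well so perf can walk the Java stacks):

# Start the JVM so that frame pointers are kept and methods are JIT-compiled early
java -XX:+PreserveFramePointer -XX:CompileThreshold=1 -jar myapplication.jar &
JVM_PID=$!
# Generate /tmp/perf-<pid>.map from a local perf-map-agent build
cd ~/perf-map-agent/bin && ./create-java-perf-map.sh $JVM_PID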
With the agent loaded in the JVM, the mapping file generated, and the code JIT-compiled, perf can be used to report CPU profiling. Memory leak investigation requires more processing.
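For the CPU profiling part, a typical invocation would be something like:

# Sample stacks of the JVM process for 30 seconds, then inspect the report
perf record -F 99 -g -p $(pidof java) -- sleep 30
perf report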
The best option is to get bcc and its memleak tool if your Linux kernel is 4.9 or later: https://github.com/iovisor/bcc/blob/master/tools/memleak_example.txt
Many thanks to Brendan Gregg.
A Debian system is ready after a simple apt install bcc, but a RedHat system requires more work, as documented for CentOS 7 at http://hydandata.org/installing-ebpf-tools-bcc-and-ply-on-centos-7 (it is even worse on CentOS 6).
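Once bcc is installed, a run against the JVM could look like this (the tool path and name vary by distribution, e.g. it is installed as memleak-bpfcc on Debian/Ubuntu, and $(pidof java) assumes a single java process):

# Report outstanding (not yet freed) allocations of the java process every 10 seconds
sudo /usr/share/bcc/tools/memleak -p $(pidof java) 10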
As an alternative, perf alone can also report leak stack traces with specific probes. Scripts and example usage are available at https://github.com/dkogan/memory_leak_instrumentation but have to be adapted to the Java context.
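As a rough illustration of the probe-based approach (the libc path differs per distribution, and the scripts above additionally correlate allocations with frees, which a plain recording does not):

# Put a probe on libc malloc and record who calls it in the JVM process
sudo perf probe -x /usr/lib64/libc.so.6 malloc
sudo perf record -e probe_libc:malloc -p $(pidof java) -g -- sleep 30
sudo perf report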