Growing Resident Size Set in JVM

I have a Java process running on 64-bit Linux (CentOS Linux release 7.3.1611) on a machine with 7.6 GB of RAM.

These are the relevant JVM flags:

  1. -Xmx3500m
  2. -Xms3500m
  3. -XX:MaxMetaspaceSize=400m
  4. -XX:CompressedClassSpaceSize=35m

Note: the thread stack size (1 MB) and the code cache (240 MB) are left at their defaults, and the JDK version is 1.8.0_252.

Running the top command, I observed that 6.3 GB of RAM is held by the java process:

PR   NI    VIRT     RES    SHR S  %CPU %MEM   TIME+   COMMAND   
20   0  28.859g  6.341g  22544 S 215.2 83.1   4383:23 java    

I tried to analyse the native memory of the JVM using the jcmd, jmap and jstat commands.
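For reference, the invocations look roughly like this. The PID 12345 is a placeholder, and the actual jmap/jstat/jcmd calls are shown commented out since they need a live JVM; note that VM.native_memory only works if the target JVM was started with -XX:NativeMemoryTracking=summary (or detail).

```shell
# Placeholder PID of the Java process to inspect
pid=12345
# jmap -heap $pid                      # heap configuration and usage
# jstat -gc $pid 1000                  # GC stats, sampled every second
# jcmd $pid VM.native_memory summary   # needs -XX:NativeMemoryTracking
echo "inspecting PID $pid"
```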

Output of jmap -heap:

Debugger attached successfully.
Server compiler detected.
JVM version is 25.252-b14

using thread-local object allocation.
Garbage-First (G1) GC with 33 thread(s)

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 3670016000 (3500.0MB)
   NewSize                  = 1363144 (1.2999954223632812MB)
   MaxNewSize               = 2202009600 (2100.0MB)
   OldSize                  = 5452592 (5.1999969482421875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 36700160 (35.0MB)
   MaxMetaspaceSize         = 419430400 (400.0MB)
   G1HeapRegionSize         = 1048576 (1.0MB)

Heap Usage:
G1 Heap:
   regions  = 3500
   capacity = 3670016000 (3500.0MB)
   used     = 1735444208 (1655.048568725586MB)
   free     = 1934571792 (1844.951431274414MB)
   47.28710196358817% used
G1 Young Generation:
Eden Space:
   regions  = 1311
   capacity = 2193620992 (2092.0MB)
   used     = 1374683136 (1311.0MB)
   free     = 818937856 (781.0MB)
   62.66730401529637% used
Survivor Space:
   regions  = 113
   capacity = 118489088 (113.0MB)
   used     = 118489088 (113.0MB)
   free     = 0 (0.0MB)
   100.0% used
G1 Old Generation:
   regions  = 249
   capacity = 1357905920 (1295.0MB)
   used     = 241223408 (230.04856872558594MB)
   free     = 1116682512 (1064.951431274414MB)
   17.76436824135799% used

485420 interned Strings occupying 83565264 bytes.

Output of jstat -gc:

 S0C    S1C    S0U    S1U      EC       EU        OC         OU       MC     MU    CCSC   CCSU   YGC     YGCT    FGC    FGCT     GCT   
 0.0   33792.0  0.0   33792.0 1414144.0 1204224.0 2136064.0  1558311.7  262872.0 259709.5 19200.0 18531.5  22077  985.995  10     41.789 1027.785
 0.0   33792.0  0.0   33792.0 1414144.0 1265664.0 2136064.0  1558823.7  262872.0 259709.5 19200.0 18531.5  22077  985.995  10     41.789 1027.785
 0.0   63488.0  0.0   63488.0 124928.0 32768.0  3395584.0  1526795.8  262872.0 259709.5 19200.0 18531.5  22078  986.041  10     41.789 1027.830
 0.0   63488.0  0.0   63488.0 124928.0 49152.0  3395584.0  1526795.8  262872.0 259709.5 19200.0 18531.5  22078  986.041  10     41.789 1027.830
 0.0   63488.0  0.0   63488.0 124928.0 58368.0  3395584.0  1526795.8  262872.0 259709.5 19200.0 18531.5  22078  986.041  10     41.789 1027.830

Even the total produced by "jcmd pid VM.native_memory summary" is approximately 5.0 GB, which is nowhere near 6.3 GB, so I could not find where the remaining 1.3 GB went.
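The size of the gap can be computed directly from the two figures above (VmRSS from /proc/pid/status, and the ~5.0 GB NMT total taken at face value):

```shell
# Unaccounted memory = RSS reported by the kernel minus the NMT total.
vmrss_kb=6649680                # VmRSS from /proc/<pid>/status
nmt_kb=$(( 5 * 1024 * 1024 ))   # ~5.0 GB from VM.native_memory summary
echo "unaccounted: $(( (vmrss_kb - nmt_kb) / 1024 )) MB"
# prints: unaccounted: 1373 MB
```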

To find out how the 6.3 GB is actually mapped, I inspected the /proc/pid directory.

In /proc/pid/status:

VmRSS:    6649680 kB
RssAnon:  6627136 kB
RssFile:    22544 kB
RssShmem:       0 kB

From this I found that most of the 6.3 GB is anonymous memory.
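The three components do account for the whole of VmRSS, which is easy to verify with the numbers above:

```shell
# RssAnon + RssFile + RssShmem should equal VmRSS (all values in kB).
total_kb=$(( 6627136 + 22544 + 0 ))
echo "$total_kb kB"   # prints: 6649680 kB, matching VmRSS
```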

Output of pmap (truncated):

Address           Kbytes     RSS   Dirty Mode  Mapping
0000000723000000 3607296 3606076 3606076 rw---   [ anon ]
00000007ff2c0000   12544       0       0 -----   [ anon ]
00007f4584000000     132       4       4 rw---   [ anon ]
00007f4584021000   65404       0       0 -----   [ anon ]
00007f4588000000     132      12      12 rw---   [ anon ]
00007f4588021000   65404       0       0 -----   [ anon ]
00007f458c000000     132       4       4 rw---   [ anon ]
00007f458c021000   65404       0       0 -----   [ anon ]
00007f4590000000     132       4       4 rw---   [ anon ]
00007f4590021000   65404       0       0 -----   [ anon ]
00007f4594000000     132       8       8 rw---   [ anon ]
00007f4594021000   65404       0       0 -----   [ anon ]
00007f4598000000     132       4       4 rw---   [ anon ]
00007f4598021000   65404       0       0 -----   [ anon ]
00007f459c000000    2588    2528    2528 rw---   [ anon ]

The first anonymous region is most likely the Java heap, given its ~3.4 GB size. However, I could not work out how the rest of the anonymous space is used.
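The per-mapping RSS column (column 3) can be summed to cross-check VmRSS. As a sketch, here is the sum over a few lines of the truncated dump above only, so the total is far below the real VmRSS:

```shell
# Sum the RSS column of a pmap dump (values in kB).
cat > /tmp/pmap-sample.txt <<'EOF'
0000000723000000 3607296 3606076 3606076 rw---   [ anon ]
00000007ff2c0000   12544       0       0 -----   [ anon ]
00007f4584000000     132       4       4 rw---   [ anon ]
00007f4584021000   65404       0       0 -----   [ anon ]
00007f459c000000    2588    2528    2528 rw---   [ anon ]
EOF
awk '{ sum += $3 } END { print sum " kB resident" }' /tmp/pmap-sample.txt
# prints: 3608608 kB resident
```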

I need help finding out how the extra 1.3 GB is used by the JVM process.

Any information on memory used by the JVM beyond what Native Memory Tracking covers would be appreciated.

Asked Sep 02 '20 by Kishore Ramesh


1 Answer

As discussed here, besides areas covered by Native Memory Tracking, there are other things that consume memory in the JVM process.

Many anonymous regions of exactly 64 MB (as in your pmap output) suggest that these are malloc arenas. The standard glibc allocator is known to have issues with excessive memory usage, especially in applications with many threads. I suggest using jemalloc (or tcmalloc, mimalloc) as a drop-in replacement for the standard allocator; it does not have this problem. An alternative solution is to limit the number of malloc arenas with the MALLOC_ARENA_MAX environment variable.
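The arena pattern is recognizable in pmap as a small committed rw--- chunk immediately followed by a ----- reservation, 65536 kB (64 MB) together. A sketch of counting them, using lines from the question's dump (the 132/65404 kB split matches that dump; the exact split varies), plus the MALLOC_ARENA_MAX mitigation (2 is an example value, not a recommendation):

```shell
# Count 64 MB glibc arena reservations in a pmap dump.
cat > /tmp/pmap-arenas.txt <<'EOF'
00007f4584000000     132       4       4 rw---   [ anon ]
00007f4584021000   65404       0       0 -----   [ anon ]
00007f4588000000     132      12      12 rw---   [ anon ]
00007f4588021000   65404       0       0 -----   [ anon ]
EOF
# A committed chunk followed by a larger reservation summing to 65536 kB.
arenas=$(awk '$2 + prev == 65536 && $2 > prev { n++ } { prev = $2 } END { print n+0 }' /tmp/pmap-arenas.txt)
echo "$arenas arena reservation(s)"   # prints: 2 arena reservation(s)

# Mitigation: cap the arena count before launching the JVM.
export MALLOC_ARENA_MAX=2
```

Preloading jemalloc instead would look like `LD_PRELOAD=/usr/lib64/libjemalloc.so.1 java ...`, where the library path depends on the distribution.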

If the problem persists even after switching to jemalloc, it is likely a sign of a native memory leak. For example, native leaks in a Java application may be caused by:

  • unclosed resources/streams: ZipInputStream, DirectoryStream, Inflater, Deflater, etc.
  • JNI libraries and agent libraries, including the standard jdwp agent
  • improper bytecode instrumentation

To find the source of the leak, you may also use jemalloc with its built-in profiling feature. However, jemalloc is not capable of unwinding Java stack traces.

async-profiler can show mixed Java+native stacks. Although its primary purpose is CPU and allocation profiling, it can also help find native memory leaks in a Java application.
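A sketch of such an invocation; the flag names follow async-profiler's documented usage, but verify them against the version you install, and 12345 is a placeholder PID:

```shell
# Sample malloc calls for 60 s and write a flame graph of the
# mixed Java+native stacks that performed the allocations.
./profiler.sh -d 60 -e malloc -f malloc-flame.html 12345
```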

For details and more examples, see my Memory Footprint of a Java Process presentation.

Answered Oct 16 '22 by apangin