 

How to set max non-heap memory in a Java 8 (Spring Boot) application?

I have 20 Spring Boot (2.3) embedded Tomcat applications running on a Linux machine with 8 GB of RAM. All of them are Java 1.8 apps. The machine was running out of memory, and as a result Linux's OOM killer started killing some of my app processes.

Using Linux top and Spring Boot admin, I noticed that the maximum heap size was set to 2 GB:

java -XX:+PrintFlagsFinal -version | grep HeapSize

As a result, each of the 20 apps tries to claim up to 2 GB of heap (one quarter of physical memory, the JVM default). Using Spring Boot admin I could see only ~128 MB actually in use, so I reduced the max heap size to 512 MB via java -Xmx512m ... Now Spring Boot admin shows:

[Screenshot: Spring Boot admin memory view]

1.33 GB is allocated to non-heap space, but only 121 MB is being used. Why is so much being allocated to non-heap space, and how can I reduce it?

Update

According to top each Java process is taking around 2.4GB (VIRT):

KiB Mem :  8177060 total,   347920 free,  7127736 used,   701404 buff/cache
KiB Swap:  1128444 total,  1119032 free,     9412 used.   848848 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
  2547 admin    20   0  2.418g 0.372g 0.012g S  0.0  4.8  27:14.43 java
  .
  .
  .

Update 2

I ran jcmd 7505 VM.native_memory for one of the processes and it reported:

7505:

Native Memory Tracking:

Total: reserved=1438547KB, committed=296227KB
-                 Java Heap (reserved=524288KB, committed=123808KB)
                            (mmap: reserved=524288KB, committed=123808KB)

-                     Class (reserved=596663KB, committed=83423KB)
                            (classes #15363)
                            (malloc=2743KB #21177)
                            (mmap: reserved=593920KB, committed=80680KB)

-                    Thread (reserved=33210KB, committed=33210KB)
                            (thread #32)
                            (stack: reserved=31868KB, committed=31868KB)
                            (malloc=102KB #157)
                            (arena=1240KB #62)

-                      Code (reserved=254424KB, committed=27120KB)
                            (malloc=4824KB #8265)
                            (mmap: reserved=249600KB, committed=22296KB)

-                        GC (reserved=1742KB, committed=446KB)
                            (malloc=30KB #305)
                            (mmap: reserved=1712KB, committed=416KB)

-                  Compiler (reserved=1315KB, committed=1315KB)
                            (malloc=60KB #277)
                            (arena=1255KB #9)

-                  Internal (reserved=2695KB, committed=2695KB)
                            (malloc=2663KB #19903)
                            (mmap: reserved=32KB, committed=32KB)

-                    Symbol (reserved=20245KB, committed=20245KB)
                            (malloc=16817KB #167011)
                            (arena=3428KB #1)

-    Native Memory Tracking (reserved=3407KB, committed=3407KB)
                            (malloc=9KB #110)
                            (tracking overhead=3398KB)

-               Arena Chunk (reserved=558KB, committed=558KB)
                            (malloc=558KB)
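(Side note for anyone reproducing this: jcmd VM.native_memory only reports data if Native Memory Tracking was enabled when the JVM started. A minimal sketch of the flags involved, with app.jar as a placeholder for the actual application:)

```shell
# NMT must be enabled at JVM startup; it cannot be switched on later.
# "summary" mode has low overhead; "detail" adds per-call-site breakdowns.
java -XX:NativeMemoryTracking=summary -jar app.jar &

# Query the backgrounded JVM ($! is its PID) from the same shell.
jcmd $! VM.native_memory summary
```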
asked Sep 17 '25 by James

1 Answer

First of all: no, 1.33 GB is not allocated. The screenshot shows about 127 MB of non-heap memory actually allocated; the 1.33 GB is just the maximum limit.

I see your metaspace is about 80 MB, which should not pose a problem. The rest of the memory can be made up of many things: compressed class space, code cache, native buffers, etc.

To get a detailed view of what is eating up the off-heap memory, you can query the MBean java.lang:type=MemoryPool,name=*, for example via VisualVM with an MBean plugin.
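If VisualVM is not at hand, the same pools can be read programmatically through the standard MemoryPoolMXBean API; a minimal sketch (the exact pool names vary between JVM versions):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class NonHeapPools {
    public static void main(String[] args) {
        // List every non-heap pool the JVM exposes (e.g. Metaspace,
        // Compressed Class Space, Code Cache / CodeHeap segments).
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.NON_HEAP) {
                long max = pool.getUsage().getMax(); // -1 means "no limit set"
                System.out.printf("%s: used=%dKB committed=%dKB max=%s%n",
                        pool.getName(),
                        pool.getUsage().getUsed() / 1024,
                        pool.getUsage().getCommitted() / 1024,
                        max < 0 ? "unlimited" : (max / 1024) + "KB");
            }
        }
    }
}
```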

However, your apps may simply be consuming too much native memory. For example, the many I/O buffers Netty allocates (through java.nio.DirectByteBuffer) may be the culprit. If so, you can limit the caching of DirectByteBuffers with the property -Djdk.nio.maxCachedBufferSize, or cap the total with -XX:MaxDirectMemorySize. For a definitive answer on what exactly is eating your RAM, you would have to take a heap dump and analyze it (DirectByteBuffer instances show up there even though the memory they point to is off-heap).
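Direct-buffer consumption specifically can also be watched at runtime, without a dump, via the standard BufferPoolMXBean; a small sketch:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectBufferUsage {
    public static void main(String[] args) {
        // The "direct" pool tracks DirectByteBuffer memory; the "mapped"
        // pool tracks memory-mapped files. Both live outside the Java heap.
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: count=%d memoryUsed=%dKB capacity=%dKB%n",
                    pool.getName(), pool.getCount(),
                    pool.getMemoryUsed() / 1024,
                    pool.getTotalCapacity() / 1024);
        }
    }
}
```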

So, to answer your question "Why is so much being allocated to non-heap space? How can I reduce it?": there is not actually that much allocated to non-heap space. Most of it is native buffers for I/O and JVM internals, and there is no universal switch or flag to limit all the different caches and pools at once.

Now to address the elephant in the room: I think your real issue stems from simply having very little RAM. You said you are running 20 JVM instances, each limited to 512 MB of heap, on an 8 GB machine. That is unsustainable: 20 x 512 MB = 10 GB of heap, which is more than the total RAM, and that is before you even account for the off-heap/native memory. You need to either add hardware resources, decrease the JVM count, or further decrease the heap/metaspace and other limits (which I strongly advise against).
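If you do go down the route of capping everything explicitly, a launch command could look like the sketch below. The numbers are illustrative placeholders, not tuned recommendations for your workload, and app.jar stands in for your application:

```shell
# Explicit caps on the main memory pools of a Java 8 JVM:
java -Xmx256m \                          # heap
     -XX:MaxMetaspaceSize=128m \         # class metadata
     -XX:ReservedCodeCacheSize=64m \     # JIT-compiled code
     -XX:MaxDirectMemorySize=64m \       # DirectByteBuffers
     -Xss512k \                          # per-thread stack size
     -jar app.jar
```

Note that even with all of these set, total process RSS will still exceed their sum somewhat, because of malloc arenas, GC bookkeeping, and other JVM internals.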

answered Sep 19 '25 by Leprechaun