I was recently testing a presentation app under a relatively high concurrent load. The application is a Java application running on the HotSpot JVM (1.8.0_111).
I could get a maximum throughput of around 400 TPS with a 4 GB heap and the parallel throughput collector. The throughput chart (as a function of load) is shown below.
Because Oracle suggests using G1GC for heap sizes larger than 4G, I wanted to try G1 and see if this benefits my application throughput in any way.
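For reference, the parallel throughput collector is the JDK 8 default on server-class machines (which is why no collector flag appears in the startup command line further below); the G1 run is enabled by adding the standard HotSpot flag:

-XX:+UseG1GC

(The throughput collector can also be requested explicitly with -XX:+UseParallelGC.)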
To my surprise, with G1GC I saw the throughput trend below.
I was genuinely surprised and decided to dig into it to see what happened. This is what I found.
I see that initially, out of the 4 GB heap, 1.5 GB is allocated to old-gen regions and 2.5 GB to eden regions. As time progresses and the old gen can no longer fit in 1.5 GB, the heap gets resized. This seems innocuous, but the problem appears to be in the way the resizing happens.
Almost all of the 4 GB is now allocated to old-gen regions and almost none to eden. When something then needs to be allocated in eden, the heap gets resized again. This becomes the new normal: the heap is resized repeatedly, at a huge performance cost to the application.
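The resizing decisions themselves can be made visible with G1's ergonomics logging: on JDK 8, adding the flag below to the existing GC-logging options makes G1 print its heap-sizing decisions and the allocation requests that trigger them.

-XX:+PrintAdaptiveSizePolicy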
Has anyone noticed this before with G1GC? Are there any recommendations for working around this issue?
Below is the startup command line with the JVM options.
java -server -Xms4096m -Xmx4096m -XX:MetaspaceSize=100m -XX:MaxMetaspaceSize=100m -XX:MaxDirectMemorySize=512m -XX:MinMetaspaceFreeRatio=0 -XX:MaxMetaspaceFreeRatio=100 -XX:CompressedClassSpaceSize=20m -XX:InitialCodeCacheSize=50m -XX:ReservedCodeCacheSize=50m -XX:+AlwaysPreTouch -XX:+DisableExplicitGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp -Xloggc:/servers/logs/gc.log.2017-01-05-085234 -Djava.awt.headless=true -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -Dio.netty.leakDetectionLevel=simple -XX:MaxDirectMemorySize=512m -Dadmin.connectors.http.port=9000 -Dproxy.connectors.http.port=8080 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8654 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -server -cp ...
Please find the GC logs here.
There seem to be many occurrences of two GC causes in the log: humongous allocation and to-space exhaustion.
The humongous allocations need space in the old gen, while the to-space exhaustions drive up the young-gen size; the two are basically competing with each other.
It looks like you're allocating a lot of humongous objects (larger than half a G1 region) faster than the concurrent cycles initiated by the IHOP (InitiatingHeapOccupancyPercent) can collect them.
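As a concrete illustration (a minimal sketch with made-up variable names, assuming the default region sizing for a 4 GB heap, i.e. 2 MB regions and therefore a 1 MB humongous threshold):

// With -Xms4096m -Xmx4096m, G1 picks a 2 MB region size by default (heap size / 2048),
// so any single object of 1 MB or more is allocated as a humongous object
// directly in contiguous regions outside the young gen.
byte[] payload = new byte[2 * 1024 * 1024];   // 2 MB primitive array -> humongous, eager-reclaim candidate
Object[] table = new Object[512 * 1024];      // ~2 MB reference array (with compressed oops) -> humongous,
                                              // but not eligible for eager reclaim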
You could try increasing the region size. If they are large primitive arrays (i.e. not reference arrays), the experimental eager-reclaim feature might help too.
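A sketch of what that could look like on 8u111 (not a tested recommendation: the eager-reclaim flags are experimental and already on by default since 8u40, listed here only for explicitness, and lowering the IHOP from its default of 45 simply starts the concurrent cycles earlier):

-XX:G1HeapRegionSize=8m
-XX:InitiatingHeapOccupancyPercent=30
-XX:+UnlockExperimentalVMOptions
-XX:+G1EagerReclaimHumongousObjects
-XX:+G1EagerReclaimHumongousObjectsWithStaleRefs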