
How can I tune G1GC for a smaller memory footprint?

Tags: java, jvm, g1gc

I have been experimenting with G1GC with Java 8 (Oracle JVM) on one of my projects. My GC flags are effectively:

-Xms64m
-Xmx1024m
-XX:+UseG1GC
-XX:+PrintGCTimeStamps
-XX:+PrintGCDetails
-Xloggc:/tmp/gc.log
-XX:+PrintAdaptiveSizePolicy

I observe that the heap grows much larger than the amount of live data I have. The GC logs show what I think is the root cause:

[G1Ergonomics (Heap Sizing) attempt heap expansion, reason: recent GC overhead higher than threshold after GC, recent GC overhead: 10.17 %, threshold: 10.00 %, uncommitted: 811597824 bytes, calculated expansion amount: 162319564 bytes (20.00 %)]

Effectively, my application generates a lot of garbage, so the proportion of time spent in GC rises above 10% and G1's ergonomics respond by expanding the heap.
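For reference, the expansion amount in that log line is simply 20% of the uncommitted space: 0.20 × 811,597,824 ≈ 162,319,564 bytes (roughly 155 MB). So each time the overhead threshold is exceeded, the committed heap grows by a fifth of the remaining headroom towards -Xmx (as far as I can tell, that 20% step comes from the experimental -XX:G1ExpandByPercentOfAvailable flag, whose default is 20).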


With the Parallel collector this threshold can be tuned with -XX:GCTimeRatio (the throughput goal), but from what I can see in the docs there is no equivalent flag for G1.

For the parallel collector, Java SE provides two garbage collection tuning parameters that are based on achieving a specified behavior of the application: maximum pause time goal and application throughput goal; see the section The Parallel Collector. (These two options are not available in the other collectors.) Note that these behaviors cannot always be met.
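For what it's worth, the throughput goal is expressed as a ratio: with -XX:GCTimeRatio=n the collector aims to spend at most 1/(1+n) of total time in GC. The Parallel collector's default of 99 therefore corresponds to 1% GC time, while a value of 9 corresponds to the 10% threshold reported in my log, which suggests G1 is using 9 as its internal default even though the flag is not documented for it.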

My question is, aside from lowering the maximum heap size, how can I tune G1GC for a smaller memory footprint?

In the logs there is no evidence that I'm tripping the maximum pause time goal, and indeed increasing that does not fix the problem.


This is possibly a dupe of this question: Which JVM Flag sets the GC overhead threshold mentioned in the G1Ergonomics log?, but there it looks like an incorrect answer has been accepted. (Or perhaps it was correct only for an older version of the JVM.)

asked Apr 06 '16 by pauldoo


1 Answer

Overview:

  • In this Oracle article(1) you can find the most important flags for G1 (including -XX:MaxGCPauseMillis).

  • This bug report indicates that the GCTimeRatio flag is also used by G1.

  • Please also see this related question & answer(2).

  • I would assume you should be able to solve this by setting -XX:MaxGCPauseMillis to a higher value, or, if you know that your application creates a lot of (young) garbage, by experimenting with the young-generation sizing options (see the sketch just below). EDIT: Be very careful with the latter; (1) states under "Young Generation Size": Avoid explicitly setting young generation size with the -Xmn option or any other related option such as -XX:NewRatio. Fixing the size of the young generation overrides the target pause-time goal.
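A rough, untested sketch of what that could look like, starting from the flags in the question (the concrete values are placeholders to experiment with, not recommendations):

-Xms64m
-Xmx1024m
-XX:+UseG1GC
-XX:MaxGCPauseMillis=500
-XX:+UnlockExperimentalVMOptions
-XX:G1MaxNewSizePercent=30

MaxGCPauseMillis relaxes the pause-time goal (default 200 ms); G1MaxNewSizePercent only caps the young generation rather than fixing it the way -Xmn would, which is why I'd prefer it if you do touch young-generation sizing at all.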


(1)

Important Defaults:

The G1 GC is an adaptive garbage collector with defaults that enable it to work efficiently without modification. Here is a list of important options and their default values. This list applies to the latest Java HotSpot VM, build 24. You can adapt and tune the G1 GC to your application performance needs by entering the following options with changed settings on the JVM command line.

  • -XX:G1HeapRegionSize=n

Sets the size of a G1 region. The value will be a power of two and can range from 1MB to 32MB. The goal is to have around 2048 regions based on the minimum Java heap size.

  • -XX:MaxGCPauseMillis=200

Sets a target value for desired maximum pause time. The default value is 200 milliseconds. The specified value does not adapt to your heap size.

  • -XX:G1NewSizePercent=5

Sets the percentage of the heap to use as the minimum for the young generation size. The default value is 5 percent of your Java heap. This is an experimental flag. See "How to unlock experimental VM flags" for an example. This setting replaces the -XX:DefaultMinNewGenPercent setting. This setting is not available in Java HotSpot VM, build 23.

  • -XX:G1MaxNewSizePercent=60

Sets the percentage of the heap size to use as the maximum for young generation size. The default value is 60 percent of your Java heap. This is an experimental flag. See "How to unlock experimental VM flags" for an example. This setting replaces the -XX:DefaultMaxNewGenPercent setting. This setting is not available in Java HotSpot VM, build 23.

  • -XX:ParallelGCThreads=n

Sets the value of the STW worker threads. Sets the value of n to the number of logical processors. The value of n is the same as the number of logical processors up to a value of 8.

If there are more than eight logical processors, sets the value of n to approximately 5/8 of the logical processors. This works in most cases except for larger SPARC systems where the value of n can be approximately 5/16 of the logical processors.

  • -XX:ConcGCThreads=n

Sets the number of parallel marking threads. Sets n to approximately 1/4 of the number of parallel garbage collection threads (ParallelGCThreads).

  • -XX:InitiatingHeapOccupancyPercent=45

Sets the Java heap occupancy threshold that triggers a marking cycle. The default occupancy is 45 percent of the entire Java heap.

  • -XX:G1MixedGCLiveThresholdPercent=65

Sets the occupancy threshold for an old region to be included in a mixed garbage collection cycle. The default occupancy is 65 percent. This is an experimental flag. See "How to unlock experimental VM flags" for an example. This setting replaces the -XX:G1OldCSetRegionLiveThresholdPercent setting. This setting is not available in Java HotSpot VM, build 23.

  • -XX:G1HeapWastePercent=10

Sets the percentage of heap that you are willing to waste. The Java HotSpot VM does not initiate the mixed garbage collection cycle when the reclaimable percentage is less than the heap waste percentage. The default is 10 percent. This setting is not available in Java HotSpot VM, build 23.

  • -XX:G1MixedGCCountTarget=8

Sets the target number of mixed garbage collections after a marking cycle to collect old regions with at most G1MixedGCLiveThresholdPercent live data. The default is 8 mixed garbage collections. The goal for mixed collections is to be within this target number. This setting is not available in Java HotSpot VM, build 23.

  • -XX:G1OldCSetRegionThresholdPercent=10

Sets an upper limit on the number of old regions to be collected during a mixed garbage collection cycle. The default is 10 percent of the Java heap. This setting is not available in Java HotSpot VM, build 23.

  • -XX:G1ReservePercent=10

Sets the percentage of reserve memory to keep free so as to reduce the risk of to-space overflows. The default is 10 percent. When you increase or decrease the percentage, make sure to adjust the total Java heap by the same amount. This setting is not available in Java HotSpot VM, build 23.
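Note that the flags marked experimental above (G1NewSizePercent, G1MaxNewSizePercent, G1MixedGCLiveThresholdPercent) have to be unlocked before the JVM will accept them, roughly like this (yourapp.jar is a placeholder):

java -XX:+UseG1GC \
     -XX:+UnlockExperimentalVMOptions \
     -XX:G1NewSizePercent=5 \
     -XX:G1MaxNewSizePercent=60 \
     -jar yourapp.jar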


(2)

My guess is that the "recent GC overhead higher than threshold" expansion is what is driving G1's decisions here. You can relax it by setting -XX:GCTimeRatio=4, which allows GC to take up 20% of CPU cycles relative to application time instead of 10% (see the sketch below).

If that's too much you should either

  • allow it to use more CPU cores - that would make it easier to meet its pause-time goals, which in turn means it can defer collections for longer, making it easier to meet the throughput goal. Yes, this does mean that using more cores can actually consume fewer CPU cycles overall.

  • relax pause time goals so it has to collect less often
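As a concrete, untested sketch, the GCTimeRatio suggestion combined with the flags from the question would look roughly like this (yourapp.jar is a placeholder):

java -Xms64m -Xmx1024m \
     -XX:+UseG1GC \
     -XX:GCTimeRatio=4 \
     -XX:+PrintGCTimeStamps -XX:+PrintGCDetails \
     -XX:+PrintAdaptiveSizePolicy \
     -Xloggc:/tmp/gc.log \
     -jar yourapp.jar

If you go for the "more CPU cores" option instead, -XX:ParallelGCThreads and -XX:ConcGCThreads from the defaults list in (1) are the knobs to look at.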

answered Oct 23 '22 by Markus Weninger