I've noticed occasional full GCs in my application, which uses the G1 garbage collector, and I'm trying to figure out why they happen.
The cycle from one region-scan-start to the next is excerpted below. At 61807.406 a full GC is logged, followed by an entry for concurrent-mark-abort. What I want to know is why the collector felt the need to do a full, stop-the-world garbage collection, and how I can avoid it.
Note that this question appears to have been asked before on the OpenJDK mailing list, with no responses.
I've trimmed the details of the young GCs for brevity, but I can post the full chunk somewhere if needed.
61805.878: [GC concurrent-root-region-scan-start]
61805.882: [GC concurrent-root-region-scan-end, 0.0033586]
61805.882: [GC concurrent-mark-start]
61806.133: [GC pause (young), 0.02836202 secs]
[Eden: 498M(498M)->0B(478M) Survivors: 14M->34M Heap: 3025M(4096M)->2548M(4096M)]
[Times: user=0.19 sys=0.00, real=0.03 secs]
61806.426: [GC pause (young), 0.02766222 secs]
[Eden: 478M(478M)->0B(480M) Survivors: 34M->32M Heap: 3050M(4096M)->2576M(4096M)]
[Times: user=0.19 sys=0.00, real=0.03 secs]
61806.717: [GC pause (young), 0.02214895 secs]
[Eden: 480M(480M)->0B(502M) Survivors: 32M->10M Heap: 3056M(4096M)->2571M(4096M)]
[Times: user=0.09 sys=0.00, real=0.02 secs]
61807.000: [GC pause (young), 0.01899188 secs]
[Eden: 502M(502M)->0B(502M) Survivors: 10M->10M Heap: 3074M(4096M)->2573M(4096M)]
[Times: user=0.09 sys=0.00, real=0.02 secs]
61807.201: [GC pause (young), 0.02619259 secs]
[Eden: 162M(502M)->0B(500M) Survivors: 10M->12M Heap: 3036M(4096M)->2876M(4096M)]
[Times: user=0.11 sys=0.00, real=0.03 secs]
61807.283: [GC pause (young), 0.02068515 secs]
[Eden: 102M(500M)->0B(500M) Survivors: 12M->12M Heap: 3058M(4096M)->2957M(4096M)]
[Times: user=0.09 sys=0.00, real=0.02 secs]
61807.350: [GC pause (young), 0.01606520 secs]
[Eden: 52M(500M)->0B(498M) Survivors: 12M->14M Heap: 3020M(4096M)->2969M(4096M)]
[Times: user=0.11 sys=0.00, real=0.02 secs]
61807.389: [GC pause (young), 0.01573865 secs]
[Eden: 42M(498M)->0B(500M) Survivors: 14M->12M Heap: 3021M(4096M)->2978M(4096M)]
[Times: user=0.09 sys=0.00, real=0.02 secs]
61807.406: [Full GC 2978M->2498M(4096M), 4.8896638 secs]
[Times: user=6.37 sys=0.08, real=4.89 secs]
61812.296: [GC concurrent-mark-abort]
61812.542: [GC pause (young), 0.01526403 secs]
[Eden: 512M(500M)->0B(510M) Survivors: 0B->2048K Heap: 3018M(4096M)->2506M(4096M)]
[Times: user=0.09 sys=0.00, real=0.02 secs]
61812.793: [GC pause (young) (initial-mark), 0.01391544 secs]
[Eden: 510M(510M)->0B(508M) Survivors: 2048K->4096K Heap: 3016M(4096M)->2508M(4096M)]
[Times: user=0.09 sys=0.00, real=0.01 secs]
61812.807: [GC concurrent-root-region-scan-start]
This is using the Java HotSpot 1.7.0_7 release, with the following interesting settings:
-XX:PermSize=128m
-XX:MaxPermSize=128m
-XX:NewSize=512m
-XX:MaxNewSize=512m
-Xms4096m
-Xmx4096m
-XX:+UnlockDiagnosticVMOptions
-XX:+UnsyncloadClass
-XX:+UseTLAB
-XX:+UseG1GC
-XX:SurvivorRatio=10
-Xloggc:./workspace/gc.log
-verbose:gc
-XX:+PrintGC
-XX:+PrintGCTimeStamps
-XX:+PrintGCDetails
G1 copies objects from one or more regions of the heap to a single region on the heap, and in the process both compacts and frees up memory. This evacuation is performed in parallel on multi-processors, to decrease pause times and increase throughput.
One difference from CMS is that G1 is a compacting collector. G1 also offers more predictable garbage collection pauses than CMS and allows users to specify desired pause targets. Like CMS, G1 is designed for applications that require short GC pauses.
After space-reclamation, the collection cycle restarts with another young-only phase. As a backup, if the application runs out of memory while gathering liveness information, G1 performs an in-place, stop-the-world full heap compaction (Full GC), like other collectors.
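If, as in the log above, the full GC fires while a concurrent marking cycle is still running (hence the concurrent-mark-abort entry), one common mitigation is to start marking earlier and give the concurrent phases more threads, so the cycle can finish before the old regions fill up. This is only a sketch; the values are illustrative and would need tuning against your own workload:
-XX:InitiatingHeapOccupancyPercent=30
-XX:ConcGCThreads=4
InitiatingHeapOccupancyPercent (default 45) is the whole-heap occupancy at which G1 starts a concurrent marking cycle, so lowering it starts marking sooner; ConcGCThreads sets how many threads the concurrent phases use, trading a little application throughput for a faster cycle.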
The Garbage-First collector (G1) is a garbage collection algorithm introduced in the Oracle HotSpot Java Virtual Machine (JVM) 6 and supported from 7 Update 4. It was planned to replace the Concurrent Mark Sweep (CMS) collector in JVM 7 and was made the default collector in Java 9.
I suppose you know about this reference. This page is also useful.
Full GCs occur when tenured objects - those that have survived collection in the ephemeral (young) generation - fill up the space allocated to them. When a full GC occurs, any concurrent marking that was in progress must be aborted, which is the concurrent-mark-abort entry you see in your log.
Decreasing the rate at which the tenured generation fills up requires either adding more heap/RAM or fiddling with the division of available memory between the tenured and young spaces. The parameters NewSize, MaxNewSize and NewRatio are for the latter. Experimenting is the only way to find out what will work.
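As an illustration only (the figures below are hypothetical, not recommendations), that could mean growing the young generation you currently fix at 512m:
-XX:NewSize=1024m
-XX:MaxNewSize=1024m
Alternatively, the G1 tuning guidance advises against fixing the young generation size at all, because an explicit size overrides G1's pause-time-based adaptive sizing; in that case you would drop -XX:NewSize, -XX:MaxNewSize and -XX:SurvivorRatio and let G1 size the young generation from a pause goal, e.g.:
-XX:MaxGCPauseMillis=100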
Common wisdom is that shifting the ratio to make the tenured generation bigger decreases the number of full collections. In many cases this is true, but not always. There is a condition where many tenured objects die shortly after they're tenured. In other words, they should have been collected in the young area, but their life ended a bit too late for that. In this case, making the young generation bigger allows these objects to be collected there rather than being tenured. A symptom of this is a full collection causing a big decrease in allocated space.
That doesn't seem to be your case: 2978M->2498M. Your only recourse may be to make the heap bigger, buying more memory as needed. Still, nearly all systems that run for a long time will have a full collection once in a while.
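Before buying more memory, it may be worth confirming how quickly objects are being promoted. Two standard diagnostic flags can help here (a sketch; the exact output format varies between HotSpot releases):
-XX:+PrintTenuringDistribution
-XX:+PrintAdaptiveSizePolicy
The first prints the age distribution of objects in the survivor spaces at each young collection, which shows whether objects are being promoted after only one or two collections; the second makes G1 log its ergonomic decisions, including when and why it requests a concurrent marking cycle.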