I have some questions about the GC algorithms.

First, when we use parameters such as UseSerialGC, UseParallelGC, UseParallelOldGC and so on, we specify a GC algorithm. Does each of them apply to all generations? For example, if I run java -XX:+UseSerialGC, will all generations use the serial collector?

Second, can I use ParallelGC in the old generation and SerialGC in the young generation?

Last, as the title says, what is the difference between ParallelGC and ParallelOldGC?
Parallel GC is a parallel stop-the-world collector: when a GC occurs, it stops all application threads and performs the GC work using multiple threads. Because the application threads are paused, the GC threads can do their work very efficiently, without interruption.
The primary difference between the serial and parallel collectors is that the parallel collector has multiple threads that are used to speed up garbage collection. The parallel collector is intended for applications with medium-sized to large-sized data sets that are run on multiprocessor or multi-threaded hardware.
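For illustration, a typical command line that enables the parallel collector and caps the number of GC worker threads might look like the following (the class name MyApp, the heap size, and the thread count are placeholders, not recommendations):

    java -XX:+UseParallelGC -XX:ParallelGCThreads=4 -Xms2g -Xmx2g MyApp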
Comparing G1 with CMS reveals differences that make G1 a better solution. One difference is that G1 is a compacting collector. Also, G1 offers more predictable garbage collection pauses than the CMS collector, and allows users to specify desired pause targets.
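As a rough sketch, enabling G1 together with a pause-time goal could look like this (the 200 ms target and heap size are illustrative values only):

    java -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xmx4g MyApp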
ZGC is a low-latency garbage collector that works well with very large (multi-terabyte) heaps. Like G1, ZGC works concurrently with the application. ZGC is concurrent, single-generation, region-based, NUMA-aware, and compacting. It does not stop the execution of application threads for more than 10ms.
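ZGC can be enabled in a similar way on a recent JDK; on JDK 11 through 14 it is still experimental and additionally needs -XX:+UnlockExperimentalVMOptions (the heap size here is just an example):

    java -XX:+UseZGC -Xmx16g MyApp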
Take a look at the HotSpot VM Options:
-XX:+UseParallelGC = Use parallel garbage collection for scavenges. (Introduced in 1.4.1).
-XX:+UseParallelOldGC = Use parallel garbage collection for the full collections. Enabling this option automatically sets -XX:+UseParallelGC. (Introduced in 5.0 update 6.)
where Scavenges = Young generation GC.
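So, sketching the difference in practice (MyApp is a placeholder):

    java -XX:+UseParallelGC MyApp                        # parallel young-generation (scavenge) collections
    java -XX:+UseParallelGC -XX:+UseParallelOldGC MyApp  # parallel young and full (old-generation) collections

Note that on modern JDKs, enabling -XX:+UseParallelGC turns on parallel full collections by default, and the separate UseParallelOldGC flag has since been deprecated in newer releases.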