Some background info.
The server:
A new SLES 12 server with 130 GB RAM, intended to run MySQL for a large database (150 GB+ of data).
The server will also host some Java applications.
Java version (default from Oracle): Java(TM) SE Runtime Environment (build 1.7.0-b147), Java HotSpot(TM) 64-Bit Server VM (build 21.0-b17, mixed mode)
We have stumbled into the following issue:
Running some specific Java applications makes the kernel/system CPU peak, slowing down or halting the application for a period of time. I have reproduced it with a Java application that simply eats memory over time and uses some CPU.
Investigation shows a high number of interrupts during the slowdown (10,000-25,000).
After each slowdown, Java has acquired some more memory. Starting Java with a fixed heap size (setting -Xmx and -Xms to the same value) also seems to reduce the issue. Verbose garbage collection output also indicates that GC is kicking in and might be the trigger.
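For reference, a minimal sketch of what such a memory-eater could look like (a reconstruction for illustration only; the actual UseMemoryMain source is not shown here):

import java.util.ArrayList;
import java.util.List;

// Hypothetical reconstruction of the memory-eating test program.
public class UseMemoryMain {
    public static void main(String[] args) throws InterruptedException {
        List<byte[]> retained = new ArrayList<byte[]>();
        long sink = 0;
        while (true) {
            // Allocate mostly short-lived garbage so minor GCs have work to do.
            byte[] garbage = new byte[1024 * 1024];
            sink += garbage[0];
            // Slowly grow a retained set so the heap keeps expanding over time.
            if (retained.size() < 2048) {
                retained.add(new byte[512 * 1024]);
            }
            // Burn a little CPU between allocations.
            for (int i = 0; i < 500000; i++) {
                sink += i;
            }
            Thread.sleep(5);
        }
    }
}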
GC and memory allocation are for some reason very expensive, and we're not sure where to look from here. Verbose GC output (format: [GC occupancy-before->occupancy-after(total heap), pause time]):
[GC^C 1024064K->259230K(3925376K), 87,3591890 secs]
On a low-end Linux server (also running SLES, with Java 1.6.0_11 from Sun), the same program gives:
[GC 1092288K->253266K(3959488K), 3.0125460 secs]
top during slowdown:
top - 11:23:33 up 87 days, 19:55, 5 users, load average: 14.27, 4.50, 10.17
Tasks: 250 total, 39 running, 211 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 71.8%sy, 0.0%ni, 28.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 129033M total, 128576M used, 457M free, 1388M buffers
Swap: 32765M total, 13M used, 32752M free, 113732M cached
vmstat during slowdown (the slowdown starts from the 3rd row):
procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 13552 1714328 1422268 116462260 0 0 10 9 0 0 0 0 100 0 0
1 0 13552 1241780 1422268 116462292 0 0 0 0 240 353 1 0 99 0 0
1 0 13552 695616 1422268 116462292 0 0 0 17 419 431 3 0 97 0 0
55 0 13552 486384 1422268 116462292 0 0 0 2 20228 458 1 57 43 0 0
75 0 13552 476172 1422268 116462300 0 0 0 8 12782 684 0 70 30 0 0
65 0 13552 470304 1422268 116462304 0 0 0 0 13108 792 0 72 28 0 0
Why is GC so expensive on a high-end server versus a low-end one? Any ideas where to look for clues?
UPDATE 2012-11-26 - invocation parameters:
java -Xmx4g -Xms4g -verbose:gc -server -cp "./dest/" UseMemoryMain
Giving
[GC^C 1024064K->259230K(3925376K), 87,3591890 secs]
Changed to:
java -Xmx4g -Xms4g -XX:+UseParallelGC -verbose:gc -cp "./dest/" UseMemoryMain
Giving
[GC 1048640K->265430K(4019584K), 0,0902660 secs]
Changed to:
java -Xmx4g -Xms4g -XX:+UseConcMarkSweepGC -verbose:gc -cp "./dest/" UseMemoryMain
Giving
[GC 1092288K->272230K(3959488K), 0,1791320 secs]
What is really funny is that rerunning it today, without specifying which GC method to use, gives this:
java -Xmx4g -Xms4g -verbose:gc -server -cp "./dest/" UseMemoryMain
Giving
[GC 1024064K->259238K(3925376K), 0,0839190 secs]
Java has somehow changed its default GC strategy...
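One way to check which collector the JVM's ergonomics actually selects on a given machine (the choice depends on the detected number of CPUs and amount of memory) is:

java -XX:+PrintCommandLineFlags -version

which prints the ergonomically chosen flags, including the collector (e.g. -XX:+UseParallelGC on server-class machines).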
Garbage collection is indeed a tricky topic. To give the best answer, you should post the complete command line used to invoke Java.
As you said, playing around with the GC switches helps. The reason is that the default settings are unfortunately not optimal for many applications in use these days. For many applications that need fast response times because they are interactive, the parameter
-XX:+UseConcMarkSweepGC
will make a great difference.
It is worth noting that with the JVM you mentioned, larger heaps (let's say greater than 10 GB) will always require some tuning. Take the GC log you have and observe how the behavior changes as you play with GC options. I would recommend trying different collector strategies (such as CMS or G1) and also tuning the size of the young generation (e.g. -Xmn); see the example invocations below.
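As a concrete starting point (a sketch only; the -Xmn value and log file name are illustrative, and any tuning should be validated against your own GC log):

java -Xmx4g -Xms4g -Xmn1g -XX:+UseConcMarkSweepGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log -cp "./dest/" UseMemoryMain

or, to try G1 (still young on this early JDK 7 build):

java -Xmx4g -Xms4g -XX:+UseG1GC -XX:+PrintGCDetails -Xloggc:gc.log -cp "./dest/" UseMemoryMain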
Last but not least, you could investigate what the application does with memory using a profiler. Perhaps the code can be improved so that a lot of GC can be avoided.
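As a lightweight first step before reaching for a full profiler, the JDK's own jstat tool can show live heap occupancy and GC timings; for example, sampling every 1000 ms (replace <pid> with the Java process id):

jstat -gcutil <pid> 1000

The S0/S1/E/O/P columns show survivor, eden, old and permanent generation occupancy in percent, and YGC/YGCT/FGC/FGCT count young and full collections with their accumulated times.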