I'm using JMeter to inject a workload into an application deployed on an AWS EC2 instance. The test is quite large: it lasts 10 hours and the workload profile has a bimodal shape with a peak of about 2,600 requests in a 5-minute window. Currently I have one m3.xlarge instance on which the application is deployed and 8 m3.xlarge instances, each running a JMeter instance. A Python script splits the workload to inject among the 8 client instances, so, for example, if the original workload has to inject 800 requests, each JMeter instance will inject 100 of them (a sketch of this split is at the end of the question). The full test, as I said, lasts 10 hours and is divided into timesteps of 5 minutes each; every 5 minutes a small workload variation is applied. At the moment I get java.lang.OutOfMemoryError: GC overhead limit exceeded from each JMeter instance immediately after the test starts, and no requests reach the application. I have read a lot online and on Stack Overflow, and I concluded the possible causes could be:
JVM heap size too low -> I addressed this by setting the following in the jmeter.bat file on each JMeter instance:
set HEAP=-Xms4g -Xmx4g
set NEW=-XX:NewSize=4g -XX:MaxNewSize=4g
Some mistake in the test plan that results in continuous, useless garbage collector activity. So I removed all the JMeter listeners from my test; in particular I was using TableVisualizer, ViewResultsFullVisualizer, StatVisualizer, and GraphVisualizer.
Anyway, the problem persists and I really have no idea how to solve it. I know that a 10-hour test with a 2,600-request peak can be very heavy, but I think there should be a way to perform it. I'm using EC2 m3.xlarge instances, so I could raise the heap size to 8 GB if that would help, or split the workload among even more clients (I'm using spot instances, so it would not cost much more). However, since I have already doubled the number of client instances from 4 to 8 to solve this problem and it didn't work, I'm a bit confused and would like your suggestions before acquiring more and more resources. Thank you a lot in advance.
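For context, the splitting step mentioned above works roughly like this; a minimal sketch, not the actual script, and the function and variable names are hypothetical:

# Hypothetical sketch: split each 5-minute timestep's request count
# evenly across the JMeter client instances.
NUM_CLIENTS = 8

def split_workload(requests_per_timestep):
    """For each timestep, return the number of requests each client injects."""
    per_client = []
    for total in requests_per_timestep:
        base = total // NUM_CLIENTS
        remainder = total % NUM_CLIENTS
        # Give the first `remainder` clients one extra request so the sum matches.
        per_client.append([base + (1 if i < remainder else 0) for i in range(NUM_CLIENTS)])
    return per_client

# Example: 800 requests in a timestep -> 100 per client
print(split_workload([800])[0])  # [100, 100, 100, 100, 100, 100, 100, 100]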
The "java. lang. OutOfMemoryError: GC overhead limit exceeded" error indicates that the NameNode heap size is insufficient for the amount of HDFS data in the cluster. Increase the heap size to prevent out-of-memory exceptions.
GC Overhead Limit Exceeded Error VirtualMachineError. It's thrown by the JVM when it encounters a problem related to utilizing resources. More specifically, the error occurs when the JVM spent too much time performing Garbage Collection and was only able to reclaim very little heap space.
The GC Overhead Limit Exceeded error is an indication of a resource exhaustion i.e. memory. The JVM throws this error if the Java process spends more than 98% of its time doing GC and only less than 2% of the heap is recovered in each execution.
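For reference, HotSpot exposes the thresholds behind this check as JVM options (defaults shown below); note that turning the check off only hides the symptom rather than fixing the underlying memory pressure:

-XX:GCTimeLimit=98        (percentage of time spent in GC above which the error is raised)
-XX:GCHeapFreeLimit=2     (minimum percentage of heap that must be reclaimed per collection)
-XX:-UseGCOverheadLimit   (disables the check entirely; not recommended)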
Your heap settings look wrong:

set HEAP=-Xms4g -Xmx4g
set NEW=-XX:NewSize=4g -XX:MaxNewSize=4g
Your NEW size is equal to the heap size; this is wrong, because the young generation would then occupy the whole heap and leave no room for the old generation. Comment out the NEW part first.
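As a sketch, the jmeter.bat lines could then look like this (REM simply comments the NEW line out; the 4g total heap from the question is kept):

set HEAP=-Xms4g -Xmx4g
REM set NEW=-XX:NewSize=4g -XX:MaxNewSize=4g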
Can you do a ps -eaf | grep java and show the output?
Also check that you respect the JMeter best-practice recommendations, in particular running the load generators in non-GUI mode and keeping listeners out of the load test.
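For example, a typical non-GUI invocation looks like this (the file names here are placeholders):

jmeter -n -t testplan.jmx -l results.jtl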
Finally, show an overview of your test plan and the number of threads that you start.