
How to send 4000+ requests in exactly 1 second?

I have an HTTP GET request that I need to send to the application server more than 4000 times within exactly 1 second.

I'm sending these requests using JMeter, and for each test I have captured an ethereal trace with a sniffer tool (Wireshark).

I have tried to achieve this from one machine, from multiple machines in parallel, and even in distributed mode.

Actually, the JMeter results are not my concern here. The point of this test is to see, in the sniffer tool, that 4000 requests hit the server within one second.

I have found almost 2500 requests in 1 second in the ethereal trace while using the following JMeter test plan.

Number of Threads = 4000
Ramp-Up Period = 0 (though it is deprecated)
Loop Count = 1

When I use 2500 threads, I get almost 2200 requests hitting the server in one second in the ethereal trace.

The server's responses to those requests are not my concern here either. I just want to make sure that the 4000 requests sent by JMeter hit the application server within one second.
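For reference, a rough way to count per-second request hits from such a capture is a tshark I/O-statistics pass over the saved trace (capture.pcapng is a placeholder file name):

  tshark -r capture.pcapng -q -z io,stat,1,http.request

This prints, for each 1-second interval, the number of frames matching the http.request display filter.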

UPDATE:

Case 1: (4000 Threads)

Number of Threads = 4000
Ramp-Up Period = 0
Loop Count = 1

Output for Case 1:

JMeter (View Results in Table): 2.225 seconds to start 4000 requests.

Ethereal trace: 4.12 seconds for 4000 requests to hit the server.


Case 2: (3000 Threads)

JMeter (View Results in Table): 1.83 seconds to start 3000 requests.

Ethereal trace: 1.57 seconds for 3000 requests to hit the server.

Case 3: (2500 Threads)

JMeter (View Results in Table): 1.36 seconds to start 2500 requests.

Ethereal trace: 2.37 seconds for 2500 requests to hit the server.

Case 4: (2000 Threads)

JMeter (View Results in Table): 0.938 seconds to start 2000 requests.

Ethereal trace: 1.031 seconds for 2000 requests to hit the server.

I have run these tests from only one machine, with:

  • No listeners added
  • Non-GUI mode
  • No assertions in my scripts
  • Heap size: 8 GB
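For reference, a non-GUI run with the enlarged heap can be launched roughly like this, assuming the standard jmeter startup script (which reads the HEAP environment variable); test.jmx and results.jtl are placeholder file names:

  HEAP="-Xms8g -Xmx8g" jmeter -n -t test.jmx -l results.jtl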

So I don't understand why my JMeter results and the ethereal traces differ from each other. I've also tried a Synchronizing Timer to achieve this scenario.
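A Synchronizing Timer configured to release all threads at once would use settings roughly like these (label wording may differ slightly between JMeter versions):

  Synchronizing Timer:
    Number of Simulated Users to Group by = 4000
    Timeout in milliseconds = 0

With a group size of 4000 and a timeout of 0, the timer holds every thread until all 4000 have started and then releases them in a single burst, so the machine still has to be able to hold 4000 live threads.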

Since 4000 threads is too heavy for one machine, maybe I have to test this in distributed mode. I've already tried distributed mode (1 master, 2 slaves); maybe my script is wrong.
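For reference, a distributed run is launched from the master roughly like this, assuming jmeter-server is already running on each slave (host and file names are placeholders):

  jmeter -n -t test.jmx -R slave1,slave2 -l results.jtl

Note that each slave runs the full thread count defined in the plan, so with two slaves the Thread Group would be set to 2000 threads to produce 4000 requests in total.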

Is it possible to see in the ethereal trace that my 4000 requests hit the server in 1 second?

What would the JMeter script look like to achieve this scenario in distributed mode?

Asked Aug 31 '16 by Masud Jahan


1 Answer

How about starting by checking whether the server is configured correctly to handle such load? Requests can be of any type. If they are requests for static resources, then work to ensure that the absolute minimum number of them hit your origin server, through caching policies or architecture, such as:

  • If you have returning users and no CDN, make sure your cache policy stores content at the client and expires it in line with your build schedule. This avoids repeat requests from returning visitors.
  • If you have no returning users and no CDN, make sure your cache policy is set to at least 120% of the maximum page-to-page delay visible in your logs for a given user set.
  • If you have a CDN, make sure the headers on all static requests, as well as 301 and 404 responses, are set so that your CDN can cache them and expire them with your new build push schedule.
  • If you do not have a CDN, consider a model where you place all static resources on a dedicated server and mark everything on that server for aggressive caching at the client. You can also front that one server with Varnish or Squid as a caching proxy to take the load.
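As an illustration of the client-side caching point, the response headers for a static resource might look roughly like this (the one-week lifetime and date are arbitrary examples; in practice they would be tied to the build/push schedule described above):

  Cache-Control: public, max-age=604800
  Expires: Wed, 07 Sep 2016 07:00:00 GMT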

Ultimately, I would suspect a design issue at play with such a consistently high request level: 4000 requests per second becomes 14,400,000 requests per hour and 345,600,000 per 24-hour period.

On a process basis, I would also suggest a minimum of three load generators: two for primary load and one for a control virtual user, a single virtual user/thread running your business process. In your current all-on-one-generator model, you have no control element to determine the overhead imposed by a potentially overloaded load generator. The control element helps you determine whether the load generator itself is skewing the load you drive; essentially, whether resource exhaustion on the generator is acting as a brake on it.

Go for a deliberate underload philosophy on your load generators. Adding another load generator is cheaper than the political capital spent when someone attacks your test for lacking a control element and you have to re-run it. It is also far less expensive than chasing an engineering ghost that appears to be a slow system but is really an overloaded load generator.
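As a rough sketch of that split (host and file names are placeholders): drive the primary load from two remote generators and run the single-user control script from a third, lightly loaded machine:

  # on the controller, driving the two primary load generators
  jmeter -n -t primary_load.jmx -R gen1,gen2 -l load_results.jtl

  # on the third, lightly loaded machine: one thread running the same business process
  jmeter -n -t control_single_user.jmx -l control_results.jtl

If the control user's response times diverge sharply from those reported by the loaded generators, the skew is coming from the generators rather than from the system under test.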

Answered Nov 04 '22 by James Pulley