
How can we simulate 10,000 concurrent requests?

Is there a way to simulate 10,000 concurrent HTTP requests?

I tried the siege tool, but it has a 2,000-request limit on my laptop. How can I make 10,000 requests?

asked Jul 30 '14 by TheOneTeam



2 Answers

The simplest approach to generating a huge number of concurrent requests is probably Apache's ab tool.

For example, ab -n 100 -c 10 http://www.example.com/ would request the given website 100 times, with a concurrency of 10 requests.
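For the 10,000-request case in the question, a minimal invocation might look like the sketch below. The URL is a placeholder, and the ulimit value is an assumption: ab needs roughly one socket (file descriptor) per in-flight request, so the open-file limit usually has to be raised first.

```shell
# Sketch: 10,000 requests total, 1,000 in flight at once (URL is a placeholder)
ulimit -n 20000                                # raise the per-process open-file limit
ab -n 10000 -c 1000 http://www.example.com/
```

Note that ab runs on a single machine; pushing the concurrency toward 10,000 from one laptop may saturate the client before the server, which is one reason to consider a distributed setup like the one described in the other answer.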

It is true that the number of simultaneous requests is limited by nature. Keep in mind that TCP only has 65,536 port numbers, some of which are already occupied, and the first 1,024 are usually reserved; this leaves you with a theoretical maximum of around 64,500 ports per machine for outgoing requests.
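The ~64,500 figure is just port-number arithmetic:

```shell
# 65,536 TCP port numbers minus the 1,024 reserved low ports
echo $((65536 - 1024))   # → 64512
```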

Then there are the operating system limits. For example, in Linux there are the kernel parameters in the net.ipv4.* group.
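On Linux, the relevant knobs can be inspected with sysctl; the write example below is illustrative, not a recommendation:

```shell
# Current ephemeral (outgoing) port range
sysctl net.ipv4.ip_local_port_range
# Allow reuse of sockets in TIME_WAIT state for new outgoing connections
sysctl net.ipv4.tcp_tw_reuse
# Widening the port range (as root) might look like:
#   sysctl -w net.ipv4.ip_local_port_range="1024 65535"
```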

Finally, you should of course configure your HTTP server to handle that amount of simultaneous requests. In Apache, those are StartServers and its friends, in nginx it's worker_processes and worker_connections. Also, if you have some stand-alone dynamic processor attached to your webserver (such as php-fpm), you must raise the number of idle processes in the connection pool, too.
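For nginx, a sketch of the two directives mentioned above might look like this (the numbers are placeholders to be tuned against your cores and memory):

```nginx
worker_processes auto;         # one worker per CPU core
events {
    worker_connections 10240;  # per-worker cap on simultaneous connections
}
```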

After all, the purpose of massive parallel requests should be to find your bottlenecks, and the above steps will give you a fair idea of where they are.

By the way, if you use ab, read its final report thoroughly. It may seem brief, but it carries a lot of useful information (e.g. "non-2xx responses" may indicate server-side errors due to overload).

answered Nov 15 '22 by lxg


JMeter allows distributed testing, which means you can set up a group of computers (one acting as a master and the rest as slaves) to run as many threads as you need. JMeter has very good documentation explaining this here:

http://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.pdf

and there is some more info here:

http://digitalab.org/2013/06/distributed-testing-in-jmeter/

You can set this all up on the cloud as well if you do not have access to sufficient slave machines; there are a couple of services out there for this.
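Once the slaves are configured per the guide above, a non-GUI distributed run might be launched like this (the plan filename and hostnames are placeholders):

```shell
# -n: non-GUI mode, -t: test plan, -R: remote slave hosts, -l: results log
jmeter -n -t plan.jmx -R slave1.example.com,slave2.example.com -l results.jtl
```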

answered Nov 15 '22 by remudada