
Apache Benchmark - concurrency and number of requests

The benchmark documentation says concurrency is how many requests are done simultaneously, while number of requests is the total number of requests. What I'm wondering is, if I run 100 requests at a concurrency level of 20, does that mean 5 tests of 20 requests at the same time, or 100 tests of 20 requests at the same time each? I'm assuming the second option, because of the example numbers quoted below.
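
For concreteness, the command for that scenario would look something like this (example.com is just a stand-in URL):

ab -n 100 -c 20 http://example.com/

where -n is the total number of requests and -c is the concurrency level.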

I'm wondering because I frequently see results such as this one on some testing blogs:

Complete requests: 1000000
Failed requests: 2617614

This seems implausible, since the number of failed requests is higher than the number of total requests.

Edit: the site that displays the aforementioned numbers: http://zgadzaj.com/benchmarking-nodejs-basic-performance-tests-against-apache-php

OR could it be that it keeps trying until it reaches one million successes? Hm...

asked Oct 05 '11 by Swader


People also ask

What is concurrency in Apache benchmark?

In simple words, with ab -n 1000 -c 5 http://www.example.com/, the -n 1000 option tells ab to send a total of 1000 requests to the example.com server for the benchmarking session, and -c 5 sets the concurrency, i.e. ab keeps 5 requests running against example.com at the same time.

How does Apache Benchmark work?

ApacheBench can also display data about each connection in tab-separated values (TSV) format, allowing you to calculate values that are not available within the standard ab report, such as wait time percentiles. This data comes from the same data objects that ab uses to calculate Connection Times and percentiles.
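
For example (the URL and file names here are just placeholders), ab's -g flag writes those per-connection measurements to a gnuplot-style TSV file, and -e writes a CSV of the time taken to serve each percentage of requests:

ab -n 1000 -c 10 -g connections.tsv -e percentiles.csv http://www.example.com/

The TSV file can then be loaded into a spreadsheet or gnuplot to work out figures, such as wait-time percentiles, that the standard report doesn't print.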

What is non 2xx response?

Non-2xx responses. The number of responses that were not in the 200 series of response codes. If all responses were 200, this field is not printed. Keep-Alive requests. The number of connections that resulted in Keep-Alive requests.
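
Note that ab only makes Keep-Alive requests if you ask it to; the default is a fresh connection per request. A command along the lines of ab -k -n 100 -c 10 http://www.example.com/ (the -k flag enables HTTP KeepAlive; the URL is a placeholder) is what produces a non-zero count on that line.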


1 Answer

It means a single test with a total of 100 requests, keeping 20 requests open at all times. I think the misconception you have is that requests all take the same amount of time, which is virtually never the case. Instead of issuing requests in batches of 20, ab simply starts with 20 requests and issues a new one each time an existing request finishes.

For example, testing with ab -n 10 -c 3 would start with 3 concurrent requests:

[1, 2, 3]

Let's say #2 finishes first; ab replaces it with a fourth:

[1, 4, 3]

... then #1 may finish, replaced by a fifth:

[5, 4, 3]

... Then #3 finishes:

[5, 4, 6]

... and so on, until a total of 10 requests have been made. (As requests 8, 9, and 10 complete, the concurrency of course tapers off to 0.)

Make sense?
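
To make the scheduling concrete, here is a minimal toy simulation of that behaviour; this is not ab's actual code, just a Python sketch where random durations stand in for real response times:

import heapq
import random

def simulate(n_requests=10, concurrency=3, seed=1):
    random.seed(seed)
    in_flight = []   # min-heap of (finish_time, request_id)
    issued = 0
    now = 0.0

    # Start the first `concurrency` requests immediately.
    while issued < min(concurrency, n_requests):
        issued += 1
        heapq.heappush(in_flight, (now + random.uniform(0.1, 1.0), issued))

    # Whenever a request finishes, issue the next one, so at most
    # `concurrency` requests are ever open at the same time.
    while in_flight:
        now, done = heapq.heappop(in_flight)
        print("t=%.2fs  request #%d finished, %d still in flight"
              % (now, done, len(in_flight)))
        if issued < n_requests:
            issued += 1
            heapq.heappush(in_flight, (now + random.uniform(0.1, 1.0), issued))

simulate()

Running it prints one line per completed request; the "still in flight" count sits at 2 for most of the run and falls to 1 and then 0 as the last requests drain out.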

As to your question about why you see results with more failures than total requests... I don't know the answer to that. I can't say I've seen that. Can you post links or test cases that show this?

Update: In looking at the source, ab tracks four types of errors, which are detailed below the "Failed requests: ..." line:

  • Connect - (err_conn in source) Incremented when ab fails to set up the HTTP connection
  • Receive - (err_recv in source) Incremented when a read from the connection fails
  • Length - (err_length in source) Incremented when the response length is different from the length of the first good response received.
  • Exceptions - (err_except in source) Incremented when ab sees an error while polling the connection socket (e.g. the connection is killed by the server?)

The logic around when these occur and how they are counted (and how the total bad count is tracked) is, of necessity, a bit complex. It looks like the current version of ab should only count a failure once per request, but perhaps the author of that article was using a prior version that somehow counted more than one failure per request? That's my best guess.

If you're able to reproduce the behavior, definitely file a bug.

answered Sep 22 '22 by broofa