
ab load testing

People also ask

What is ab load testing?

Apache Bench (ab) is a load testing and benchmarking tool for the Hypertext Transfer Protocol (HTTP) server. It can be run from the command line and is very simple to use. A quick load-testing result can be obtained in about a minute.

How do I run Apache benchmark?

Step 1 − Update the package database. Please note that the symbol # before a terminal command means that the root user is issuing that command. Step 2 − Install the apache2-utils package to get access to Apache Bench. (Being an Apache utility, Apache Bench is also installed automatically when the Apache web server itself is installed.)
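On a Debian/Ubuntu system (an assumption; other distributions use their own package managers), those two steps might look like this:

# apt-get update
# apt-get install apache2-utils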

What is ab command Linux?

ab is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server. It is designed to give you an impression of how your current Apache installation performs. In particular, it shows you how many requests per second your Apache installation is capable of serving.
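In its simplest form, its synopsis is:

ab [options] [http[s]://]hostname[:port]/path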


The Apache Bench tool is very basic, and while it will give you a solid idea of some performance characteristics, it is a bad idea to depend on it alone if you plan to expose your site to serious stress in production.

Having said that, here are the most common and simplest parameters:

-c: ("Concurrency"). Indicates how many clients (people/users) will be hitting the site at the same time. While ab runs, there will be -c clients hitting the site. This is what actually decides the amount of stress your site will suffer during the benchmark.

-n: Indicates how many requests are going to be made. This just determines the length of the benchmark. A high -n value with a -c value that your server can support is a good idea to ensure that things don't break under sustained stress: supporting stress for 5 seconds is not the same as supporting it for 5 hours.

-k: This enables the "KeepAlive" functionality that browsers use by nature. You don't need to pass a value for -k, as it is "boolean" (meaning: it indicates that you want your test to use the HTTP Keep-Alive header and reuse the connection). Since browsers do this, and you're likely to want to simulate the stress and flow that your site will see from browsers, it is recommended that you benchmark with it.

The final argument is simply the host. By default ab will use the http:// protocol if you don't specify one.

ab -k -c 350 -n 20000 example.com/

By issuing the command above, you will be hitting http://example.com/ with 350 simultaneous connections until 20 thousand requests have been made, using the Keep-Alive header.

After the process finishes the 20 thousand requests, you will receive statistics as feedback. They will tell you how well the site performed under the stress you put it under with the parameters above.

To find out how many people the site can handle at the same time, just check whether the response times (mean, min and max response times, failed requests, etc.) are numbers your site can accept (different sites may require different speeds). You can run the tool with different -c values until you hit the spot where you say, "If I increase it, it starts to get failed requests and it breaks."

Depending on your website, you will expect an average number of requests per minute. This varies so much that you won't be able to simulate it exactly with ab. However, think about it this way: if your average user makes 5 requests per minute and the average response time you consider valid is 2 seconds, then that user spends 10 seconds out of every minute on requests, meaning they are hitting the site only 1/6 of the time. This also means that if you have 6 users hitting the site with ab simultaneously, you are effectively simulating about 36 users, even though your concurrency level (-c) is only 6.

This depends on the behaviour you expect from your users, but you can derive it from "I expect my users to make X requests per minute and I consider an average response time of 2 seconds acceptable." Then just raise your -c level until you are hitting 2 seconds of average response time (while making sure the max response time and standard deviation are still acceptable) and see how big you can make -c, as sketched below.
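A rough sketch of that sweep (example.com, the request count and the concurrency values below are placeholders, not recommendations):

for c in 10 25 50 100 200 350; do
    echo "=== concurrency $c ==="
    ab -q -k -c $c -n 5000 http://example.com/ | grep -E "Requests per second|Time per request|Failed requests"
done

For each concurrency level this prints the throughput, the mean response time and the failed-request count, so you can see where the 2-second threshold is crossed.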

I hope I explained this clearly :) Good luck!


Please walk me through the commands I should run to figure this out.

The simplest test you can do is to perform 1000 requests, 10 at a time (which approximately simulates 10 concurrent users getting 100 pages each - over the length of the test).

ab -n 1000 -c 10 -k -H "Accept-Encoding: gzip, deflate" http://www.example.com/

-n 1000 is the number of requests to make.

-c 10 tells ab to do 10 requests at a time, instead of 1 request at a time, to better simulate concurrent visitors (vs. sequential visitors).

-k sends the KeepAlive header, which asks the web server to not shut down the connection after each request is done, but to instead keep reusing it.

I'm also sending the extra header Accept-Encoding: gzip, deflate because mod_deflate is almost always used to compress the text/html output by 25%-75%, the effect of which should not be dismissed because of its impact on the overall performance of the web server (i.e., it can transfer 2x the data in the same amount of time, etc.).

Results:

Benchmarking www.example.com (be patient)
Completed 100 requests
...
Finished 1000 requests


Server Software:        Apache/2.4.10
Server Hostname:        www.example.com
Server Port:            80

Document Path:          /
Document Length:        428 bytes

Concurrency Level:      10
Time taken for tests:   1.420 seconds
Complete requests:      1000
Failed requests:        0
Keep-Alive requests:    995
Total transferred:      723778 bytes
HTML transferred:       428000 bytes
Requests per second:    704.23 [#/sec] (mean)
Time per request:       14.200 [ms] (mean)
Time per request:       1.420 [ms] (mean, across all concurrent requests)
Transfer rate:          497.76 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       1
Processing:     5   14   7.5     12      77
Waiting:        5   14   7.5     12      77
Total:          5   14   7.5     12      77

Percentage of the requests served within a certain time (ms)
  50%     12
  66%     14
  75%     15
  80%     16
  90%     24
  95%     29
  98%     36
  99%     41
 100%     77 (longest request)

For the simplest interpretation, ignore everything BUT this line:

Requests per second:    704.23 [#/sec] (mean)

Multiply that by 60, and you have your requests per minute.
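In this example that works out to about 704.23 × 60 ≈ 42,250 requests per minute.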

To get real-world results, you'll want to test WordPress instead of some static HTML or index.php file, because you need to know how everything performs together: including complex PHP code and multiple MySQL queries...
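For instance, something like this (the local URL is just an assumption about where the WordPress install lives):

ab -n 1000 -c 10 -k -H "Accept-Encoding: gzip, deflate" http://localhost/wordpress/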

For example, here are the results of testing a fresh install of WordPress on the same system and WAMP environment (I'm using WampDeveloper, but there are also XAMPP, WampServer, and others)...

Requests per second:    18.68 [#/sec] (mean)

That's 37x slower now!

After the load test, there are a number of things you can do to improve overall performance (requests per second) and make the web server more stable under greater load (e.g., increasing -n and -c tends to crash Apache), which you can read about here:

Load Testing Apache with AB (Apache Bench)


Steps to set up Apache Bench (ab) on Windows (IMO, recommended):

Step 1 - Install XAMPP.
Step 2 - Open CMD.
Step 3 - Go to the Apache Bench directory (cd C:\xampp\apache\bin) in CMD.
Step 4 - Run the command (ab -n 100 -c 10 -k -H "Accept-Encoding: gzip, deflate" http://localhost:yourport/).
Step 5 - Wait for it. You're done.


Load testing your API by using just ab is not enough. However, I think it's a great tool for giving you a basic idea of how your site performs.

If you want to use the ab command to test multiple API endpoints with different data, all at the same time and in the background, you need to use the nohup command. It keeps a command running even after you close the terminal.

I wrote a simple script that automates the whole process, feel free to use it: http://blog.ikvasnica.com/entry/load-test-multiple-api-endpoints-concurrently-use-this-simple-shell-script
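As a minimal sketch of the nohup idea (the endpoint URLs, payload file and log names below are made up for illustration):

nohup ab -k -c 50 -n 5000 http://api.example.com/users/ > users.log 2>&1 &
nohup ab -k -c 50 -n 5000 -p order.json -T "application/json" http://api.example.com/orders/ > orders.log 2>&1 &
wait

nohup keeps each ab instance running even if the terminal is closed; the results end up in the per-endpoint log files, and wait (if you keep the shell open) blocks until both runs have finished.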


I was also curious whether I could measure the speed of my script with Apache ab/abs, with a construct/destruct PHP timing script, or with a PHP extension.

The last two failed for me: they are only approximate. After that I thought to try "ab" and "abs".

the command "ab -k -c 350 -n 20000 example.com/" is beautiful because it's all easier!

But has anyone thought of benchmarking "localhost" on a local Apache server, for example XAMPP from www.apachefriends.org?

You should create a folder such as "bench" in the document root containing 2 files: the test script "bench.php" and the reference script "void.php".

and then: benchmark it!

bench.php

<?php
// Print ~1.8 MB of text to give the server measurable work
for ($i = 1; $i < 50000; $i++) {
    print('qwertyuiopasdfghjklzxcvbnm1234567890');
}
?>

void.php

<?php
?>

On your desktop you should use a .bat file (on Windows) like this:

bench.bat

"c:\xampp\apache\bin\abs.exe" -n 10000 http://localhost/bench/void.php
"c:\xampp\apache\bin\abs.exe" -n 10000 http://localhost/bench/bench.php
pause

Now, if you pay close attention...

the void script does not produce a zero result! SO THE CONCLUSION IS: the first result should be subtracted from the second!

Here is what I got:

c:\xampp\htdocs\bench>"c:\xampp\apache\bin\abs.exe" -n 10000 http://localhost/bench/void.php
This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        Apache/2.4.33
Server Hostname:        localhost
Server Port:            80

Document Path:          /bench/void.php
Document Length:        0 bytes

Concurrency Level:      1
Time taken for tests:   11.219 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      2150000 bytes
HTML transferred:       0 bytes
Requests per second:    891.34 [#/sec] (mean)
Time per request:       1.122 [ms] (mean)
Time per request:       1.122 [ms] (mean, across all concurrent requests)
Transfer rate:          187.15 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.3      0       1
Processing:     0    1   0.9      1      17
Waiting:        0    1   0.9      1      17
Total:          0    1   0.9      1      17

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      1
  90%      1
  95%      2
  98%      2
  99%      3
 100%     17 (longest request)

c:\xampp\htdocs\bench>"c:\xampp\apache\bin\abs.exe" -n 10000 http://localhost/bench/bench.php
This is ApacheBench, Version 2.3 <$Revision: 1826891 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:        Apache/2.4.33
Server Hostname:        localhost
Server Port:            80

Document Path:          /bench/bench.php
Document Length:        1799964 bytes

Concurrency Level:      1
Time taken for tests:   177.006 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      18001600000 bytes
HTML transferred:       17999640000 bytes
Requests per second:    56.50 [#/sec] (mean)
Time per request:       17.701 [ms] (mean)
Time per request:       17.701 [ms] (mean, across all concurrent requests)
Transfer rate:          99317.00 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.3      0       1
Processing:    12   17   3.2     17      90
Waiting:        0    1   1.1      1      26
Total:         13   18   3.2     18      90

Percentage of the requests served within a certain time (ms)
  50%     18
  66%     19
  75%     19
  80%     20
  90%     21
  95%     22
  98%     23
  99%     26
 100%     90 (longest request)

c:\xampp\htdocs\bench>pause
Press any key to continue . . .

90 - 17 = 73 is the result I expect!
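(Doing the same subtraction on the mean per-request times instead of the longest ones gives roughly 17.701 - 1.122 ≈ 16.6 ms of work per request attributable to the loop itself.)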