Two warnings: this performance thing is addictive, and every bit you squeeze makes you want more. Also, English is my second language, so please pardon any mistakes.
Anyway, I am comparing nginx performance for WordPress websites in different scenarios and something seems weird, so I am sharing the results here and maybe I need to adjust my expectations.
Software
# NGINX 1.4.2-1~dotdeb.1
# PHP5-CGI 5.4.20-1~dotdeb.1
# PHP-FPM 5.4.20-1~dotdeb.1
# MySQL Server 5.5.31+dfsg-0+wheezy1
# MySQL Tuner 1.2.0-1
# APC opcode cache 3.1.13-1
This is an EC2 small instance. All tests were done with siege at 40 concurrent requests for 2 minutes, running from localhost against localhost.
Scenario one - a URL cached via fastcgi_cache to tmpfs (memory)
siege -c 40 -b -t120s 'http://www.joaodedeus.com.br/quero-visitar/abadiania-go'
Transactions: 1403 hits
Availability: 100.00 %
Elapsed time: 119.46 secs
Data transferred: 14.80 MB
Response time: 3.36 secs
Transaction rate: 11.74 trans/sec
Throughput: 0.12 MB/sec
Concurrency: 39.42
Successful transactions: 1403
Failed transactions: 0
Longest transaction: 4.43
Shortest transaction: 1.38
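For reference, scenario one uses a fastcgi_cache zone whose path sits on tmpfs. Below is a minimal sketch of such a setup; the cache path, zone name, sizes and PHP-FPM socket are assumptions for illustration, not my exact config.

    # http context: cache zone backed by tmpfs (path, zone name and sizes are assumed)
    fastcgi_cache_path /dev/shm/nginx-cache levels=1:2 keys_zone=WPCACHE:64m inactive=60m;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    # server context: PHP requests go to PHP-FPM and the responses get cached
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;   # assumed socket path
        fastcgi_cache WPCACHE;
        fastcgi_cache_valid 200 60m;
        add_header X-Cache $upstream_cache_status;  # handy for checking HIT/MISS
    }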
Scenario two - the same URL cached via fastcgi_cache to disk (EC2 on-instance storage - ephemeral)
Transactions: 1407 hits
Availability: 100.00 %
Elapsed time: 119.13 secs
Data transferred: 14.84 MB
Response time: 3.33 secs
Transaction rate: 11.81 trans/sec
Throughput: 0.12 MB/sec
Concurrency: 39.34
Successful transactions: 1407
Failed transactions: 0
Longest transaction: 4.40
Shortest transaction: 0.88
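Scenario two only changes where the cache files live; everything else stays the same. Assuming the ephemeral volume is mounted at /mnt (a guess, not confirmed anywhere in this post), the only difference from the sketch above is the cache path:

    # same zone, but backed by the on-instance (ephemeral) disk instead of tmpfs
    fastcgi_cache_path /mnt/nginx-cache levels=1:2 keys_zone=WPCACHE:64m inactive=60m;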
Here is where the first question pops up: I don't see a huge difference between RAM and disk. Is that normal? I mean, there is no big benefit in using a RAM cache.
Scenario three - the same page, saved as a static .html file and served by nginx
Transactions: 1799 hits
Availability: 100.00 %
Elapsed time: 120.00 secs
Data transferred: 25.33 MB
Response time: 2.65 secs
Transaction rate: 14.99 trans/sec
Throughput: 0.21 MB/sec
Concurrency: 39.66
Successful transactions: 1799
Failed transactions: 0
Longest transaction: 5.21
Shortest transaction: 1.30
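Scenario three bypasses PHP and the fastcgi cache entirely: the page is dumped to a .html file and nginx serves it straight from disk. A rough sketch, with an assumed root and file name:

    # serve a pre-rendered copy of the page as a plain static file
    location = /quero-visitar/abadiania-go {
        root /var/www/static;                                # assumed path of the dumped page
        try_files /quero-visitar/abadiania-go.html =404;
    }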
Here is the main question: this is a huge difference. AFAIK, serving from the fastcgi cache is supposed to be about as fast as serving a static .html file, right? nginx sees that there is a cache rule for the location, finds a cached version, and serves it. Why is the difference so big?
The cache is working fine:
35449 -
10835 HIT
1156 MISS
1074 BYPASS
100 EXPIRED
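These counts come from $upstream_cache_status; the "-" lines are most likely requests the fastcgi cache did not apply to (the variable is empty, so the log shows a dash). One way to collect them, assuming a custom log format and log path:

    # http context: log the cache status for every request
    log_format cache '$upstream_cache_status $request';
    access_log /var/log/nginx/cache.log cache;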
Best regards.
Here is a short summary of the investigation on the nginx mailing list (see the thread here):
First of all, the numbers reported are very low. They should be much higher, so answering the original question ("why the difference") doesn't really make sense; the correct question is "why so slow". Even an EC2 small instance should do better.
During the investigation the host was found to be CPU bound, with the gzip filter and the pagespeed module being the most CPU hungry.
The basic recommendation was to test with gzip off; and pagespeed off;. With both disabled, a 30x speedup was reported.
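For anyone who wants to reproduce the comparison, the two directives in question can be switched off like this (pagespeed off; only exists if the ngx_pagespeed module is compiled in):

    # server (or http) context: disable the two most CPU-hungry filters for the benchmark
    gzip off;
    pagespeed off;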