 

Scaling Nginx, PHP-FPM and MongoDB

I am looking for the best way to scale a PHP application under Nginx with PHP-FPM, targeting a concurrency of about 1200. Currently, anything over 400 concurrent connections starts getting slow response times. Responses are generally very small, but a few may be fairly large; request sizes are likewise usually small, apart from a select few.

Things are fast until the servers come under heavy load, at which point response times crawl to anywhere between 2 and 50 seconds. Under light load, response times stay between 100 and 300 milliseconds.

The setup is two servers behind a load balancer, with Nginx, PHP-FPM and MongoDB on both boxes. One server runs the mongod master and the arbiter, the other runs the slave (unless a failover occurs). I'm aware of Mongo best practices, but I don't have enough servers to run dedicated database boxes.

There is still quite a bit of RAM free, and the one-minute load average never gets above 0.7. These are 8-core boxes with 16 GB of RAM each, so hardware shouldn't be the bottleneck. Mongo isn't sweating at all, and Nginx and PHP-FPM don't seem to be either; I've checked the top statistics and MongoDB via db.serverStatus().
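
For reference, the serverStatus counters I'm watching can be pulled straight from the shell; a quick sketch, assuming mongod is listening on the default localhost port:

mongo --quiet --eval "printjson(db.serverStatus().connections)"   # current vs. available connections
mongo --quiet --eval "printjson(db.serverStatus().globalLock)"    # lock stats and queued readers/writers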

My question is, given my concurrency, do my Nginx fastcgi settings look correct, and is there anything else that I may be missing even if it doesn't have anything to do with Nginx settings?

fastcgi_connect_timeout 60;        # time allowed to connect to the PHP-FPM backend
fastcgi_send_timeout 180;          # time allowed to send the request to PHP-FPM
fastcgi_read_timeout 180;          # time allowed to read the response from PHP-FPM
fastcgi_buffer_size 128k;          # buffer for the first part of the response (headers)
fastcgi_buffers 4 256k;            # buffers for the rest of the response
fastcgi_busy_buffers_size 256k;    # cap on buffers busy sending to the client
fastcgi_temp_file_write_size 256k; # chunk size when spooling to a temp file
fastcgi_intercept_errors on;       # let error_page handle backend responses >= 300
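
For context, those directives sit alongside the location block that hands requests to the pool; a minimal sketch, where the socket path is an assumption (a TCP fastcgi_pass to 127.0.0.1:9000 works the same way):

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php-fpm.sock;   # assumed socket path, not my actual setup
}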

Would a low "ulimit -n" slow this down? Mongo uses about 500 to 600 connections when under a heavy load. Ulimit settings are as follows:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 147456
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 147456
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

FYI, I will be upping "ulimit -n" when load testing for 1200 concurrency.
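
For anyone wondering how I'll raise it: roughly, the per-user limit goes in /etc/security/limits.conf (daemons started at boot may instead need it set in their init/upstart configuration), and Nginx also wants worker_rlimit_nofile so its workers actually pick up the higher limit. A sketch with assumed user names and numbers:

# /etc/security/limits.conf -- user names and values are examples
www-data    soft    nofile    65535
www-data    hard    nofile    65535
mongodb     soft    nofile    65535
mongodb     hard    nofile    65535

# nginx.conf -- lets worker processes raise their own open-file limit
worker_rlimit_nofile 65535;

# verify: "ulimit -n" after re-login, or for a running daemon:
cat /proc/$(pidof mongod)/limits | grep 'open files'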

Thanks in advance.

Asked May 31 '11 by Tres


2 Answers

It seems all it took was a little bit of calculation. Since I have 8 cores available, I can run more Nginx worker processes:

nginx.conf

worker_processes 4;
events {
    worker_connections 1024;
}
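
A rough back-of-the-envelope on what that buys (my arithmetic, not an Nginx guarantee): worker_connections is counted per worker and includes the upstream FastCGI sockets, so:

# ceiling ≈ worker_processes * worker_connections = 4 * 1024 = 4096 sockets,
# and each proxied request holds both a client socket and a PHP-FPM socket,
# so call it roughly ~2000 in-flight client requests per box, still subject
# to worker_rlimit_nofile / "ulimit -n"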

And 16 GB of RAM gives some legroom for a static number of PHP-FPM workers.

php-fpm.conf

pm = static
pm.max_children = 4096
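
If anyone wants to sanity-check pm.max_children against memory, the usual heuristic is spare RAM divided by the average php-fpm worker RSS; a quick way to measure that average (the process name may differ on your distro):

# average resident size per PHP-FPM worker, in MB
ps -C php-fpm -o rss= | awk '{sum+=$1; n++} END {printf "%.1f MB avg\n", sum/n/1024}'
# then: pm.max_children ≈ (RAM you can spare for PHP) / (that average)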

The Nginx fastcgi settings stayed the same. I probably have a bit more tweaking to do, since as the settings changed the acceptable concurrency stayed the same while the server load went down, but this seems to do the trick and is at least a starting point.

A single server seems to handle about 2000 concurrency before the load gets pretty high. ApacheBench starts getting errors around 500 concurrency so testing with AB should be done from multiple servers.
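
For anyone repeating the test, this is the sort of ab invocation I mean, run from several load-generating boxes at once (URL and numbers are placeholders):

# three machines at -c 400 each ≈ 1200 concurrent clients against the balancer
ab -k -n 50000 -c 400 http://your-load-balancer/some/endpoint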

As David said, ideally this would be written in something that could scale more easily, but given the time frame that just isn't feasible at this point.

I hope this helps others.

Answered by Tres


MongoDB is not the bottleneck here. If you need 1200+ concurrent connections, PHP-FPM (and PHP in general) may not be the tool for the job. Actually, scratch that. It's NOT the right tool for the job. Many benchmarks assert that after 200-500 concurrent connections, nginx/PHP-FPM starts to falter (see here).

I was in a similar situation last year and instead of trying to scale the unscalable, I rewrote the application in Java using Kilim (a project which I've also contributed to). Another great choice is writing it in Erlang (which is what Facebook uses). I strongly suggest you re-evaluate your choice of language here and refactor before it's too late.

Suppose you get PHP-FPM working "okay" with 1200, maybe even 1500, concurrent connections. What about 2000? 5000? 10000? Absolutely, unequivocally, indubitably impossible.

Answered by David Titarenco