I am using Nginx + PHP-FPM with PHP 5.3.6, and it worked well for weeks. Suddenly the PHP-FPM children started consuming much more memory: in the first weeks each child used about 3MB, but now a few children are taking 700MB. Can anybody point me in the right direction?
I used this script to get the memory usage per child PID:
http://www.pixelbeat.org/scripts/ps_mem.py.
The numbers can also be verified with top (and see the ps one-liner after the top output below).
top output (columns are PID, USER, PR, NI, VIRT, RES, SHR, S, %CPU, %MEM, TIME+, COMMAND):
28419 daemon 20 0 844m 757m 4200 S 0 6.4 0:14.27 php-fpm
16788 daemon 20 0 700m 614m 4632 S 0 5.2 0:28.34 php-fpm
29450 daemon 20 0 669m 581m 3548 S 0 4.9 0:08.31 php-fpm
17881 daemon 20 0 642m 556m 4108 S 0 4.7 0:14.83 php-fpm
19048 daemon 20 0 642m 555m 4108 S 0 4.7 0:08.86 php-fpm
11956 daemon 20 0 97612 10m 5476 S 4 0.1 0:39.57 php-fpm
11993 daemon 20 0 97560 10m 5188 S 4 0.1 0:39.18 php-fpm
11925 daemon 20 0 97328 10m 5144 D 3 0.1 0:38.68 php-fpm
11953 daemon 20 0 97748 10m 5172 S 4 0.1 0:38.51 php-fpm
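For a quick cross-check, the resident size of each pool child can also be listed with a plain ps one-liner (a sketch; it assumes the workers show up under the php-fpm process name, as above):

ps -C php-fpm -o pid,user,rss,vsz,cmd --sort=-rss | head -n 10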
php-fpm.conf (/etc/php-fpm/php-fpm.conf):
listen = 127.0.0.1:9000
user = daemon
group = daemon
pm = dynamic
pm.max_children = 2000
pm.start_servers = 50
pm.min_spare_servers = 40
pm.max_spare_servers = 90
pm.max_requests = 10000
Here is some additional debugging output:
pmap:
pmap 28419
0000000000b52000 96K rw--- [ anon ]
0000000001a49000 1668K rw--- [ anon ]
0000000001bea000 208K rw--- [ anon ]
0000000001c1e000 770476K rw--- [ anon ]
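To check whether that large anonymous mapping keeps growing on a single child, the pmap total can be polled periodically (a sketch; 28419 is just the sample PID from above):

watch -n 10 'pmap -x 28419 | tail -n 1'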
strace:
strace -p 28419
Process 28419 attached - interrupt to quit
restart_syscall(<... resuming interrupted call ...>) = 0
recvfrom(4, 0x1bda1d0, 8196, 64, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)
poll([{fd=4, events=POLLIN}], 1, 5000) = 0 (Timeout)
recvfrom(4, 0x1bda1d0, 8196, 64, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)
poll([{fd=4, events=POLLIN}], 1, 5000^C <unfinished ...>
Try lowering pm.max_requests to 1000, and lower it further if needed. This makes PHP-FPM kill off each child process after it has served 1000 requests, so any memory it has leaked is returned to the system. There are many variables to consider when php-fpm hogs resources, but I have been using it for quite a while and have not seen this level of memory consumption. My guess would be a code issue or a run-away script.
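For reference, in the pool section of php-fpm.conf that change would look roughly like this (a sketch; the memory_limit line is an extra safeguard I am suggesting, not part of the original configuration):

pm.max_requests = 1000
; recycle each child after 1000 requests so leaked memory is returned to the system
php_admin_value[memory_limit] = 128M
; optional: hard per-request cap so a run-away script is terminated instead of growing unbounded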