
PHP generating pages, but not returning them to the user right away

Tags: php, apache

I'm currently testing the load capacity of a server setup I'm putting together. The Apache 2 server has PHP 5.x installed, connects to a master database on a separate machine, and then to one of two slave servers for reads.

My test page takes 0.2 seconds to generate if I call it by itself. I created a PHP script on a different server that makes 65 simultaneous calls to the test page. The test page takes microtime() benchmarks throughout so I can see how long each section is taking. As expected (at least to me; if anyone has opinions or suggestions on this, feel free to comment), the SQL portion of the page takes a short amount of time for the first couple of requests and then degrades, because the rest of the queries stack up and have to wait. I thought it might be a disk I/O issue, but the same behavior occurred when testing on a solid state drive.
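For reference, the test script is roughly the following shape (a simplified sketch; the URL is a placeholder and the real script also logs when each response comes back):

<?php
// Fire off N simultaneous requests at the test page and report how
// long each one took to come back in full.
$url = 'http://testserver/testpage.php';   // placeholder URL
$concurrency = 65;

$multi   = curl_multi_init();
$handles = array();
for ($i = 0; $i < $concurrency; $i++) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($multi, $ch);
    $handles[$i] = $ch;
}

// Run all the transfers until every request has completed.
do {
    curl_multi_exec($multi, $running);
    curl_multi_select($multi, 1.0);
} while ($running > 0);

foreach ($handles as $i => $ch) {
    $info = curl_getinfo($ch);
    printf("request %d: %.3f s total\n", $i, $info['total_time']);
    curl_multi_remove_handle($multi, $ch);
    curl_close($ch);
}
curl_multi_close($multi);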

My issue is that about 30 or so of the 65 pages are created and loaded by my test script as I expected: my benchmark says the page was created in 3 seconds, for example, and my test script says it received the page in full in 3.1 seconds. The differential wasn't much. The problem is that for the other requests, my benchmark says the pages were generated in 3 seconds, but the test script didn't receive the page in full until 6 seconds. That's a full 3 seconds between the page being generated by the Apache server and it being sent back to the test script that requested it. To make sure it wasn't an issue with the test script, I tried loading the page in a local browser while the test was running, and saw the same delay, confirmed via the timeline window in Chrome.

I have tried all sorts of configurations for Apache, but can't seem to find what is causing this delay. My most recent attempt is below. The machine is a quad-core 2.8 GHz AMD with 2 GB of RAM. Any help with the configuration, or other suggestions on what to do, would be appreciated. Sorry for the long question.

I should mention that I monitored the resources while the script was running: the CPU hit a maximum of 9% load, and there was always at least 1 GB of RAM free.

I'll also mention that the same thing happens when all I'm requesting is a static HTML page. The first couple take 0.x seconds, and then it slowly ramps up to 3 seconds.

LockFile ${APACHE_LOCK_DIR}/accept.lock
PidFile ${APACHE_PID_FILE}
Timeout 120
MaxClients            150
KeepAlive On
KeepAliveTimeout 4
MaxKeepAliveRequests 150

Header always append x-frame-options sameorigin

<IfModule mpm_prefork_module>
    StartServers         50
    MinSpareServers      25
    MaxSpareServers      50
    MaxClients          150
    MaxRequestsPerChild   0
</IfModule>


User ${APACHE_RUN_USER}
Group ${APACHE_RUN_GROUP}
AccessFileName .httpdoverride

    Order allow,deny
DefaultType text/plain
HostnameLookups Off
ErrorLog ${APACHE_LOG_DIR}/error.log
LogLevel warn
Include mods-enabled/*.load
Include mods-enabled/*.conf
Include httpd.conf
Include ports.conf

LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent
Include conf.d/
Include sites-enabled/
AddType application/x-httpd-php .php
AddType application/x-httpd-php-source .phps



<IfModule security2_module>
    SecRuleEngine On
    SecRequestBodyAccess On
    SecResponseBodyAccess Off
    SecUploadKeepFiles Off
    SecDebugLog /var/log/apache2/modsec_debug.log
    SecDebugLogLevel 0
    SecAuditEngine RelevantOnly
    SecAuditLogRelevantStatus ^5
    SecAuditLogParts ABIFHZ
    SecAuditLogType Serial
    SecAuditLog /var/log/apache2/modsec_audit.log
    SecRequestBodyLimit 131072000
    SecRequestBodyInMemoryLimit 131072
    SecResponseBodyLimit 524288000
    ServerTokens Full
    SecServerSignature "Microsoft-IIS/5.0"
</IfModule>

UPDATE: It seems a lot of responses are focusing on the SQL being the culprit, so I'm stating here that the same behavior happens on a static HTML page. The results of an ab (ApacheBench) run against it are listed below.
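The command was along these lines (the URL here is a placeholder for the static page on the test server):

ab -n 1000 -c 10 http://testserver/static.html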

Concurrency Level:      10
Time taken for tests:   5.453 seconds
Complete requests:      1000
Failed requests:        899
   (Connect: 0, Receive: 0, Length: 899, Exceptions: 0)
Write errors:           0
Total transferred:      290877 bytes
HTML transferred:       55877 bytes
Requests per second:    183.38 [#/sec] (mean)
Time per request:       54.531 [ms] (mean)
Time per request:       5.453 [ms] (mean, across all concurrent requests)
Transfer rate:          52.09 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   21 250.7      0    3005
Processing:    16   33  17.8     27     138
Waiting:       16   33  17.8     27     138
Total:         16   54 253.0     27    3078

Percentage of the requests served within a certain time (ms)
  50%     27
  66%     36
  75%     42
  80%     46
  90%     58
  95%     71
  98%     90
  99%    130
 100%   3078 (longest request)

I'll also state that, using microtime() in PHP, I determined that the lag is happening before the page starts generating. I determined this from the difference between the time the page finished generating and the time my test script received it: that difference is consistent, meaning the gap between the page being generated and my test script receiving it was the same no matter how long the entire request took. A simplified sketch of the timing code is below.
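The timing inside the test page is just microtime() checkpoints between sections, roughly like this (simplified; the section names are illustrative):

<?php
// Record a timestamp after each stage of the page and print the
// deltas at the end so the slow section stands out.
$marks = array('start' => microtime(true));

// ... connect to the master and pick a slave for reads ...
$marks['db_connect'] = microtime(true);

// ... run the read queries ...
$marks['queries'] = microtime(true);

// ... build the HTML output ...
$marks['render'] = microtime(true);

$prev = null;
foreach ($marks as $label => $time) {
    if ($prev !== null) {
        printf("%-12s %.4f s\n", $label, $time - $prev);
    }
    $prev = $time;
}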

Thank you to all who have responded. All are good points; I just can't say any of them has solved the issue.

Andrew asked May 27 '11 18:05

2 Answers

There are many other factors, but I'm really guessing that you're quickly spawning 30-40 processes, each using 30 MB or so, exhausting your machine's limited memory, then continuing to spawn new ones and thrashing to swap, which slows everything down.

With 2 GB of RAM, MaxClients at 150, and MaxRequestsPerChild at 0, the server resources are probably getting swamped even if your DB isn't on the same physical server.

Basically, for web server performance you don't ever want to swap. Run your tests and then immediately check memory on the web server with:

free -m

This will give you memory and swap usage in MB. Ideally you should see swap at 0 or close to 0. If swap usage isn't zero or very low, the issue is simply that memory is running out: the server is thrashing, wasting CPU, and response times slow down as a result.

You need to get some numbers to be certain. First run 'top' and press Shift-M while it's running to sort by memory. The next time you run your tests, find a ballpark figure for how much %MEM is being reported for each httpd process. It will vary, so it's best to use the higher ones as your guide for a worst-case bound. I've got a WordPress site, a Drupal site, and a custom site on the same server whose httpd processes routinely allocate 20 MB each from the start and grow over time, if unchecked, past 100 MB each.

Pulling some numbers out of my butt for example: if I had 2 GB and Linux, core services, and MySQL were using 800 MB, I'd assume the memory available for Apache fun is under 1 GB. With that, if my Apache processes were using an average of 20 MB on the high side, I could only have 50 MaxClients. That's a very non-conservative number; in real life I'd drop it down to 40 or so to be safe. Don't try to pinch memory... if you're serving up enough traffic to have 40 simultaneous connections, pony up the $100 to go to 4 GB before inching MaxClients up. It's one of those things where, once you cross the line, everything goes down the toilet, so stay safely under your memory limits!
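If you want to sanity-check the arithmetic, it's just division with a safety margin; something like this, with the memory figures swapped for what free -m and top actually report on your box:

<?php
// Rough MaxClients estimate: memory left over for Apache divided by
// the worst-case per-child footprint, with a safety margin.
$total_mb     = 2048; // total RAM (placeholder)
$reserved_mb  = 800;  // OS, core services, MySQL, etc. (placeholder)
$per_child_mb = 20;   // high-side memory per httpd process from top (placeholder)
$safety       = 0.8;  // stay well under the limit

$available_mb = $total_mb - $reserved_mb;
$max_clients  = (int) floor(($available_mb / $per_child_mb) * $safety);

printf("MaxClients ~ %d\n", $max_clients);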

Also, with PHP I like to keep MaxRequestsPerChild at 100 or so... you're not CPU-bound serving web pages, so don't worry about saving a few milliseconds spawning new child processes. Setting it to 0 means unlimited requests per child, and children never get killed off unless the total exceeds MaxSpareServers. This is generally A VERY BAD THING with PHP under Apache's prefork workers, as the processes just keep growing until badness occurs (like having to hard-restart your server because you can't log in, since Apache used up all the memory and ssh times out).
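Put together, a more conservative prefork block for a 2 GB box would look something like this; the exact numbers are illustrative and should come from your own measurements:

<IfModule mpm_prefork_module>
    StartServers          10
    MinSpareServers        5
    MaxSpareServers       10
    MaxClients            40
    MaxRequestsPerChild  100
</IfModule>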

Good Luck!

Ray answered Nov 15 '22 21:11


What is the exact number of pages that load before the drop off? You mentioned that you were creating 65 simultaneous requests from a single, external script. You don't have a mod like limitipconn enabled that would be limiting things after N connections from a single IP or something? Is it always exactly 30 (or whatever) connections and then delay?
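If something like mod_limitipconn is loaded, it would typically show up in the config as a block along these lines (the limit value here is just an example), so it's worth grepping for:

<IfModule mod_limitipconn.c>
    <Location />
        MaxConnPerIP 30
    </Location>
</IfModule>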

profexorgeek answered Nov 15 '22 22:11