
Can I compile my PHP script to a faster executing format?

I have a PHP script that acts as a JSON API to my backend database.

Meaning, you send it an HTTP request like http://example.com/json/?a=1&b=2&c=3... and it returns a JSON object with the result set from my database.

PHP works great for this because it's literally about 10 lines of code.
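
For reference, the whole thing is roughly along these lines (a simplified sketch with made-up table, column and credential names, not the exact code):

<?php
// json/index.php - simplified sketch of the endpoint (illustrative names only)
mysql_connect('127.0.0.1', 'db_user', 'db_pass');   // placeholder credentials
mysql_select_db('mydb');

// escape the incoming GET parameters before using them in the query
$a = mysql_real_escape_string($_GET['a']);
$b = mysql_real_escape_string($_GET['b']);

$r = mysql_query("SELECT * FROM items WHERE col_a = '$a' AND col_b = '$b'");
$rows = array();
while ($row = mysql_fetch_assoc($r)) {
    $rows[] = $row;
}

header('Content-Type: application/json');
echo json_encode($rows);
?>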

But I also know that PHP is slow and this is an API that's being called about 40x per second at times and PHP is struggling to keep up.

Is there a way that I can compile my PHP script to a faster executing format? I'm already using APC (an opcode cache for PHP) as well as FastCGI.

Or, can anyone recommend a language I could rewrite the script in, so that Apache can still serve the example.com/json/ requests?

Thanks

UPDATE: I just ran some benchmarks:

  • The PHP script takes 0.6 seconds to complete.
  • If I take the SQL generated by that same PHP script and run it from the same web server, but directly through the MySQL command-line client (so network latency is still in play), fetching the result set takes only 0.09 seconds.

As you can see, PHP is close to an order of magnitude slower at producing the results. Network latency does not appear to be the major bottleneck in this case, though I agree it usually is the first suspect.

asked Dec 13 '22 by Tedi


2 Answers

Before you go optimizing something, first figure out if it's a problem. Considering it's only 10 lines of code (according to you), I very much suspect you don't have a problem. Time how long the script takes to execute. Bear in mind that network latency will typically dwarf trivial script execution times.

In other words: don't solve a problem until you have a problem.
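
A couple of microtime() checkpoints are usually enough to see where the time goes (a rough sketch, assuming a database connection is already open and a hypothetical widgets table):

<?php
// sketch: checkpoint timings to see where the time actually goes
$t0 = microtime(true);
$r = mysql_query("SELECT * FROM widgets WHERE ...;");   // the real query goes here
$t1 = microtime(true);

$rows = array();
while ($row = mysql_fetch_assoc($r)) {
    $rows[] = $row;
}
$out = json_encode($rows);
$t2 = microtime(true);

// log the split rather than polluting the response
error_log(sprintf('query=%.3fs build+encode=%.3fs', $t1 - $t0, $t2 - $t1));
echo $out;
?>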

You're already using an opcode cache (APC). It doesn't get much faster than that. More to the point, it rarely needs to get any faster than that.

If anything, you'll have problems with your database: too many connections (unlikely at 20x per second), too slow to connect, or the big one: the query itself is too slow. If you find yourself in that situation, 9 times out of 10 effective indexing and database tuning is sufficient.
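
For example, a quick first check is whether MySQL is even using an index for the slow query; EXPLAIN will tell you (a sketch against a hypothetical widgets table, assuming an open connection):

<?php
// sketch: inspect the query plan for the slow query (hypothetical table/column names)
$r = mysql_query("EXPLAIN SELECT * FROM widgets WHERE category_id = 3");
while ($row = mysql_fetch_assoc($r)) {
    // 'key' shows which index is used (NULL = full table scan), 'rows' estimates rows examined
    print_r($row);
}
// if no index is being used, adding one is usually the fix:
// mysql_query("ALTER TABLE widgets ADD INDEX idx_category (category_id)");
?>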

In the cases where it isn't, that's when you reach for some kind of caching: memcached, beanstalkd and the like.

But honestly 20x per second means that these solutions are almost certainly overengineering for something that isn't a problem.

answered Jan 18 '23 by cletus


I've had a lot of luck using PHP, memcached and nginx's memcached module together for very fast results. The easiest way is to just use the full request URL as the cache key.

I'll assume this URL:

/widgets.json?a=1&b=2&c=3

Example PHP code:

<?php
// cache key: the full request URI (path + query string), matching nginx below
$widgets_cache_key = $_SERVER['REQUEST_URI'];

// connect to memcached (requires the memcache PECL extension)
$m = new Memcache;
$m->connect('127.0.0.1', 11211);

// try to fetch the rendered JSON from the cache
$json = $m->get($widgets_cache_key);
if ($json === false) {
    // cache miss: query the database (assumes an open MySQL connection)
    $data = array();
    $r = mysql_query("SELECT * FROM widgets WHERE ...;");
    while ($row = mysql_fetch_assoc($r)) {
        $data[] = $row;
    }
    // store the encoded JSON string so nginx can serve it straight out of memcached next time
    $json = json_encode($data);
    $m->set($widgets_cache_key, $json);
}

header('Content-Type: application/json');
echo $json;
?>

That in itself provides a huge performance boost. If you were to then use nginx as a front-end for Apache (put Apache on 8080 and nginx on 80), you could do this in your nginx config:

worker_processes  2;

events {
    worker_connections  1024;
}

http {
    include  mime.types;
    default_type  application/octet-stream;

    access_log  off;
    sendfile  on;
    keepalive_timeout  5;
    tcp_nodelay  on;
    gzip  on;

    upstream apache {
        server  127.0.0.1:8080;
    }

    server {
        listen  80;
        server_name  _;

        location / {
            if ($request_method = POST) {
                proxy_pass  http://apache;
                break;
            }
            set  $memcached_key $request_uri;
            memcached_pass  127.0.0.1:11211;
            default_type  application/json;
            proxy_intercept_errors  on;
            error_page  404 502 = /fallback;
        }

        location /fallback {
            internal;
            proxy_pass  http://apache;
            break;
        }
    }
}

Notice the set $memcached_key $request_uri; line. This sets the memcached cache key to the full request URI (path plus query string), the same value the PHP script uses via $_SERVER['REQUEST_URI']. So if nginx finds a cache entry under that key it serves it directly from memory, and the request never touches PHP or Apache. Very fast.
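
If you want to sanity-check that PHP and nginx agree on the key, you can read one back by hand (a quick sketch using the example URL from above):

<?php
// sketch: confirm that a cached entry exists under the exact request URI
$m = new Memcache;
$m->connect('127.0.0.1', 11211);
var_dump($m->get('/widgets.json?a=1&b=2&c=3'));
?>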

There is an unofficial memcache module for Apache as well. I haven't tried it, but if you don't want to mess with nginx it may be worth a look.

answered Jan 18 '23 by patcoll