I'm coding a site in PHP and getting "pretty urls" (also hiding my directories) by directing all requests to one index.php file (using .htaccess). The index file then parses the URI and includes the requested files. These files also have more than a couple of includes in them, and each may open up a MySQL connection. And then those files have includes too, which open SQL connections. It goes down to about 3-4 levels.
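For reference, the rewrite is essentially the standard front-controller pattern, roughly:

RewriteEngine On
# Serve real files and directories as-is; send everything else to index.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ index.php [QSA,L]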
Is this process CPU and memory intensive, both from the PHP includes and opening (and closing) MySQL connections in each included file?
Also, would pretty URLs done purely in .htaccess use fewer resources?
The answer regarding the logical decomposition of your app into a source hierarchy depends on how your solution is being hosted.
I administer a few phpBB forums and have found that by aggregating common include hierarchies for shared hosting implementations, I can halve the user response time. Here are some articles which describe this in more detail (Terry Ellison [phpBB]). And to quote one article:
Let me quantify my views with some ballpark figures. I need to emphasise that the figures below are indicative. I have included the benchmarks as attachments to this article, just in case you want to validate them on your own service.
- 20–40. The number of files that you can open and read per second, if the file system cache is not primed.
- 1,500–2,500. The number of files that you can open and read per second, if the file system cache is primed with their contents.
- 300,000–400,000. The number of lines per second that the PHP interpreter can compile.
- 20,000,000. The number of PHP instructions per second that the PHP interpreter can interpret.
- 500–1,000. The number of MySQL statements per second that the PHP interpreter can call, if the database cache is primed with your table contents.
For more see More on optimising PHP applications in a Webfusion shared service where you can copy the benchmarks to run yourself.
The easiest thing to do here is to pool the connection. I use my own mysqli class extension which uses a standard single-object-per-class template. In my case any module can issue a:
$db = AppDB::get();
to return this object. This is cheap, as it is an internal call involving half a dozen PHP opcodes.
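Stripped down, the pattern looks something like this (the class name AppDB and the credentials are placeholders, not the actual implementation):

// Single-object-per-class (singleton) wrapper around mysqli.
class AppDB extends mysqli {
    private static $instance = null;

    // Return the one shared connection, creating it on first use.
    public static function get() {
        if (self::$instance === null) {
            // Placeholder credentials -- load these from your config in practice.
            self::$instance = new self('localhost', 'dbuser', 'dbpass', 'appdb');
        }
        return self::$instance;
    }
}

Every module then calls AppDB::get() and gets the same object back, so only one MySQL connection is opened per request no matter how many included files use the database.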
An alternative but traditional method is to use a global to hold the object and just do a
global $db;
in any function that needs to use it.
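For example (the function and query are purely illustrative):

function load_user($id) {
    global $db;   // reuse the connection opened elsewhere instead of creating a new one
    return $db->query('SELECT * FROM users WHERE id = ' . (int) $id);
}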
You suggested combining all includes into a single include file. This is OK for stable production, but a pain during testing. Can I suggest a simple compromise? Keep them separate for testing, but allow loading of a single composite. You do this in two parts: (i) I assume each include defines a function or class, so use a standard template for each include, e.g.
if ( !function_exists( 'fred' ) ) {
    require "include/module1.php";
}
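The module itself then just defines that function (module1.php and fred() are the placeholder names from the template above), e.g.

<?php
// include/module1.php
function fred( $x ) {
    return 2 * $x;   // illustrative body only
}
?>

Keeping the closing ?> tag in each module means the files can later be concatenated into a single composite without parse errors.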
(ii) Before any of these loads, in the master script simply do:
@include "include/_all_modules.php";
This way, when you are testing, you delete _all_modules.php and the script falls back to loading the individual modules. When you're happy, you can recreate _all_modules.php. You can even do this server-side with a simple "release" script which concatenates the modules into the composite, e.g.

system( 'cat include/[a-z]*.php > include/_all_modules.php' );
That way, you get the best of both worlds.
It depends on the MySQL client code; I know for one that connections often get reused when opening a MySQL connection with the same parameters.
Personally, I would only initialize the database connection in the front controller (your index.php file), because everything should come through there anyway.
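Something like this (the paths and names are placeholders):

<?php
// index.php -- the front controller opens the one connection for the whole request
$db = new mysqli('localhost', 'dbuser', 'dbpass', 'appdb');

// Route the pretty URL to a page script; the included files reuse $db
// instead of opening their own connections.
$page = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/') ?: 'home';
require __DIR__ . '/pages/' . basename($page) . '.php';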