I come from the front-end world in web development where we try really hard to limit the number of HTTP requests issued (by consolidating css, js files, images, etc.).
With db connections (MySQL), obviously you don't want to have unnecessary connections, but as a general rule, how bad is it to have multiple small queries? (they execute quickly)
I ask because I'm moving my application to a clustered environment: where before I was caching some things in server memory (since I was running on a single server), I'm now trying to make my app "stateless," and in my current implementation that means more small db calls. This will help me with load balancing (avoiding sticky sessions) and also keep server memory usage down.
We're not talking a ton of queries, maybe 6-8 db calls instead of 2-4, returning anywhere from a handful of records to a few thousand records. Each of them executes quickly, less than 30ms (some much less), but I don't know if there is some "connection latency" I should be concerned about.
Thanks for your insight.
If your database service only allows you to make simple SQL queries, fewer than 20 queries would be fine for a small, common web page; but if it's the web page for your university or a decision-support application, 60 may not be enough.
It just depends. You can query your database as often as you like, but you may run into efficiency issues. To get a feel for your limits, you can write a script that does nothing but query your database, run it for a fixed amount of time (30 seconds, for example), count the total number of queries executed, and write the count to a file.
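A minimal sketch of such a throughput-counting script, assuming PyMySQL; the host, credentials, and the query itself are placeholders to swap for your own:

```python
import time

import pymysql  # assumption: any DB-API client works the same way

conn = pymysql.connect(host="localhost", user="app",
                       password="secret", database="test")
deadline = time.monotonic() + 30  # run for 30 seconds
count = 0

with conn.cursor() as cur:
    while time.monotonic() < deadline:
        cur.execute("SELECT 1")  # replace with the query you care about
        cur.fetchall()
        count += 1

conn.close()

with open("query_count.txt", "w") as f:
    f.write(f"{count} queries in 30 seconds\n")
```

Dividing the count by 30 gives a rough queries-per-second ceiling for a single connection running that query.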
In summary, if all your databases are on one server instance, then querying multiple databases is as easy as prefixing the table name with the database or schema name, depending on your database software. Otherwise, you need to have one database with multiple schemas to make this technique work.
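For example, on a single MySQL server you can join across databases just by qualifying the table names. A minimal sketch, assuming PyMySQL; the `shop` and `analytics` schema names and columns are hypothetical:

```python
import pymysql

# Connect without selecting a default database; each table below
# names its schema explicitly.
conn = pymysql.connect(host="localhost", user="app", password="secret")

with conn.cursor() as cur:
    cur.execute(
        "SELECT o.id, c.segment "
        "FROM shop.orders AS o "
        "JOIN analytics.customers AS c ON c.id = o.customer_id"
    )
    rows = cur.fetchall()

conn.close()
```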
It mainly depends on the installed plugins; it is normally up to 200.
Short answer: (1) make sure you're staying at the same big-O level, reuse connections, measure performance; (2) think about how much you care about data consistency.
Long answer:
Performance
Strictly from a performance perspective, and generally speaking, unless you are already close to maxing out your database resources (such as max connections), this is not likely to have a major impact. But there are certain things you should keep in mind: is the query count staying at the same big-O level with respect to your data? Is O(1) going to change to O(n)? Or is the current O(n) going to change to O(n^2)? If yes, you should think about what that means for your application.

Generally speaking about performance, the rule of thumb is: always measure.
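To illustrate the big-O concern, here is a minimal sketch, assuming PyMySQL; the `users`/`orders` tables and columns are hypothetical. The first version issues one query per user, the second a single query for all of them:

```python
import pymysql

conn = pymysql.connect(host="localhost", user="app",
                       password="secret", database="test")

def orders_per_user_slow(user_ids):
    # O(n) round-trips: one query per user.
    result = {}
    with conn.cursor() as cur:
        for uid in user_ids:
            cur.execute("SELECT id FROM orders WHERE user_id = %s", (uid,))
            result[uid] = [row[0] for row in cur.fetchall()]
    return result

def orders_per_user_fast(user_ids):
    # O(1) round-trips: one query for all users.
    placeholders = ", ".join(["%s"] * len(user_ids))
    result = {uid: [] for uid in user_ids}
    with conn.cursor() as cur:
        cur.execute(
            f"SELECT user_id, id FROM orders WHERE user_id IN ({placeholders})",
            list(user_ids),
        )
        for user_id, order_id in cur.fetchall():
            result[user_id].append(order_id)
    return result
```

Both return the same mapping; the second just pays the round-trip overhead once instead of n times.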
Consistency
Performance is not the only aspect to consider, however. Also think about how much you care about data consistency in your application.
For example, consider a simple case: tables A and B that have a one-to-one relationship, where you are querying for a single record using a primary key. If you join those tables and retrieve the result using a single query, you'll either get a record from both A and B, or no records from either, which is what your application expects too. Now consider if you split that up into 2 queries (and you're not using transactions with preferred isolation levels): you get a record from table A, but before you could grab the matching record from table B, it is deleted/updated by another process. Now your application has a record from A but none from B.
General question here is - do you care about ACID compliance of your relational data as it pertains to the queries you are breaking apart? If the answer is yes, you must think about how your application logic will react in these specific cases.
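If the answer is yes, one common option is to keep the split queries but wrap them in a transaction. A minimal sketch, assuming PyMySQL and InnoDB; tables `a` and `b` stand in for the A/B pair above:

```python
import pymysql

conn = pymysql.connect(host="localhost", user="app",
                       password="secret", database="test")

with conn.cursor() as cur:
    # REPEATABLE READ (InnoDB's default) gives every SELECT in the
    # transaction the same consistent snapshot, so a concurrent
    # delete/update of the matching `b` row between the two
    # statements is not visible here.
    cur.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")
    conn.begin()
    cur.execute("SELECT * FROM a WHERE id = %s", (42,))
    record_a = cur.fetchone()
    cur.execute("SELECT * FROM b WHERE a_id = %s", (42,))
    record_b = cur.fetchone()
    conn.commit()
```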
6-8 queries for one web page? Usually this is fine. I do it all the time.
Thousands of rows returned? Choke! What is the client going to do with that many? Can the SQL do more processing, then return fewer rows?
With rare exceptions, only 1 connection per web page.
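On the "let the SQL do more processing" point, a minimal sketch, assuming PyMySQL; the `orders` table and columns are hypothetical. Aggregating in the database means a handful of summary rows come back instead of thousands:

```python
import pymysql

conn = pymysql.connect(host="localhost", user="app",
                       password="secret", database="test")

with conn.cursor() as cur:
    # One row per day comes back, instead of every order row.
    cur.execute(
        "SELECT DATE(created_at) AS day, COUNT(*) AS n, SUM(total) AS revenue "
        "FROM orders "
        "GROUP BY DATE(created_at)"
    )
    daily = cur.fetchall()
```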
Each query has a lot of overhead. For example, when INSERTing 100 rows into a table, 100 single-row INSERT statements will take about 10 times as long as a single 100-row INSERT. So, when practical, use fewer round-trips to the server. This becomes very important if the network is a WAN. The other side of the globe is 250ms away, just for latency. A server in the same datacenter is probably so close that latency can be ignored. In a WAN, use Stored Routines to minimize round trips.
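A minimal sketch of that batching idea, assuming PyMySQL, whose `executemany()` rewrites an `INSERT ... VALUES` into a single multi-row statement; the `events` table is hypothetical:

```python
import pymysql

conn = pymysql.connect(host="localhost", user="app",
                       password="secret", database="test")
rows = [("2024-01-01 00:00:00", f"event {i}") for i in range(100)]

with conn.cursor() as cur:
    # PyMySQL batches this into one multi-row INSERT:
    # a single round-trip instead of 100.
    cur.executemany("INSERT INTO events (ts, msg) VALUES (%s, %s)", rows)
conn.commit()
```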
I like to time each query actively in the code. Then, if I perceive a performance problem, I look to see which query to work on first. Or use the SlowLog.
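A minimal sketch of that per-query timing, assuming PyMySQL; the logging destination (here just `print`) is a placeholder to swap for your own logger:

```python
import time

import pymysql

def timed_query(conn, sql, args=None):
    # Run one query and report how long it took, in milliseconds.
    start = time.perf_counter()
    with conn.cursor() as cur:
        cur.execute(sql, args)
        rows = cur.fetchall()
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{elapsed_ms:8.2f} ms  {sql[:60]}")  # swap in your logger
    return rows

conn = pymysql.connect(host="localhost", user="app",
                       password="secret", database="test")
users = timed_query(conn, "SELECT id, name FROM users WHERE active = %s", (1,))
```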