 

Limiting MySQL use per process

I have a Debian VPS configured with a standard LAMP stack. On this server there is only one site (a shop), which has a few cron jobs, mostly PHP scripts. One of them is an update script executed by the Lynx browser, which sends tons of queries. When this script runs (it takes 3-4 minutes to complete) it consumes all MySQL resources, and the site almost stops working (pages generate in 30-60 seconds instead of 1-2 s).

How can I limit this script (i.e. extend its execution time by limiting the resources available to it) so that other services can run properly? I believe there is a simple solution to the problem but I can't find it. It seems my Google superpowers have been limited for the last two days.

asked Apr 10 '26 by Marcin Nowak


1 Answer

You don't have access to modify the offending script, so fixing this requires database administrator work, not programming work. Your task is called tuning the MySQL database.

(I guess you already asked your vendor for help with this, and they said no.)

Run top or htop while the script runs. Is the CPU pinned at 100%? Is RAM exhausted?

1) Just live with it, and run the update script at a time of day when your web site doesn't have many visitors. Fairly easy, but not a real solution.
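If the update job is launched from cron, rescheduling it is a one-line change in the crontab. A hypothetical entry (the URL, path, and use of lynx are assumptions based on the question) that runs it at 03:30, a typically low-traffic hour:

 30 3 * * * lynx -dump http://localhost/update.php > /dev/null 2>&1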

2) As an experiment, add RAM to your VPS instance. It may let MySQL do things all-in-RAM that it's presently putting on the hard drive in temporary tables. If it helps, that may be a way to solve your problem with a small amount of work, and a larger server rental fee.
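Before paying for a bigger instance, you can get a rough indication of whether MySQL is actually spilling temporary tables to disk. This is a standard status check, not specific to your setup:

 SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';

If Created_tmp_disk_tables climbs quickly while the script runs, more RAM (together with a larger tmp_table_size / max_heap_table_size) may help.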

3) Add some indexes to speed up the queries in your script, so each query gets done faster. The question is, what indexes will help? (Just adding indexes randomly generally doesn't help much.)

First, figure out which queries are slow. Give the command SHOW FULL PROCESSLIST repeatedly while your script runs. The Info column in that result shows all the running queries. Copy them into a text file to keep them. (Or use MySQL's slow query log, which you can read about online.)
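For example, you could capture the process list from the shell every few seconds while the cron job runs (the credentials and file name here are placeholders):

 mysql -u root -p -e "SHOW FULL PROCESSLIST;" >> running-queries.txt

Or enable the slow query log temporarily from a privileged MySQL session:

 SET GLOBAL slow_query_log = 'ON';
 SET GLOBAL long_query_time = 1;  -- log anything slower than 1 second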

Second, analyze the worst offending queries to see whether there's an obvious index to add. Telling you how to do that in general is beyond the scope of a Stack Overflow answer. You might ask another question about a specific query. Before you do, please read this note about asking good SQL questions, and pay attention to the section on query performance.
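Once you have a candidate query, EXPLAIN shows whether it scans a whole table. A sketch with a hypothetical table and column (products and sku are invented for illustration):

 EXPLAIN SELECT * FROM products WHERE sku = 'ABC-123';
 -- if the plan shows type: ALL (a full table scan), an index on the
 -- filtered column may help:
 ALTER TABLE products ADD INDEX idx_sku (sku);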

4) It's possible your script is SELECTing many rows, or using SELECT to summarize many rows, from tables that also need to be updated when users visit your web site. In that case your visitors may be waiting for those SELECTs to finish. If you could change the script, you could put this statement right before the long-running SELECTs.

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

This allows the SELECT statement after it to do a "dirty read", in which it might get an earlier version of an updated row. See here.
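To see what a dirty read looks like, imagine two sessions working on a hypothetical products table (table name and values invented for illustration):

 -- session A (the update script), transaction still open:
 START TRANSACTION;
 UPDATE products SET price = 9.99 WHERE id = 42;  -- not yet committed

 -- session B, running at READ UNCOMMITTED:
 SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
 SELECT price FROM products WHERE id = 42;  -- may see 9.99 before A commits

Session B doesn't wait for session A's row locks, which is why this isolation level can relieve the stall, at the cost of possibly reading data that is later rolled back.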

Or, if you can figure out how to insert one statement into your obscured script, put this one right after it opens a database session.

SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

Without access to the source code, though, you only have one way to see if this is the problem. That is, access the MySQL server from a privileged account, right before your script runs, and give these SQL commands.

 SHOW VARIABLES LIKE 'tx_isolation';
 SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

then see if the performance problem improves. Set it back after your script finishes, probably like this (depending on the tx_isolation value retrieved above):

 SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;

Warning: a permanent global change to the isolation level might foul up your application if it relies on transaction consistency. This is just an experiment.

5) Harass the script's author to fix this problem.

answered Apr 11 '26 by O. Jones


