Implementing a simple queue with PHP and MySQL?

I have a PHP script that retrieves rows from a database and then performs work based on the contents. The work can be time consuming (but not necessarily computationally expensive) and so I need to allow multiple scripts to run in parallel.

The rows in the database look something like this:

+---------------------+---------------+------+-----+---------------------+----------------+
| Field               | Type          | Null | Key | Default             | Extra          |
+---------------------+---------------+------+-----+---------------------+----------------+
| id                  | bigint(11)    | NO   | PRI | NULL                | auto_increment |
.....
| date_update_started | datetime      | NO   |     | 0000-00-00 00:00:00 |                |
| date_last_updated   | datetime      | NO   |     | 0000-00-00 00:00:00 |                |
+---------------------+---------------+------+-----+---------------------+----------------+

My script currently selects rows with the oldest dates in date_last_updated (which is updated once the work is done) and does not make use of date_update_started.

If I were to run multiple instances of the script in parallel right now, they would select the same rows (at least some of the time) and duplicate work would be done.

What I'm thinking of doing is using a transaction to select the rows, update the date_update_started column, and add a WHERE condition so that only rows whose date_update_started is older than some cutoff are selected (to ensure another script isn't already working on them). E.g.

$dbh->beginTransaction(); // PDO cannot run multiple statements in a single prepare()
$sth = $dbh->prepare('
    SELECT * FROM `table`
    WHERE date_update_started < UTC_TIMESTAMP() - INTERVAL 1 DAY
    ORDER BY date_last_updated
    LIMIT 1000
');
$sth->execute(); // in real code some values will be bound
$rows = $sth->fetchAll(PDO::FETCH_ASSOC);

if ($rows) {
    // MySQL does not support LIMIT inside an IN (...) subquery, so update by the fetched ids
    $ids = implode(',', array_map('intval', array_column($rows, 'id')));
    $dbh->exec("UPDATE `table` SET date_update_started = UTC_TIMESTAMP() WHERE id IN ($ids)");
}
$dbh->commit();

From what I've read, this is essentially a queue implementation and seems to be frowned upon in MySQL. All the same, I need to find a way to allow multiple scripts to run in parallel, and after the research I've done this is what I've come up with.

Will this type of approach work? Is there a better way?

asked Dec 13 '14 by Nate


1 Answer

I think your approach could work, as long as you also mark the selected rows as currently being worked on. It could be done as @JuniusRendel suggested, and I would even think about using another string key (a random value or an instance id) for cases where the script hits an error and does not complete gracefully, since you will have to clear these fields once the work is done.
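A minimal sketch of that idea, assuming a nullable worker_token column added to the table and a one-hour window after which a stale claim counts as abandoned (both the column and the window are assumptions, not part of the original schema):

// Hypothetical worker_token column marks which instance claimed each row.
$token = bin2hex(random_bytes(16)); // unique id for this script instance

// Claim unclaimed rows, or rows whose claim looks abandoned, in one atomic UPDATE.
$sth = $dbh->prepare('
    UPDATE `table`
    SET worker_token = :token, date_update_started = UTC_TIMESTAMP()
    WHERE worker_token IS NULL
       OR date_update_started < UTC_TIMESTAMP() - INTERVAL 1 HOUR
    ORDER BY date_last_updated
    LIMIT 1000
');
$sth->execute([':token' => $token]);

// Fetch only the rows this instance claimed; clear worker_token once the work is done.
$sth = $dbh->prepare('SELECT * FROM `table` WHERE worker_token = :token');
$sth->execute([':token' => $token]);
$rows = $sth->fetchAll(PDO::FETCH_ASSOC);

Because the claim happens in a single UPDATE, two instances cannot grab the same rows, and a crashed worker's rows become claimable again after the timeout.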

The problem with this approach, as I see it, is the chance that two scripts run at the same moment and select the same rows before those rows are marked as locked. How serious that is depends on the kind of work you do on the rows: if both scripts would produce the same end result, the only cost is wasted time and server memory (not small issues, but I will put them aside for now...). If the two scripts would produce different updates, the problem is that the wrong update could end up in the table.

@Jean has mentioned the second approach you can take, which involves MySQL locks. I am not an expert on the subject, but it seems like a good approach, and the 'SELECT ... FOR UPDATE' statement could give you what you are looking for: you can do the select and the update in the same transaction, which is faster than two separate queries and reduces the risk of other instances selecting these rows, as they will be locked.

'SELECT ... FOR UPDATE' allows you to run a select statement and lock the matched rows for updating, so your statements could look like:

START TRANSACTION;
   SELECT * FROM tb WHERE field='value' LIMIT 1000 FOR UPDATE;
   UPDATE tb SET lock_field='1' WHERE field='value' LIMIT 1000;
COMMIT;
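
A sketch of the same pattern from PHP with PDO (tb, field, and lock_field as in the SQL above):

$dbh->beginTransaction();
// The matched rows stay locked until commit(); other FOR UPDATE selects on them will block.
$sth = $dbh->query("SELECT * FROM tb WHERE field='value' LIMIT 1000 FOR UPDATE");
$rows = $sth->fetchAll(PDO::FETCH_ASSOC);
$dbh->exec("UPDATE tb SET lock_field='1' WHERE field='value' LIMIT 1000");
$dbh->commit(); // locks released here

Committing right after marking the rows releases the locks quickly; the slow work can then happen outside the transaction, with lock_field cleared afterwards.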

Locks are powerful, but be careful that they don't affect your application in other sections. Check whether the rows that are locked for the update are requested somewhere else in your application (maybe for the end user) and what will happen in that case.

Also, the tables must be InnoDB, and it is recommended that the fields you check in the WHERE clause have a MySQL index; if not, you may lock the whole table or encounter gap locks.
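For example, using the field column from the snippet above (the index name is arbitrary):

ALTER TABLE tb ADD INDEX idx_field (field);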

There is also a possibility that the locking process, especially when running parallel scripts, will be heavy on your CPU and memory.

Here is another read on the subject: http://www.percona.com/blog/2006/08/06/select-lock-in-share-mode-and-for-update/

Hope this helps, and I would like to hear how you progressed.

answered Sep 19 '22 by Lupin