I have an application running on a PHP + MySQL platform, using the Doctrine2 framework. I need to execute 3 DB queries during one HTTP request: first an INSERT, second a SELECT, third an UPDATE. The UPDATE depends on the result of the SELECT query. There is a high probability of concurrent HTTP requests. If that happens and the DB queries get interleaved (e.g. INS1, INS2, SEL1, SEL2, UPD1, UPD2), it will result in data inconsistency. How do I ensure the atomicity of the INS-SEL-UPD operation? Do I need to use some kind of lock, or are transactions sufficient?
A table-wide LOCK is guaranteed to work in all situations. But table locks are a blunt tool: they prevent concurrency altogether rather than dealing with it.
However, if your script holds the lock for a very short time frame, it might be an acceptable solution.
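If you go this route with Doctrine, a minimal sketch could look like the following. The table name counters is a hypothetical placeholder, and executeStatement() assumes a reasonably recent DBAL version (older versions expose exec() instead). Note that LOCK TABLES must name every table the enclosed statements touch, and that it implicitly commits any open transaction.

$conn = $em->getConnection();
// Take a write lock on the whole table; every other connection blocks
// on this table until UNLOCK TABLES is issued.
$conn->executeStatement('LOCK TABLES counters WRITE');
try {
    // ... run the INSERT / SELECT / UPDATE sequence here ...
} finally {
    // Always release the lock, even if the work above throws.
    $conn->executeStatement('UNLOCK TABLES');
}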
If your table uses the InnoDB engine (MyISAM does not support transactions), a transaction is the most efficient solution, but also the most complex.
For your very specific need (in the same table, first INSERT, second SELECT, third UPDATE depending on the result of the SELECT query): start a transaction, perform the INSERT, run the SELECT with FOR UPDATE so the rows it returns are locked (*), perform the UPDATE, then COMMIT.
The atomicity of your transaction is guaranteed if you proceed this way.
(*) In general, rows not returned by this SELECT may still be inserted by a concurrent transaction; that is, their non-existence is not guaranteed throughout the course of the transaction unless proper precautions are taken.
Transactions won't prevent thread B from reading values that thread A has not locked, so you must use locks to prevent concurrent access.
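A minimal sketch of this pattern at the DBAL level, with hypothetical table and column names (counters, name, value) and assuming a DBAL version that provides fetchOne() and executeStatement():

$conn = $em->getConnection();
$conn->beginTransaction();
try {
    // 1) INSERT (the new row only becomes visible to others at COMMIT)
    $conn->executeStatement('INSERT INTO counters (name, value) VALUES (?, ?)', ['new-counter', 0]);

    // 2) SELECT ... FOR UPDATE: the matched row stays locked until
    //    COMMIT/ROLLBACK, so concurrent requests block here instead of
    //    interleaving their reads and writes.
    $value = $conn->fetchOne('SELECT value FROM counters WHERE name = ? FOR UPDATE', ['requests']);

    // 3) UPDATE depending on the SELECT result
    $conn->executeStatement('UPDATE counters SET value = ? WHERE name = ?', [$value + 1, 'requests']);

    $conn->commit();
} catch (\Exception $e) {
    $conn->rollBack();
    throw $e;
}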
@Gediminas explained how you can use locks with Doctrine. But using locks can result in deadlocks or lock timeouts. Doctrine reports these SQL errors as RetryableExceptions. Such exceptions are normal in a high-concurrency environment: they can happen very often, and your application should handle them properly.
Each time a RetryableException is thrown by Doctrine, the proper way to handle this is to retry the whole transaction.
As easy as it seems, there is a trap: the Doctrine 2 EntityManager becomes unusable after a RetryableException, and you must create a new one to replay your whole transaction.
I wrote this article illustrated with a full example.
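A minimal sketch of such a retry loop, assuming $createEntityManager is your own factory closure that builds a fresh EntityManager (for instance by wrapping EntityManager::create()), and an arbitrary limit of 3 attempts:

use Doctrine\DBAL\Exception\RetryableException;

$attempts = 0;
do {
    $retry = false;
    // A fresh EntityManager per attempt: the previous one is closed and
    // unusable after a RetryableException.
    $em = $createEntityManager();
    $em->getConnection()->beginTransaction();
    try {
        // ... the whole INSERT / SELECT FOR UPDATE / UPDATE unit of work ...
        $em->flush();
        $em->getConnection()->commit();
    } catch (RetryableException $e) {
        // Deadlock or lock wait timeout: roll back and replay the entire
        // transaction, or give up after the last attempt.
        $em->getConnection()->rollBack();
        $em->close();
        $retry = (++$attempts < 3);
        if (!$retry) {
            throw $e;
        }
    } catch (\Exception $e) {
        // Any other failure: roll back and rethrow.
        $em->getConnection()->rollBack();
        throw $e;
    }
} while ($retry);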
The answer from @YaK is actually a good answer. You should know how to deal with locks in general.
Addressing Doctrine2 specifically, your code should look like:
$em->getConnection()->beginTransaction();
try {
    // PESSIMISTIC_WRITE appends FOR UPDATE to the generated SELECT:
    // http://docs.doctrine-project.org/en/2.0.x/reference/transactions-and-concurrency.html
    $toUpdate = $em->find('Entity\WhichWillBeUpdated', $id, \Doctrine\DBAL\LockMode::PESSIMISTIC_WRITE);

    $em->persist($anInsertedOne);
    // you can flush here as well, to obtain the ID of the inserted entity if needed

    $toUpdate->changeValue('new value');
    $em->persist($toUpdate);

    $em->flush();
    $em->getConnection()->commit();
} catch (\Exception $e) {
    $em->getConnection()->rollback();
    throw $e;
}
Every subsequent request that fetches the same row for update will wait until the transaction of the process holding the lock finishes. MySQL releases the lock automatically when that transaction commits or rolls back. By default, the InnoDB lock wait timeout is 50 seconds, so a request that waits longer than that for the lock fails with a lock wait timeout error rather than blocking forever. You do not need any additional fields on your entity.
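If 50 seconds is too long for an HTTP request, you can inspect or shorten the timeout for the current session. A small sketch, again assuming a DBAL version that provides fetchOne() and executeStatement():

$conn = $em->getConnection();
// Current value of the lock wait timeout (50 by default).
$timeout = $conn->fetchOne('SELECT @@innodb_lock_wait_timeout');
// Fail fast after 5 seconds, for this session only.
$conn->executeStatement('SET SESSION innodb_lock_wait_timeout = 5');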