I run multiple servers, and each of them runs batches of UPDATE statements such as these:
UPDATE user SET ... WHERE user_id = 2;
UPDATE user SET ... WHERE user_id = 1;
UPDATE user SET ... WHERE user_id = 3;
If there is a concurrent update like:
UPDATE user SET ... WHERE user_id = 1;
UPDATE user SET ... WHERE user_id = 2;
Then I run into a "deadlock detected" error: one transaction holds the lock on user_id = 2 and waits for user_id = 1, while the other holds user_id = 1 and waits for user_id = 2.
Right now, my fix is to order the UPDATE statements on the client side and guarantee that the ids are always in the same order, i.e. I always sort the statements ascending by user_id.
This seems to have fixed the issue so far, but I still have questions:
Ordering statements at the application level is a good solution in that it avoids extra database overhead. The ordering must be consistent per table across every client. If this is easy to enforce in the application, it's worthwhile; a sketch of the idea is shown below.
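For example, you can make the lock order explicit by locking every target row in ascending user_id order before updating (a minimal PostgreSQL sketch; the balance column is a hypothetical stand-in for your real SET clause, and user is quoted because it is a reserved word in PostgreSQL):

BEGIN;
-- Lock all rows this transaction will touch, in a fixed ascending order,
-- so every concurrent transaction acquires the same locks in the same order.
SELECT user_id FROM "user" WHERE user_id IN (1, 2, 3) ORDER BY user_id FOR UPDATE;
-- With the locks already held, the updates themselves can run in any order.
UPDATE "user" SET balance = balance + 10 WHERE user_id = 2;
UPDATE "user" SET balance = balance + 10 WHERE user_id = 1;
UPDATE "user" SET balance = balance + 10 WHERE user_id = 3;
COMMIT;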
There is also a solution at the database level: serializable isolation.
The Serializable isolation level provides the strictest transaction isolation. This level emulates serial transaction execution for all committed transactions; as if transactions had been executed one after another, serially, rather than concurrently. However, like the Repeatable Read level, applications using this level must be prepared to retry transactions due to serialization failures. In fact, this isolation level works exactly the same as Repeatable Read except that it monitors for conditions which could make execution of a concurrent set of serializable transactions behave in a manner inconsistent with all possible serial (one at a time) executions of those transactions.
You can set this isolation level when you start each transaction. This adds some database overhead, but more importantly the application must be prepared to catch serialization failures and retry the transaction, as in the sketch below.
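For example, in PostgreSQL (a sketch; balance is again a hypothetical column, and a failed attempt reports SQLSTATE 40001, serialization_failure, which the client should treat as a signal to re-run the whole transaction):

BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE "user" SET balance = balance + 10 WHERE user_id = 1;
UPDATE "user" SET balance = balance + 10 WHERE user_id = 2;
COMMIT;
-- If any statement (including COMMIT) fails with SQLSTATE 40001,
-- ROLLBACK and retry the entire transaction from BEGIN.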