Let's say our servers receive 1000 requests that all update a single MySQL table. Deadlocks inevitably occur in this situation. We have retried the transaction, as recommended for deadlocks, but they still occur.
We are considering the alternative solution below.
So our question is: when this happens and multiple rows get written into Tables A, B and C, the triggers on Tables A, B and C might fire at the same time to update Table D.
Does the MySQL InnoDB engine automatically queue these trigger executions, or do we have to handle that in our code?
Any help is much appreciated.
Table D, which is currently updated directly by all these requests and where the deadlocks occur, looks like this:
Field            Type          Null  Key  Default
v_user_email     varchar(60)   NO    PRI
v_device_IMEI    varchar(40)   NO    PRI
i_adid           int(11)       NO    PRI
i_impressions    int(4)        YES        0
dt_pulllogdttm   datetime      NO
c_created_by     char(15)      NO
dt_created_on    datetime      NO
c_modified_by    char(15)      YES
dt_modified_on   datetime      YES
The PHP that inserts / updates rows in this table is shown below. You will see that we retry the statement up to 3 times if it fails due to a deadlock, but some transactions fail even then, and the log says it is due to a deadlock.
$updateQuery = "UPDATE tb_ad_pull_log SET i_impressions = (i_impressions + 1), dt_pulllogdttm = SYSDATE(), c_modified_by = '$createdBy', dt_modified_on = SYSDATE() WHERE v_user_email = '$email' AND i_adid = $adId";
if(ExecuteDeadLockQuery($updateQuery, "UPDATE", __LINE__) == 0) // If there is no record for this ad for the user, insert a new record
{
    $insertQuery = "INSERT INTO tb_ad_pull_log VALUES('$email', '$device_IMEI', $adId, 1, SYSDATE(), '$createdBy', SYSDATE(), NULL, NULL)";
    ExecuteDeadLockQuery($insertQuery, "INSERT", __LINE__);
}
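As an aside, since (v_user_email, v_device_IMEI, i_adid) is the primary key, the UPDATE-then-INSERT pair above could probably be collapsed into a single INSERT ... ON DUPLICATE KEY UPDATE statement. The snippet below is only a sketch, and note one behavioural difference: unlike the UPDATE above, it matches on the full primary key, including v_device_IMEI.
$upsertQuery = "INSERT INTO tb_ad_pull_log
                VALUES('$email', '$device_IMEI', $adId, 1, SYSDATE(), '$createdBy', SYSDATE(), NULL, NULL)
                ON DUPLICATE KEY UPDATE
                    i_impressions  = (i_impressions + 1),
                    dt_pulllogdttm = SYSDATE(),
                    c_modified_by  = '$createdBy',
                    dt_modified_on = SYSDATE()";
ExecuteDeadLockQuery($upsertQuery, "UPSERT", __LINE__);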
The ExecuteDeadLockQuery function looks like this:
function ExecuteDeadLockQuery($query, $activity, $lineNumber)
{
    global $errorLoggingPath;
    $maxAttempts = 3;
    $currentTry = 1;
    $noOfAffectedRows = -1;

    while($currentTry <= $maxAttempts)
    {
        $currentTry++;
        mysql_query($query);
        if( mysql_errno() <> 0 ) // If an error occurred
        {
            continue;
        }
        else
        {
            $noOfAffectedRows = mysql_affected_rows();
            break;
        }
    }

    if($noOfAffectedRows == -1) // Query never executed successfully
    {
        LogError($activity . " failed in tb_ad_pull_log: " . mysql_error(), __FILE__, $lineNumber, $errorLoggingPath);
    }
    return $noOfAffectedRows;
}
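One common refinement, sketched below and not part of the original code, is to back off briefly between attempts and to retry only when MySQL actually reports a deadlock (error code 1213), so the three retries are not all fired within the same few milliseconds. The function name is hypothetical; the structure otherwise mirrors ExecuteDeadLockQuery.
// Hypothetical variant: retry only on deadlock (errno 1213), with a short,
// growing pause between attempts.
function ExecuteDeadLockQueryWithBackoff($query, $activity, $lineNumber)
{
    global $errorLoggingPath;
    $maxAttempts = 3;
    $noOfAffectedRows = -1;

    for($attempt = 1; $attempt <= $maxAttempts; $attempt++)
    {
        mysql_query($query);
        $errno = mysql_errno();
        if($errno == 0) // Success
        {
            $noOfAffectedRows = mysql_affected_rows();
            break;
        }
        if($errno != 1213) // Not a deadlock, retrying will not help
        {
            break;
        }
        if($attempt < $maxAttempts)
        {
            usleep(50000 * $attempt); // Wait 50 ms, then 100 ms, before retrying
        }
    }

    if($noOfAffectedRows == -1) // Query never executed successfully
    {
        LogError($activity . " failed in tb_ad_pull_log: " . mysql_error(), __FILE__, $lineNumber, $errorLoggingPath);
    }
    return $noOfAffectedRows;
}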
Is there a cleaner way to avoid these deadlocks? Here are some logs that we have.
ERROR: 08-21-2011 14:09:57 UPDATE failed in tb_ad_pull_log: Deadlock found when trying to get lock; try restarting transaction LINE 83
ERROR: 08-21-2011 14:09:57 INSERT failed in tb_ad_pull_log: Deadlock found when trying to get lock; try restarting transaction LINE 86
ERROR: 08-21-2011 14:09:57 INSERT failed in tb_ad_pull_log: Deadlock found when trying to get lock; try restarting transaction LINE 86
ERROR: 08-21-2011 14:09:57 UPDATE failed in tb_ad_pull_log: Deadlock found when trying to get lock; try restarting transaction LINE 83
ERROR: 08-21-2011 14:09:57 INSERT failed in tb_ad_pull_log: Deadlock found when trying to get lock; try restarting transaction LINE 86
ERROR: 08-21-2011 14:09:57 UPDATE failed in tb_ad_pull_log: Deadlock found when trying to get lock; try restarting transaction LINE 83
ERROR: 08-21-2011 14:09:59 UPDATE failed in tb_ad_pull_log: Deadlock found when trying to get lock; try restarting transaction LINE 83
ERROR: 08-21-2011 14:09:59 UPDATE failed in tb_ad_pull_log: Deadlock found when trying to get lock; try restarting transaction LINE 83
ERROR: 08-21-2011 14:10:01 UPDATE failed in tb_ad_pull_log: Deadlock found when trying to get lock; try restarting transaction LINE 83
ERROR: 08-21-2011 14:10:01 INSERT failed in tb_ad_pull_log: Deadlock found when trying to get lock; try restarting transaction LINE 86
Line 83 is the UPDATE statement in the PHP and line 86 is the INSERT. Please keep in mind that data is written to this table at a rate of 5-8 transactions per second.
Additional info
With every INSERT and UPDATE on TABLE D, a trigger executes that updates TABLE X and TABLE Y. Is this a reason for Table D to remain locked, and hence for the incoming requests to deadlock?
I finally found the problem, but I am not sure how to solve it. The AFTER INSERT and AFTER UPDATE triggers on TABLE D lock the table while they run, and hence the incoming requests deadlock. I am sure of this because once I dropped these triggers, the deadlock messages stopped appearing in the log.
Here is a snippet of the trigger code:
CREATE DEFINER=CURRENT_USER TRIGGER tuadmin.t_update_CPM_updateBalance
AFTER UPDATE
ON tb_ad_pull_log
FOR EACH ROW
BEGIN
    DECLARE `cpm_value` decimal(10,4);
    DECLARE `clientid` int(4);

    /* Execute the block below only if the requested ad is not the default ad */
    IF NEW.i_adid <> 1 THEN
        SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
        -- updates to TABLE X and TABLE Y go here
    END IF;
END
This is what I do not understand: why would these triggers hold a lock on Table D and not let any insert/update happen concurrently?
Would it avoid all of these problems if we dropped the triggers and just called a stored procedure from the PHP to do the work?
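For reference, that stored-procedure route might look roughly like the sketch below. The procedure name and parameters are hypothetical, and the TABLE X / TABLE Y statements are left as a comment because that logic is not shown here; the point is simply that the per-request upsert and the balance updates would run in one explicit transaction controlled from the application rather than from triggers.
DELIMITER $$
CREATE PROCEDURE sp_log_ad_pull(      -- hypothetical name
    IN p_email      VARCHAR(60),
    IN p_imei       VARCHAR(40),
    IN p_adid       INT,
    IN p_created_by CHAR(15))
BEGIN
    START TRANSACTION;

    INSERT INTO tb_ad_pull_log
    VALUES (p_email, p_imei, p_adid, 1, SYSDATE(), p_created_by, SYSDATE(), NULL, NULL)
    ON DUPLICATE KEY UPDATE
        i_impressions  = (i_impressions + 1),
        dt_pulllogdttm = SYSDATE(),
        c_modified_by  = p_created_by,
        dt_modified_on = SYSDATE();

    -- updates to TABLE X and TABLE Y would go here (the CPM / balance logic from the triggers)

    COMMIT;
END$$
DELIMITER ;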
OK, so you're using a single table and a few triggers?
And you have only very few transactions per second?
And you have weird locking issues?
Use PostgreSQL. I'm quite sure of the following: a) it will not have those problems; b) if it does have them, you'll get community support in no time.
There is a 99.99% chance that your problem is caused by VERY slow triggers, I mean totally, hugely slow, because only 8 transactions per second implies a transaction running time of 125 ms (1000 ms / 8), which is huge.
The reason for the lock is evident: you have triggers firing on table D.
-> call modification on table D
-> before mod trigger
-> modification
-> after mod trigger
-> modification complete
I.e. everything that happens in your trigger is part of the same transaction as the statement on table D, and will thus hold its locks until that transaction finishes.
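To illustrate, here is a conceptual sketch only; the TABLE X / TABLE Y statements are shown as comments because they are not given above, and the WHERE values are placeholders.
-- A single UPDATE on table D under autocommit behaves like this:
START TRANSACTION;

    UPDATE tb_ad_pull_log
       SET i_impressions = (i_impressions + 1),
           dt_pulllogdttm = SYSDATE()
     WHERE v_user_email = 'user@example.com'
       AND i_adid = 42;                 -- takes row locks on table D

    -- The AFTER UPDATE trigger t_update_CPM_updateBalance now runs,
    -- still inside this same transaction:
    --   its UPDATE on TABLE X takes locks on TABLE X
    --   its UPDATE on TABLE Y takes locks on TABLE Y

COMMIT;  -- only at this point are the locks on D, X and Y released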
You can either:
a) lock fewer rows
b) lock for less time: insert into another table and process it asynchronously from there (see the sketch after this list)
c) use an RDBMS that supports triggers properly
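Here is a minimal sketch of option b), with hypothetical names (tb_ad_pull_queue and a periodic job run from cron or a MySQL event): the web requests only ever append to a plain staging table, and a single job folds the queued rows into tb_ad_pull_log, so only one session at a time touches the contended table.
-- Hypothetical staging table: web requests only INSERT here, so there is
-- no contention on tb_ad_pull_log during the request itself.
CREATE TABLE tb_ad_pull_queue (
    i_id          INT AUTO_INCREMENT PRIMARY KEY,
    v_user_email  VARCHAR(60) NOT NULL,
    v_device_IMEI VARCHAR(40) NOT NULL,
    i_adid        INT NOT NULL,
    dt_logged_on  DATETIME NOT NULL
) ENGINE=InnoDB;

-- Periodic job, the only writer of tb_ad_pull_log. (In a real job you would
-- remember the highest i_id you aggregated and delete only up to that id.)
INSERT INTO tb_ad_pull_log
    (v_user_email, v_device_IMEI, i_adid, i_impressions,
     dt_pulllogdttm, c_created_by, dt_created_on, c_modified_by, dt_modified_on)
SELECT q.v_user_email, q.v_device_IMEI, q.i_adid, q.hits,
       SYSDATE(), 'queue_job', SYSDATE(), NULL, NULL
  FROM (SELECT v_user_email, v_device_IMEI, i_adid, COUNT(*) AS hits
          FROM tb_ad_pull_queue
         GROUP BY v_user_email, v_device_IMEI, i_adid) AS q
ON DUPLICATE KEY UPDATE
    i_impressions  = i_impressions + q.hits,
    dt_pulllogdttm = SYSDATE(),
    c_modified_by  = 'queue_job',
    dt_modified_on = SYSDATE();

DELETE FROM tb_ad_pull_queue;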
The balancing option is a sledgehammer-for-a-fly option; there is no reason you would need more than one server for such a low TPS count.
However, you should troubleshoot the performance of your trigger and verify that you're not running into I/O congestion somewhere (whatever is unnecessarily slow usually also overuses precious resources).
Alright, here's another option:
UNLOCK TABLES explicitly releases any table locks held by the current session.
IF your last action is that UPDATE / INSERT, AND IF your trigger failing is either IMPOSSIBLE or NOT AN ISSUE,
then you could use this at the beginning of your trigger, releasing all locks and asking only for a non-locking consistent read.
UPDATE and INSERT in MySQL are blocking, synchronized operations. Assume you have 2 requests coming from 2 triggers to update table D: while the first is updating table D, the second waits in a queue. For SELECT there is no such synchronized block; 2 threads can read at the same time. If you want these transactions to run at the same time, you would have to set up replication.