Recently we have started getting a lot of deadlock errors in our logs (Postgres server 9.6.5).
Our table consists of two columns: an auto-increment primary key and a JSON object. Two attributes of the JSON object are defined as unique. Now in the logs we keep getting errors saying that two simple insert queries on different rows are blocking each other.
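For reference, a schema along these lines would match that description (the exact table, column, and index names below are assumptions for illustration, not taken from the real database):

CREATE TABLE A (
    id   bigserial PRIMARY KEY,      -- auto-increment primary key
    info jsonb NOT NULL              -- JSON object
);
-- the two "unique attributes" of the JSON would be enforced by expression indexes like these
CREATE UNIQUE INDEX a_info_x_key ON A ((info->>'x'));
CREATE UNIQUE INDEX a_info_attr2_key ON A ((info->>'attr2'));   -- 'attr2' stands in for the second unique attribute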
============
process 65325 detected deadlock while waiting for ShareLock on transaction 2934224126 after 1000.050 ms
DETAIL: Process holding the lock: 35530. Wait queue: .
CONTEXT: while inserting index tuple (128,10) in relation "A"
STATEMENT: INSERT INTO A AS t (info) VALUES('{"x":"y",....)
ERROR: deadlock detected
DETAIL: Process 65325 waits for ShareLock on transaction 2934224126; blocked by process 35530.
Process 35530 waits for ShareLock on transaction 2934224125; blocked by process 65325.
Process 65325: INSERT INTO A AS t (info) VALUES({"x":"y",....)
Process 35530: INSERT INTO A AS t (info) VALUES({"x":"z",....)
====================
So basically two different rows are in a deadlock condition. Are there any suggestions as to under what conditions such deadlocks may occur?
PostgreSQL locks, often called "write locks" or "exclusive locks," restrict users from modifying a row or a PostgreSQL table's contents. Rows that have undergone a DELETE or UPDATE operation remain locked until the transaction finishes.
With two concurrent update queries, Postgres can end up in a deadlock in the same way that an application could cause Postgres to deadlock. One way to avoid deadlocks in this scenario is to tell Postgres to explicitly lock the rows before the update, as sketched below.
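A rough sketch of that approach, assuming the illustrative table above and that the ids of the rows being updated are known in advance:

BEGIN;
-- lock both target rows up front, in a consistent order, before updating either one
SELECT id FROM A WHERE id IN (1, 2) ORDER BY id FOR UPDATE;
UPDATE A SET info = info || '{"status":"done"}'::jsonb WHERE id = 1;
UPDATE A SET info = info || '{"status":"done"}'::jsonb WHERE id = 2;
COMMIT;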
In PostgreSQL, when a transaction cannot acquire a requested lock within a certain amount of time (configured by `deadlock_timeout`, with a default value of 1 second), it begins deadlock detection.
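You can check or (as a superuser) override that setting; note that raising it only delays detection, it does not prevent the deadlock itself:

SHOW deadlock_timeout;         -- 1s by default
SET deadlock_timeout = '2s';   -- superuser-only; can also be set in postgresql.conf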
"idle in transaction": this indicates the backend is in a transaction but is currently not doing anything; it could be waiting for input from the end user.
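Such backends can be spotted in pg_stat_activity, for example:

SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE state = 'idle in transaction';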
Rows can never be in deadlock. It is not two different rows, but two different transactions, that are in deadlock. Your log is showing you the most recent insertion attempt by each transaction. Presumably, there were previous inserts as well in each transaction. But those won't show up in the log, unless they show up for some other reason (like log_statement=all).
So if T1 successfully (and invisibly, as far as your log file shows) inserted "x":"y", then T2 successfully and invisibly inserted "x":"z", and now T1 tries to insert "x":"z" while T2 tries "x":"y", there will be a deadlock. This assumes the unique index is on info->>'x'.
This would be the same issue if you were not using JSON.
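Spelled out as two concurrent sessions (a sketch, assuming the unique index on info->>'x' and trimmed-down JSON values):

-- session 1
BEGIN;
INSERT INTO A (info) VALUES ('{"x":"y"}');   -- takes the unique index entry for "y"
-- session 2
BEGIN;
INSERT INTO A (info) VALUES ('{"x":"z"}');   -- takes the unique index entry for "z"
-- session 1
INSERT INTO A (info) VALUES ('{"x":"z"}');   -- must wait to see whether session 2 commits its "z"
-- session 2
INSERT INTO A (info) VALUES ('{"x":"y"}');   -- waits on session 1's "y"; the cycle is detected as a deadlock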
Mitigations would be: don't insert more than one row per transaction. Or, if you do, always insert the rows in a specified order (for example, "y" before "z" by alphabetical ordering), although in this case you just replace the deadlock error with a unique-key violation. Or just be prepared to catch the deadlock and try again.
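A sketch of the ordered-insert mitigation, using the same illustrative table as above:

BEGIN;
-- every transaction inserts its keys in the same (here alphabetical) order,
-- so two transactions may block each other but never wait in a cycle
INSERT INTO A (info) VALUES ('{"x":"y"}');
INSERT INTO A (info) VALUES ('{"x":"z"}');
COMMIT;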