We've got a web-based application. Some of its database operations (INSERTs and UPDATEs) are long-running, so this particular flow has been moved into a Java Thread so that the request does not wait (block) for the database operation to complete.
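As a rough sketch of that pattern (class, pool size, and table names such as AsyncDbWriter and orders are assumptions, not the original code), the slow statement is handed to a small thread pool where each task takes its own connection, so the servlet thread returns immediately:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.sql.DataSource;

public class AsyncDbWriter {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final DataSource dataSource; // each task takes its own connection

    public AsyncDbWriter(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Queues the slow INSERT so the servlet thread does not block on it.
    public void insertAsync(final int id, final String payload) {
        pool.submit(new Runnable() {
            public void run() {
                Connection conn = null;
                try {
                    conn = dataSource.getConnection();
                    PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO orders (id, payload) VALUES (?, ?)");
                    try {
                        ps.setInt(1, id);
                        ps.setString(2, payload);
                        ps.executeUpdate();
                    } finally {
                        ps.close();
                    }
                } catch (SQLException e) {
                    e.printStackTrace(); // real code would log and possibly retry
                } finally {
                    if (conn != null) try { conn.close(); } catch (SQLException ignore) {}
                }
            }
        });
    }
}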
My problem is that when more than one user goes through this particular flow, PostgreSQL throws the following error:
org.postgresql.util.PSQLException: ERROR: deadlock detected
Detail: Process 13560 waits for ShareLock on transaction 3147316424; blocked by process 13566.
Process 13566 waits for ShareLock on transaction 3147316408; blocked by process 13560.
The above error is consistently thrown by the INSERT statements.

Additional information:
1) The table has a PRIMARY KEY.
2) The table has FOREIGN KEY references.
3) A separate database connection is passed to each Java Thread.

Technologies:
Web Server: Tomcat v6.0.10 (Java v1.6.0, Servlets)
Database: PostgreSQL v8.2.3
Connection Management: pgpool II
I'm using SQL Server 2019, and a deadlock occurs under heavy concurrent access. The SELECT statement in question reads only a single record in a table, and the UPDATE statement rewrites the value of a single record. When I used JMeter to issue a mix of this SELECT statement and UPDATE statement via JDBC, a deadlock occurred.
The error message clearly indicates a deadlock problem: "Transaction (Process ID XX) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction." A good first step is to check the system_health session for deadlocks. system_health is the default extended event session of SQL Server; it starts automatically when the database engine starts and captures an xml_deadlock_report event for every deadlock. Querying the session (a sketch follows below) lists these events; in SSMS, clicking any row of the XmlDeadlockReport column opens the deadlock report.
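A minimal sketch of reading those events from Java, assuming the Microsoft JDBC driver is on the classpath and the connection URL and credentials are placeholders; the embedded T-SQL is the usual query against the system_health ring buffer:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DeadlockReportReader {
    public static void main(String[] args) throws Exception {
        // Pull xml_deadlock_report events out of the system_health ring buffer.
        String query =
            "SELECT xed.value('@timestamp', 'datetime') AS CreationDate, " +
            "       CAST(xed.query('.') AS NVARCHAR(MAX)) AS XmlDeadlockReport " +
            "FROM (SELECT CAST(st.target_data AS XML) AS TargetData " +
            "      FROM sys.dm_xe_session_targets st " +
            "      JOIN sys.dm_xe_sessions s " +
            "        ON s.address = st.event_session_address " +
            "      WHERE s.name = 'system_health' " +
            "        AND st.target_name = 'ring_buffer') AS src " +
            "CROSS APPLY src.TargetData.nodes(" +
            "  'RingBufferTarget/event[@name=\"xml_deadlock_report\"]') AS x(xed) " +
            "ORDER BY CreationDate DESC";
        Connection conn = DriverManager.getConnection(
            "jdbc:sqlserver://localhost;databaseName=master;user=sa;password=...");
        try {
            Statement st = conn.createStatement();
            ResultSet rs = st.executeQuery(query);
            while (rs.next()) {
                System.out.println(rs.getTimestamp(1) + ":");
                System.out.println(rs.getString(2));
            }
        } finally {
            conn.close();
        }
    }
}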
One way to cope with deadlocks is to have a retry mechanism that waits a random interval and tries to run the transaction again. The random interval is necessary so that the colliding transactions don't continuously keep bumping into each other, causing what is called a livelock - something even nastier to debug. In practice, most complex applications will need such a retry mechanism sooner or later to handle transaction serialization failures.
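A minimal sketch of such a retry loop (the callback interface and attempt limit are assumptions, not a fixed API): it retries only on PostgreSQL's deadlock_detected (SQLSTATE 40P01) and serialization_failure (40001) errors, sleeping for a random interval between attempts:

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Random;

public class RetryingTransaction {
    private static final Random RANDOM = new Random();

    // Runs the work in a transaction, retrying on deadlock or
    // serialization failure with a randomized sleep in between.
    public static void runWithRetry(Connection conn, TransactionWork work)
            throws SQLException, InterruptedException {
        int maxAttempts = 5; // assumption: tune for your workload
        for (int attempt = 1; ; attempt++) {
            try {
                conn.setAutoCommit(false);
                work.execute(conn);
                conn.commit();
                return;
            } catch (SQLException e) {
                conn.rollback();
                String state = e.getSQLState();
                boolean retryable = "40P01".equals(state)   // deadlock_detected
                                 || "40001".equals(state);  // serialization_failure
                if (!retryable || attempt >= maxAttempts) {
                    throw e;
                }
                // Random sleep so colliding transactions don't realign (livelock).
                Thread.sleep(50 + RANDOM.nextInt(200));
            }
        }
    }

    public interface TransactionWork {
        void execute(Connection conn) throws SQLException;
    }
}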
Of course, if you can determine the cause of the deadlock, it's usually much better to eliminate it, or it will come back to bite you. In almost all cases, even when the deadlock condition is rare, the small throughput and coding overhead of taking locks in a deterministic order, or taking more coarse-grained locks, is worth it to avoid the occasional large latency hit and the sudden performance cliff when scaling concurrency.
When you are consistently getting two INSERT statements deadlocking, it's most likely a unique-index insert-order issue. Try, for example, the following in two psql command windows:
Thread A                        | Thread B
BEGIN;                          | BEGIN;
                                | INSERT uniq=1;
INSERT uniq=2;                  |
                                | INSERT uniq=2;
                                |   blocks, waiting for thread A to
                                |   commit or roll back, to see if
                                |   this is a unique key violation
INSERT uniq=1;                  |
  blocks, waiting for thread B  |
DEADLOCK                        |

(time flows downward)
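The same interleaving can be reproduced programmatically. A sketch assuming a table created as CREATE TABLE t (uniq int UNIQUE) and a local test database (the URL is a placeholder); a barrier makes both threads hold their first key before attempting the second, so PostgreSQL kills one of them with SQLSTATE 40P01:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.concurrent.CyclicBarrier;

public class DeadlockDemo {
    private static final String URL = "jdbc:postgresql://localhost/test?user=test";
    private static final CyclicBarrier BARRIER = new CyclicBarrier(2);

    public static void main(String[] args) {
        new Thread(inserter(2, 1), "thread-A").start(); // INSERT 2, then 1
        new Thread(inserter(1, 2), "thread-B").start(); // INSERT 1, then 2
    }

    private static Runnable inserter(final int first, final int second) {
        return new Runnable() {
            public void run() {
                Connection conn = null;
                try {
                    conn = DriverManager.getConnection(URL);
                    conn.setAutoCommit(false);
                    insert(conn, first);
                    BARRIER.await();      // both threads now hold their first key
                    insert(conn, second); // each waits on the other: deadlock
                    conn.commit();
                } catch (Exception e) {
                    System.out.println(Thread.currentThread().getName() + ": " + e);
                } finally {
                    if (conn != null) try { conn.close(); } catch (SQLException ignore) {}
                }
            }
        };
    }

    private static void insert(Connection conn, int uniq) throws SQLException {
        PreparedStatement ps = conn.prepareStatement("INSERT INTO t (uniq) VALUES (?)");
        try {
            ps.setInt(1, uniq);
            ps.executeUpdate();
        } finally {
            ps.close();
        }
    }
}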
Usually the best course of action to resolve this is to figure out the parent objects that guard all such transactions. Most applications have one or two primary entities, such as users or accounts, that are good candidates for this. Then all you need is for every transaction to take a lock on the primary entity it touches via SELECT ... FOR UPDATE. If it touches several, lock all of them, but in the same order every time (ordering by primary key is a good choice); a sketch follows below.
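A sketch of that lock-ordering discipline (the accounts table and column names are assumptions): before any writes, a transaction locks the parent rows it needs, always in primary-key order, so two transactions touching the same accounts serialize on the SELECT ... FOR UPDATE instead of deadlocking mid-INSERT:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Arrays;

public class LockOrdering {
    // Locks the given parent rows in primary-key order before any writes.
    public static void lockAccounts(Connection conn, long[] accountIds)
            throws SQLException {
        long[] sorted = accountIds.clone();
        Arrays.sort(sorted); // deterministic order prevents lock cycles
        PreparedStatement ps = conn.prepareStatement(
            "SELECT id FROM accounts WHERE id = ? FOR UPDATE");
        try {
            for (int i = 0; i < sorted.length; i++) {
                ps.setLong(1, sorted[i]);
                ResultSet rs = ps.executeQuery();
                rs.close();
            }
        } finally {
            ps.close();
        }
    }
}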