I have the following statement in my PostgreSQL 10.5 database, which I execute in a repeatable read
transaction:
delete from task
where task.task_id = (
    select task.task_id
    from task
    order by task.created_at asc
    limit 1
    for update skip locked
)
returning
    task.task_id,
    task.created_at
Unfortunately, when I run it, I sometimes get:
[67] ERROR: could not serialize access due to concurrent update
[67] STATEMENT: delete from task
    where task.task_id = (
        select task.task_id
        from task
        order by task.created_at asc
        limit $1
        for update skip locked
    )
    returning
        task.task_id,
        task.created_at
which means (I think?) that the transaction was rolled back because some other transaction modified the record in the meantime.
I don't quite understand this. How could a different transaction modify a record that was selected with for update skip locked, and then deleted?
The SKIP LOCKED clause improves the scalability of applications that concurrently take rows from the same table: it eliminates wait time for row locks. skip locked was introduced in Postgres 9.5. It deliberately bypasses the isolation between transactions that Postgres normally provides and skips rows that other transactions have locked.
A SELECT ... FOR UPDATE reads the latest available data and sets an exclusive lock on each row it returns, i.e. the same locks a searched UPDATE would take on those rows. The SKIP LOCKED option then lets the statement skip rows that are incompatibly locked by other transactions when those locks would block its progress, which can improve the performance of some applications by eliminating lock wait time.
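As a sketch of that behavior, assuming two concurrent psql sessions against the same task table:

-- Session 1: lock the oldest task, but don't commit yet
begin;
select task_id from task
order by created_at asc
limit 1
for update;

-- Session 2: skip the row session 1 holds and take the next one
begin;
select task_id from task
order by created_at asc
limit 1
for update skip locked;  -- returns the second-oldest task (or no row)

Without skip locked, session 2 would block until session 1 commits or rolls back.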
This quote from the manual discusses your case exactly:
UPDATE, DELETE, SELECT FOR UPDATE, and SELECT FOR SHARE commands behave the same as SELECT in terms of searching for target rows: they will only find target rows that were committed as of the transaction start time. However, such a target row might have already been updated (or deleted or locked) by another concurrent transaction by the time it is found. In this case, the repeatable read transaction will wait for the first updating transaction to commit or roll back (if it is still in progress). If the first updater rolls back, then its effects are negated and the repeatable read transaction can proceed with updating the originally found row. But if the first updater commits (and actually updated or deleted the row, not just locked it) then the repeatable read transaction will be rolled back with the message ERROR: could not serialize access due to concurrent update
Meaning, your transaction was unable to lock the row to begin with, due to a concurrent write that got there first. SKIP LOCKED cannot save you from this completely: there may be no lock left to skip any more, and you still run into a serialization failure if the row has already been changed since transaction start (and the change committed, hence the lock released).
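A minimal timeline reproducing the error (a sketch, assuming two sessions and at least one row in task):

-- Session 1 (repeatable read): the snapshot is fixed at the first query
begin isolation level repeatable read;
select count(*) from task;

-- Session 2: delete the oldest task and commit; its lock is released
begin;
delete from task where task_id = (
    select task_id from task order by created_at asc limit 1
);
commit;

-- Session 1: the subquery still sees the old snapshot and finds the
-- already-deleted row. There is no lock left to skip, so the statement
-- fails with: ERROR: could not serialize access due to concurrent update
delete from task
where task_id = (
    select task_id from task
    order by created_at asc
    limit 1
    for update skip locked
)
returning task_id, created_at;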
The same statement should work just fine with the default READ COMMITTED transaction isolation.
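Under READ COMMITTED, each statement gets a fresh snapshot, so a worker loop can simply be (a sketch; same statement, just without the repeatable read wrapper):

begin;  -- READ COMMITTED is the default
delete from task
where task_id = (
    select task_id
    from task
    order by task.created_at asc
    limit 1
    for update skip locked
)
returning task_id, created_at;
commit;

Here, committed concurrent deletes are simply no longer visible to the subquery, and rows locked by in-flight workers are skipped, so the statement either dequeues a task or returns zero rows.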