
SELECT ... FOR UPDATE SKIP LOCKED in REPEATABLE READ transactions

I have the following statement in my PostgreSQL 10.5 database, which I execute in a repeatable read transaction:

delete from task
  where task.task_id = (
    select task.task_id
    from task
    order by task.created_at asc
    limit 1
    for update skip locked
  )
  returning
    task.task_id,
    task.created_at

Unfortunately, when I run it, I sometimes get:

[67] ERROR:  could not serialize access due to concurrent update
[67] STATEMENT:  delete from task
  where task.task_id = (
    select task.task_id
    from task
    order by task.created_at asc
    limit $1
    for update skip locked
  )
  returning
    task.task_id,
    task.created_at

which means the transaction rolled back because some other transaction modified the record in the meantime. (I think?)

I don't quite understand this. How could a different transaction modify a record that was selected with for update skip locked, and deleted?

Ynv asked Nov 13 '18





1 Answer

This quote from the manual discusses your case exactly:

UPDATE, DELETE, SELECT FOR UPDATE, and SELECT FOR SHARE commands behave the same as SELECT in terms of searching for target rows: they will only find target rows that were committed as of the transaction start time. However, such a target row might have already been updated (or deleted or locked) by another concurrent transaction by the time it is found. In this case, the repeatable read transaction will wait for the first updating transaction to commit or roll back (if it is still in progress). If the first updater rolls back, then its effects are negated and the repeatable read transaction can proceed with updating the originally found row. But if the first updater commits (and actually updated or deleted the row, not just locked it) then the repeatable read transaction will be rolled back with the message

ERROR:  could not serialize access due to concurrent update

Meaning, your transaction was unable to lock the row to begin with, due to a concurrent write that got there first. SKIP LOCKED cannot save you from this completely: there may no longer be any lock to skip, yet we still run into a serialization failure if the row has already been changed (and the change committed, hence the lock released) since the transaction start.
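To make that failure mode concrete, here is one possible interleaving of two sessions that can trigger the error (a sketch; the session labels and the `task_id` value are illustrative):

```sql
-- Session A
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT 1;  -- any statement: A's snapshot is now fixed

-- Session B, meanwhile
BEGIN;
DELETE FROM task WHERE task_id = 1;
COMMIT;  -- the row is gone and its lock is released

-- Session A again: its snapshot still sees task 1, but that row was
-- deleted and committed after the snapshot was taken. There is no
-- lock left to skip, so instead of skipping, the statement fails:
DELETE FROM task
  WHERE task.task_id = (
    SELECT task.task_id FROM task
    ORDER BY task.created_at ASC
    LIMIT 1
    FOR UPDATE SKIP LOCKED
  );
-- ERROR:  could not serialize access due to concurrent update
```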

The same statement should work just fine with default READ COMMITTED transaction isolation. Related:

  • Postgres UPDATE … LIMIT 1
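In practice, that means running the queue "pop" in its own short transaction at the default isolation level, for example:

```sql
BEGIN;  -- READ COMMITTED is the default isolation level
DELETE FROM task
  WHERE task.task_id = (
    SELECT task.task_id
    FROM task
    ORDER BY task.created_at ASC
    LIMIT 1
    FOR UPDATE SKIP LOCKED
  )
  RETURNING task.task_id, task.created_at;
COMMIT;
```

Under READ COMMITTED, each statement sees the latest committed state, so concurrently locked rows are skipped and concurrently deleted rows simply no longer qualify, instead of aborting the transaction with a serialization failure.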
Erwin Brandstetter answered Sep 21 '22