Predicate locking in PostgreSQL 9.2.1 with Serializable isolation

I have been reading the PostgreSQL documentation on transaction isolation, as suggested in another of my questions, but I still have not managed to understand the "predicate locking" concept.

I hope somebody can enlighten me :-)

According to the documentation: Predicate locks in PostgreSQL, like in most other database systems, are based on data actually accessed by a transaction

That sounds good, then why is the following happening?

CREATE TABLE mycustomer(cid integer PRIMARY KEY, licenses integer);
CREATE TABLE mydevice(id integer PRIMARY KEY, cid integer REFERENCES 
mycustomer (cid), status varchar(10));

INSERT INTO mycustomer(cid, licenses) VALUES (1, 5);
INSERT INTO mycustomer(cid, licenses) VALUES (2, 5);

    Request 1                            Request 2
BEGIN TRANSACTION ISOLATION 
LEVEL SERIALIZABLE;
                                         BEGIN TRANSACTION ISOLATION 
                                         LEVEL SERIALIZABLE;
SELECT * from mydevice where cid = 1;

                                         SELECT * from mydevice where cid = 2;
INSERT INTO mydevice(id, cid, status) 
VALUES (1, 1, 'ok');

                                         INSERT INTO mydevice(id, cid, status)         
                                         VALUES (2, 2, 'ok');
commit;
(=ok)                                 
                                         commit;
                                         (=rollback)

I understand that the inserts from request 1 and request 2 do not conflict with the previous reads, so no error should be raised. Why am I getting "ERROR: could not serialize access due to read/write dependencies among transactions"?

As you can imagine, I cannot accept this behavior, since every concurrent request would be rolled back regardless of its details. In my business scenario I want concurrent requests to be rolled back only when they insert data (devices, as in the example) for the same customer.

These operations are performed from a Java application, and in principle I am thinking about creating a locking table to satisfy my needs. Any ideas?
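For illustration, the kind of per-customer locking I have in mind could be sketched without a separate table, by row-locking the existing customer row before inserting (a sketch, not a finished design; the values are placeholders):

```sql
-- Take a row lock on the customer first, so two transactions for the
-- same cid serialize, while transactions for different cids proceed.
BEGIN;
SELECT cid FROM mycustomer WHERE cid = 1 FOR UPDATE;  -- blocks other cid=1 writers
INSERT INTO mydevice(id, cid, status) VALUES (1, 1, 'ok');
COMMIT;
```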

Many thanks!

asked Oct 11 '12 by Javier Moreno Garcia



1 Answer

From the Transaction Isolation page:

The particular locks acquired during execution of a query will depend on the plan used by the query, and multiple finer-grained locks (e.g., tuple locks) may be combined into fewer coarser-grained locks (e.g., page locks) during the course of the transaction to prevent exhaustion of the memory used to track the locks.

...

  • A sequential scan will always necessitate a relation-level predicate lock. This can result in an increased rate of serialization failures.

An EXPLAIN on that SELECT will tell you which query plan is being used, but if the table is small (or empty!), PostgreSQL will almost certainly pick a sequential scan instead of using the index. This causes a predicate lock on the entire table, which in turn causes a serialization failure whenever another transaction does anything to the table.

On my system:

isolation=# EXPLAIN SELECT * from mydevice where cid = 1;
                        QUERY PLAN                        
----------------------------------------------------------
 Seq Scan on mydevice  (cost=0.00..23.38 rows=5 width=46)
   Filter: (cid = 1)
(2 rows)

You could try adding an index and force it to use that:

isolation=# CREATE INDEX mydevice_cid_key ON mydevice (cid);
CREATE INDEX
isolation=# SET enable_seqscan = off;
SET
isolation=# EXPLAIN SELECT * from mydevice where cid = 1;
                                    QUERY PLAN                                    
----------------------------------------------------------------------------------
 Index Scan using mydevice_cid_key on mydevice  (cost=0.00..8.27 rows=1 width=46)
   Index Cond: (cid = 1)
(2 rows)

However, this is not the correct solution. Let's back up a little bit.

Serializable is meant to guarantee that transactions will have exactly the same effect as if they were run one after another, despite the fact that you're actually running these transactions concurrently. PostgreSQL does not have infinite resources, so while it's true that it puts predicate locks on data that your query actually accesses, "data" can mean more than "rows returned".

PostgreSQL chooses to flag serialization failures when it thinks there might be a problem, not when it's certain. (Hence how it generalizes row locks to page locks.) This design choice causes false positives, such as the one in your example. False positives are less than ideal, but they do not affect the correctness of the isolation semantics.

The error message is:

ERROR:  could not serialize access due to read/write dependencies among transactions
DETAIL:  Reason code: Canceled on identification as a pivot, during commit attempt.
HINT:  The transaction might succeed if retried.

That hint is key. Your application needs to catch serialization failures and retry the whole operation. This is true whenever SERIALIZABLE is in play -- it guarantees serial correctness despite concurrency, but it can't do that without the help of your application. Put another way, if you're actually doing concurrent modifications, the only way PostgreSQL can satisfy the isolation requirements is to ask your application to serialize itself. Thus:

It is important that an environment which uses this technique have a generalized way of handling serialization failures (which always return with a SQLSTATE value of '40001'), because it will be very hard to predict exactly which transactions might contribute to the read/write dependencies and need to be rolled back to prevent serialization anomalies.
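A generalized retry wrapper might look like the following sketch. It is illustrative, not a real driver API: it assumes the driver raises an exception that exposes the SQLSTATE as a `sqlstate` attribute (psycopg2, for example, exposes it as `pgcode`), and that the callable you pass in performs one complete transaction from BEGIN to COMMIT so the whole thing can be rerun from the top:

```python
# Sketch of a retry loop for SQLSTATE 40001 serialization failures.
# The exception class and attribute name are illustrative stand-ins
# for whatever your database driver actually raises.

SERIALIZATION_FAILURE = "40001"

class SerializationFailure(Exception):
    """Stand-in for a driver error carrying SQLSTATE 40001."""
    sqlstate = SERIALIZATION_FAILURE

def run_with_retry(txn, max_attempts=5):
    """Run `txn`, a callable performing one whole transaction,
    retrying it from the top whenever it fails with SQLSTATE 40001."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except Exception as e:
            code = getattr(e, "sqlstate", None)
            if code == SERIALIZATION_FAILURE and attempt < max_attempts:
                continue  # rerun the entire transaction
            raise
```

The important design point is that the retry unit is the whole transaction, not a single statement: after a 40001 error the transaction is rolled back, so any reads it made must be repeated before its writes are reattempted.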

answered Sep 28 '22 by willglynn