Postgres advisory lock within function allows concurrent execution

I'm encountering an issue with a function that is intended to run with serialized access under certain circumstances. This seemed like a good case for using advisory locks. However, under fairly heavy load, I'm finding that the access isn't being serialized and I'm seeing concurrent executions of the function.

The intention of this function is to provide "inventory control" for an event. That is, it is meant to limit concurrent ticket purchases for a given event so the event is not oversold. These are the only advisory locks used within the application/database.

I'm finding that occasionally there are more tickets in an event than the eventTicketMax value. This doesn't seem like it should be possible because of the advisory locks. When testing with low volume (or manually introduced delays such as pg_sleep after acquiring the lock), things work as expected.

CREATE OR REPLACE FUNCTION createTicket(
        userId int,
        eventId int,
        eventTicketMax int
    ) RETURNS integer AS $$
        DECLARE insertedId int;
        DECLARE numTickets int;
    BEGIN
            -- first get the event lock
            PERFORM pg_advisory_lock(eventId);

            -- make sure we aren't over ticket max
            numTickets := (SELECT count(*) FROM api_ticket
                WHERE event_id = eventId and status <> 'x');

            IF numTickets >= eventTicketMax THEN
                -- raise an exception if this puts us over the max
                -- and bail
                PERFORM pg_advisory_unlock(eventId);
                RAISE EXCEPTION 'Maximum entries number for this event has been reached.';
            END IF;

            -- create the ticket
            INSERT INTO api_ticket (
                user_id,
                event_id,
                created_ts
            )
            VALUES (
                userId,
                eventId,
                now()
            )
            RETURNING id INTO insertedId;

            -- update the ticket count
            UPDATE api_event SET ticket_count = numTickets + 1 WHERE id = eventId;

            -- release the event lock
            PERFORM pg_advisory_unlock(eventId);

        RETURN insertedId;
    END;
    $$ LANGUAGE plpgsql;

Here's my environment setup:

  • Django 1.8.1 (django.db.backends.postgresql_psycopg2 w/ CONN_MAX_AGE 300)
  • PGBouncer 1.7.2 (session mode)
  • Postgres 9.3.10 on Amazon RDS

Additional variables which I tried tuning:

  • setting CONN_MAX_AGE to 0
  • Removing pgbouncer and connecting directly to DB

In my testing, I have noticed that in cases where an event was oversold, the tickets were purchased from different web servers, so I don't think a shared session is to blame, but I can't say for sure.

Brock Haywood asked May 05 '16


People also ask

What is Postgres advisory lock?

PostgreSQL advisory locks are application-level, cooperative locks explicitly locked and unlocked by the user's application code. An application can use PostgreSQL advisory locks to coordinate activity across multiple sessions.
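For illustration, a minimal sketch of that cooperative use (the key value 42 is an arbitrary, application-chosen number):

-- Session A: block until the lock on key 42 is available, then hold it
SELECT pg_advisory_lock(42);

-- ... work that must not run concurrently for key 42 ...

-- Session A: release it explicitly (session-level locks otherwise persist
-- until the connection ends)
SELECT pg_advisory_unlock(42);

-- Session B: non-blocking variant, returns true/false instead of waiting
SELECT pg_try_advisory_lock(42);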

What is Pg_advisory_lock?

pg_advisory_lock locks an application-defined resource, which can be identified either by a single 64-bit key value or two 32-bit key values (note that these two key spaces do not overlap).
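Both key forms, with arbitrary example values (the two-key form is convenient for pairing a lock "class" with a row id):

-- single 64-bit key
SELECT pg_advisory_lock(123456789);
SELECT pg_advisory_unlock(123456789);

-- two 32-bit keys, e.g. (lock class, row id); this key space is separate
-- from the single-key form
SELECT pg_advisory_lock(1, 42);
SELECT pg_advisory_unlock(1, 42);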

How does locking work in Postgres?

Exclusive locks (write locks) prevent other users from modifying a row or an entire table. Rows modified by UPDATE and DELETE are automatically locked exclusively for the duration of the transaction, which prevents other sessions from changing those rows until the transaction is either committed or rolled back.
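For example, using the question's api_event table (the id value 7 is just an example):

BEGIN;
-- the UPDATE takes an exclusive row-level lock on the matching row
UPDATE api_event SET ticket_count = ticket_count + 1 WHERE id = 7;
-- a concurrent UPDATE/DELETE of the same row in another session now waits
COMMIT;  -- the row lock is released here (or on ROLLBACK)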

What are the benefits of using explicit locks SELECT one?

Explicit row-locks use transaction priorities to ensure that two transactions can never hold conflicting locks on the same row. This is done by the query layer assigning a very high value for the priority of the transaction that is being run under pessimistic concurrency control.
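In plain PostgreSQL, the usual way to take an explicit row lock is SELECT ... FOR UPDATE. As a sketch against the question's schema (not something proposed in the answer below), locking the event row would also serialize concurrent buyers of the same event:

BEGIN;
-- only one transaction at a time can hold the FOR UPDATE lock on event 7,
-- so a count-then-insert done afterwards cannot interleave with another buyer
SELECT id FROM api_event WHERE id = 7 FOR UPDATE;
-- ... count tickets, insert one if below the maximum ...
COMMIT;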


1 Answer

As soon as PERFORM pg_advisory_unlock(eventId) is executed, another session can grab that lock; but since the INSERT of session #1 is not yet committed, it is not counted in the count(*) of session #2, resulting in the over-booking.
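For instance, with eventTicketMax = 10 and 9 tickets already committed (hypothetical numbers):

  1. Session #1: pg_advisory_lock(eventId) succeeds, count(*) returns 9, the INSERT runs but is not yet committed, then pg_advisory_unlock(eventId) runs.
  2. Session #2: acquires the lock immediately, count(*) still returns 9 because session #1's row is not yet visible, and its INSERT runs.
  3. Both sessions commit: 11 tickets now exist for a 10-ticket event.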

If you keep the advisory lock strategy, you must use transaction-level advisory locks (pg_advisory_xact_lock) instead of session-level locks. Those locks are automatically released at COMMIT time and cannot be released before that.
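A sketch of what that change might look like against the question's function (the explicit unlock calls go away because the lock is released automatically when the transaction commits or rolls back):

CREATE OR REPLACE FUNCTION createTicket(
        userId int,
        eventId int,
        eventTicketMax int
    ) RETURNS integer AS $$
        DECLARE insertedId int;
        DECLARE numTickets int;
    BEGIN
            -- transaction-level lock: held until the surrounding transaction
            -- ends, so the INSERT below is committed (or rolled back) before
            -- the next holder runs its count(*)
            PERFORM pg_advisory_xact_lock(eventId);

            numTickets := (SELECT count(*) FROM api_ticket
                WHERE event_id = eventId and status <> 'x');

            IF numTickets >= eventTicketMax THEN
                RAISE EXCEPTION 'Maximum entries number for this event has been reached.';
            END IF;

            INSERT INTO api_ticket (user_id, event_id, created_ts)
            VALUES (userId, eventId, now())
            RETURNING id INTO insertedId;

            UPDATE api_event SET ticket_count = numTickets + 1 WHERE id = eventId;

        RETURN insertedId;
    END;
    $$ LANGUAGE plpgsql;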

Daniel Vérité answered Sep 28 '22