
Postgres transaction seems to take AccessExclusiveLock for no reason

After a lot of googling, I figured I'd ask this question. I have a piece of code which does the following against a PostgreSQL 9.2 database:

  1. Start a transaction
  2. Drop the indexes (5 indexes) on the table
  3. Insert a million rows into the table
  4. Re-create the indexes
  5. Commit the transaction
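As a sketch, with hypothetical table and index names (the original names aren't given in the question), the transaction looks like:

```sql
begin;
-- step 2: drop the five indexes (only one shown here)
drop index my_table_col1_idx;
-- step 3: bulk-load a million rows
insert into my_table select * from staging_table;
-- step 4: re-create the indexes
create index my_table_col1_idx on my_table (col1);
commit;
```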

My reading of the Postgres documentation tells me that I should be able to do that and still allow other users to select from the table (and even use the existing indexes, since they are not yet dropped for them) while this operation is ongoing.

What I'm actually finding is that all other queries on the table (they are all SELECT queries) are stuck. Upon looking at the pg_locks and pg_stat_activity views, I found that my transaction has taken an AccessExclusiveLock on the table, preventing other queries from running. Once this transaction finishes, all the other queries execute just fine.
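The kind of query used to correlate pg_locks with pg_stat_activity might look like this (a sketch; `my_table` is a placeholder, and the `pid`/`query` column names are as in the 9.2 system views):

```sql
select l.locktype, l.mode, l.granted, a.query
from pg_locks l
join pg_stat_activity a on a.pid = l.pid
where l.relation = 'my_table'::regclass;
```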

So, my question is: why does a CREATE INDEX / INSERT take an exclusive lock on the table? Shouldn't it be using a less intrusive lock (a SHARE lock, for example)?

I'm basing this on my reading of the official documentation here, so I don't get flamed with RTFM by anybody :)

Thanks,
Jalpesh

Jalpesh asked Oct 11 '13 18:10


People also ask

What is AccessExclusiveLock?

Only an ACCESS EXCLUSIVE lock blocks a SELECT (without FOR UPDATE/SHARE) statement. Once acquired, a lock is normally held until the end of the transaction. But if a lock is acquired after establishing a savepoint, the lock is released immediately if the savepoint is rolled back to.
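That savepoint behaviour can be seen with an explicit LOCK TABLE (a sketch; `my_table` is a placeholder):

```sql
begin;
savepoint sp;
lock table my_table in access exclusive mode;
rollback to savepoint sp;  -- the ACCESS EXCLUSIVE lock is released right here
-- the rest of the transaction continues without holding that lock
commit;
```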

Does Postgres lock table on transaction?

There is no LOCK TABLE in the SQL standard, which instead uses SET TRANSACTION to specify concurrency levels on transactions. PostgreSQL supports that too; see SET TRANSACTION for details.

What is deadlock in PostgreSQL?

In PostgreSQL, when a transaction cannot acquire the requested lock within a certain amount of time (configured by `deadlock_timeout`, with default value of 1 second), it begins deadlock detection.
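The timeout is an ordinary configuration parameter, so it can be inspected (and overridden per session):

```sql
show deadlock_timeout;        -- default is 1s
set deadlock_timeout = '2s';  -- session-level override
```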


2 Answers

I've also wanted this, but you can't do it.

It is not the CREATE INDEX nor the INSERT that is taking the ACCESS EXCLUSIVE lock on the table; it is the DROP INDEX which is doing so.

The documentation does not provide an exhaustive list of every situation that acquires every type of lock--it only provides illustrative examples (but perhaps this example should be included in it).

I believe there are many parts of the code which assume that indices cannot disappear while an AccessShare lock on the table is held. So dropping an index needs to conflict with that.
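If the goal is simply not to block readers, and the work does not have to be one atomic transaction, 9.2 also has the CONCURRENTLY variants, which take only a SHARE UPDATE EXCLUSIVE lock but cannot run inside a transaction block (a sketch with placeholder names):

```sql
-- each of these must run outside begin/commit
drop index concurrently my_table_col1_idx;
-- ...bulk insert here...
create index concurrently my_table_col1_idx on my_table (col1);
```

The trade-off is losing atomicity: if CREATE INDEX CONCURRENTLY fails partway, it leaves behind an invalid index that must be dropped by hand.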

jjanes answered Oct 09 '22 22:10


I suppose you want to be able to query your table during the bulk insert, and to speed up the bulk insert by not maintaining indexes while inserting.

I think you could achieve this using the following procedure:

begin;
select max(id) from table_name;  -- note the result; substitute it for maxid below
create index table_name_id_tmp_idx on table_name (id) where id <= maxid;
drop index table_name_id_idx;
commit;

begin;
-- do the bulk insert here
create index table_name_id_idx on table_name (id);
drop index table_name_id_tmp_idx;
commit;

It wouldn't be possible for every index, as you need a WHERE predicate for the partial index that is true for existing rows and false for the inserted ones. But monotonically increasing columns, like serial ids or timestamps, could work.
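The timestamp variant works the same way; the predicate has to be a constant literal, since a volatile function like now() is not allowed in an index predicate (placeholder names):

```sql
-- suppose the bulk load starts at 2013-10-11 18:00
create index events_created_tmp_idx on events (created_at)
  where created_at <= '2013-10-11 18:00:00';
```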

Tometzky answered Oct 09 '22 23:10