After a lot of googling, I figured I'd ask this question. I have a piece of code which does the following against a Postgres (9.2) database:
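(The snippet itself is missing from this copy of the question; judging from the discussion below it was of roughly this shape, with table and index names as illustrative stand-ins.)

begin;
drop index table_name_id_idx;
-- bulk insert of a large number of rows into table_name goes here
create index table_name_id_idx on table_name (id);
commit;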
My reading of the Postgres documentation tells me that I should be able to do this and still allow other users to select from the table (and even use the existing indexes, since they are not yet dropped for them) while the operation is ongoing.
What I'm actually finding is that all other queries on the table (they are all select queries) are stuck. Upon looking at the pg_locks and pg_stat_activity views, I found that my transaction had taken an AccessExclusiveLock on the table, preventing other queries from running. Once this transaction finishes, all other queries execute just fine.
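For reference, this is the sort of query that shows it (table name illustrative; pid and query are the 9.2 column names in pg_stat_activity):

select l.pid, l.mode, l.granted, a.query
from pg_locks l
join pg_stat_activity a on a.pid = l.pid
where l.relation = 'table_name'::regclass;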
So, my question is: why does a create index/insert data take an exclusive lock on the table? Shouldn't it use a less intrusive lock (a SHARE lock, for example)?
I'm basing this on my reading of the official documentation here - so I don't get flamed with an RTFM from anybody :)
Thanks,
Jalpesh
Tip. Only an ACCESS EXCLUSIVE lock blocks a SELECT (without FOR UPDATE/SHARE) statement. Once acquired, a lock is normally held until the end of the transaction. But if a lock is acquired after establishing a savepoint, the lock is released immediately if the savepoint is rolled back to.
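The savepoint behaviour can be demonstrated directly (table name illustrative):

begin;
savepoint sp;
lock table table_name in access exclusive mode;
rollback to savepoint sp;  -- the ACCESS EXCLUSIVE lock is released here, before commit
commit;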
There is no LOCK TABLE in the SQL standard, which instead uses SET TRANSACTION to specify concurrency levels on transactions. PostgreSQL supports that too; see SET TRANSACTION for details.
In PostgreSQL, when a transaction cannot acquire the requested lock within a certain amount of time (configured by `deadlock_timeout`, with a default value of 1 second), it begins deadlock detection.
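The timeout can be raised if deadlock checks fire too often on a busy system; note that changing it requires superuser privileges, and the value here is illustrative:

set deadlock_timeout = '2s';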
I've also wanted this, but you can't do it.
It is neither the create index nor the insert that takes the AccessExclusive lock on the table; it is the drop index that does so.
The documentation does not provide an exhaustive list of every situation in which each type of lock is acquired; it only gives illustrative examples (though perhaps this example should be included).
I believe there are many parts of the code which assume that indexes cannot disappear while an AccessShare lock on the table is held, so dropping an index needs to conflict with that.
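You can confirm this yourself (names illustrative): start the drop in one session and leave the transaction open, then inspect pg_locks from another session.

-- session 1:
begin;
drop index table_name_id_idx;
-- (transaction left open)

-- session 2: shows an AccessExclusiveLock on the table itself, not just the index
select pid, mode, granted
from pg_locks
where relation = 'table_name'::regclass;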
I suppose you want to be able to query the table during the bulk insert, and to speed the insert up by not maintaining indexes while the rows are loaded.
I think you could achieve this using a procedure like this:
begin;
-- note the current maximum id first and substitute it below:
select max(id) from table_name;  -- suppose this returns 1000000
-- partial index covering only the rows that exist right now:
create index table_name_id_tmp_idx on table_name (id) where id <= 1000000;
drop index table_name_id_idx;
commit;
begin;
-- the bulk insert goes here; new rows (id > 1000000) are simply not indexed yet
do_bulk_insert();
create index table_name_id_idx on table_name (id);
drop index table_name_id_tmp_idx;
commit;
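If you don't want to substitute the value by hand, the first transaction can be wrapped in a DO block (available since 9.0; format() since 9.1) that builds the predicate dynamically. A sketch, assuming the same column and index names as above:

do $$
declare
  maxid bigint;
begin
  select max(id) into maxid from table_name;
  -- temporary partial index covering only the existing rows
  execute format('create index table_name_id_tmp_idx on table_name (id) where id <= %s', maxid);
  execute 'drop index table_name_id_idx';
end
$$;

A standalone DO statement runs in its own transaction, so the create and the drop stay atomic just like the explicit begin/commit above.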
It wouldn't be possible for every index, as you need a WHERE predicate for the partial index that returns true for the existing rows and false for the inserted ones. But serial columns, timestamps etc. could work.
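For example, with a timestamp column the temporary partial index could look like this (column name and cutoff are illustrative; the predicate has to use a constant, since functions like now() are not immutable and are rejected in index predicates):

create index table_name_ts_tmp_idx on table_name (created_at)
  where created_at <= '2013-05-01 00:00:00';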