I have an un-optimized query whose execution time varies widely at different times of the day, ranging from 1 minute to 14 hours. CPU utilization, memory, and other concurrent load on the database remain the same, so what could cause such variation? Note that the autovacuum process runs at midnight and performance improves considerably in the morning. My assertion is that, due to table fragmentation, dead tuples, and heavy reads, the statistics on the table change, which generates a different execution plan. To prove this assertion, I'd like to get the query plan for the query currently being run. Note that it is not possible for me to simply EXPLAIN the query before execution.
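To check whether dead tuples are actually piling up between autovacuum runs, the accumulated counts can be read from the pg_stat_user_tables view; a minimal sketch, where the table name my_table is a placeholder for your own table:

```sql
-- Dead-tuple and vacuum statistics for one table; run this before and
-- after the nightly autovacuum to see whether bloat correlates with
-- the slowdowns.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum,
       last_autoanalyze
FROM   pg_stat_user_tables
WHERE  relname = 'my_table';
```

A high n_dead_tup relative to n_live_tup during the slow hours, dropping after the midnight autovacuum, would support the fragmentation/statistics theory.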
Wait, I got it working. In postgresql.conf there are TWO settings for preloading libraries. The first, shared_preload_libraries, won't take effect without a restart. BUT the other one, session_preload_libraries, WILL. So edit postgresql.conf to include this line:
session_preload_libraries = 'auto_explain'
Then reload:
pg_ctl reload (or pg_ctlcluster 9.x main reload, etc.)
Then alter the database to turn it on:
ALTER DATABASE smarlowe SET auto_explain.log_min_duration = 1;
And then all new connections get auto_explained plans. Change the duration to the millisecond threshold that makes the most sense for you; only queries running longer than that value are logged.
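If you don't want to enable it database-wide, the same setting can be scoped to a single role or turned on ad hoc in one session; additional auto_explain options such as auto_explain.log_analyze add actual run times to the logged plans. A sketch (the role name smarlowe is carried over from the example above):

```sql
-- Scope auto_explain to one role instead of the whole database:
ALTER ROLE smarlowe SET auto_explain.log_min_duration = 1;

-- Or enable it just for the current session (the library must already
-- be loaded, e.g. via session_preload_libraries as above):
SET auto_explain.log_min_duration = 1;
SET auto_explain.log_analyze = on;  -- include actual row counts and timings
```

Note that log_analyze makes PostgreSQL instrument every logged statement, which adds overhead, so it's best kept to sessions where you are actively diagnosing the plan.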