I'm trying to optimize my function.
The thing is, when I run the query the first time I get one runtime, but when I run it a second or third time the execution time is much smaller.
SELECT map.get_near_link(a.X, a.Y, a.azimuth)
FROM traffic.avl a;
First run: 17 seconds
Total query runtime: 17188 ms.
801 rows retrieved.
Second run: 11 seconds
Total query runtime: 11406 ms.
801 rows retrieved.
I'm guessing there is some kind of cache doing optimization behind the scenes. How can I disable this behavior so I can get a more realistic runtime value?
PostgreSQL does not have a shared query plan cache, but it has an optional query plan cache for prepared statements. That means the developer has the choice to use a prepared statement with or without a cached query plan.
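As a hedged sketch of how plan caching applies only to prepared statements (the parameter types here are assumptions; `map.get_near_link` is the function from the question):

```sql
-- Prepare once: the statement is parsed and planned per execution at first;
-- after several executions PostgreSQL may switch to a cached generic plan.
PREPARE near_link(float8, float8, float8) AS
  SELECT map.get_near_link($1, $2, $3);

EXECUTE near_link(1.0, 2.0, 90.0);

-- Dropping the prepared statement discards its cached plan.
DEALLOCATE near_link;
```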
There is no way to bypass or flush the database's cache. All you can do to clear it is restart the server.
Postgres manages a “Shared Buffer Cache”, which it allocates and uses internally to keep data and indexes in memory. This is usually configured to be about 25% of total system memory for a server running a dedicated Postgres instance, such as all Heroku Postgres instances.
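If you want to check this on your own server, the configured buffer cache size and a rough hit ratio can be read from SQL (a sketch; `pg_stat_database` counts blocks since the last statistics reset, and a "read" block may still have been served by the OS disk cache rather than physical disk):

```sql
SHOW shared_buffers;

SELECT blks_hit::float8 / NULLIF(blks_hit + blks_read, 0) AS hit_ratio
FROM pg_stat_database
WHERE datname = current_database();
```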
PostgreSQL doesn't have a "cache" optimisation, in the sense of a query result cache.
It does cache table blocks that were recently read in shared_buffers, but for most installs that only has a small effect. The main cache is the operating system's disk read cache. For more information see:
See and clear Postgres caches/buffers?
It sounds to me like you have a system with a reasonable amount of RAM and a fast CPU but a terribly slow disk. So queries that only hit the OS's disk cache are very fast, but queries that go to disk take a couple of seconds to read the data in. So caching effects are very strong.
You should run EXPLAIN (BUFFERS, ANALYZE, VERBOSE) SELECT ... on your queries. Try a couple of different input values until you get a slow one, then compare the plans.
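Using the query from the question, an invocation might look like this (the exact plan and buffer counts will of course differ on your system):

```sql
EXPLAIN (BUFFERS, ANALYZE, VERBOSE)
SELECT map.get_near_link(a.X, a.Y, a.azimuth)
FROM traffic.avl a;
-- In the output, lines such as
--   Buffers: shared hit=1234 read=567
-- show how many blocks came from shared_buffers ("hit") versus from
-- the OS ("read"); the latter may still be the OS disk cache.
```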
If the plans are the same, that's probably it.
If the plans are different, you're probably hitting a situation where the query planner is making bad choices based on variations in the table statistics. Increasing the statistics targets for the columns of interest may help (see the manual). If you get different plans and are stuck / want help, feel free to post a new question on dba.stackexchange.com with details.
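If statistics turn out to be the problem, raising the per-column statistics target and re-analyzing might look like this (the column choice is illustrative; 1000 is just an example value, the default target being 100):

```sql
ALTER TABLE traffic.avl ALTER COLUMN azimuth SET STATISTICS 1000;
ANALYZE traffic.avl;  -- rebuild statistics with the larger sample
```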
To also clear the operating system's disk cache between runs (Linux only, requires root; note that a plain "echo >" redirect would not run under sudo, hence the tee):

sync; sudo service postgresql stop; echo 1 | sudo tee /proc/sys/vm/drop_caches; sudo service postgresql start