This post suggests I can use a cursor to fetch from a query at a throttled rate. How do I do this?
My aim is to reduce the impact of this low-priority query on other higher-priority queries.
You can do this by declaring a server-side cursor with the DECLARE command:

DECLARE my_cursor CURSOR FOR SELECT * FROM foo;
Then read its results by issuing the FETCH command repeatedly:

FETCH 10 FROM my_cursor;
By sleeping between FETCH commands, you effectively limit how fast the query can execute, and with it the load it places on the server.
Once you're done with it, release the cursor with CLOSE my_cursor, or simply end the transaction with COMMIT or ROLLBACK. Note that a cursor declared without WITH HOLD only exists for the duration of the enclosing transaction, so you need to run DECLARE inside an explicit transaction block.
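Putting the pieces together, a throttled read might look like the sketch below. The batch size and sleep interval are arbitrary choices; pg_sleep() pauses on the server side, but you can just as well sleep in your client code between FETCH calls:

```sql
-- Cursors (without WITH HOLD) only live inside a transaction.
BEGIN;

DECLARE my_cursor CURSOR FOR SELECT * FROM foo;

-- Repeat this pair in a loop: fetch a batch, then pause so the
-- query advances at a throttled rate.
FETCH 100 FROM my_cursor;
SELECT pg_sleep(1);  -- one-second pause between batches

-- FETCH eventually returns fewer rows than requested (then zero)
-- when the result set is exhausted; clean up afterwards.
CLOSE my_cursor;
COMMIT;
```

In application code you would drive the FETCH/sleep loop from your database driver, stopping when a FETCH returns no rows.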
Do note that some kinds of queries cannot be streamed incrementally through a cursor, but will run to completion before producing their first row of output; queries involving hash aggregates or large non-indexed sorts are examples. You can lower the cursor_tuple_fraction setting (default 0.1) to discourage the planner from choosing such plans, but a fast-start plan is not always possible.
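For example, a session-level sketch (the value 0.01 is an arbitrary illustration; the setting tells the planner what fraction of the cursor's rows you expect to actually fetch):

```sql
-- Bias the planner toward fast-start plans (e.g. index scans)
-- rather than plans that must finish completely (hash aggregates,
-- full sorts) before returning the first row. Default is 0.1.
SET cursor_tuple_fraction = 0.01;

BEGIN;
DECLARE my_cursor CURSOR FOR SELECT * FROM foo ORDER BY bar;
```

The setting applies per session, so it only affects cursors you declare after issuing the SET.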