
PostgreSQL ERROR: canceling statement due to conflict with recovery

I'm getting the following error when running a query against a PostgreSQL database in standby mode. The query works fine when it covers one month of data, but querying a range longer than one month produces this error:

ERROR: canceling statement due to conflict with recovery
Detail: User query might have needed to see row versions that must be removed

Any suggestions on how to resolve this? Thanks

asked Jan 29 '13 by AnApprentice

8 Answers

No need to touch hot_standby_feedback. As others have mentioned, setting it to on can bloat the master: imagine opening a transaction on a slave and never closing it.

Instead, set max_standby_archive_delay and max_standby_streaming_delay to some sane value:

# /etc/postgresql/10/main/postgresql.conf on a slave
max_standby_archive_delay = 900s
max_standby_streaming_delay = 900s

This way, queries on the slave that run for less than 900 seconds won't be cancelled. If your workload requires longer queries, just set these options to a higher value.
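Both settings are reloadable, so no restart is needed. A minimal sketch, assuming you have edited postgresql.conf on the slave as above:

-- run on the slave after editing postgresql.conf;
-- both settings take effect on reload, no restart needed
SELECT pg_reload_conf();
SHOW max_standby_streaming_delay;  -- verify the new value took effect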

answered by Max Malysh

Running queries on a hot-standby server is somewhat tricky: a query can fail because rows it needs may be updated or deleted on the primary while it runs. Since the primary does not know that a query has started on the secondary, it thinks it is free to clean up (vacuum) old versions of its rows. The secondary then has to replay this cleanup, and must forcibly cancel any query that might still need those rows.

Longer queries will be canceled more often.

You can work around this by starting a repeatable read transaction on the primary that runs a dummy query and then sits idle while the real query runs on the secondary. Its presence will prevent vacuuming of old row versions on the primary.
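A minimal sketch of that workaround:

-- On the primary: take a snapshot and hold it open
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT 1;  -- any dummy query works; it just materializes the snapshot
-- ...leave this session idle while the long query runs on the secondary...
COMMIT;    -- release the snapshot so VACUUM can proceed again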

More on this subject and other workarounds are explained in the Hot Standby — Handling Query Conflicts section of the documentation.

answered by Tometzky


There's no need to start idle transactions on the master. In PostgreSQL 9.1+, the most direct way to solve this problem is by setting

hot_standby_feedback = on

This will make the master aware of long-running queries. From the docs:

The first option is to set the parameter hot_standby_feedback, which prevents VACUUM from removing recently-dead rows and so cleanup conflicts do not occur.

Why isn't this the default? This parameter was added after the initial implementation and it's the only way that a standby can affect a master.
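On 9.4 or later you can enable it without editing postgresql.conf by hand; a sketch, run on the standby (on older versions, set it in postgresql.conf and reload instead):

-- On the standby: report the oldest running query back to the master
ALTER SYSTEM SET hot_standby_feedback = on;
SELECT pg_reload_conf();    -- the setting is reloadable, no restart needed
SHOW hot_standby_feedback;  -- verify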

answered by eradman


As stated here about hot_standby_feedback = on:

Well, the disadvantage of it is that the standby can bloat the master, which might be surprising to some people, too

And here:

With what setting of max_standby_streaming_delay? I would rather default that to -1 than default hot_standby_feedback on. That way what you do on the standby only affects the standby


So I added

max_standby_streaming_delay = -1

and we get no more pg_dump errors, and no master bloat :)

For an AWS RDS instance, check http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html
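On RDS you can't edit postgresql.conf directly; max_standby_streaming_delay has to be set in the DB parameter group attached to the replica. Once that's applied, a quick sanity check from any session on the replica:

-- Run on the read replica to confirm the parameter change took effect
SHOW max_standby_streaming_delay;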

answered by Gilles Quenot


The error occurs because table data on the hot-standby slave is modified by replay while a long-running query is executing. A solution (PostgreSQL 9.1+) is to make sure the data is not modified during the query: suspend the replication and resume it after the query:

select pg_xlog_replay_pause(); -- suspend replay
select * from foo; -- your query
select pg_xlog_replay_resume(); -- resume replay
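Note that on PostgreSQL 10 and later these functions were renamed along with the rest of the pg_xlog_* family:

select pg_wal_replay_pause();  -- suspend replay (PostgreSQL 10+)
select * from foo;             -- your query
select pg_wal_replay_resume(); -- resume replay
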
answered by David Jaspers


I'm going to add some updated info and references to @max-malysh's excellent answer above.

In short, if you do something on the master, it needs to be replicated on the slave. Postgres uses WAL records for this; they are sent from the master to the slave after every logged action, the slave replays them, and the two are back in sync. In several scenarios, an incoming WAL action can conflict with something happening on the slave, most often a transaction on the slave that conflicts with what the WAL action wants to change. In that case, you have two options:

  1. Delay the application of the WAL action for a bit, allowing the slave to finish its conflicting transaction, then apply the action.
  2. Cancel the conflicting query on the slave.

We're concerned with #1, and two values:

  • max_standby_archive_delay - this is the delay used after a long disconnection between the master and slave, when the data is being read from a WAL archive, which is not current data.
  • max_standby_streaming_delay - delay used for cancelling queries when WAL entries are received via streaming replication.

Generally, if your server is meant for high-availability replication, you want to keep these numbers short. The default setting of 30000 (milliseconds if no units are given) is sufficient for this. If, however, you want to set up something like an archive, reporting, or read replica that might run very long queries, then you'll want to set this to something higher to avoid cancelled queries. The recommended 900s setting above seems like a good starting point. I disagree with the official docs that an infinite value of -1 is a good idea; that could mask buggy code and cause lots of issues.
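If you want to see how often queries are actually being cancelled, and why, the standby keeps per-database counters (available since 9.1). A diagnostic sketch; the error from the question counts toward confl_snapshot:

-- Run on the standby: cancelled-query counts by conflict type
SELECT datname, confl_snapshot, confl_lock, confl_bufferpin, confl_deadlock
FROM pg_stat_database_conflicts;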

The one caveat about long-running queries and setting these values higher is that other queries running on the slave in parallel with the long-running one (the one delaying the WAL application) will see old data until the long query has completed. Developers will need to understand this and serialize queries which shouldn't run simultaneously.

For the full explanation of how max_standby_archive_delay and max_standby_streaming_delay work and why, go here.

answered by Artif3x


It might be too late for an answer, but we faced the same kind of issue in production. Originally we had only one RDS instance, and as the number of users on the app side grew, we decided to add a read replica. The read replica worked properly on staging, but once we moved to production we started getting the same error.

We solved this by enabling the hot_standby_feedback property in the Postgres settings. We referred to the following link:

https://aws.amazon.com/blogs/database/best-practices-for-amazon-rds-postgresql-replication/

I hope it will help.

answered by Tushar.k


Likewise, here's a second caveat to @Artif3x's elaboration of @max-malysh's excellent answer, both above.

With any delayed application of transactions from the master, the follower(s) will have an older, stale view of the data. So while it makes sense to give a query on the follower time to finish by setting max_standby_archive_delay and max_standby_streaming_delay, keep both of these caveats in mind:

  • the value of the follower as a standby / backup diminishes
  • any other queries running on the follower may return stale data.

If the value of the follower for backup ends up being too much in conflict with hosting queries, one solution would be multiple followers, each optimized for one or the other.

Also, note that several queries in a row can cause the application of WAL entries to keep being delayed. So when choosing the new values, it's not just the time for a single query that matters, but a moving window that starts whenever a conflicting query starts and ends when the WAL entry is finally applied.
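A quick way to see how far replay has fallen behind on the follower while such a window is open (pg_last_xact_replay_timestamp() has been available since 9.1):

-- Run on the follower: time since the last replayed transaction committed
SELECT now() - pg_last_xact_replay_timestamp() AS replay_lag;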

answered by bob