PostgreSQL backups on replicas

Understand why backups fail on PostgreSQL replicas and how to prevent it.

When you run pg_dump against a replica (standby) node, you may see the following error:

plain
pg_dump: error: Dumping the contents of table "audits" failed: PQgetResult() failed.
pg_dump: detail: Error message from server: ERROR: canceling statement due to conflict with recovery
DETAIL: User was holding a relation lock for too long.

This is expected PostgreSQL behavior and not a bug.

Why this happens

Replicas continuously replay changes streamed from the primary as WAL (Write-Ahead Log) records. When pg_dump runs a long query on the replica, it holds locks while reading large tables. If the replica receives WAL records that would modify the same data the query is scanning, a conflict occurs and PostgreSQL must choose: wait for the query to finish (delaying replication) or cancel the query so replication can continue.

The max_standby_streaming_delay setting controls this trade-off. Its default is 30 seconds — if a conflict lasts longer, PostgreSQL cancels your query. If no conflicting WAL updates arrive, pg_dump can run for as long as it needs.
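You can check both facts from any client. The sketch below uses placeholder connection details (replica.example.com, mydb) and prints the psql commands instead of executing them, so it is safe to run without a live server:

```shell
# Sketch: inspect a node before backing up from it. The host and database
# names are placeholders; the commands are stored in variables and printed,
# not executed.

# pg_is_in_recovery() returns true on a replica (standby), false on the primary.
CHECK_ROLE='psql -h replica.example.com -d mydb -Atc "SELECT pg_is_in_recovery();"'

# current_setting() reports the active conflict-cancellation window.
CHECK_DELAY="psql -h replica.example.com -d mydb -Atc \"SELECT current_setting('max_standby_streaming_delay');\""

echo "$CHECK_ROLE"
echo "$CHECK_DELAY"
```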

How to avoid this error

Run backups against the primary

The primary does not face recovery conflicts. This is the safest and most reliable option and is recommended for critical backups.
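For example, a custom-format dump taken from the primary might look like the sketch below. The host, database, and output path are placeholders, and the command is printed rather than executed:

```shell
# Sketch: take the dump from the primary, where recovery conflicts cannot
# occur. primary.example.com, mydb, and the output path are placeholders.
# -Fc writes a compressed custom-format archive restorable with pg_restore.
BACKUP_CMD="pg_dump -h primary.example.com -d mydb -Fc -f /backups/mydb.dump"

echo "$BACKUP_CMD"
```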

Schedule backups during low-write hours

If you must back up from a replica, run backups during periods when the primary has minimal writes — for example, overnight or during a maintenance window. Fewer WAL updates mean a lower chance of conflict.
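As a sketch, a cron entry could pin the replica dump to 03:00, assuming that falls inside your low-write window. The host, database, and output path are placeholders:

```plain
# Crontab sketch: nightly 03:00 dump from the replica during a low-write window.
0 3 * * * pg_dump -h replica.example.com -d mydb -Fc -f /backups/mydb.dump
```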

Increase max_standby_streaming_delay

You can give queries more time before PostgreSQL cancels them:

sql
ALTER SYSTEM SET max_standby_streaming_delay = '5min';
SELECT pg_reload_conf();
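After the reload, you can confirm the replica picked up the new value. This sketch uses placeholder connection details and prints the verification command rather than running it:

```shell
# Sketch: confirm the replica picked up the reloaded value. Host and
# database names are placeholders; the command is printed, not executed.
VERIFY_CMD='psql -h replica.example.com -d mydb -c "SHOW max_standby_streaming_delay;"'

echo "$VERIFY_CMD"
```

Keep in mind the trade-off: during a conflict the replica stops applying WAL until the query finishes or the window expires, so a larger delay lets the replica fall further behind the primary.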

Exclude large or busy tables

If a specific large table is consistently causing conflicts, exclude it from the backup:

bash
--exclude-table=public.your_table_name

Add this flag under Custom Dump Flags in your backup settings.
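The same flag also works on a direct pg_dump invocation. This sketch uses placeholder host, database, and table names, and prints the full command instead of executing it:

```shell
# Sketch: a direct pg_dump invocation that skips a conflict-prone table.
# Host, database, and table name are placeholders; repeat --exclude-table
# for additional tables. The command is printed, not executed.
DUMP_CMD="pg_dump -h replica.example.com -d mydb -Fc --exclude-table=public.your_table_name -f /backups/mydb.dump"

echo "$DUMP_CMD"
```

Remember that excluded tables are absent from the resulting dump, so back them up separately, for example from the primary.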