SimpleBackups

My Supabase backup failed: common errors and fixes

If you just ran pg_dump and got back something like:

pg_dump: error: server version: 15.8; pg_dump version: 14.11
pg_dump: error: aborting because of server version mismatch

you're in the right place. When a Supabase backup fails, the fix almost always falls into one of five categories: version mismatch, connection error, auth failure, disk/memory exhaustion, or silent truncation. This guide works through them in the order we see them most often in support.

What you'll leave with: the specific command to diagnose your failure, the fix, and one change to your backup setup that prevents the same failure from happening again.

First: which backup method failed?

The fix depends on which backup path broke. There are two distinct ones, and the errors look completely different.

Supabase native backup runs entirely on Supabase's side. It's a physical snapshot of your Postgres volume, run automatically on a schedule. If it fails, Supabase logs the failure on their end. You can't trigger it manually, and you can't inspect the snapshot file directly. If you suspect a native backup silently failed, check the Supabase docs on backups for your plan's retention window, then file a support ticket if a recent recovery point is missing.

If data appears to be missing from your dashboard but you're not sure whether a backup failed or data was actually deleted, what to do when Supabase data disappears covers that diagnostic path.

Logical backup via pg_dump is what most teams run themselves: a script, a cron job, or a managed tool connecting to Supabase over the network and exporting a .sql or .dump file. Every error in this guide is about this path.

If you're unclear on the difference, how Supabase native backup works covers both models. The short version: native backup lives inside Supabase and covers Postgres only. Everything else, including Storage and Edge Functions, is not in the snapshot, as covered in what Supabase's native backup doesn't cover.

For the rest of this article, "backup failed" means your pg_dump-based job returned an error.

pg_dump version mismatch errors

This is the most common failure we pick up in support. The error looks like this:

pg_dump: error: server version: 15.8; pg_dump version: 14.11
pg_dump: error: aborting because of server version mismatch

pg_dump refuses to connect to a server running a newer major version than itself. It doesn't try to degrade gracefully. If your local pg_dump is Postgres 14 and your Supabase project is running Postgres 15, it stops immediately.

To check what versions you're dealing with:

# Check your local pg_dump version
pg_dump --version

# Check your Supabase project's Postgres version
psql "postgres://postgres.<project-ref>@aws-0-<region>.pooler.supabase.com:6543/postgres" -c "SELECT version();"

Always match your local pg_dump binary to the major version Supabase is running. If you're on 14 and Supabase upgrades your project to 15, your backup script breaks on its next run, and unless something monitors the exit code, it breaks silently.

The fix: install the matching version of pg_dump. On Ubuntu/Debian:

sudo apt-get install postgresql-client-15
pg_dump --version  # confirm: pg_dump (PostgreSQL) 15.x

On macOS via Homebrew:

brew install postgresql@15
export PATH="/opt/homebrew/opt/postgresql@15/bin:$PATH"

If you're running backups from a Docker image, pin the image version to match: postgres:15-alpine, not postgres:latest.
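To fail fast instead of discovering drift mid-run, you can compare major versions before dumping. This is a minimal sketch: the `major_version` helper is ours, not a pg_dump feature, and `$SUPABASE_URL` stands in for your real connection string.

```shell
# major_version: extract the major version from a Postgres version string.
# Handles both "pg_dump (PostgreSQL) 15.6" and "PostgreSQL 15.8 on x86_64 ...".
major_version() {
  echo "$1" | grep -oE '[0-9]+(\.[0-9]+)?' | head -n1 | cut -d. -f1
}

# Fail fast before dumping ($SUPABASE_URL is a placeholder for your
# session-pooler connection string):
#   client=$(major_version "$(pg_dump --version)")
#   server=$(major_version "$(psql "$SUPABASE_URL" -tAc 'SELECT version();')")
#   [ "$client" = "$server" ] || { echo "pg_dump $client vs server $server" >&2; exit 1; }
```

Run the guard at the top of your backup script so a Supabase upgrade produces a clear one-line error rather than a confusing mid-dump abort.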

For a full worked example of the correct pg_dump connection setup, see our pg_dump and pg_restore guide and the Supabase-specific backup walkthrough.

Connection refused and timeout errors

The second-most-common failure. Typical errors:

pg_dump: error: could not connect to server: Connection refused
        Is the server running on host "db.<project-ref>.supabase.co" and accepting
        TCP/IP connections on port 5432?

Or a timeout with no output after 30–60 seconds.

The cause: you're connecting to the wrong endpoint for pg_dump. Supabase exposes two connection types.

| Mode | Host format | Port | Suitable for pg_dump? |
| --- | --- | --- | --- |
| Direct connection | db.<project-ref>.supabase.co | 5432 | Yes, but requires IPv6 or a network that can reach it |
| Transaction pooler (Supavisor) | aws-0-<region>.pooler.supabase.com | 6543 | No: can drop long-running dump sessions |
| Session pooler | aws-0-<region>.pooler.supabase.com | 5432 | Yes, preferred |

pg_dump requires a persistent session, so use the session pooler (port 5432) or the direct connection (port 5432). The transaction pooler (port 6543) works for individual queries but can drop long-running dump sessions on large databases.

Connection string for the session pooler:

pg_dump "postgres://postgres.<project-ref>@aws-0-<region>.pooler.supabase.com:5432/postgres" --format=custom --file=dump-$(date +%F).dump

If you're hitting timeouts on a large database, consider adding --lock-wait-timeout=30000 and running during off-peak hours.
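Transient network drops are also common on long dumps, so a simple retry wrapper can save a cron run. This is a sketch, not a SimpleBackups or Supabase feature: `retry` is a hypothetical helper, and `RETRY_DELAY` is an assumption you can tune.

```shell
# retry: run a command up to N times, doubling the sleep between attempts.
# Hypothetical helper; RETRY_DELAY (seconds, default 5) controls the first wait.
retry() {
  local attempts=$1; shift
  local delay=${RETRY_DELAY:-5}
  local n=1
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      echo "retry: giving up after $n attempts" >&2
      return 1
    fi
    echo "retry: attempt $n failed, sleeping ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))
    n=$((n + 1))
  done
}

# Usage sketch (connection string is a placeholder):
#   retry 3 pg_dump "postgres://..." --format=custom --file=dump.dump
```

Because each attempt rewrites the output file, a failed attempt's partial file is overwritten rather than trusted.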

Authentication and permission errors

FATAL: password authentication failed for user "postgres"

Or:

ERROR: permission denied for table users

Authentication failures have three common causes:

  1. Wrong password: Supabase's database password is different from your API keys. Copy it from Project Settings > Database > Connection string in the Supabase dashboard. It's not your anon key or service role key.

  2. Wrong user: The postgres user has full access. If you're backing up with a restricted role and see permission errors, either use postgres or grant the backup role SELECT on the schemas you need.

  3. Row-Level Security interfering: RLS policies apply even to pg_dump connections unless you connect as a superuser. The postgres user bypasses RLS. Restricted roles do not.
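If you'd rather not hand your backup job the full postgres credentials, a dedicated read-only role is an option. The following is a sketch of the grants involved, run as the postgres user: the role name backup_reader and the single public schema are illustrative assumptions, and remember that RLS still applies to this non-superuser role.

```shell
# Run as the postgres user; $SUPABASE_URL is a placeholder connection string.
# "backup_reader" is a hypothetical role name -- pick your own.
psql "$SUPABASE_URL" <<'SQL'
CREATE ROLE backup_reader LOGIN PASSWORD 'choose-a-strong-password';
GRANT USAGE ON SCHEMA public TO backup_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO backup_reader;
-- cover tables created after the role, too
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO backup_reader;
SQL
```

If the dump must include rows protected by RLS policies, stick with the postgres user instead.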

Disk space and memory errors

pg_dump: error: query failed: ERROR: could not write to file "/tmp/pg_dump_XXXX": No space left on device

Or a process killed mid-run with no output.

Dump files can be large. A 10 GB Supabase project often produces a 3–5 GB dump after compression (the ratio depends on your data). If your backup host doesn't have enough free disk, pg_dump fails mid-stream and leaves a partial file that looks valid but isn't.

Check available space before running:

df -h /tmp
df -h /your/backup/destination
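You can turn that manual check into a preflight guard so the job refuses to start rather than dying mid-dump. A minimal sketch; `require_free_kb` is our own helper, and the 6 GB threshold is an example you should size from a recent dump.

```shell
# require_free_kb: abort unless the filesystem at $1 has at least $2 KB free.
# A rough preflight guard -- size the threshold from your last dump, not a guess.
require_free_kb() {
  local dest=$1 need_kb=$2 free_kb
  free_kb=$(df -Pk "$dest" | awk 'NR==2 {print $4}')
  if [ "$free_kb" -lt "$need_kb" ]; then
    echo "only ${free_kb} KB free at ${dest}, need ${need_kb} KB" >&2
    return 1
  fi
}

# Example: insist on ~6 GB free before dumping a project whose last dump was 5 GB
#   require_free_kb /your/backup/destination $((6 * 1024 * 1024))
```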

For large projects, stream directly to an S3-compatible destination rather than writing a local file first:

pg_dump "postgres://postgres.<project-ref>@aws-0-<region>.pooler.supabase.com:5432/postgres" --format=custom | aws s3 cp - s3://your-bucket/backup-$(date +%F).dump


Note the pipeline risk: read the next section before piping like this.

Silent truncation (the dangerous one)

This is the failure mode nobody catches until they try to restore.

Your backup script exits 0. The file is there. The timestamp is recent. The backup "succeeded." Then you try to restore it and get:

pg_restore: error: input file appears to be a text format dump. Please use psql.

Or worse: the restore completes but only half your tables exist.

Silent truncation in piped commands (pg_dump | gzip | aws s3 cp) is the failure that most backup scripts don't catch. If an earlier command in the pipeline exits non-zero but the last command succeeds, the overall exit code is 0, and your monitoring sees a clean run.

We once shipped a script that piped pg_dump directly to gzip into an S3 upload. On small test databases it worked every time. On a 40 GB project it silently truncated, because gzip kept writing after pg_dump errored and the final s3 cp succeeded with a partial file. We only caught it when a restore test failed three weeks later.

The fix: add set -o pipefail at the top of any bash script that pipes pg_dump:

#!/usr/bin/env bash
set -euo pipefail

pg_dump "postgres://..." --format=custom | aws s3 cp - s3://your-bucket/backup-$(date +%F).dump

echo "Backup complete: $(date)"

With pipefail, any non-zero exit in the pipeline fails the whole script. Add that line before trusting any piped backup job.
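You can see the difference with a toy pipeline, no database required. Here `false` stands in for a failing pg_dump and `true` for a succeeding upload:

```shell
# A pipeline's default exit status is that of its LAST command, so a failing
# pg_dump piped into a succeeding upload looks like success.

demo() {
  false | true    # "false" plays the failing pg_dump, "true" the upload
}

set +o pipefail
demo && echo "without pipefail: reported success despite the failure"

set -o pipefail
demo || echo "with pipefail: the failure surfaces"
```

The same two lines of output are exactly what separates a monitored backup job from a silently broken one.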

How to prevent backup failures

The single highest-leverage change: add an automated verification step that restores the backup and confirms row counts. Most teams never do this until they have a real incident. A backup that has never been tested is a liability, not an asset.

The second change: pin the pg_dump version in your CI/CD or container image and alert when Supabase's Postgres version changes. Supabase announces major version upgrades in advance.

For the full prevention playbook, backup Supabase Postgres correctly from the start (coming soon) covers the complete setup. And automating Supabase backup verification (coming soon) covers the restore-and-verify loop that catches silent truncation before it matters.

What to do next

Run the version check first:

pg_dump --version && psql "your-connection-string" -c "SELECT version();"

If those match, work through the error list above in order: version, connection endpoint, auth, disk, pipefail. Most failures resolve at step one or two.

If you've confirmed the backup file exists but you're not sure it's valid, the safest check is a partial restore to a test project. A file that pg_restore --list can read without errors is structurally intact; a file that errors on --list is corrupt.
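When pg_restore isn't installed on the host holding the file, a cheaper first-pass check is possible: custom-format pg_dump files begin with the 5-byte magic string PGDMP. This sketch (the helper name is ours) catches zero-byte files and text dumps masquerading as custom dumps, though it cannot detect truncation further into the file the way pg_restore --list can.

```shell
# looks_like_custom_dump: cheap structural check on a custom-format dump.
# Custom-format pg_dump output starts with the magic bytes "PGDMP"; an empty
# or text-format file fails this check immediately.
looks_like_custom_dump() {
  [ -s "$1" ] && [ "$(head -c 5 "$1")" = "PGDMP" ]
}

# Pair it with the real check wherever pg_restore is available:
#   looks_like_custom_dump backup.dump && pg_restore --list backup.dump > /dev/null
```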


If scripting and scheduling all this yourself sounds like a second job, SimpleBackups handles Supabase Postgres backups off-site, with alerts when a run fails. See how it works →



FAQ

Why does pg_dump fail with "server version mismatch"?

pg_dump enforces a hard requirement: it will not dump a server running a newer major version than itself. If your Supabase project is on Postgres 15 and your local pg_dump is 14, it exits immediately with the mismatch error. The fix is to install the matching version of pg_dump locally or in your container image, then pin that version so it doesn't drift when Supabase upgrades.

Can I use pg_dump with Supabase's connection pooler?

Yes, but you need to choose the right pool mode. The transaction pooler (port 6543) can interrupt long-running sessions for large dumps. Use the session pooler (port 5432 on the pooler hostname) or the direct connection string for pg_dump. Both support the persistent session that pg_dump requires to complete a full export.

What does "connection refused" mean for a Supabase backup?

It usually means you're using the wrong endpoint or port. Supabase's direct connection host (db.<project-ref>.supabase.co:5432) requires IPv6 or a compatible network path in some regions. If you're getting connection refused, switch to the session pooler endpoint (aws-0-<region>.pooler.supabase.com:5432), which is consistently accessible over IPv4.

How do I check if my pg_dump backup file is valid?

Run pg_restore --list your-backup-file.dump. If the command outputs a table of contents with your schema objects, the file is structurally intact. If it errors, the file is corrupt or truncated. For custom-format dumps, this check takes seconds and catches silent truncation before you find out the hard way at restore time.

Why is my Supabase backup file empty or truncated?

The most common cause is a piped command swallowing a non-zero exit. When you pipe pg_dump through gzip or directly to aws s3 cp, a failure in pg_dump doesn't necessarily fail the overall script if the last command in the pipe succeeds. Add set -o pipefail at the top of your backup script so any failure in the pipeline kills the whole run and surfaces the error.


This article is part of The complete guide to Supabase backup, an honest, practical reference from the team that backs up Supabase every day.