Supabase's "daily backup" doesn't produce a file you can download.
It isn't produced by pg_dump. It doesn't leave a log anywhere you can inspect. It's a physical snapshot of the disk your Postgres volume sits on, taken nightly by the cloud provider Supabase runs on, and the only thing that can do anything with it is Supabase's own infrastructure.
Once you understand that one sentence, every other decision about Supabase backup makes sense. This article is the long version of how that works: what gets captured, where it lives, how long it stays, what Point-in-Time Recovery actually changes, how a restore happens, and what you can and can't do with the backup.
Why trust this article
We run Supabase backups every day for thousands of projects. This piece is the mental model we wish every Supabase developer had before they made their first production-grade backup decision, because the subtle facts about "daily" and "snapshot" and "restore" are where most teams' backup strategies quietly go wrong.
After reading this, you'll be able to describe what Supabase's native backup is, who it's for, and when you need something in addition to it.
What "daily backup" actually means on Supabase
Three words, three pieces of nuance worth unpacking.
"Daily"
Supabase takes one backup per day per paid project, so without PITR your recovery granularity is 24 hours. If something goes wrong at 3pm and the snapshot ran at 2am, restoring that snapshot costs you 13 hours of writes. And if the problem actually started before 2am, that morning's snapshot already contains it, and the previous day's snapshot is the last clean restore point.
This is fine for most acute disasters and terrible for anything subtle that happens mid-day. It's also why PITR exists as a paid upgrade (§5).
"Backup"
What Supabase calls a backup is technically a snapshot of the underlying storage volume your Postgres database sits on. The cloud provider freezes the volume at a point in time, takes a bit-for-bit copy, and stores the copy somewhere else on the same cloud. This is fast to produce (no query load on your database, no dump format to serialize) and fast to restore back onto a matching Postgres instance. It is not a pg_dump. It has no SQL, no row-level logical structure, no way to inspect it with standard Postgres tooling.
We come back to why that distinction matters in §2.
"Native"
"Native" means Supabase runs it for you, automatically, as part of your plan. You don't configure a schedule. You don't pay per backup. You don't see the commands. The trade-off is that you also can't change any of that: no custom schedule, no custom retention, no download, no portability. You're taking the convenience and accepting the constraints.
That's the mental model. The next sections explain each constraint.
Physical snapshots vs logical backups (and why it matters)
A Postgres database can be backed up in two fundamentally different ways. Supabase's native backup is one of them. Understanding the other makes everything else clear.

Physical (what Supabase does)
A physical backup copies the raw files that Postgres writes to disk: the data pages, the indexes, the WAL, the control files. The copy is a bit-for-bit image of the database at a point in time.
Pros:
- Fast to take. The underlying storage provider (AWS, in Supabase's case) has snapshot primitives that complete in seconds regardless of database size.
- Fast to restore. Restoring is essentially mounting the snapshot as a new volume and starting Postgres against it.
- No query load. Since it's a storage-level operation, it doesn't compete with your application's reads and writes.
Cons:
- Not portable. A physical snapshot can only be restored onto Postgres running against the same storage architecture. You can't take a Supabase snapshot and restore it onto a Postgres instance you run yourself.
- Not downloadable. Supabase specifically doesn't expose the snapshot file; it's managed entirely inside their infrastructure.
- Version-locked. Restoring a physical snapshot onto a different Postgres major version is generally not supported; the on-disk format differs between major versions.
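For intuition about what "snapshot primitives" means in practice, this is roughly the storage-level operation on AWS. It's conceptual only: you can't run it against a Supabase project, because the volume lives in Supabase's account and you have no volume ID to point it at (the one below is a placeholder).

```shell
# Conceptual only: the EBS-level primitive behind volume snapshots.
# Supabase's infrastructure does the equivalent nightly; you hold no
# credentials or volume ID that would let you run this yourself.
# aws ec2 create-snapshot \
#   --volume-id vol-0123456789abcdef0 \
#   --description "nightly postgres volume snapshot"
```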
Logical (what pg_dump does)
A logical backup reads the database through Postgres itself and writes out a representation of the data as SQL statements (or the custom binary format that pg_restore understands). The output is a portable file.
Pros:
- Portable. The output file can be restored onto any compatible Postgres instance, anywhere.
- Inspectable. You can examine the dump, extract specific tables, restore piecemeal.
- Version-flexible. Dumps can usually be restored onto a newer Postgres major version.
Cons:
- Slower. Large databases take minutes to hours, and the dump runs through Postgres, competing with live queries.
- You run it. No automation unless you build it. No retention unless you manage it.
Why the distinction matters
Supabase's native backup gives you the physical kind, automatically. If you want the logical kind too, you run pg_dump yourself (manually, via cron, or via a managed tool). Most production-grade strategies use both: physical snapshots for fast same-project recovery, logical dumps stored off-site for portability, compliance, and long-term retention.
The rest of this article explains what Supabase gives you on the physical side. For the logical side, How to back up your Supabase Postgres database (coming soon) walks through a production-ready pg_dump setup. For the foundational reference on pg_dump itself, see our pg_dump and pg_restore guide or the older Supabase backup walkthrough that still covers the basics.
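To make the logical side concrete, here is a minimal sketch of a nightly pg_dump run. `SUPABASE_DB_URL`, the file naming, and the dry-run guard are our assumptions, not Supabase-provided tooling; substitute your project's connection string from the dashboard (Settings > Database).

```shell
#!/usr/bin/env bash
# Minimal sketch of a logical backup run. SUPABASE_DB_URL is an
# assumption: set it to your project's connection string. Without it,
# the script only prints the command it would run.
set -euo pipefail

OUT="supabase-$(date +%F).dump"

# pg_dump's custom format is compressed and lets pg_restore pull out
# individual tables later; --no-owner/--no-acl ease cross-instance restores.
CMD=(pg_dump --format=custom --no-owner --no-acl --file="$OUT")

if [ -n "${SUPABASE_DB_URL:-}" ]; then
  "${CMD[@]}" "$SUPABASE_DB_URL"
  echo "wrote $OUT"
else
  echo "dry run: ${CMD[*]} <connection-string>"
fi
```

Hand the same script to cron or any scheduler and you have the beginnings of the logical layer described above.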
Retention windows, by plan
How long Supabase keeps your daily snapshots depends entirely on your plan. These are the numbers Supabase publishes; we quote them verbatim throughout this guide.
| Plan | Price | Daily backups | Retention | PITR |
|---|---|---|---|---|
| Free | $0 | No | 0 days | No |
| Pro | $25/mo | Yes | 7 days | Paid add-on (usage-based, around $140/mo for 7-day PITR on a typical project) |
| Team | $599/mo | Yes | 14 days | Paid add-on (same usage-based pricing model) |
| Enterprise | Custom | Yes | 30 days | Usually included |
What "rolling retention" actually means
The retention window is rolling, not cumulative. On a Pro plan you always have the last 7 days of snapshots. Day 8's snapshot pushes out day 1's. Day 9's pushes out day 2's. You never accumulate more than 7 snapshots on Pro, 14 on Team, 30 on Enterprise.
This matters because it changes how you think about recovery. A bug that introduced data corruption 10 days ago is already outside the window on Pro. The oldest snapshot available is already post-corruption. The corrupt state is all you can restore to.
Short retention windows protect you against fast disasters (drop table, truncate, bad migration at lunchtime). They don't protect against slow disasters (a bug that drifts data over weeks). For the slow kind, you need an archive that lives longer than your retention window.
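An archive that outlives the rolling window can be as simple as a directory of dated dump files plus a pruning step. A sketch, assuming dumps named `supabase-YYYY-MM-DD.dump` and a retention policy measured in days:

```shell
# Prune archived dumps older than a retention policy you choose.
# The directory layout and the 90-day default are assumptions; pick a
# window longer than the slowest failure you want to be able to catch.
prune_old_dumps() {
  local dir="$1" keep_days="${2:-90}"
  # -mtime +N matches files last modified more than N days ago
  find "$dir" -name 'supabase-*.dump' -mtime +"$keep_days" -delete
}

# Example: prune_old_dumps /var/backups/supabase 90
```

Run it right after each nightly dump and the archive self-manages.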
What Free gets
Free projects don't have automated daily backups at all. The row in the table says "No" for a reason. If you're running on Free, you have no native safety net during normal operation.
There is a separate snapshot-at-pause mechanism that kicks in when a Free project pauses after 7 days of inactivity, with a 90-day one-click restore window and manual download available after that. We cover the pause-specific behaviour in What Supabase's native backup doesn't cover. For operational purposes, treat Free as production-unsafe.
Where the backups live
This section is short, and all three bullets matter.
- Same region as your project. If your Supabase project runs in aws-eu-central-1, the snapshots are stored on AWS infrastructure in the same region.
- Same cloud provider as your project. Supabase is AWS-hosted, so "where the backup lives" means AWS. There is no option in the Supabase dashboard to send a copy to Google Cloud, Azure, Cloudflare R2, Backblaze, or any non-AWS object store.
- Under Supabase's AWS account, not yours. You don't have access keys to the bucket. You can't inspect the object, you can't write your own lifecycle policy, and you can't hand the object to a third-party auditor as a separate artifact.
For everyday recovery, this is fine. Snapshots are close to the database, restores happen entirely within Supabase's infrastructure, and the operational story is simple.
For compliance work it matters more. Auditors asking about geographical separation or multi-cloud posture will not accept "our backup lives in the same region and the same cloud as our primary, and we don't hold the credentials" as a complete answer. Closing that gap means an off-site logical backup you own, typically to a different cloud or at least a different AWS account under separate credentials.
Point-in-Time Recovery: the paid add-on that extends the window
PITR is Supabase's answer to the 24-hour granularity problem.

What PITR is
Postgres writes every change to a Write-Ahead Log (WAL) before applying it to the actual data files. PITR works by streaming that WAL to a backup location continuously, so that at restore time Supabase can:
- Start from the most recent daily snapshot before your target recovery time.
- Replay WAL entries forward, one change at a time, until it reaches the second you chose.
- Stop. That's your recovery state.
The result is second-level recovery granularity inside the PITR retention window. Not just "yesterday's snapshot," but "13:47:22 UTC last Thursday."
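You never configure any of this on Supabase, but for intuition, this is roughly what the same mechanism looks like on self-hosted Postgres. The setting names are real Postgres recovery parameters; the paths and timestamp are placeholders.

```shell
# Conceptual only: Supabase runs the equivalent of this internally.
# On self-hosted Postgres 12+, you restore the base backup, set
# recovery targets in postgresql.conf, and create recovery.signal:
#
#   restore_command        = 'cp /wal-archive/%f "%p"'  # fetch archived WAL
#   recovery_target_time   = '2024-01-11 13:47:22+00'   # the second you chose
#   recovery_target_action = 'promote'                  # go live once reached
#
# touch "$PGDATA/recovery.signal" && pg_ctl start
```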
What PITR costs
PITR is a paid add-on on Pro and Team plans. The pricing is usage-based, calculated on WAL volume, which means busier databases cost more. For a typical project with a 7-day PITR window, the effective monthly cost is around $140, though rates change over time; check Supabase's current billing documentation for exact numbers. Enterprise plans usually include PITR by default.
For the full decision on whether PITR is worth it for your project, When you actually need Supabase PITR (coming soon) goes through the RPO math and the scenarios where daily snapshots are genuinely sufficient.
What PITR doesn't do
PITR extends your database recovery granularity. It doesn't:
- Back up Supabase Storage. Storage is architecturally separate from Postgres.
- Back up Edge Functions. Those live on Deno Deploy, not on the Postgres volume.
- Back up project configuration (auth provider credentials, custom email templates, JWT secret).
- Survive project deletion. The WAL archive goes with the project.
PITR makes native backup better at what it already does. It doesn't expand what native backup covers.
How a restore actually works
You trigger a restore from the Supabase dashboard. There is no public API for native restore; the dashboard is the only path.
The flow
- Open the Supabase dashboard.
- Navigate to Database > Backups.
- Find the snapshot you want (or, with PITR, pick the second you want).
- Click Restore.
- Supabase provisions a fresh Postgres instance from the snapshot, optionally replays WAL forward to your chosen second, and either replaces your current project's database or spins up a new project from the snapshot.
The whole process is Supabase's internal machinery. You don't log in to a server. You don't run pg_restore. You don't move files. You click a button and wait.
What you should know before clicking it
- The restore may replace your live database. Depending on the option you pick, the restore targets either your current project (destructive to whatever is currently there) or a new project (safer, but costs a new project).
- Applications break during the restore. Connections drop while Supabase swaps the database volume.
- Roles and permissions come back with the restore. These are inside the physical snapshot.
- Storage and Edge Functions do not. See What Supabase's native backup doesn't cover for why.
Why no API
Supabase has not published a public API for triggering or scripting native restores at time of writing. The implication: if your disaster-recovery plan assumes you can programmatically trigger a restore (say, from a runbook or a CI job), native backup doesn't support that workflow. You need a human in the dashboard.
For teams where automation matters, this is usually the tipping point that pushes them toward logical backups they own and can automate end to end.
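That end-to-end automation is usually mundane: a scheduled dump you control. A sketch, assuming a wrapper script at a path of your choosing:

```shell
# Example crontab entry (edit with `crontab -e`): run a nightly
# logical backup at 02:30 and append output to a log. The script
# path is an assumption -- point it at whatever wraps your pg_dump.
30 2 * * * /usr/local/bin/supabase-backup.sh >> /var/log/supabase-backup.log 2>&1
```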
What you can and can't do with the backup file
This is the section most teams miss.
You cannot
- Download the snapshot. Supabase does not expose the file.
- Mount it on another server. Physical snapshots are tied to the storage architecture they were taken on.
- Run pg_restore against it. The snapshot isn't a logical dump; pg_restore has nothing to read.
- Inspect the snapshot before restoring. You can't open it in a SQL tool to verify specific rows or tables made it in.
- Archive it outside Supabase. No long-term export, no cold-storage tier, no copy-to-your-own-S3.
- Verify integrity yourself. You trust that the snapshot is consistent because Supabase takes it as a volume-level operation and those are atomic.
- Prove the backup's existence to an auditor as an independent artifact. The backup is a Supabase-managed object inside Supabase's AWS account.
You can
- Trigger a restore, from the dashboard, into the same project or a new project.
- Use PITR (paid) to choose the exact second.
- Trust the snapshot is physically consistent.
- Restore quickly, within the operational limits of Supabase's infrastructure.
Everything in the "cannot" list matters the moment you have a requirement native didn't anticipate: a different cloud, an audit, a migration off Supabase, a long-term archive, a self-serve restore runbook, a pre-restore inspection. Each of those requires a logical backup you own, running in parallel to native.
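Almost everything in the "cannot" list becomes a one-liner once you hold the file yourself. Integrity verification, for example, impossible against the native snapshot, is trivial against a dump you own (filenames below are placeholders):

```shell
# Record a checksum at backup time so integrity can be proven later
# -- something the native snapshot never lets you do.
record_checksum() { sha256sum "$1" > "$1.sha256"; }

# Verify before a restore, in a scheduled job, or for an auditor.
verify_dump() { sha256sum --check --quiet "$1.sha256"; }
```

The `.sha256` sidecar file is also exactly the kind of independent artifact an auditor can accept.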
When native is enough (and when it isn't)
Here's the honest assessment.
Native is enough when
- Your project is on a paid plan (Pro or above).
- You can tolerate losing up to 24 hours of writes in a disaster (or only seconds, with PITR).
- You're protected against fast, acute failures: accidental deletes, bad migrations, single-table corruption.
- You don't use Supabase Storage or Edge Functions, or you do and you accept that the daily backup doesn't cover them.
- Your compliance posture doesn't require multi-cloud, multi-region, or off-site-under-separate-credentials backups.
- Your disaster recovery runbook involves a human in the Supabase dashboard, not an automated pipeline.
Plenty of projects sit in that box. Native is genuinely sufficient for them, and adding off-site machinery would be overengineering.
Native is not enough when
- Your compliance framework (SOC 2, ISO 27001, GDPR) requires geographical separation of primary and backup.
- Your recovery plan assumes you can stand up the application on another cloud if Supabase itself is unavailable.
- You need backups older than 30 days, for audit trails, chargebacks, or legal retention.
- You use Supabase Storage or Edge Functions and those objects matter.
- Your team has experienced (or wants to rule out) the "compromised account deletes project, backups go with it" scenario.
- You want to inspect or verify a backup before restoring. How to automate Supabase backup verification walks through the file checks, test restores, and data comparisons that make that possible for logical backups you own.
- You want a scripted, automated restore path.
If any of those apply, native backup is part of your strategy, not the whole of it.
The full list of gaps and what to do about each is in What Supabase's native backup doesn't cover. For the side-by-side on native versus a managed off-site backup, see Supabase native backup vs. SimpleBackups (coming soon).
What to do next
Open your Supabase dashboard. Go to Database > Backups. Note the date of the oldest snapshot and the date of the newest. That's your current recovery window. Compare it to the longest period of silent drift you could reasonably imagine going unnoticed in your data. If the first number is shorter than the second, you have a concrete backup gap, and you now know exactly what shape it has.
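If you want that window as a number, a quick sketch (the dates are placeholders for what you noted in the dashboard; GNU date assumed):

```shell
# Turn the two snapshot dates from the dashboard into a window in days.
oldest="2024-01-08"   # date of the oldest snapshot listed
newest="2024-01-15"   # date of the newest snapshot listed
window_days=$(( ( $(date -d "$newest" +%s) - $(date -d "$oldest" +%s) ) / 86400 ))
echo "recovery window: ${window_days} days"
```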
The reason we built SimpleBackups for Supabase is exactly the hardest parts of the "cannot" list above: off-site logical backups under your own credentials, multi-cloud destinations, automated scheduling and verification, compliance-friendly audit trails. Native handles the easy 80%. If you want the other 20% without writing it yourself, that's what we do.
If you'd rather script it yourself, every gap has its own article in the complete guide with the pg_dump command, the cron setup, and the verification step to match.
Keep learning
This article is part of a cluster we're shipping over the coming weeks. Links marked (coming soon) aren't live yet; follow the hub to see what's published.
- What Supabase's native backup doesn't cover, the paired piece on every gap native has. Read this next if you decided native isn't enough.
- How to back up your Supabase Postgres database (coming soon), the practical pg_dump how-to that gives you a logical backup you own.
- When you actually need Supabase PITR (coming soon), the RPO-vs-cost decision guide for the PITR add-on.
- Supabase native backup vs. SimpleBackups (coming soon), honest side-by-side including when native is genuinely enough.
- How to backup Supabase, the original deep-dive on pg_dump against a Supabase project.
FAQ
What is a physical snapshot, and how is it different from pg_dump?
A physical snapshot is a bit-for-bit copy of the storage volume your database sits on, taken at the cloud-provider level. It restores fast and inside the platform, but it isn't portable and can't be opened with standard Postgres tools. A pg_dump is a logical backup that reads the database through Postgres itself and writes the data as SQL or the custom format. It's portable, inspectable, and restorable anywhere, but it's slower to take and you have to run it yourself.
How often does Supabase back up my database?
Once per day on paid plans, which means your recovery granularity without PITR is 24 hours: you can restore to the moment of a nightly snapshot, not to an arbitrary point in between. Free projects have no automated daily backups during normal operation.
How long does Supabase keep backups?
Pro retains 7 days of daily backups. Team retains 14. Enterprise retains 30. Free has no retention because it has no automated daily backups. Retention is rolling, so the oldest snapshot drops off as each new one arrives.
What is PITR and when do I need it?
Point-in-Time Recovery streams Postgres Write-Ahead Logs continuously and replays them on restore, letting you recover to any second inside the PITR window rather than just the moment of the last daily snapshot. You want it if your Recovery Point Objective is shorter than 24 hours, and you can accept the usage-based add-on cost. If your data doesn't change often or you can tolerate losing a day of writes, daily snapshots are usually enough.
How do I restore a Supabase backup?
From the Supabase dashboard, under Database > Backups. Pick the snapshot (or the second, if you have PITR) and click Restore. Supabase provisions a Postgres instance from the snapshot and either replaces your current project's database or spins up a new project from the backup. There's no public API for triggering a native restore; it's a dashboard-driven operation.
Where are my Supabase backups stored?
In the same AWS region as your Supabase project, under Supabase's AWS account, not yours. You don't get access keys, and you can't replicate the snapshot to a different cloud, a different region, or an object store you own. For compliance-driven scenarios that require geographical separation or multi-cloud posture, you need an additional backup mechanism you run yourself.
This article is part of The complete guide to Supabase backup, an honest, practical reference from the team that backs up Supabase every day.