DigitalOcean's Managed Database backups run every day without you touching a thing. Seven days of retention, automatic, included in the price. That sounds complete until you ask two questions: can you download those backups? And what happens to them if something goes wrong with your account?
The answer to both is: no, and they're gone.
Native managed DB backups sit inside your DO account with 7-day retention. A pg_dump to an external bucket is the only way to control your own retention and survive an account-level incident. Link that with your overall off-site strategy and you have a defensible backup posture.
This guide covers what native backups do cover (and where they stop), the exact commands for Postgres, MySQL, and MongoDB, when to add PITR, and how to wire up a cron job that dumps, compresses, and ships to an external bucket automatically.
What native managed DB backups cover
DigitalOcean runs daily automated backups for every Managed Database cluster. Here is what you actually get, broken down by engine:
| Engine | Automated backup | Schedule | Retention | PITR available | Downloadable |
|---|---|---|---|---|---|
| PostgreSQL | Yes | Daily | 7 days | Yes (higher tiers) | No |
| MySQL | Yes | Daily | 7 days | Yes (higher tiers) | No |
| MongoDB | Yes | Daily | 7 days | No | No |
| Redis | Yes | Daily | 7 days | No | No |
| Kafka | Yes | Daily | 7 days | No | No |
A few things to notice in that table.
You cannot download native backups. There is no export button, no API endpoint that hands you a file. If you need a dump you can hold on to, you have to create it yourself.
PITR is engine-limited. Point-in-time recovery is available on Postgres and MySQL higher-tier clusters. MongoDB, Redis, and Kafka do not offer PITR under native DO managed backup. If your MongoDB cluster lost data fifteen minutes ago and you only have native backups, your nearest restore point is yesterday.
Seven days is the hard ceiling. Native retention does not grow past seven days regardless of cluster size or age. For any compliance framework that requires 30, 60, or 90 days of retention, native backup alone will not get you there.
What native backup does well: it's effortless, it handles the basics for most dev and staging environments, and restore through the DO console is reasonably fast for clusters under 20 GB. For production workloads where you need control over retention or an off-platform copy, keep reading.
Why trust this article
We back up DigitalOcean every day. The gaps in native managed DB backup we describe here are the same ones we see teams discover under pressure, usually when they need a restore that's older than seven days or needs to land somewhere DigitalOcean doesn't control.
pg_dump for Postgres clusters
pg_dump is the standard tool for exporting a Postgres database to a portable file. On a DO Managed Postgres cluster, the only wrinkle is the connection string.
Use the direct port (25060 for Postgres), not the connection pooler port. pg_dump through the pooler will fail or produce an incomplete dump. When you copy your connection details from the DO console, make sure you're using the direct connection string, not the pooling connection string.
Find your connection details in the DO console under your database cluster's Connection Details tab. Select the "Connection string" or "Connection parameters" view and confirm you are on the public network or VPC tab, not the pooler tab.
```bash
pg_dump \
  --host=your-cluster-do-user-0000000-0.db.ondigitalocean.com \
  --port=25060 \
  --username=doadmin \
  --format=custom \
  --no-acl \
  --no-owner \
  --file=dump-$(date +%F).dump \
  defaultdb
```
Flag notes:
- `--format=custom`: produces a compressed, parallel-restorable `.dump` file. Prefer this over plain SQL for anything over a few MB.
- `--no-acl`: skips `GRANT`/`REVOKE` statements that reference DO-internal roles. These fail on restore into a different cluster.
- `--no-owner`: skips `ALTER OWNER` statements for the same reason.
- `--file`: writes to a file rather than stdout. Naming with `$(date +%F)` gives you `dump-2026-04-23.dump`.
You'll be prompted for the password unless you set PGPASSWORD in your environment or use a .pgpass file.
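For scripted runs, a password file keeps the credential out of the process environment and shell history. libpq reads the file named by `PGPASSFILE` (default `~/.pgpass`); one line per connection in `hostname:port:database:username:password` format, and permissions must be 0600 or libpq ignores it. A sketch, reusing the host and database from the example above:

```shell
# Write a libpq password file and point PGPASSFILE at it
# (format: hostname:port:database:username:password; perms must be 0600)
export PGPASSFILE="$HOME/.pgpass-do"
cat > "$PGPASSFILE" <<'EOF'
your-cluster-do-user-0000000-0.db.ondigitalocean.com:25060:defaultdb:doadmin:YOUR_PASSWORD
EOF
chmod 600 "$PGPASSFILE"
```

With this in place, pg_dump connects without a prompt and without the password appearing in `ps` output.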
The deeper walkthrough for Postgres to Spaces specifically is at /blog/how-to-backup-postgres-to-digitalocean-spaces, which covers storage destination setup in more detail.
mysqldump for MySQL clusters
DO Managed MySQL clusters use port 25060 for direct connections as well. The connection pooler for MySQL is on a different port; check your console to confirm which one you're copying.
```bash
mysqldump \
  --host=your-cluster-do-user-0000000-0.db.ondigitalocean.com \
  --port=25060 \
  --user=doadmin \
  --password \
  --ssl-mode=REQUIRED \
  --single-transaction \
  --routines \
  --triggers \
  --hex-blob \
  defaultdb | gzip > dump-$(date +%F).sql.gz
```
Flag notes:
- `--ssl-mode=REQUIRED`: DO Managed MySQL requires TLS. Without this flag, the connection will be rejected.
- `--single-transaction`: takes a consistent snapshot of InnoDB tables without locking them. Leave this out and you risk a dump with inconsistent state across tables.
- `--routines` and `--triggers`: include stored procedures and triggers. Omitting them produces a dump that restores to a functionally different database.
- `--hex-blob`: avoids encoding issues with binary column data.
- The pipe to `gzip` compresses inline. For a 5 GB database this typically produces a 1–2 GB file.
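Before shipping the file anywhere, it's worth a cheap integrity check. A successful mysqldump ends with a `-- Dump completed` trailer line, so a missing trailer reliably flags a truncated dump. A sketch (the `check_dump` helper is ours, not a mysqldump feature; the sample file at the end just demonstrates the check end to end):

```shell
# Sanity-check a gzipped mysqldump: intact gzip stream + the
# "-- Dump completed" trailer that mysqldump writes on success.
check_dump() {
  local dump="$1"
  gzip -t "$dump" || { echo "ERROR: corrupt gzip stream in $dump" >&2; return 1; }
  zcat "$dump" | tail -n 1 | grep -q "Dump completed" \
    || { echo "ERROR: $dump may be truncated (no trailer)" >&2; return 1; }
  echo "$dump looks complete"
}

# Tiny well-formed sample so the check can be demonstrated
printf -- "CREATE TABLE t (id INT);\n-- Dump completed on 2026-04-23  3:00:00\n" \
  | gzip > sample.sql.gz
check_dump sample.sql.gz   # prints: sample.sql.gz looks complete
```

In a scheduled backup script, call the check right after the dump and bail out before uploading if it fails.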
For a more detailed guide on the MySQL side, /blog/back-up-your-managed-digitalocean-mysql-database walks through the same workflow with storage destination setup included.
mongodump for MongoDB clusters
MongoDB's mongodump connects to your DO Managed MongoDB cluster using the connection URI from the console. The URI includes TLS parameters; copy it exactly rather than reassembling the host and port by hand.
```bash
mongodump \
  --uri="mongodb+srv://doadmin:YOUR_PASSWORD@your-cluster.mongo.ondigitalocean.com/admin?tls=true&authSource=admin" \
  --gzip \
  --archive=dump-$(date +%F).archive
```
Flag notes:
- `--uri`: accepts the full connection string from the DO console. Using URI mode avoids having to specify TLS flags separately.
- `--gzip`: compresses each collection file inside the archive. For text-heavy collections, compression ratios are often 5:1 or better.
- `--archive`: writes a single file instead of a directory tree. Easier to manage and move.
If you're running mongodump from outside DigitalOcean's network, make sure you've added your IP to the cluster's trusted sources list in the DO console. The default trusted sources list is empty, which means connections from unknown IPs are silently rejected.
One thing to note about MongoDB specifically: DO does not offer PITR for Managed MongoDB clusters. Daily automated backup with 7-day retention is what you have natively. If you need finer-grained recovery, the mongodump approach on a scheduled basis is your primary option.
You can also look at /blog/how-to-backup-mongodb-to-digitalocean for an overview of the MongoDB-to-Spaces workflow.
When to add PITR
Point-in-time recovery lets you restore to any second within the retention window, not just a daily snapshot. For databases where you need to recover from an accidental DELETE or a bad migration that ran at 14:37 on a Tuesday, daily snapshots are too coarse.
DO's PITR is available on higher-tier Postgres and MySQL clusters. To check if your cluster supports it, go to the cluster's Backups tab in the console. If you see a PITR enable toggle, the cluster tier supports it.
When PITR is worth enabling:
- Your application has continuous writes and a daily restore point would lose meaningful data.
- You've had (or can imagine having) an accidental table truncation or bad migration.
- Your compliance posture requires sub-hour RPO.
When you can skip it:
- The database is a read replica, analytics store, or otherwise reconstructable from another source.
- The application is stateless in practice (logs, metrics, derived data).
- You're already running external dumps frequently enough that PITR wouldn't improve your recovery granularity.
The technical notes on how PITR is handled under the hood, and where it still leaves you exposed, are in /blog/digitalocean/how-digitalocean-native-backup-works.
One honest limitation: if your Managed DB retention window expires and you have no external copy, there is nothing to recover. That scenario comes up more often than you'd expect. The full case study is at /blog/digitalocean/digitalocean-managed-db-retention-expired.
Getting your database backup off-platform
Native backups and PITR both live inside your DigitalOcean account. If your account is suspended, compromised, or hit by a billing dispute, the backups go down with the cluster. The same is true for a region-level outage.
Getting a copy off-platform means dumping to a file and shipping it to an external object store: AWS S3, Backblaze B2, Cloudflare R2, or any S3-compatible target. If you're planning a cloud migration or provider switch, a portable off-platform dump is also the foundation of a solid handover, and the pre-migration backup guide walks through what to lock down before you cut over. Here is a production-ready script that handles the full cycle: dump, compress, upload, and clean up local files.
```bash
#!/usr/bin/env bash
set -euo pipefail

# --- Configuration ---
DB_HOST="your-cluster.db.ondigitalocean.com"
DB_PORT="25060"
DB_USER="doadmin"
DB_NAME="defaultdb"
PGPASSWORD="your_password"
export PGPASSWORD

S3_BUCKET="s3://your-backup-bucket/postgres"
BACKUP_DIR="/tmp/db-backups"
RETENTION_DAYS=30

TIMESTAMP=$(date +%Y-%m-%dT%H-%M-%S)
DUMP_FILE="$BACKUP_DIR/${DB_NAME}-${TIMESTAMP}.dump"

# --- Ensure backup dir exists ---
mkdir -p "$BACKUP_DIR"

# --- Dump ---
echo "[$(date -u +%FT%TZ)] Starting pg_dump..."
pg_dump \
  --host="$DB_HOST" \
  --port="$DB_PORT" \
  --username="$DB_USER" \
  --format=custom \
  --no-acl \
  --no-owner \
  --file="$DUMP_FILE" \
  "$DB_NAME"
echo "[$(date -u +%FT%TZ)] Dump complete: $DUMP_FILE ($(du -sh "$DUMP_FILE" | cut -f1))"

# --- Compress ---
gzip "$DUMP_FILE"
DUMP_FILE="${DUMP_FILE}.gz"

# --- Upload to external bucket ---
echo "[$(date -u +%FT%TZ)] Uploading to $S3_BUCKET..."
aws s3 cp "$DUMP_FILE" "$S3_BUCKET/" \
  --storage-class STANDARD_IA \
  --only-show-errors
echo "[$(date -u +%FT%TZ)] Upload complete."

# --- Cleanup local file ---
rm -f "$DUMP_FILE"

# --- Prune old backups in bucket beyond retention ---
aws s3 ls "$S3_BUCKET/" \
  | awk '{print $4}' \
  | sort \
  | head -n -"$RETENTION_DAYS" \
  | xargs -I {} aws s3 rm "$S3_BUCKET/{}" --only-show-errors 2>/dev/null || true

echo "[$(date -u +%FT%TZ)] Done. Retention enforced at ${RETENTION_DAYS} most recent backups."
```
Run this daily with a cron job on a server outside your DO account. The set -euo pipefail at the top ensures the script fails loudly if any command fails, rather than uploading an empty or partial dump silently.
Add this to crontab with crontab -e:
```bash
0 3 * * * /opt/bin/pg-backup.sh >> /var/log/pg-backup.log 2>&1
```
That runs at 03:00 UTC each day and appends output to a log file you can inspect if something goes wrong.
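One refinement worth considering: if a dump ever runs longer than expected, two runs can overlap and race on the same files. Wrapping the job in flock is a cheap guard; a sketch, assuming util-linux's `flock` is available on the host:

```bash
0 3 * * * flock -n /tmp/pg-backup.lock /opt/bin/pg-backup.sh >> /var/log/pg-backup.log 2>&1
```

With `-n`, a second invocation exits immediately instead of queuing behind the running one.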
Same-host risk
Every native DigitalOcean backup (managed DB backups, snapshots, PITR) lives inside the same account as the resource it protects. A billing dispute, account suspension, or compromised credential wipes both at the same time. A dump in an external bucket owned by a different provider is the only way to break that dependency. See the full breakdown in the off-site compliance guide.
Verifying the dump restores
A backup you haven't tested is not a backup. It's a compressed file you hope contains what you think it does.
The minimum bar for a pg_dump backup is a restore into a test database:
```bash
# Spin up a local Postgres container for the test restore
docker run -d \
  --name pg-restore-test \
  -e POSTGRES_PASSWORD=testpass \
  -p 5433:5432 \
  postgres:16

# Wait for it to be ready
sleep 3

# Decompress first: pg_restore cannot read a gzipped file directly
gunzip dump-2026-04-23.dump.gz

# Restore from the dump
PGPASSWORD=testpass pg_restore \
  --host=localhost \
  --port=5433 \
  --username=postgres \
  --dbname=postgres \
  --clean \
  --if-exists \
  --no-acl \
  --no-owner \
  dump-2026-04-23.dump

# Spot-check row counts
PGPASSWORD=testpass psql \
  --host=localhost \
  --port=5433 \
  --username=postgres \
  --dbname=postgres \
  -c "SELECT schemaname, relname, n_live_tup FROM pg_stat_user_tables ORDER BY n_live_tup DESC LIMIT 10;"

# Tear down
docker stop pg-restore-test && docker rm pg-restore-test
```
This is a spot-check, not a full validation. A full validation confirms that your application can actually run against the restored data. But the spot-check catches the two most common silent failures: a truncated dump (the restore exits early) and a version mismatch (the restore rejects the file format).
The detailed restore procedure for DO Managed Postgres and MySQL is in /blog/digitalocean/restore-digitalocean-managed-database, including how to restore from a native DO backup versus from an external dump file.
Do this test once when you first set up the pipeline. Then run it at least once a month. For a structured approach to scheduling and monitoring these restore checks, see automating DigitalOcean backup verification.
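If you wrap the restore test above in a script (the path below is a hypothetical example), scheduling the monthly check is one more crontab line; here it runs at 04:00 UTC on the first of each month:

```bash
0 4 1 * * /opt/bin/pg-restore-test.sh >> /var/log/pg-restore-test.log 2>&1
```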
What to do tonight
If your Managed Database cluster is in production and you only have native backups: add the dump script above to a cron job on any server outside your DO account, point it at a bucket you control, and let it run tonight. That's the whole gap closed.
If you also need compliance-grade retention (30 days, 60 days, 90 days), set RETENTION_DAYS in the script and add a lifecycle rule to your external bucket to move old dumps to cold storage after 30 days. Most providers offer this in two clicks.
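On AWS specifically, that lifecycle rule is a small JSON document applied with `aws s3api put-bucket-lifecycle-configuration`. A sketch, with placeholder bucket name and prefix (B2 and R2 have their own equivalents):

```shell
# Lifecycle rule: move objects under postgres/ to Glacier 30 days after upload
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "cold-after-30d",
      "Status": "Enabled",
      "Filter": { "Prefix": "postgres/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
EOF

# Apply it (requires credentials for the bucket owner):
# aws s3api put-bucket-lifecycle-configuration \
#   --bucket your-backup-bucket \
#   --lifecycle-configuration file://lifecycle.json
echo "lifecycle.json written"
```

Because the rule lives on the bucket, it keeps working even if the backup host itself goes away.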
If scripting and scheduling all this yourself sounds like a second job, SimpleBackups handles DigitalOcean Managed Database backups off-site, with alerts when a run fails. See how it works →
Keep learning
- How to back up a DigitalOcean Managed MySQL database
- How to back up Postgres to DigitalOcean Spaces
- What DigitalOcean native backup doesn't cover
- DigitalOcean off-site compliance
FAQ
Does DigitalOcean back up Managed Databases automatically?
Yes. Every Managed Database cluster on DigitalOcean receives automated daily backups with 7-day retention, included in the cluster price. This covers Postgres, MySQL, MongoDB, Redis, and Kafka. The backups are managed entirely by DigitalOcean; you do not configure a schedule or storage location.
How long does DigitalOcean keep database backups?
Seven days. DigitalOcean retains native Managed Database backups for exactly 7 days regardless of cluster type or size. Once a backup falls outside that window it is permanently deleted. If your project requires longer retention, you need to export dumps to an external storage location you control.
Can I download a native Managed Database backup?
No. DigitalOcean does not provide a way to download the automated daily backups. You can restore from them to a new cluster via the DO console, but you cannot export the backup file itself. To get a portable backup file, you must run pg_dump, mysqldump, or mongodump against the live cluster and save the output yourself.
Does PITR work with all database engines?
No. Point-in-time recovery is available on higher-tier PostgreSQL and MySQL clusters. MongoDB, Redis, and Kafka clusters on DigitalOcean do not support PITR. For MongoDB, your finest-grained native recovery option is the daily automated backup, which means up to 24 hours of potential data loss.
How do I restore from a pg_dump on DigitalOcean?
Use pg_restore to load the dump into a Postgres database. If you're restoring to a DO Managed Postgres cluster, connect using the direct port (25060) and run pg_restore --format=custom --no-acl --no-owner --dbname=your_connection_string dump.dump. For a step-by-step walkthrough including version mismatch handling, see the restore guide.
This article is part of The complete guide to DigitalOcean backup, an honest, practical reference from the team that backs up DigitalOcean every day.