SimpleBackups

How to restore a DigitalOcean Managed Database

You opened the DigitalOcean dashboard looking for the "restore" button. If your database is PostgreSQL, MySQL, or MongoDB, you have at least three different restore paths in front of you, and they behave very differently: native managed restore, cluster fork, and reimport from an off-site dump. Picking the wrong one adds hours of downtime and a connection string migration you did not plan for.

This article walks through all three paths, tells you which one fits which situation, and covers the part every tutorial skips: what happens to your connection string after the restore completes.

Restoring from the native managed backup

DigitalOcean Managed Databases include daily automated backups retained for seven days. This is included in the base plan price. You do not configure it; it runs automatically. The seven-day window is the constraint you work around.

If your database is still within that window and your DigitalOcean account is accessible, the native restore is the fastest path. It does not require any local tooling. The entire operation runs from the dashboard or API. If the seven-day window has already closed, the guide to what happens when your managed database backup expires covers your remaining options.

From the dashboard:

  1. Navigate to Databases in the left sidebar.
  2. Select the cluster you want to restore.
  3. Click the Backups tab.
  4. Choose the backup timestamp closest to your target state.
  5. Click Restore from backup. DigitalOcean provisions a new cluster from that snapshot. Your original cluster is not modified.

The restored cluster comes up as a separate cluster in your account. Your original cluster stays online throughout the operation. Once the restore completes and you have verified the data, you point your application at the new cluster's connection string and optionally destroy the old one.

Native restore always creates a new cluster; it does not overwrite your existing one. Factor this into your capacity planning: for the duration of the restore and verification period, you are paying for two clusters simultaneously.

One practical point on timing: DigitalOcean does not publish an SLA for restore duration. In our experience with customers restoring clusters, smaller databases (under 10 GB) typically provision in under fifteen minutes. Larger clusters take longer. The dashboard shows restore progress, but it does not surface an ETA. Plan your maintenance window conservatively.
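The same restore can be driven through the API instead of the dashboard. The sketch below builds the request payload; the cluster name, engine, version, region, and size are placeholder values, and `backup_restore.database_name` names the existing cluster whose backup you are restoring from. Review the payload, then send it with the commented curl call (requires a valid API token).

```shell
# Sketch: restore a native backup via the DigitalOcean API.
# All values are placeholders -- substitute your own cluster details.
PAYLOAD='{
  "name": "restored-cluster",
  "engine": "pg",
  "version": "16",
  "region": "nyc3",
  "size": "db-s-2vcpu-4gb",
  "num_nodes": 1,
  "backup_restore": {
    "database_name": "source-cluster-name",
    "backup_created_at": "2026-04-15T00:00:00Z"
  }
}'
echo "$PAYLOAD"   # review the payload before sending
# curl -s -X POST "https://api.digitalocean.com/v2/databases" \
#   -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```

The response includes the new cluster's ID and connection details, which you can poll until the cluster reports online.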

Point-in-Time Recovery (PITR): higher-tier PostgreSQL and MySQL clusters support PITR, which allows recovery to any second within the seven-day window rather than to the nearest daily snapshot. If your cluster has PITR enabled, the restore dialog surfaces a timestamp input rather than a list of daily backups. The process is otherwise identical.

For a detailed breakdown of how DigitalOcean manages these backups under the hood, see how DigitalOcean native backup works.

Forking a database cluster

Forking is a distinct operation from restoring. A fork creates a new cluster from a backup point but keeps the source cluster running and does not destroy the backup. You end up with two live clusters: the original and the fork. The fork is independent after creation; changes to the source do not propagate to the fork, and changes to the fork do not affect the source.

The main use cases for forking:

  • You want to test a migration or schema change against real production data without touching production.
  • You need a staging environment that exactly mirrors a recent production state.
  • You are investigating a data integrity issue and need a non-destructive way to query a historical state.

From the dashboard:

  1. Navigate to Databases, select the cluster, click the Backups tab.
  2. Click Fork cluster on any available backup.
  3. Give the fork a name, confirm the region and plan size, and click Fork cluster.

The fork provisions as a full independent cluster. It has its own connection string, its own billing line, and its own backup schedule once provisioned.

After forking, update application configs before routing traffic to the fork. The fork's connection string is different from the source cluster's. Sending production traffic to the wrong cluster is a common post-fork mistake. Confirm the host, port, and database name in your connection string before flipping any application config.

Forking is not free. The fork runs as a full cluster at the same plan size as the source. If you fork to investigate an issue, remember to destroy the fork when the investigation is complete. Left running indefinitely, a fork doubles your database hosting cost.

For the full picture of what native backup covers and where it stops, what DigitalOcean native backup doesn't cover is worth reading before you rely on the fork path for a disaster recovery plan.

Restoring from a pg_dump file

If your native backup window has expired (older than seven days), or if you made off-site exports as part of a more complete backup strategy, you restore from a dump file rather than from the native backup.

For PostgreSQL, the standard export format is a pg_dump custom-format archive (.dump). Restoring this archive to a DigitalOcean Managed Database uses pg_restore with the managed cluster's connection details.

Prerequisites:

  • The .dump archive available locally or on a machine that can reach the managed cluster.
  • The pg_restore client installed (version should match or be within one major version of the server PostgreSQL version).
  • The connection string for the managed cluster: host, port, username, database name, and SSL certificate.

With those details in hand, the restore command looks like this:

pg_restore \
  --host=db-postgresql-nyc3-12345-do-user-1234567-0.b.db.ondigitalocean.com \
  --port=25060 \
  --username=doadmin \
  --dbname=defaultdb \
  --no-privileges \
  --no-owner \
  --format=custom \
  --verbose \
  dump-2026-04-15.dump

Flag notes:

  • --no-privileges and --no-owner: DigitalOcean Managed Database users do not have superuser rights. Attempting to restore privilege grants or ownership assignments to system roles fails. These two flags skip those statements.
  • --format=custom: required when the source dump was created with pg_dump --format=custom. Omit this flag if you are restoring from a plain SQL dump instead.
  • --verbose: streams progress to stdout. Useful for monitoring large restores.

The pg_restore connection requires SSL. DigitalOcean Managed Database connections enforce SSL by default. If your client throws an SSL handshake error, download the CA certificate from the cluster's Connection Details page in the dashboard and pass it with --sslrootcert=/path/to/ca-certificate.crt.

Before restoring, make sure the target database exists and is empty (or create a new one). Restoring into a database with existing tables can cause constraint conflicts unless you pass --clean to drop objects before recreating them.
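As a sketch, the two steps above look like this in sequence. The hostname and dump file are the placeholder values used earlier in this article, and `app_restore` is a hypothetical name for the fresh target database:

```shell
# Sketch: create an empty target database, then restore into it.
# Hostname, credentials, and filenames are placeholders.
HOST=db-postgresql-nyc3-12345-do-user-1234567-0.b.db.ondigitalocean.com
RESTORE_CMD="pg_restore --host=$HOST --port=25060 --username=doadmin \
--dbname=app_restore --no-privileges --no-owner --format=custom \
--verbose dump-2026-04-15.dump"

# 1. Create the empty target first (the DDL runs against defaultdb):
#    psql "host=$HOST port=25060 user=doadmin dbname=defaultdb sslmode=require" \
#      -c 'CREATE DATABASE app_restore;'
# 2. Review the assembled command, then run it:
echo "$RESTORE_CMD"
```

Restoring into a dedicated database rather than defaultdb keeps the verification step clean: you can diff the restored data against the original without the two sharing a namespace.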

For a complete guide on creating the off-site dumps that feed this restore path, backing up PostgreSQL to DigitalOcean Spaces covers the full export and storage workflow.

Restoring from a mysqldump file

MySQL Managed Database exports use mysqldump, and the restore path uses the mysql client to reimport the SQL file directly.

Prerequisites:

  • The .sql dump file (optionally gzip-compressed as .sql.gz).
  • The mysql client installed, version-compatible with the server MySQL version.
  • The cluster connection details: host, port, username, password, database name.

With those in place, the import command looks like this:

mysql \
  --host=db-mysql-nyc3-99999-do-user-1234567-0.b.db.ondigitalocean.com \
  --port=25060 \
  --user=doadmin \
  --password \
  --ssl-mode=REQUIRED \
  defaultdb < dump-2026-04-15.sql

If your dump is gzip-compressed, decompress on the fly with:

gunzip -c dump-2026-04-15.sql.gz | mysql \
  --host=db-mysql-nyc3-99999-do-user-1234567-0.b.db.ondigitalocean.com \
  --port=25060 \
  --user=doadmin \
  --password \
  --ssl-mode=REQUIRED \
  defaultdb

The --password flag without a value prompts for the password interactively. For scripted restores, pass the password via environment variable or .my.cnf rather than on the command line (the command line exposes it to the process list).
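As a sketch, a minimal ~/.my.cnf for scripted restores looks like this (the host and credentials are the placeholder values from this article; restrict the file to your user with chmod 600 so the password is not world-readable):

```ini
[client]
user=doadmin
password=YOUR_PASSWORD
host=db-mysql-nyc3-99999-do-user-1234567-0.b.db.ondigitalocean.com
port=25060
ssl-mode=REQUIRED
```

With this file in place, the import command shrinks to `mysql defaultdb < dump-2026-04-15.sql`.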

Large MySQL dump restores can take several hours for databases in the hundreds of gigabytes. A dropped connection mid-restore leaves the database in a partial state. For large restores, run the command inside a screen or tmux session so a dropped SSH connection does not kill the process.
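If tmux or screen is unavailable, nohup achieves the same survivability. The sketch below assembles the import command (hostname and filename are the placeholders used above; the password is expected via MYSQL_PWD or ~/.my.cnf rather than a flag) and shows the detached invocation commented out for review:

```shell
# Sketch: wrap a long-running restore so a dropped SSH session
# does not kill it. All names are placeholders.
RESTORE='mysql --host=db-mysql-nyc3-99999-do-user-1234567-0.b.db.ondigitalocean.com --port=25060 --user=doadmin --ssl-mode=REQUIRED defaultdb < dump-2026-04-15.sql'
echo "$RESTORE"                       # review the command first
# nohup sh -c "$RESTORE" > restore.log 2>&1 &
# tail -f restore.log                 # monitor progress after detaching
```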

For a detailed walkthrough of creating and managing MySQL exports on DigitalOcean, back up your Managed DigitalOcean MySQL database covers the full export workflow.

Restoring from a mongodump archive

MongoDB Managed Database exports use mongodump, and the restore path uses mongorestore.

Prerequisites:

  • The archive file produced by mongodump --archive (single-file format), or the directory output of a standard mongodump.
  • mongorestore installed, from the MongoDB Database Tools package, version-compatible with the server MongoDB version.
  • The cluster connection string from the DigitalOcean dashboard. MongoDB Managed Databases use a mongodb+srv:// connection string.

A typical restore command looks like this:

mongorestore \
  --uri="mongodb+srv://doadmin:YOUR_PASSWORD@db-mongodb-nyc3-12345.mongo.ondigitalocean.com/admin?tls=true&authSource=admin" \
  --archive=dump-2026-04-15.archive \
  --gzip \
  --drop \
  --verbose

Flag notes:

  • --archive: specifies the single-file archive path. Omit this and replace with --dir=/path/to/dump/directory if your dump was created without --archive.
  • --gzip: required if the archive was created with mongodump --gzip. Omit if the archive is uncompressed.
  • --drop: drops existing collections before restoring. Use this when restoring to a database that already has data you want to replace. Skip it if you want an additive merge, though merging with an existing dataset is rarely what you want in a recovery scenario.
  • --verbose: logs each collection as it restores.

DigitalOcean MongoDB Managed Databases enforce TLS. The tls=true parameter in the --uri handles this. Do not pass --sslAllowInvalidCertificates in production; get the CA certificate from the cluster's connection details instead.
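As a sketch, the downloaded CA certificate can be supplied through the connection string's tlsCAFile option (supported by recent MongoDB Database Tools releases; older releases take a separate --sslCAFile flag instead). The credentials, cluster hostname, and certificate path below are the placeholders used above:

```shell
# Hypothetical URI: same placeholder cluster as above, with the CA
# certificate path appended as a tlsCAFile query parameter.
URI='mongodb+srv://doadmin:YOUR_PASSWORD@db-mongodb-nyc3-12345.mongo.ondigitalocean.com/admin?tls=true&authSource=admin&tlsCAFile=/path/to/ca-certificate.crt'
echo "$URI"   # review before running
# mongorestore --uri="$URI" --archive=dump-2026-04-15.archive --gzip --drop
```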

Connection string changes after restore

Every restore path produces a different outcome for your connection string. This is the most common source of confusion after a successful restore.

Here is what changes in each scenario:

Restore path | Connection string changes? | What changes
Native restore to new cluster | Yes | Host, port, and cluster ID in the hostname
Fork | Yes | Host, port, and cluster ID in the hostname
pg_dump / mysqldump / mongodump reimport to existing cluster | No | Nothing; same cluster endpoint
pg_dump / mysqldump reimport to new cluster | Yes | Host, port, and cluster ID in the hostname

When the host changes, every service that holds a reference to the old connection string needs to be updated before routing traffic to the new cluster. This includes:

  • Application environment variables (.env, Kubernetes secrets, Vercel/Railway environment configs)
  • Connection pool configurations (PgBouncer, ProxySQL)
  • Read replica references if your application uses split read/write connections
  • Monitoring agents and database observability tools
  • Any CI/CD scripts or migration runners that connect to the database

The DigitalOcean dashboard shows the new cluster's connection string under Connection Details as soon as the cluster is provisioned and accepting connections. Copy it early in the verification step, before you start updating application configs.

When switching connection strings, update one service at a time and watch your error rate before proceeding to the next. A bulk config swap across all services simultaneously makes it much harder to identify which service is misbehaving if something goes wrong.

One subtlety specific to forked clusters: the fork's username and password default to the values set at provisioning time, not the source cluster's credentials. If your source cluster had additional database users beyond doadmin, those users are not automatically replicated to the fork. You need to recreate them manually on the fork before routing traffic.
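As a sketch, recreating an application user on the fork looks like this. The role name, password, and grants are placeholders; mirror whatever actually existed on the source cluster:

```shell
# Sketch: SQL to recreate an application user on a forked cluster.
# Role name, password, and grants are placeholders.
SQL=$(cat <<'EOF'
CREATE ROLE app_user WITH LOGIN PASSWORD 'CHANGE_ME';
GRANT CONNECT ON DATABASE defaultdb TO app_user;
GRANT USAGE ON SCHEMA public TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;
EOF
)
echo "$SQL"   # review, then pipe into psql against the fork:
# echo "$SQL" | psql "host=<fork-host> port=25060 user=doadmin dbname=defaultdb sslmode=require"
```

If you manage database users through Terraform or a migration tool, rerunning that tooling against the fork is less error-prone than hand-typed grants.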

Testing the restored database

A restore that you have not verified is not a restore. It is an assumption. These steps take under ten minutes and confirm the restore is actually usable before you decommission the old cluster or close the incident.

Step 1: Row count spot-check.

Connect to the restored cluster and compare row counts on your highest-traffic tables against the source.

-- Run on both the source and the restored cluster
SELECT
  schemaname,
  tablename,
  n_live_tup AS estimated_rows
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC
LIMIT 20;

Exact row counts will differ if you restored to a point in time. The check is to confirm that tables exist, are populated, and are within a plausible range of the expected count. A table with zero rows when you expect millions is the failure you are checking for.

Step 2: Recent data check.

Query for the most recent row in a few key tables and confirm the timestamp aligns with your expected restore point.

-- Adjust the table and column names to match your schema
SELECT MAX(created_at) FROM orders;
SELECT MAX(created_at) FROM users;

If the most recent row is older than expected, you may have restored to the wrong backup point, or the daily backup captured the database at a time before new writes hit.

Step 3: Application-level smoke test.

Run your application's health check or smoke test suite against the restored cluster before switching production traffic. A passing smoke test confirms that the schema matches what your application expects and that basic read/write paths are functional. This step catches version mismatches and missing tables that raw SQL checks miss.

Step 4: Confirm SSL connectivity.

If your application enforces SSL (and it should), confirm that the SSL handshake succeeds against the new cluster's endpoint before decommissioning the old one. A certificate mismatch after a cluster switch is a silent failure that often surfaces as intermittent connection drops rather than a clean error.
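One way to check the handshake without involving your application is openssl. The sketch below assembles the check against the placeholder hostname used earlier in this article; OpenSSL 1.1.1+ understands -starttls postgres for the PostgreSQL wire protocol:

```shell
# Sketch: build a TLS handshake check against the new cluster's endpoint.
# Hostname and certificate path are placeholders.
HOST=db-postgresql-nyc3-12345-do-user-1234567-0.b.db.ondigitalocean.com
CHECK="openssl s_client -starttls postgres -connect $HOST:25060 -CAfile ca-certificate.crt"
echo "$CHECK"   # review, then run; look for 'Verify return code: 0 (ok)'
```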

The full guide on backup strategy for Managed Databases, including how to set up the off-site exports that make this restore path possible, is covered in backup DigitalOcean Managed Databases.

Restore method comparison

Each restore path has a different cost structure in terms of speed, potential data loss, and operational complexity. Use this table to match the path to your situation.

Restore method | Speed | Max data loss window | Requires off-site export | Complexity
Native restore (daily snapshot) | Fast (UI-driven) | Up to 24 hours (or less with PITR) | No | Low
Fork from backup | Fast (UI-driven) | Up to 24 hours (or less with PITR) | No | Low
pg_dump / mysqldump reimport | Slower (proportional to DB size) | Depends on export frequency | Yes | Medium
mongorestore from archive | Slower (proportional to DB size) | Depends on export frequency | Yes | Medium

The native restore and fork paths are faster and simpler, but they have a hard dependency on the seven-day retention window and on your DigitalOcean account being accessible.

Same-host risk

Native restore only works if the managed backup still exists (seven-day window) and your DigitalOcean account is accessible. An off-site pg_dump gives you a restore path that does not depend on DigitalOcean at all. If your DigitalOcean account is suspended, compromised, or inaccessible during an incident, the off-site dump is the only path back to your data. For what this means in practice for teams with regulatory requirements, see DigitalOcean off-site compliance.

What to do now

If you are reading this before an incident: take an off-site dump today and practice the pg_restore or mysql import against a staging cluster. The first time you run a restore should never be during an actual outage. It takes less than an hour to walk through the full path on a staging database and confirm it works.

If you are reading this during an incident: start with the native restore path if your backup is within the seven-day window. It is the fastest path to a running database. Run the verification steps in the "Testing the restored database" section above before routing production traffic.

If you've read this far, you probably already know whether native retention is enough for your project. If it isn't, SimpleBackups gives you cross-region off-site backup, automated verification, and a restore you can actually test.

FAQ

Can I restore a DigitalOcean Managed Database to a specific point in time?

Yes, but only on higher-tier PostgreSQL and MySQL clusters with Point-in-Time Recovery (PITR) enabled. PITR allows recovery to any second within the seven-day retention window. On plans without PITR, you can only restore to one of the daily snapshot points. MongoDB Managed Databases do not currently support PITR via the native backup interface; off-site dumps are the only path to fine-grained recovery for MongoDB.

Does forking a database create downtime?

No. Forking provisions a new cluster from a backup point without touching the source cluster. The source cluster stays online and fully operational throughout. Your application experiences no downtime during the fork operation. The fork itself is unavailable while it is provisioning, but it has no effect on the source.

What happens to the connection string after a restore?

It depends on the restore path. Restoring to a new cluster (native restore or fork) produces a new hostname and port. Reimporting a dump into an existing cluster does not change the connection string. Any service that holds the old connection string (environment variables, connection pool configs, monitoring agents) needs to be updated before routing traffic to a newly provisioned cluster.

Can I restore to a smaller cluster size?

Yes, DigitalOcean allows restoring to a smaller plan size than the source. The constraint is available disk space: the restored cluster's storage must be large enough to hold the data from the backup. If the source database is 30 GB, restoring to a plan with 25 GB of storage will fail. DigitalOcean's minimum cluster sizes also apply; you cannot restore a multi-node cluster configuration to a single-node plan smaller than the minimum.

How long does a Managed Database restore take?

DigitalOcean does not publish an SLA for restore duration. For native restores, smaller databases (under 10 GB) typically complete in under fifteen minutes. Larger databases take longer, proportional to the backup size. For pg_dump or mysqldump reimports, duration depends on dump size, network throughput between your source and the cluster, and the number of indexes that need to be rebuilt after data load. For large databases, running the restore inside a screen or tmux session protects against connection timeouts.


This article is part of The complete guide to DigitalOcean backup, an honest, practical reference from the team that backs up DigitalOcean every day.