SimpleBackups

How to back up DigitalOcean before a migration


Most teams that lose data during a migration didn't forget to back up. They backed up inside the same account they were migrating away from, and then something went wrong with that account.

That's the scenario this article exists to prevent. Before you touch your first resource, you need a verified backup that lives somewhere you can still reach if your DigitalOcean account becomes unreachable, locked, or suspended mid-migration.

This guide gives you a concrete checklist for backing up Droplets, Managed Databases, Volumes, Spaces, and DOKS before a migration. It covers the backup method for each product, how to verify it worked, how to confirm the restore actually runs, and what to do if the migration fails partway through.

Why migrations are the highest-risk moment

Migrations sit at the intersection of two dangerous conditions: you're making irreversible changes to production, and you're operating across two platforms at once.

Both conditions create exposure. Irreversible changes mean a mistake can't be undone by pressing Ctrl+Z. Operating across two platforms means you might not notice a problem in the source account until after you've already destroyed something in the destination.

The common fallback plan looks reasonable on paper: "We have DigitalOcean backups and snapshots, so we can roll back if anything goes wrong." The gap is that DigitalOcean's native backups are stored inside the same account as the resource they protect. If your DigitalOcean account is the thing that becomes inaccessible, those snapshots go with it.

Same-host risk during migration

During a migration, you're operating on two platforms at once. If your only backup is a snapshot inside the DO account you're migrating away from, you have no fallback if that account becomes inaccessible. This is the same-host risk. It's not hypothetical: billing disputes, fraud flags, and accidental lockouts have stranded teams at the worst possible moment. See off-site compliance for DigitalOcean for a full treatment.

The other risk is incomplete backup coverage. Teams often remember to snapshot the Droplet and forget about the attached Block Storage volume. Or they have a Managed Database with 7 days of native retention but no portable pg_dump they can actually move to the new environment. Each product in your stack needs a separate backup decision.
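A quick inventory makes those per-product decisions harder to skip. A sketch using doctl (format field names may vary slightly between doctl versions):

```shell
# List every resource class that needs its own backup decision
doctl compute droplet list --format ID,Name,Region
doctl compute volume list --format ID,Name,DropletIDs
doctl databases list --format ID,Name,Engine
```

If a resource shows up in this inventory but not in your backup checklist, that's a gap to close before the migration starts.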

The pre-migration backup checklist

Use this table before you begin any migration. Every row should be checked, verified, and confirmed as "off-site" before you issue a single destructive command.

| Product | Backup method | Verified? | Off-site copy? | Restore tested? |
|---|---|---|---|---|
| Droplets | Snapshot via dashboard or doctl | | | |
| Block storage volumes | Volume snapshot (separate from Droplet) | | | |
| Managed Databases | pg_dump / mysqldump + native backup | | | |
| Spaces | Mirror to separate bucket or provider | | | |
| DOKS | Namespace export + PV snapshot | | | |

"Verified" means you confirmed the backup completed without errors, not just that you started the job.

"Off-site copy" means the backup exists outside the DigitalOcean account you're migrating away from.

"Restore tested" means you spun up a throwaway resource and confirmed the data is actually there and intact.

None of these columns are optional. A backup you haven't verified is a guess. A backup you haven't tested restoring is a superstition.
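The DOKS row is the only one without a dedicated section below. A hedged sketch of the namespace-export half (the namespace name is a placeholder; persistent volumes still need the volume snapshot commands covered in the Droplets section):

```shell
# Export one namespace's manifests to a portable YAML file
kubectl get all,configmap,secret,pvc,ingress -n your-namespace -o yaml \
  > "doks-your-namespace-$(date +%Y%m%d).yaml"
```

The exported YAML covers configuration, not data; the persistent volumes behind your PVCs need their own snapshots.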

Droplets: snapshot plus off-site copy

A Droplet snapshot gives you a point-in-time image of the entire disk. It's the fastest way to get back to exactly where you were if a migration goes wrong. But as noted above, a snapshot inside your DigitalOcean account isn't truly independent of that account.

The backup plan for Droplets before a migration has two parts: take the snapshot, then copy the data off-site independently.

Take the snapshot

You can snapshot a Droplet from the DigitalOcean dashboard, or via doctl:

# Power off the Droplet first for a consistent snapshot (optional but recommended)
doctl compute droplet-action power-off <droplet-id> --wait

# Take the snapshot
doctl compute droplet-action snapshot <droplet-id> \
  --snapshot-name "pre-migration-$(date +%Y%m%d)" \
  --wait

# Confirm the snapshot was created
doctl compute snapshot list --resource-type droplet

The --wait flag blocks until the action completes. Without it, you get a job ID but no confirmation that it finished. For a pre-migration snapshot, always wait.

Block storage volumes are not included in Droplet snapshots. Snapshot them separately:

doctl compute volume-action snapshot <volume-id> \
  --snapshot-name "vol-pre-migration-$(date +%Y%m%d)" \
  --wait

Verify and script it

For a structured pre-migration snapshot process, the following script takes a snapshot of every Droplet and volume in the account, verifies each one, and logs the result:

#!/usr/bin/env bash
set -euo pipefail

SNAP_DATE=$(date +%Y%m%d)
LOG="pre-migration-snapshots-${SNAP_DATE}.log"

echo "Starting pre-migration snapshots: $(date)" | tee "$LOG"

# Snapshot every Droplet in the account
for DROPLET_ID in $(doctl compute droplet list --format ID --no-header); do
  echo "Snapshotting Droplet $DROPLET_ID..." | tee -a "$LOG"
  doctl compute droplet-action snapshot "$DROPLET_ID" \
    --snapshot-name "pre-migration-droplet-${DROPLET_ID}-${SNAP_DATE}" \
    --wait 2>&1 | tee -a "$LOG"
done

# Snapshot all volumes
for VOLUME_ID in $(doctl compute volume list --format ID --no-header); do
  echo "Snapshotting volume $VOLUME_ID..." | tee -a "$LOG"
  doctl compute volume-action snapshot "$VOLUME_ID" \
    --snapshot-name "pre-migration-vol-${VOLUME_ID}-${SNAP_DATE}" \
    --wait 2>&1 | tee -a "$LOG"
done

# Verify snapshots exist
echo "Verifying snapshots..." | tee -a "$LOG"
doctl compute snapshot list --resource-type droplet --format "ID,Name,CreatedAt,Size" | tee -a "$LOG"
doctl compute snapshot list --resource-type volume --format "ID,Name,CreatedAt,Size" | tee -a "$LOG"

echo "Snapshot run complete: $(date)" | tee -a "$LOG"

Review the log before proceeding. If any Droplet or volume snapshot is missing, resolve it before continuing.

For a deeper walkthrough of automating Droplet and volume snapshots, see how to automate DigitalOcean server and volume snapshots.

The off-site step

The script above creates snapshots inside your DigitalOcean account. That's necessary but not sufficient for a migration backup. You also need the data somewhere independent.

For Droplets, the practical off-site approach is an application-level backup: if your Droplet runs a database, pg_dump or mysqldump to an external object store. If it runs files, sync the files. If it runs both, do both.
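A hedged sketch of that application-level approach, assuming a MySQL app and an S3 bucket you control (bucket and paths are placeholders):

```shell
# Hypothetical off-site backup run from inside the Droplet
STAMP=$(date +%Y%m%d)
mysqldump --single-transaction --all-databases > "app-${STAMP}.sql"

# Push the dump and the app's files to external storage
aws s3 cp "app-${STAMP}.sql" s3://your-off-site-bucket/droplet-backups/
aws s3 sync /var/www/uploads s3://your-off-site-bucket/droplet-files/
```

Run this with credentials that belong to the external account, not to anything tied to the DigitalOcean account you're leaving.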

If scripting and scheduling all this yourself sounds like a second job, SimpleBackups handles DigitalOcean Droplet, volume, and database backups off-site, with alerts when a run fails.

For a complete guide to Droplet backup options, see how to back up DigitalOcean Droplets.

Databases: dump plus native backup

Managed Database backups on DigitalOcean are automatic, daily, and kept for 7 days. Point-in-time recovery (PITR) is available on higher-tier Postgres and MySQL clusters. That coverage is enough for day-to-day operations, but it has two problems in a migration context.

First, you can't download native Managed Database backups. They're stored inside DigitalOcean's infrastructure, and the only way to use them is to restore to a new DigitalOcean cluster. If you're migrating to a different platform, native backups give you no path forward.

Second, the same-host risk applies here too. If your DigitalOcean account becomes inaccessible during the migration, you can't access the native backups for your databases.

The solution is a portable dump taken before the migration starts.

Take a pg_dump or mysqldump

For a Postgres Managed Database:

# Get your connection string from the DO dashboard or via doctl
doctl databases connection <database-id> --format URI --no-header

# Dump the database to a local file
# DO Managed Databases require SSL; export PGSSLMODE=require
# if your environment doesn't already set it
pg_dump \
  --host=<your-db-host> \
  --port=25060 \
  --username=doadmin \
  --format=custom \
  --file="pre-migration-$(date +%Y%m%d).dump" \
  defaultdb

For MySQL:

mysqldump \
  --host=<your-db-host> \
  --port=25060 \
  --user=doadmin \
  --password \
  --all-databases \
  --single-transaction \
  --result-file="pre-migration-$(date +%Y%m%d).sql"

After dumping, upload the file to an object store outside your DigitalOcean account. AWS S3, Backblaze B2, Cloudflare R2, or any S3-compatible target works.

# Example: upload to AWS S3
aws s3 cp pre-migration-20260423.dump s3://your-off-site-bucket/db-backups/

For the complete database backup walkthrough, see how to back up DigitalOcean Managed Databases.

Combine the dump with native backup

The dump and the native backup serve different purposes. The dump gives you a portable file you can restore to any Postgres or MySQL instance anywhere. The native backup gives you fast, in-account rollback if the migration fails early and you stay on DigitalOcean.

Take both. The dump is the migration-safe option. The native backup is the fast rollback if things go sideways early.

Spaces: full mirror before you touch anything

Spaces is the product with the weakest native backup story in the DigitalOcean lineup. There is no built-in backup feature. Versioning is available but off by default. There is no replication to another region or another provider.

If you delete or overwrite an object in Spaces and versioning was not enabled, it's gone. Period.

For a migration, this makes Spaces the highest-risk product to touch. The correct approach: mirror the entire bucket to an external location before you do anything else.

Mirror to an off-site bucket

The rclone tool handles Spaces-to-external-bucket copies well. Configure it with your Spaces credentials and your destination credentials, then:

# Sync Spaces bucket to external destination (e.g., AWS S3)
rclone sync \
  spaces:your-do-bucket \
  s3:your-external-bucket/pre-migration-mirror \
  --progress \
  --transfers 10

# Verify object counts match
DO_COUNT=$(rclone ls spaces:your-do-bucket | wc -l)
EXT_COUNT=$(rclone ls s3:your-external-bucket/pre-migration-mirror | wc -l)
echo "DO objects: $DO_COUNT | External objects: $EXT_COUNT"

If the counts don't match, do not proceed with the migration until you've resolved the discrepancy.
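Matching counts catch missing objects but not corrupted ones. rclone's check command goes further, comparing sizes and checksums where both backends support them:

```shell
# Per-object verification: sizes and hashes, not just counts
rclone check spaces:your-do-bucket s3:your-external-bucket/pre-migration-mirror --one-way
```

The --one-way flag only verifies that every source object exists intact in the destination, which is what matters for a mirror.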

For a full walkthrough of Spaces backup options, see how to back up DigitalOcean Spaces. For the native backup coverage gap in detail, see how DigitalOcean native backup works.

Testing the restore before you start

This is the step most teams skip, and it's the one that determines whether your backup is real.

A backup you haven't tested is not a backup. It's a file with unknown contents sitting somewhere you hope is recoverable.

Test the restore on a throwaway resource before you start the migration. If the restore fails after you've already torn down production, you have nothing to fall back to.

How to test a Droplet snapshot restore

# Create a new Droplet from your pre-migration snapshot
SNAPSHOT_ID=$(doctl compute snapshot list \
  --resource-type droplet \
  --format "ID,Name" \
  --no-header | grep "pre-migration" | awk '{print $1}' | head -1)

doctl compute droplet create test-restore \
  --image "$SNAPSHOT_ID" \
  --size s-1vcpu-1gb \
  --region nyc3 \
  --wait

# SSH in and verify your application is there and healthy
doctl compute ssh test-restore

Run your application's smoke tests inside the restored Droplet. Confirm your files, services, and configuration are intact. Then destroy the test Droplet:

doctl compute droplet delete test-restore

For a detailed guide to Droplet restores, see how to restore a DigitalOcean Droplet.

How to test a database restore

For the pg_dump you created earlier:

# Create a test database and restore the dump into it
createdb -h <test-host> -U <test-user> test_restore_db
pg_restore \
  --host=<test-host> \
  --username=<test-user> \
  --dbname=test_restore_db \
  --verbose \
  pre-migration-20260423.dump

# Check row counts on critical tables
psql -h <test-host> -U <test-user> -d test_restore_db \
  -c "SELECT schemaname, tablename, n_live_tup FROM pg_stat_user_tables ORDER BY n_live_tup DESC LIMIT 20;"

If the row counts look right and your queries return expected results, the backup is good.

What to verify across all products

Before you consider the pre-migration backup complete, confirm:

  • Every Droplet snapshot completed without errors.
  • Every volume snapshot completed without errors.
  • The database dump file is not empty and not truncated (check its size against what you'd expect, and test the restore as above).
  • The Spaces mirror object count matches the source.
  • At least one restore test ran successfully.

Only then do you have a real backup.

What to do if the migration fails

Even with a solid backup, migrations fail. The goal is to fail in a recoverable way.

If you catch the failure early (source account still intact)

If something goes wrong before you've destroyed or modified anything in the source account, the rollback is straightforward:

  1. Stop the migration work.
  2. Restore from the snapshot you took pre-migration.
  3. Verify the restored environment is working.
  4. Diagnose what went wrong before attempting the migration again.

The DigitalOcean dashboard lets you restore a Droplet from a snapshot in a few clicks. The doctl path:

doctl compute droplet create restored-production \
  --image <snapshot-id> \
  --size <original-size> \
  --region <original-region> \
  --wait

If the migration fails mid-way (source account modified)

This is the harder scenario. If you've already made changes in the source account (deleted resources, modified configurations, started decommissioning) and something fails in the destination, you need the off-site backup.

This is exactly why the off-site step is not optional. If your only fallback is a snapshot inside the DigitalOcean account you were decommissioning, and decommissioning broke something in that account, you have no fallback.

With an off-site database dump, you can restore to any environment, including your new destination or a new DigitalOcean account entirely.

With a Spaces mirror on external storage, your objects are safe regardless of what happens to the source bucket.
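For the database half, restoring the portable dump into a new environment might look like this (hosts are placeholders; --no-owner sidesteps role mismatches on managed targets, and --clean --if-exists makes the restore repeatable):

```shell
# Restore the off-site dump into the destination cluster
pg_restore \
  --host=<destination-host> \
  --username=<destination-user> \
  --dbname=<destination-db> \
  --no-owner \
  --clean --if-exists \
  pre-migration-20260423.dump
```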

If you find data integrity problems after completing the migration

Corrupted or incomplete data in the destination is harder to catch. This is where the "restore tested" column in the checklist matters. If you validated the dump before migration and the destination data doesn't match, the problem happened during migration, not during backup.

Compare critical table row counts between source (from your pre-migration log or restore test) and destination. If they differ, restore the affected tables from your dump.
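One way to sketch that comparison, assuming you saved the source counts to a file during the restore test ($DEST_URI is a placeholder connection string; n_live_tup is an estimate, so use count(*) where you need exact numbers):

```shell
# Capture destination counts in the same shape as the source file,
# then diff; any output line is a mismatch to investigate
psql "$DEST_URI" -At -c \
  "SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY relname;" \
  > dest-counts.txt
diff source-counts.txt dest-counts.txt && echo "Row counts match"
```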

Document what went wrong and when. A timestamped log of migration steps makes post-incident analysis faster, especially if you need to file a support ticket.


If you want the short version: snapshot every Droplet and volume, dump every database, mirror every Spaces bucket, verify each backup completed, test at least one restore, then start the migration. That's the whole checklist. Every section above exists for the parts where the short version isn't enough.

If scripting and scheduling all this yourself sounds like a second job, SimpleBackups handles DigitalOcean Droplet, database, and Spaces backups off-site, with alerts when a run fails. See how it works →


FAQ

Should I snapshot or pg_dump before migrating?

Both. They serve different purposes. A snapshot gives you a fast in-account rollback if the migration fails early and you stay on DigitalOcean. A pg_dump gives you a portable file you can restore to any Postgres instance anywhere, including your migration destination. If you're migrating away from DigitalOcean entirely, the pg_dump is the only option that will actually work.

How long do DigitalOcean snapshots take?

Snapshot time scales with disk usage, not total disk size. A Droplet with 10 GB used on a 100 GB disk will snapshot faster than a Droplet with 90 GB used on the same disk. For most standard Droplets, expect anywhere from a few minutes to 30 minutes. Use the --wait flag with doctl so your script doesn't proceed until the snapshot is confirmed complete.

Can I roll back a migration using a snapshot?

Yes, if you still have access to the DigitalOcean account where the snapshot lives. Create a new Droplet from the snapshot, verify it, then point your DNS or load balancer at it. The caveat: if you've already deleted the original Droplet and something happens to your DigitalOcean account during the migration, you need the off-site backup, not the snapshot.

What if my migration fails mid-way?

Stop the migration work immediately. If the source account is still intact, restore from your pre-migration snapshots. If the source account is partially decommissioned, restore from your off-site backup (database dump, Spaces mirror). The goal is to get to a known-good state, diagnose what went wrong, then attempt the migration again with a clearer picture of the failure mode.

Do I need to back up Spaces before migrating?

Yes, and it's especially important for Spaces because there is no native backup. Versioning is off by default, and there is no built-in replication or snapshot feature. If you accidentally delete or overwrite an object in Spaces during a migration, it cannot be recovered unless you have a mirror. Use rclone or an equivalent tool to copy the entire bucket to external storage before touching anything.


This article is part of The complete guide to DigitalOcean backup, an honest, practical reference from the team that backs up DigitalOcean every day.