SimpleBackups

How to restore a DigitalOcean volume snapshot


You took the snapshot. Something went wrong — a bad deployment, an accidental rm -rf, a migration that corrupted your data directory. Now you need the data back, and you are staring at a list of volume snapshots in the DigitalOcean control panel wondering exactly what to do next.

Restoring a volume snapshot is not as straightforward as restoring a Droplet backup. There is no "restore" button that swaps the old volume for the new one. Instead, you create a new volume from the snapshot, attach it, mount it, verify it, and then — if this is a production swap — detach the old volume and point your application at the restored one.

This guide walks you through the full procedure: creating a volume from a snapshot, attaching it to a Droplet, mounting and verifying the filesystem, swapping the old volume out, and the failure modes that bite people mid-restore.

Prerequisites

Before you start, make sure you have:

  • doctl installed and authenticated with a valid API token. The doctl CLI reference covers installation.
  • The ID of the volume snapshot you want to restore. Find it in the Snapshots section of the control panel, or via doctl compute snapshot list --resource-type volume.
  • SSH access to the target Droplet.
  • The region where the snapshot lives. Volume snapshots can only be restored in the same region as the source volume.

If you do not have a snapshot to restore from, see how to back up DigitalOcean volumes for the complete snapshot setup guide.
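Before touching anything, it is worth a pre-flight check that doctl can actually reach the API. This is a sketch; check_doctl_auth is a hypothetical helper wrapped around doctl account get, not a doctl command:

```shell
# Hypothetical pre-flight helper: confirm the API token works before
# relying on it mid-restore
check_doctl_auth() {
  if doctl account get --format Email,Status >/dev/null 2>&1; then
    echo "doctl authenticated"
  else
    echo "doctl is not authenticated; run: doctl auth init" >&2
    return 1
  fi
}
```

A token failure discovered halfway through a restore is far more painful than one caught here.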

Creating a new volume from a snapshot

DigitalOcean does not restore a snapshot over an existing volume. The workflow is: create a brand-new volume from the snapshot, then work with that new volume. The original volume stays intact until you explicitly delete it.

This is actually safer than an in-place restore. You can verify the recovered data before you commit to replacing anything.

Start by finding the snapshot you want to use:

doctl compute snapshot list \
  --resource-type volume \
  --format ID,Name,ResourceId,CreatedAt \
  --no-header

Note the snapshot ID from the output. Then create the new volume from it:

doctl compute volume create vol-restored-2026-04-23 \
  --region nyc1 \
  --size 100GiB \
  --snapshot <snapshot-id>

Arguments explained:

  • vol-restored-2026-04-23: the volume name is a positional argument. Any descriptive name works; including the date helps when you have several recovery attempts in flight.
  • --region: must match the region of the snapshot. Volume snapshots cannot be used across regions. If your snapshot is in ams3, the new volume must also be in ams3.
  • --size: must be equal to or larger than the original volume size. You cannot create a smaller volume from a snapshot than the snapshot's source.
  • --snapshot: the ID of the snapshot to restore from.

The volume creation takes a few seconds to a few minutes depending on size. Poll for completion:

doctl compute volume list --format ID,Name,Status

Once the Status column shows available, the volume is ready to attach.
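If you are scripting the restore, a small wait loop saves you from attaching too early. This is a sketch; wait_for_volume is a hypothetical helper and assumes doctl is authenticated:

```shell
# Hypothetical helper: block until a volume's Status is "available"
wait_for_volume() {
  local volume_id="$1"
  until [ "$(doctl compute volume get "$volume_id" --format Status --no-header)" = "available" ]; do
    echo "volume $volume_id still provisioning..."
    sleep 5
  done
  echo "volume $volume_id is available"
}

# Usage: wait_for_volume <new-volume-id>
```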

Volume snapshots can only be restored in the same region as the source volume. If you need the data in a different region, copy the snapshot to that region first via the control panel, then restore from the copy.

Attaching the restored volume to a Droplet

With the new volume created, you need its ID and the ID of the Droplet you want to attach it to:

# Get the new volume ID
doctl compute volume list --format ID,Name

# Get the target Droplet ID
doctl compute droplet list --format ID,Name,Region

Then attach the volume:

doctl compute volume-action attach <new-volume-id> <target-droplet-id>

The attach action completes in a few seconds. Confirm it worked:

doctl compute volume get <new-volume-id> --format ID,Name,DropletIDs

The DropletIDs column should contain your Droplet's ID. If it is empty, the attach did not succeed and you need to retry.
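If you script the attach, you can turn that check into an assertion. attached_to is a hypothetical helper built on doctl compute volume get:

```shell
# Hypothetical helper: succeed only if the volume lists the expected Droplet ID
attached_to() {
  local volume_id="$1" droplet_id="$2"
  doctl compute volume get "$volume_id" --format DropletIDs --no-header | grep -q "$droplet_id"
}

# Usage: attached_to <new-volume-id> <target-droplet-id> || echo "attach did not take; retry"
```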

You can also do this via the control panel: Manage → Volumes → click the volume → More → Attach to Droplet. Either path works; doctl is easier to script.

Mounting and verifying the filesystem

Attaching the volume in DigitalOcean's API makes it available to the Droplet's operating system as a new block device. It does not mount it automatically. You have to mount it yourself inside the Droplet.

SSH into the Droplet:

ssh root@<droplet-ip>

Find the new device name. Attached volumes appear as /dev/sda, /dev/sdb, /dev/sdc, and so on, in attachment order; on most Droplets the boot disk is a separate virtio device (/dev/vda), so the first attached volume is usually /dev/sda. The examples below use /dev/sdc; confirm the actual name with:

lsblk

Look for a device of the expected size with no mountpoint. You can also check dmesg | tail -20 immediately after attaching, which usually shows the new device registration.
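DigitalOcean also creates a stable symlink for each attached volume under /dev/disk/by-id, named after the volume, which removes the guesswork about sdb versus sdc:

```shell
# Each attached volume gets a persistent symlink keyed by its name,
# e.g. scsi-0DO_Volume_vol-restored-2026-04-23 -> ../../sdc
ls -l /dev/disk/by-id/ 2>/dev/null | grep DO_Volume || echo "no DigitalOcean volumes attached"
```

You can mount via that path directly, and it stays correct even if attachment order changes across reboots.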

Create a mount point and mount the volume:

mkdir -p /mnt/vol-restored
mount /dev/sdc /mnt/vol-restored

Replace /dev/sdc with whatever device name lsblk showed for your new volume.

Do not mount the restored volume over the same mount point as your existing data volume. Mounting over an occupied directory hides the existing data behind the new mount. Use a fresh mount point (like /mnt/vol-restored) for verification before doing any swap.

Once mounted, verify the content:

# Check that expected directories and files are present
ls -lh /mnt/vol-restored/

# Check available space and overall health
df -h /mnt/vol-restored

# Check filesystem for errors (run against the unmounted device)
umount /mnt/vol-restored
fsck -n /dev/sdc
mount /dev/sdc /mnt/vol-restored

The -n flag on fsck runs a read-only check without modifying anything. Run it while the device is unmounted; fsck results on a mounted filesystem are unreliable. If fsck reports clean, you are in good shape. If it reports errors, see the What can go wrong during restore section before proceeding.

For application data, go further. If the volume holds a database data directory, try starting the database process against it:

# PostgreSQL example: check the data directory is intact
pg_controldata /mnt/vol-restored/pgdata

Confirm the cluster state. A state of shut down means the snapshot captured a cleanly stopped database. A state of in production means the snapshot was taken while the server was running; PostgreSQL will perform crash recovery (WAL replay) on startup before the data directory is usable.

The connection between snapshotting and application consistency is covered in the backup DigitalOcean volumes guide. The short version: crash-consistent snapshots are fine for most use cases, but not for databases with open transactions at snapshot time.

Swapping the old volume for the restored one

Once you have verified the restored volume has the data you need, you can make it permanent. The swap sequence is:

  1. Stop the application or put it into maintenance mode.
  2. Unmount the old data volume.
  3. Detach the old volume from the Droplet.
  4. Unmount the restored volume from its temporary mount point.
  5. Attach the restored volume in the old volume's place (or update the application to point at the new mount path).
  6. Mount the restored volume at the correct path.
  7. Start the application.

In shell commands, from inside the Droplet (assuming the old volume was at /mnt/app-data):

# Stop the application
systemctl stop your-app.service

# Unmount the old volume
umount /mnt/app-data

# Unmount the restored volume from the temp mount
umount /mnt/vol-restored

Then from your workstation, detach the old volume and attach the restored one in its place:

# Detach the old volume
doctl compute volume-action detach <old-volume-id> <droplet-id>

# The restored volume is already attached; just mount it at the correct path

Back inside the Droplet, mount the restored volume at the production path:

mount /dev/sdc /mnt/app-data

Verify the application can read from it, then restart:

systemctl start your-app.service

If you have an /etc/fstab entry for the old volume, update it now. The restored volume has a different UUID. Find the new UUID with blkid /dev/sdc and replace the old UUID in /etc/fstab. If you skip this step, the Droplet will fail to mount the volume on reboot.

# Find the UUID of the restored volume
blkid /dev/sdc

# Update /etc/fstab
# Replace the old UUID line with the new one
# Example fstab line:
# UUID=<new-uuid>  /mnt/app-data  ext4  defaults,nofail  0  2
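The UUID swap can be scripted so the edit is backed up and verified in one step. update_fstab_uuid is a hypothetical helper; pull the new UUID from blkid first:

```shell
# Hypothetical helper: replace the old filesystem UUID in fstab with the
# new one, keeping a backup copy of the file first
update_fstab_uuid() {
  local old_uuid="$1" new_uuid="$2" fstab="${3:-/etc/fstab}"
  cp "$fstab" "${fstab}.bak"                        # backup before editing
  sed -i "s/UUID=${old_uuid}/UUID=${new_uuid}/" "$fstab"
  grep "UUID=${new_uuid}" "$fstab"                  # confirm the change landed
}

# Usage: update_fstab_uuid <old-uuid> "$(blkid -s UUID -o value /dev/sdc)"
```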

Don't forget fstab

A common restore failure we see: the data is back, the application restarts successfully, and then the Droplet reboots for a kernel update and never comes back up, because the old UUID in /etc/fstab references a volume that no longer exists and the boot hangs waiting for it. (The nofail option in the example entry above avoids the hang, but the volume still will not mount.) Update the entry before you close the incident.

Once the application is running cleanly against the restored volume, keep the old volume around for at least 24 hours before deleting it. You want to confirm there are no write path issues before discarding the previous state.
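When you do retire the old volume, consider archiving its state first so the pre-restore data stays recoverable. archive_old_volume is a hypothetical wrapper around doctl compute volume snapshot:

```shell
# Hypothetical helper: snapshot the old volume before deleting it
archive_old_volume() {
  local volume_id="$1"
  doctl compute volume snapshot "$volume_id" \
    --snapshot-name "pre-restore-$(date +%F)"
}

# Usage: archive_old_volume <old-volume-id>, then delete the volume once the
# snapshot appears in `doctl compute snapshot list --resource-type volume`
```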

What can go wrong during restore

Restoring a volume snapshot is not a guaranteed clean operation. Here are the failure modes we see most often.

Wrong region. You try to create a volume from a snapshot in a region where the snapshot does not exist. doctl returns an error like snapshot not found or snapshot is not available in this region. Solution: check the snapshot's region with doctl compute snapshot get <snapshot-id> --format ID,Name,Regions and match your --region flag accordingly.

Size mismatch. You try to create a 50 GiB volume from a snapshot that came from a 100 GiB volume. DigitalOcean rejects this. The new volume must be at least as large as the source. If you are trying to save on storage costs, you cannot do it at restore time. Resize the volume after restore if needed.

Filesystem errors. The snapshot was taken while the filesystem had open writes. When you mount the volume, the filesystem is dirty. In most cases, fsck repairs this automatically:

fsck /dev/sdc

Run without -n to allow repairs, and answer the interactive prompts to fix inodes (or pass -y to accept all repairs unattended). Most dirty filesystem states after a crash-consistent snapshot are recoverable this way. As with the read-only check, run fsck only while the device is unmounted.

Corrupted database data directory. If the volume held a database and the snapshot captured a mid-write state, the data directory may need crash recovery. For PostgreSQL, this typically means starting the PostgreSQL process against the data directory and allowing it to run WAL replay. For MySQL/InnoDB, the engine runs recovery on startup automatically. For MongoDB, check the WiredTiger log on startup.

UUID collision. If you attach a restored volume to a Droplet that still has the original volume, both volumes have the same filesystem UUID (the UUID is part of the filesystem metadata, not the DigitalOcean volume ID). Tools that rely on UUID — including /etc/fstab, systemd mount units, and some RAID configurations — may behave unexpectedly. Assign a new UUID to the restored volume after mount verification if you plan to run both volumes on the same Droplet:

# For ext4 filesystems (unmount first)
tune2fs -U random /dev/sdc

Account or region event. Volume snapshots are stored in your DigitalOcean account. If the account is compromised or the region has an outage, the snapshot may be inaccessible when you need it most. For the full picture of what same-host risk means for your recovery posture, see the off-site compliance guide.

For workloads where snapshots alone are not enough, the off-site compliance article covers what an off-platform recovery copy looks like in practice.

What to do next

If the restore went cleanly, do two things before you close out: update /etc/fstab with the new volume UUID, and test a reboot. A restore that survives a reboot is a real restore. One that hasn't been through a reboot is an open question.

If you do not yet have a snapshot schedule in place for your volumes, the backup DigitalOcean volumes article covers the full setup from console snapshot to automated daily rotation with pruning.

And if you want to understand what DigitalOcean's native backup infrastructure actually covers across all products — Droplets, volumes, managed databases, Spaces — the how DigitalOcean native backup works article maps it all.

If you've read this far, you probably already know whether native retention is enough for your project. If it isn't, SimpleBackups gives you cross-region off-site backup, automated verification, and a restore you can actually test.


FAQ

Can I restore a volume snapshot to a different region?

Not directly. Volume snapshots can only be used to create new volumes in the same region as the snapshot. If you need the data in a different region, first copy the snapshot to the target region using the DigitalOcean control panel (Snapshots → More → Copy to region), then create a volume from the copy. The copy operation takes a few minutes for large volumes.

Does restoring a volume snapshot overwrite the existing volume?

No. DigitalOcean creates a brand-new volume from the snapshot. The original volume is not touched. You choose what to do with the old volume after you verify the restored one: keep it as a fallback, delete it to stop paying for it, or archive it as a snapshot.

How long does it take to create a volume from a snapshot?

For most volumes under 100 GiB, the volume creation completes in under a minute. Larger volumes take longer. A 500 GiB volume typically finishes in two to five minutes. Poll doctl compute volume get <id> --format Status and wait for available before trying to attach.

Can I attach a restored volume to a different Droplet?

Yes. The restored volume is a normal block storage volume. You can attach it to any Droplet in the same region, regardless of which Droplet the original volume was attached to. The only constraint is region: both the Droplet and the volume must be in the same DigitalOcean region.

What if the filesystem on the snapshot is corrupted?

Run fsck /dev/sdc (with the correct device name) after attaching the volume and before mounting it. For most cases of filesystem inconsistency from a crash-consistent snapshot, fsck repairs it automatically. If fsck cannot repair the errors, the snapshot itself is corrupted and you need to fall back to an earlier snapshot. This is why keeping multiple recovery points — not just the most recent snapshot — matters.


This article is part of The complete guide to DigitalOcean backup, an honest, practical reference from the team that backs up DigitalOcean every day.