You enabled Droplet backups. You saw the weekly backup show up in the console. You felt covered. But the 200 GB block storage volume attached to that Droplet, the one where your application writes user uploads, log archives, and database files, was not in any of those backups.
DigitalOcean does not include attached volumes in Droplet backups. It never has. The documentation says so clearly, but it is easy to miss until the moment you go looking for a restore point and find nothing there.
This article covers how to back up DigitalOcean block storage volumes correctly: manual snapshots through the console, automated snapshots via doctl and cron, how to copy volume data off-platform, and how to verify that a snapshot will actually restore. By the end, you will have a working backup strategy for your volumes, not just a plan.
Why volumes aren't in Droplet backups
Droplet backups capture the system disk: the boot volume attached directly to the Droplet image. Block storage volumes are a separate product. They attach to Droplets over the network, but they are managed independently in DigitalOcean's infrastructure.
When DigitalOcean's backup agent runs on your Droplet, it snapshots the boot disk. It does not traverse attached network volumes. The two products have separate snapshot APIs, separate billing, and separate restore paths. This is by design, not an oversight.
The practical consequence: if your application moves data off the system disk and onto a block storage volume (a reasonable architecture for any stateful workload), you are responsible for backing up that volume yourself. The Droplet backup gives you a consistent copy of your operating system and application code. It gives you nothing for the data volume.
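The split is visible in the API itself. As an illustration, this is roughly the raw call that doctl wraps when snapshotting a volume; `take_volume_snapshot` is a helper name of our own, and it assumes a `DIGITALOCEAN_ACCESS_TOKEN` environment variable with write access:

```shell
# Sketch of the volume snapshot endpoint (POST /v2/volumes/<id>/snapshots).
# take_volume_snapshot is a hypothetical helper, not part of doctl.
take_volume_snapshot() {
  local volume_id="$1" name="$2"
  curl -sS -X POST \
    -H "Authorization: Bearer ${DIGITALOCEAN_ACCESS_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{\"name\": \"${name}\"}" \
    "https://api.digitalocean.com/v2/volumes/${volume_id}/snapshots"
}

# Usage: take_volume_snapshot <volume-id> "app-data-$(date +%F)"
```

Droplet snapshots go through a different endpoint entirely (`/v2/droplets/<id>/actions`), which is why one product's backups never touch the other's data.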
The how DigitalOcean native backup works article maps all eight DigitalOcean products and what each one covers natively. Block storage volumes are one of the products with no automatic backup at all.
The good news is that volume snapshots are a first-class DigitalOcean feature. They are not a workaround. The bad news is that nothing creates them automatically for you. You have to set that up.
Manual volume snapshots via the console
The fastest way to create a volume snapshot is through the DigitalOcean control panel.
Navigate to Volumes in the left sidebar, click on the volume you want to snapshot, and select Take Snapshot from the Actions menu. Give the snapshot a descriptive name. Including the date is worth it: something like app-data-vol-2026-04-23 is easier to work with than snapshot1. Click Create.
DigitalOcean creates the snapshot while the volume remains attached and in use. The snapshot is crash-consistent, meaning it captures the state of disk blocks at a point in time, as if power had been cut at that moment. For databases and write-heavy applications, crash-consistent is not the same as application-consistent. Your application may be mid-write when the snapshot fires.
For workloads that require filesystem consistency, detach the volume or freeze I/O before snapshotting. Otherwise you risk capturing a partial write. For PostgreSQL and MySQL, this means flushing and locking tables or pausing writes, or mounting the volume read-only, before triggering the snapshot.
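One way to get closer to filesystem consistency without detaching is to freeze the mount for the duration of the snapshot call. This is a minimal sketch, assuming the volume is mounted at /mnt/app-data and you are running as root; `fsfreeze` ships with util-linux, and `run_frozen` is a helper name of our own:

```shell
# run_frozen: freeze a mounted filesystem, run a command, then always unfreeze.
# Requires root; a frozen filesystem blocks all writes until it is unfrozen.
run_frozen() {
  local mount_point="$1"; shift
  fsfreeze --freeze "${mount_point}" || return 1
  "$@"
  local rc=$?
  fsfreeze --unfreeze "${mount_point}"
  return "$rc"
}

# Usage (volume mounted at /mnt/app-data):
#   run_frozen /mnt/app-data \
#     doctl compute volume snapshot --snapshot-name "app-data-$(date +%F)" <volume-id>
```

Keep the freeze window short: anything writing to the volume will block until the unfreeze runs.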
The official DigitalOcean volume snapshot documentation walks through the control panel flow in detail.
Snapshots are billed at $0.06 per GB per month. A 100 GB volume snapshot costs $6/month to keep. Here is what that looks like at common volume sizes:
| Volume size | Snapshot size (approx.) | Monthly cost |
|---|---|---|
| 50 GB | ~50 GB | ~$3.00/mo |
| 100 GB | ~100 GB | ~$6.00/mo |
| 500 GB | ~500 GB | ~$30.00/mo |
Snapshot size tracks volume size closely because DigitalOcean stores a full point-in-time copy, not an incremental. If you keep seven daily snapshots of a 100 GB volume, you are paying roughly $42/month in snapshot storage alone. That cost adds up quickly, which is one reason the automation script in the next section includes a cleanup step.
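Before committing to a retention schedule, it is worth running the arithmetic for your own volume sizes. A small helper using the $0.06/GB/month rate from the table above:

```shell
# snapshot_cost SIZE_GB COUNT: rough monthly cost of keeping COUNT snapshots
# of a SIZE_GB volume at $0.06 per GB per month
snapshot_cost() {
  local size_gb="$1" count="$2"
  awk -v s="${size_gb}" -v n="${count}" 'BEGIN { printf "%.2f\n", s * n * 0.06 }'
}

snapshot_cost 100 7   # seven daily snapshots of a 100 GB volume -> 42.00
```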
Manual snapshots work for infrequent milestones: before a deployment, before a migration, before a schema change. For routine data protection, you need automation.
Automating snapshots with doctl and cron
doctl is DigitalOcean's official command-line tool. It wraps the DO API and is the right tool for scripted snapshot management.
Install doctl and authenticate it with your API token:
```shell
# Install doctl (macOS with Homebrew; adjust for your OS)
brew install doctl

# Authenticate
doctl auth init
```
The doctl CLI reference covers the full installation options for Linux and Windows.
To take a snapshot of a volume, you need its ID. List your volumes to find it:
```shell
doctl compute volume list --format ID,Name,Size
```
Then create a snapshot:
```shell
doctl compute volume snapshot \
  --snapshot-name "app-data-$(date +%Y-%m-%d)" \
  <volume-id>
```
The --snapshot-name flag accepts a string. Using $(date +%Y-%m-%d) in the name embeds the current date automatically, so each snapshot is uniquely named without manual intervention.
For daily automated backups, wrap this in a cron job that also prunes old snapshots. Here is a script you can drop on a server or cron host:
```shell
#!/usr/bin/env bash
# volume-snapshot.sh
# Creates a daily snapshot of a DigitalOcean volume and removes snapshots older than 7 days.
# Requires doctl authenticated via DIGITALOCEAN_ACCESS_TOKEN or `doctl auth init`.
set -euo pipefail

VOLUME_ID="your-volume-id-here"
SNAPSHOT_PREFIX="app-data"
RETAIN_DAYS=7

# Create today's snapshot
SNAPSHOT_NAME="${SNAPSHOT_PREFIX}-$(date +%Y-%m-%d)"
echo "Creating snapshot: ${SNAPSHOT_NAME}"
doctl compute volume snapshot \
  --snapshot-name "${SNAPSHOT_NAME}" \
  "${VOLUME_ID}"

# List and delete snapshots older than RETAIN_DAYS
echo "Pruning snapshots older than ${RETAIN_DAYS} days..."
# GNU date (Linux) first, BSD date (macOS) as the fallback
CUTOFF=$(date -d "-${RETAIN_DAYS} days" +%Y-%m-%d 2>/dev/null \
  || date -v "-${RETAIN_DAYS}d" +%Y-%m-%d)

doctl compute snapshot list \
  --resource-type volume \
  --format ID,Name \
  --no-header \
  | while read -r snap_id snap_name; do
      # Only touch snapshots that follow this script's naming convention
      [[ "${snap_name}" == "${SNAPSHOT_PREFIX}-"* ]] || continue
      # Extract YYYY-MM-DD from the snapshot name
      snap_day=$(echo "${snap_name}" | grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}' || true)
      if [[ -n "${snap_day}" && "${snap_day}" < "${CUTOFF}" ]]; then
        echo "Deleting old snapshot: ${snap_name} (${snap_id})"
        doctl compute snapshot delete "${snap_id}" --force
      fi
    done

echo "Done."
```
Save this as /opt/scripts/volume-snapshot.sh, make it executable with chmod +x, then schedule it via cron:
```shell
# Edit cron for the root user (or whichever user has doctl auth configured)
crontab -e

# Run daily at 03:00 UTC
0 3 * * * /opt/scripts/volume-snapshot.sh >> /var/log/volume-snapshot.log 2>&1
```
Running at 03:00 UTC puts the snapshot during low-traffic hours for most European and American time zones. Redirect output to a log file so you can verify the script ran and catch any errors without waiting for a failure to surface.
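A detail worth calling out: the pruning logic compares snapshot dates as plain strings. That only works because ISO-8601 dates (YYYY-MM-DD, zero-padded) sort lexicographically in chronological order:

```shell
# ISO-8601 dates compare correctly as strings, so no date parsing is needed
older_than() { [[ "$1" < "$2" ]]; }

older_than "2026-04-16" "2026-04-23" && echo "2026-04-16 is older"
```

This is also why the snapshot names embed the date as %Y-%m-%d rather than, say, %d-%m-%Y, which would not sort correctly.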
If you already have automation in place for Droplet snapshots, the existing how to automate DigitalOcean server and volume snapshots guide covers the API-based approach in more detail.
The approach above gives you seven daily snapshot checkpoints and keeps your storage costs predictable. For 100 GB volumes, seven snapshots cost roughly $42/month. For volumes over 500 GB, the cost becomes a real line item. Weigh that against what the data is worth.
Copying volume data off-platform
Volume snapshots are a good start. They are not enough on their own.
Same-host risk
Volume snapshots stay in your DigitalOcean account. If the account is compromised, suspended, or if a billing dispute locks you out, the snapshots go with everything else. Account-level events do not discriminate between your production data and your backups. For the full picture of what this risk means in practice, see the off-site compliance guide.
For data that matters beyond a single account event, you need a copy outside DigitalOcean. The two practical paths are:
Sync to remote storage. Mount the volume, then sync its contents to an S3-compatible destination outside DO (AWS S3, Backblaze B2, Cloudflare R2, or another provider). rsync covers SSH-reachable targets, but for object storage you want a tool like rclone. This gives you a file-level copy you can restore from without depending on DigitalOcean's snapshot infrastructure.
```shell
# Example: sync volume mount point to an S3-compatible bucket
# Requires rclone configured with your remote storage credentials
rclone sync /mnt/app-data remote:your-bucket/app-data-backup \
  --checksum \
  --transfers=4 \
  --log-file=/var/log/rclone-sync.log
```
Application-level export. For databases running on a volume, a logical dump (pg_dump, mysqldump, mongodump) is more portable than a block-level copy. A logical dump restores cleanly across versions and across providers. Schedule it separately from the volume snapshot, send it off-platform, and keep the volume snapshot as the fast in-account recovery option.
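As a sketch of what that export path can look like in a script, here is a hypothetical `dump_and_ship` helper; the `remote:` rclone target and bucket path are placeholders, and the dump command is whatever fits your stack (pg_dump, mysqldump, mongodump):

```shell
# dump_and_ship NAME CMD...: run a dump command, write it to a dated file,
# then copy that file to off-platform storage with rclone.
dump_and_ship() {
  local name="$1"; shift
  local out="/tmp/${name}-$(date +%Y-%m-%d).dump"
  "$@" > "${out}"
  # rclone target is a placeholder; point it at any S3-compatible bucket
  rclone copyto "${out}" "remote:your-bucket/db-dumps/$(basename "${out}")"
}

# Usage: dump_and_ship mydb pg_dump -Fc mydb
```

Schedule it from cron alongside the snapshot script, at an offset so the dump and the snapshot do not compete for I/O.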
Both approaches work alongside snapshots. Snapshots are fast to restore within DO. Off-platform copies survive account-level events that snapshots do not.
The what DigitalOcean native backup doesn't cover article covers the full matrix of gaps, including the off-platform gap, across all DigitalOcean products.
Verifying a volume snapshot
A snapshot you have never tested is not a backup. It is a file you hope will work when you need it.
Verification does not have to be elaborate. The minimum viable test is: restore the snapshot to a new volume, mount it on a test Droplet, and confirm the data is readable.
Here is the sequence using doctl:
```shell
# Step 1: Find the snapshot ID you want to verify
doctl compute snapshot list --resource-type volume --format ID,Name,CreatedAt

# Step 2: Create a new volume from the snapshot
# Replace <snapshot-id> with your value; the region is inherited from the snapshot
doctl compute volume create vol-restore-test \
  --size 100GiB \
  --snapshot <snapshot-id>

# Step 3: Attach the new volume to a test Droplet in the same region
doctl compute volume-action attach <new-volume-id> <test-droplet-id>
```
Once attached, SSH into the test Droplet and mount the volume:
```shell
# Create a mount point and mount the restored volume
# (the device name may vary; check `lsblk` first)
mkdir -p /mnt/restore-test
mount /dev/sda /mnt/restore-test

# Spot-check that expected files and directories are present
ls -lh /mnt/restore-test/

# For application data, run application-level checks
# e.g., check that your SQLite or Postgres data directory is intact
```
Check that the directory structure matches what you expect. Open a few files. If the volume held a database, start the database process against the restored data directory and run a query.
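If eyeballing directories feels too loose, a checksum manifest makes the check mechanical: record file hashes on the live volume before snapshotting, then verify them on the restored copy. A minimal sketch (the function names and paths are our own):

```shell
# make_manifest DIR FILE: record a sha256 for every file under DIR (relative paths)
make_manifest() {
  ( cd "$1" && find . -type f -exec sha256sum {} + | sort ) > "$2"
}

# verify_manifest DIR FILE: confirm every file under DIR matches the manifest
verify_manifest() {
  ( cd "$1" && sha256sum --quiet --check "$2" )
}

# On the live volume, before the snapshot:
#   make_manifest /mnt/app-data /root/app-data.manifest
# On the restored volume:
#   verify_manifest /mnt/restore-test /root/app-data.manifest && echo "restore verified"
```

Files written between manifest creation and the snapshot will show up as mismatches, so run the manifest step close to the snapshot (or inside a freeze window).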
The full restore walkthrough, including how to make the restored volume permanent and how to update application config to point at it, is in restoring a DigitalOcean volume.
Run this test at least once when you set up your snapshot schedule. Then run it again every quarter, or any time you change the application data structure significantly. A snapshot that restored cleanly six months ago may not restore cleanly today if your data format changed.
What to do tonight
If you have volumes in production with no snapshot schedule, the immediate action is to take a manual snapshot from the console right now and set up the cron job from the automation section above. That covers the basics.
If you have a compliance requirement (SOC 2, GDPR, ISO 27001), a snapshot in the same account is not enough. You need an off-site copy on infrastructure you control. The sync or logical-export approach from the off-platform section gets you there.
If scripting and scheduling all of this yourself sounds like a second job, SimpleBackups handles DigitalOcean volume backups off-site, with alerts when a run fails. See how it works →
Keep learning
- How to automate DigitalOcean server and volume snapshots: script-first guide using the DO API directly
- How DigitalOcean native backup works: full map of what each product covers and what it misses
- What DigitalOcean native backup doesn't cover: the gaps, including why off-site matters
- Restoring a DigitalOcean volume: what happens after the snapshot, step by step
- DigitalOcean backups explained: high-level overview of all native backup options
FAQ
Are volumes included in Droplet backups?
No. DigitalOcean Droplet backups capture only the system disk (boot volume). Block storage volumes attached to a Droplet are a separate product and are not included in Droplet backups or snapshots. You must create volume snapshots separately through the Volumes dashboard or the API.
How much do volume snapshots cost?
Volume snapshots are billed at $0.06 per GB per month, the same rate as Droplet snapshots. A 100 GB volume snapshot costs about $6/month to retain. If you keep seven daily snapshots of a 100 GB volume, expect roughly $42/month in snapshot storage. Snapshots are billed based on actual snapshot size, not the allocated volume size.
Can I schedule automatic volume snapshots?
DigitalOcean does not offer a built-in schedule for volume snapshots. You need to set up automation yourself using doctl (the official CLI) and cron, or use a third-party tool. The cron script in this article runs daily and automatically prunes snapshots older than seven days.
Can I restore a volume snapshot to a different Droplet?
Yes. You can create a new block storage volume from any volume snapshot and attach that volume to any Droplet in the same region. The snapshot is not tied to the original Droplet. You can also copy snapshots between regions using the DigitalOcean control panel if you need to spin up in a different region.
What happens to volume snapshots if I delete the volume?
Deleting a block storage volume does not automatically delete its snapshots. Snapshots persist independently after the source volume is deleted. You will continue to be billed for them until you explicitly delete them from the Snapshots section of the control panel. This is useful if you decommission a volume but want to keep a recovery point, but it also means orphaned snapshots can accumulate and generate unexpected charges.
This article is part of The complete guide to DigitalOcean backup, an honest, practical reference from the team that backs up DigitalOcean every day.