SimpleBackups

How to back up DigitalOcean Droplets (3 methods)

You enabled the native Droplet backup add-on. You paid the 20% fee. You assumed you were covered.

You are not necessarily covered. The backup lives in your DigitalOcean account. If that account gets compromised, suspended, or hit by a billing dispute, the backup disappears with the Droplet. And the native backup gives you no download option, no off-site copy, and no way to verify a restore without actually running one.

This article walks you through all three methods for backing up a Droplet: the native add-on, on-demand snapshots, and pulling a copy off-platform. You'll see exactly what each one does, what it costs, and how to automate the pieces that matter.

How native Droplet backups work

DigitalOcean's native backup add-on is a paid extra that costs 20% of your Droplet price. For a $24/month Droplet, that's $4.80/month. You enable it from the Droplet control panel or via the API at creation time.

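If you script Droplet creation, the add-on is a single flag on `doctl compute droplet create`. A minimal sketch; the name, size, region, and image are example values:

```shell
# Create a Droplet with the native backup add-on enabled from day one.
# Name, size, region, and image are example values; the 20% backup fee
# starts accruing as soon as the flag is set.
DROPLET_NAME="my-app"
# Only run when doctl is installed and authenticated.
if command -v doctl >/dev/null 2>&1 && doctl account get >/dev/null 2>&1; then
  doctl compute droplet create "$DROPLET_NAME" \
    --enable-backups \
    --size s-2vcpu-4gb \
    --region nyc1 \
    --image ubuntu-24-04-x64 \
    --wait
fi
```
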
What you get with it:

  • Standard Droplets: one backup per week, kept for four weeks (four recovery points total).
  • Premium Droplets: one backup per day, kept for four weeks (up to 28 recovery points).
  • Backups run on a schedule DigitalOcean controls. You can shift the window by a few hours, but not choose the day or time with precision.
  • Restoring creates a new Droplet from the backup image. Your existing Droplet is unaffected.

What you do not get:

  • Downloads. You cannot pull the backup image to your own storage.
  • Off-site copies. The backup lives in the same DigitalOcean account as the Droplet.
  • Application-consistent snapshots. The backup is a block-level image taken while the Droplet is running. For most web apps this is fine; for databases, you want either a quiesced snapshot or a database-level backup.

Volumes are not included in Droplet backups. If your Droplet has attached block storage, you need a separate snapshot workflow. See the DigitalOcean Volume backup guide for the exact steps.
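For the Volume side of that workflow, doctl can trigger a Volume snapshot much like a Droplet snapshot. A hedged sketch; the volume ID below is a placeholder (find yours with `doctl compute volume list`):

```shell
# Snapshot an attached Volume -- Volume snapshots are separate from
# Droplet snapshots and are not covered by the native backup add-on.
# VOLUME_ID is a placeholder; find yours with: doctl compute volume list
VOLUME_ID="506f78a4-e098-11e5-ad9f-000f53306ae1"
SNAP_NAME="my-volume-$(date +%Y%m%d)"
if command -v doctl >/dev/null 2>&1 && doctl account get >/dev/null 2>&1; then
  doctl compute volume snapshot "$VOLUME_ID" --snapshot-name "$SNAP_NAME"
fi
```
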

For a deeper look at what the native add-on covers and what it skips, the DigitalOcean native backup explained article goes through each edge case. And if you want the full list of gaps, what native Droplet backup doesn't cover is the reference.

Snapshots: the manual alternative

On-demand snapshots are different from the weekly backup add-on. They are triggered by you, billed at $0.06/GB/month, and kept until you delete them. DigitalOcean's snapshot docs cover the UI path; the CLI path is faster for one-off use.

To trigger a snapshot with doctl:

doctl compute droplet-action snapshot \
  --droplet-id 123456789 \
  --snapshot-name "my-droplet-$(date +%Y%m%d)" \
  --wait

Flag breakdown:

  • --droplet-id: your Droplet's numeric ID (find it with doctl compute droplet list).
  • --snapshot-name: give it a name with a date suffix so you can identify it later.
  • --wait: block until the action completes. Without this flag, the command returns immediately and you have no confirmation.
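If you do skip --wait, you can poll the action status yourself. A sketch with placeholder IDs; the action ID appears in the output of the snapshot command when run without --wait:

```shell
# Poll a droplet action by hand (this is what --wait does for you).
# DROPLET_ID and ACTION_ID are placeholders.
DROPLET_ID="123456789"
ACTION_ID="987654321"
if command -v doctl >/dev/null 2>&1 && doctl account get >/dev/null 2>&1; then
  while :; do
    STATUS=$(doctl compute droplet-action get "$DROPLET_ID" \
      --action-id "$ACTION_ID" --format Status --no-header)
    # Anything other than in-progress means the action is done (or errored).
    [ "$STATUS" = "in-progress" ] || break
    sleep 10
  done
  echo "Action finished with status: ${STATUS:-unknown}"
fi
```
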

The snapshot is taken while the Droplet is running. A live snapshot does not power the Droplet off, so there is no downtime, but it is crash-consistent at best: writes still buffered in memory when the snapshot fires are not guaranteed to land in the image. For most web apps this is acceptable.

If you need a fully consistent image, power the Droplet off yourself before snapshotting, then power it back on. That guarantees a clean filesystem, at the cost of a downtime window you have to schedule.
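That powered-off sequence is easy to script. A sketch with a placeholder Droplet ID, reusing the flag style from the snapshot command above; confirm the exact syntax against `doctl compute droplet-action --help` on your doctl version:

```shell
#!/usr/bin/env bash
# Consistent snapshot: power off, snapshot, power back on.
# DROPLET_ID is a placeholder; the Droplet is down for the whole snapshot.
set -euo pipefail
DROPLET_ID="123456789"
SNAPSHOT_NAME="consistent-${DROPLET_ID}-$(date +%Y%m%d-%H%M%S)"
if command -v doctl >/dev/null 2>&1 && doctl account get >/dev/null 2>&1; then
  doctl compute droplet-action power-off --droplet-id "$DROPLET_ID" --wait
  doctl compute droplet-action snapshot --droplet-id "$DROPLET_ID" \
    --snapshot-name "$SNAPSHOT_NAME" --wait
  doctl compute droplet-action power-on --droplet-id "$DROPLET_ID" --wait
fi
```
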

For a full comparison of when to use snapshots versus the weekly backup add-on, see DigitalOcean snapshots vs. backups, which covers retention, cost, and restore behavior side by side.

Same-host risk

Native Droplet backups and snapshots both live inside your DO account. Pull them off-platform or they're not protecting you from the scenario that matters most. Account compromise, region outage, and billing suspension all take the backup with the Droplet. More on this in the off-site compliance guide. If a Droplet is deleted and the backup window has already closed, recovering a deleted DigitalOcean Droplet covers what limited options remain.

Off-site: pulling a snapshot out of DigitalOcean

DigitalOcean does not provide a direct download link for Droplet snapshots. You cannot simply click "export" and get a file. The path requires converting the snapshot to a transferable format and exporting it.

The practical approach: snapshot the Droplet, then use doctl to export the snapshot as a temporary download URL, then pull it to your storage of choice. Here is a script that does the full chain.

#!/usr/bin/env bash
# export-droplet-snapshot.sh
# Exports the latest snapshot for a Droplet to S3-compatible storage.
#
# Requirements:
#   - doctl authenticated (doctl auth init)
#   - rclone configured with a remote named "offsite" pointing to S3, B2, or equivalent
#   - jq installed
#
# Usage: ./export-droplet-snapshot.sh <droplet-id> <rclone-destination>
# Example: ./export-droplet-snapshot.sh 123456789 offsite:my-bucket/droplet-backups/

set -euo pipefail

DROPLET_ID="${1:?Usage: $0 <droplet-id> <rclone-destination>}"
RCLONE_DEST="${2:?Usage: $0 <droplet-id> <rclone-destination>}"
SNAPSHOT_NAME="droplet-${DROPLET_ID}-$(date +%Y%m%d-%H%M%S)"

echo "[1/4] Creating snapshot: ${SNAPSHOT_NAME}"
doctl compute droplet-action snapshot \
  --droplet-id "$DROPLET_ID" \
  --snapshot-name "$SNAPSHOT_NAME" \
  --wait

echo "[2/4] Fetching snapshot ID"
# Match the exact name; a grep substring match could pick up other snapshots.
SNAPSHOT_ID=$(doctl compute snapshot list \
  --resource-type droplet \
  --format ID,Name \
  --no-header \
  | awk -v name="$SNAPSHOT_NAME" '$2 == name {print $1}')

if [ -z "$SNAPSHOT_ID" ]; then
  echo "ERROR: snapshot not found after creation. Check doctl compute snapshot list."
  exit 1
fi

echo "[3/4] Requesting export URL for snapshot ${SNAPSHOT_ID}"
# Note: DO export API returns a temporary signed URL valid for ~1 hour
EXPORT_URL=$(doctl compute image export \
  --image-id "$SNAPSHOT_ID" \
  --format URL \
  --no-header \
  --wait)

echo "[4/4] Transferring to ${RCLONE_DEST}"
rclone copyurl "$EXPORT_URL" "${RCLONE_DEST}/${SNAPSHOT_NAME}.img.gz" \
  --progress \
  --stats-one-line

echo "Done. Snapshot exported to ${RCLONE_DEST}/${SNAPSHOT_NAME}.img.gz"

A few notes on this script before you run it:

  • The doctl compute image export command triggers an asynchronous export. The --wait flag is important; without it the URL is empty until the export finishes.
  • The export URL is signed and expires in roughly one hour. Start the rclone copyurl transfer immediately after you get it.
  • rclone supports S3, Backblaze B2, Cloudflare R2, Wasabi, and most S3-compatible targets. Configure your remote once with rclone config and reuse it across scripts.
  • If you prefer the AWS CLI, replace the rclone copyurl line with aws s3 cp "$EXPORT_URL" "s3://your-bucket/path/${SNAPSHOT_NAME}.img.gz".
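A quick way to confirm a transfer actually landed is to list the destination with object sizes. A sketch assuming the same "offsite" remote and bucket path as the script above:

```shell
# Spot-check that the exported image landed off-site.
# DEST matches the remote and path used in the export script above.
DEST="offsite:my-bucket/droplet-backups/"
if command -v rclone >/dev/null 2>&1 && rclone listremotes 2>/dev/null | grep -q '^offsite:'; then
  # lsl prints size, modification time, and path for each object.
  rclone lsl "$DEST" | tail -n 5
fi
```

A zero-byte or suspiciously small object here is the earliest signal that an export URL expired mid-transfer.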

This is the only method that gets your Droplet data into storage you control. It is also the most work. That tradeoff is worth understanding before you decide which approach to run.

For a broader discussion of why off-site matters for compliance postures, see the DigitalOcean off-site compliance guide, which covers SOC 2 and GDPR requirements in detail.

The cost comparison

Here is how the three approaches stack up on the dimensions that actually matter for a backup decision:

| Method | Schedule | Retention | Downloadable | Cost | Same-host risk |
| --- | --- | --- | --- | --- | --- |
| Native Droplet backup (standard) | Weekly (DO-controlled) | 4 weeks | No | 20% of Droplet price | Yes |
| Native Droplet backup (Premium) | Daily (DO-controlled) | 4 weeks | No | 20% of Droplet price | Yes |
| On-demand snapshot | Manual or scripted | Until deleted | No | $0.06/GB/month | Yes |
| Off-site export (snapshot + export) | Scripted | As long as you keep the file | Yes | $0.06/GB/month snapshot + destination storage | No |

A concrete example: a $48/month CPU-Optimized Droplet with a 50 GB disk.

  • Native backup add-on: $9.60/month. Four weekly restore points. You cannot take it off DigitalOcean.
  • Daily snapshots (scripted, keep 30): 50 GB × $0.06/GB/month × 30 retained snapshots = $90/month. More restore points, but the same same-host risk.
  • Off-site export (one copy per week, kept for 4 weeks in B2 at $0.006/GB/month): 50 GB × $0.06 = $3/month for the snapshot, plus 50 GB × $0.006 × 4 copies = $1.20/month in B2, for about $4.20/month total. Cheapest option once you have the script, and the only one that takes the data off-platform.

The native add-on is the path of least resistance. The off-site export is the only path to genuine protection.

Automating the whole thing

Running the export script manually is fine for one-off copies before a deployment. If you're moving infrastructure or switching providers, see the pre-migration backup guide for what to capture and verify before you cut over. For ongoing backup, you want it on a schedule.

The simplest approach: a cron job on a separate server (not the Droplet you're backing up), or a hosted cron service.

# Crontab entry: run every Sunday at 02:00 UTC
# Replace 123456789 with your Droplet ID
0 2 * * 0 /opt/scripts/export-droplet-snapshot.sh 123456789 offsite:my-bucket/droplet-backups/ >> /var/log/droplet-backup.log 2>&1

If you want more control over the snapshot schedule itself, the doctl compute droplet-action snapshot approach is covered in the automation guide, including how to list and clean up old snapshots so your costs don't compound.
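As a sketch of that cleanup, the script below deletes snapshots older than a retention window, matching the naming prefix from the export script. It assumes GNU date (`date -d`); run it once with the delete line commented out to confirm the match list before trusting it:

```shell
#!/usr/bin/env bash
# Delete Droplet snapshots older than RETENTION_DAYS that match PREFIX.
# Assumes GNU date; PREFIX matches the export script's naming scheme.
set -euo pipefail
PREFIX="droplet-"
RETENTION_DAYS=28
CUTOFF=$(date -d "-${RETENTION_DAYS} days" +%s)
if command -v doctl >/dev/null 2>&1 && doctl account get >/dev/null 2>&1; then
  doctl compute snapshot list --resource-type droplet \
    --format ID,Name,Created --no-header |
  while read -r id name created _; do
    # Skip snapshots that don't follow our naming scheme.
    case "$name" in "$PREFIX"*) ;; *) continue ;; esac
    if [ "$(date -d "$created" +%s)" -lt "$CUTOFF" ]; then
      echo "Deleting $name (created $created)"
      doctl compute snapshot delete "$id" --force
    fi
  done
fi
```
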

Run your cron job from a server that is not the Droplet you are backing up. If the Droplet goes down, you still want the backup job to fire.

The harder part is not scheduling the job. The harder part is knowing when it fails. Log rotation, alerting on non-zero exits, and Slack/email notifications on failure are the pieces that separate a backup setup from a backup illusion.
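A thin wrapper around the cron entry covers the non-zero-exit case. A sketch with a placeholder Slack webhook URL; any HTTP-webhook alert channel (email gateway, PagerDuty, healthchecks.io) slots in the same way:

```shell
#!/usr/bin/env bash
# Cron wrapper: run the export script, post an alert on non-zero exit.
# SLACK_WEBHOOK_URL is a placeholder -- substitute your own webhook.
set -uo pipefail
SLACK_WEBHOOK_URL="${SLACK_WEBHOOK_URL:-https://hooks.slack.com/services/XXX}"
LOG_FILE="${LOG_FILE:-/var/log/droplet-backup.log}"
if ! /opt/scripts/export-droplet-snapshot.sh "$@" >>"$LOG_FILE" 2>&1; then
  curl -fsS -X POST -H 'Content-type: application/json' \
    --data "{\"text\":\"Droplet backup FAILED on $(hostname) -- see $LOG_FILE\"}" \
    "$SLACK_WEBHOOK_URL" || true
fi
```

Point the crontab at this wrapper instead of the export script directly, with the same Droplet ID and destination arguments.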

If scripting and scheduling all this yourself sounds like a second job, SimpleBackups handles DigitalOcean Droplet backup off-site, with alerts when a run fails. See how it works →

Verifying your backup actually restores

A backup you have never tested is a hypothesis. The most common discovery point for a broken backup is the moment you need it, which is the worst possible time.

Verification for Droplet backups has two levels:

Level 1: confirm the snapshot exists. This is not verification. It is confirmation that a file was written. Necessary but insufficient.

# List all snapshots for a Droplet
doctl compute snapshot list --resource-type droplet

# Or filter by name prefix
doctl compute snapshot list --resource-type droplet --format ID,Name,SizeGigaBytes,Created

Level 2: restore from the snapshot to a test Droplet. This is the only real verification. Create a new Droplet from the snapshot, confirm the application starts, confirm the data is intact, then destroy the test Droplet.

# Create a new Droplet from a snapshot
doctl compute droplet create my-restore-test \
  --image <snapshot-id> \
  --size s-1vcpu-1gb \
  --region nyc1 \
  --wait

# Confirm it came up, then destroy it when done
doctl compute droplet list --format ID,Name,Status | grep my-restore-test
doctl compute droplet delete <droplet-id> --force

Run this test at least once a month, or before any major deployment. To build a repeatable, automated verification process around this, see automating DigitalOcean backup verification, which covers scheduling restore tests and alerting on failures. The Droplet restore guide covers the full restore flow, including restoring to a different region. If a restore does not go as expected, troubleshooting a DigitalOcean Droplet backup that won't restore walks through the most common failure modes and how to work around them.

For databases, a Droplet restore test is not enough. You should also verify that the database engine starts cleanly and that a table-level query returns expected results. A block-level image of a database that was mid-write when the snapshot triggered can sometimes come back with a corrupted InnoDB or Postgres data directory.
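One way to pair the two is an engine-level dump running alongside the snapshot schedule. A sketch assuming MySQL with credentials in ~/.my.cnf and the "offsite" rclone remote from earlier; swap in pg_dump for Postgres:

```shell
#!/usr/bin/env bash
# Engine-level dump to pair with the block-level snapshot.
# Assumes MySQL credentials in ~/.my.cnf and the "offsite" rclone remote.
set -uo pipefail
DUMP_NAME="mysqldump-$(date +%Y%m%d-%H%M%S).sql.gz"
if command -v mysqldump >/dev/null 2>&1 && command -v rclone >/dev/null 2>&1; then
  # --single-transaction takes a consistent InnoDB snapshot without
  # locking writes for the duration of the dump.
  mysqldump --single-transaction --all-databases | gzip |
    rclone rcat "offsite:my-bucket/db-backups/${DUMP_NAME}" ||
    echo "Dump failed: check MySQL credentials and the rclone remote" >&2
fi
```

A dump restores cleanly into any running MySQL server, which makes the table-level verification query above trivial to automate.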

FAQ

Can I schedule automatic Droplet snapshots?

DigitalOcean does not offer a native scheduled snapshot feature. You can schedule them yourself using the doctl CLI in a cron job, or use the DigitalOcean API directly. The native backup add-on runs on a DO-controlled weekly (or daily on Premium) schedule, but that is a separate product from snapshots.

How do I move a Droplet backup to AWS S3?

DigitalOcean does not let you download native backups directly. For snapshots, you can use doctl compute image export to generate a temporary signed download URL, then pipe that to the AWS CLI with aws s3 cp <url> s3://your-bucket/path/. The export script in this article walks through the full process.

Are Droplet backups incremental?

DigitalOcean does not publicly document whether native backups are incremental or full images. The pricing is flat at 20% of Droplet cost regardless, so the question is mostly academic for budgeting. Snapshots are full images each time, which is why costs accumulate if you keep many of them.

What's the maximum Droplet backup size?

There is no published maximum size limit for Droplet backups or snapshots. Practically speaking, snapshot size reflects the Droplet's disk size. A Droplet with a 500 GB SSD will produce a snapshot close to 500 GB (depending on how much data is written). At $0.06/GB/month, large Droplets make snapshot-per-day strategies expensive quickly.

Can I restore a Droplet backup to a different region?

Yes. You can transfer a snapshot to another DigitalOcean region, then create a Droplet from it there. The transfer takes time proportional to the snapshot size. This is a within-DO transfer; it does not give you an off-site copy outside DigitalOcean's infrastructure. For genuine off-site recovery, you need the export approach described in this article.

Keep learning


This article is part of The complete guide to DigitalOcean backup, an honest, practical reference from the team that backs up DigitalOcean every day.