SimpleBackups

How to back up DigitalOcean Spaces (no native option)

If you're here because you went looking for the "Backup" tab inside DigitalOcean Spaces and couldn't find it, that's because it doesn't exist. DigitalOcean has no native backup for Spaces. No automated copy, no retention window, no built-in cross-region replication. You either build the backup yourself or you don't have one.

This article walks through what you actually have: versioning (and why it's not a substitute for a backup), a working rclone-based mirror to an external bucket, syncing to AWS S3 or Backblaze B2, scheduling all of it with cron, and the metadata and ACL edge cases that trip people up on the first run.

Why Spaces has no native backup

DigitalOcean built Spaces as an S3-compatible object store. It handles the infrastructure concerns: redundancy within a datacenter, durability targets, and availability. What it doesn't build is the thing you'd need to protect against accidental deletion, ransomware, account compromise, or a billing dispute that locks you out.

Compare that to what DigitalOcean offers elsewhere: Droplet backups, volume snapshots, managed database backups. Spaces gets none of it. The DigitalOcean Spaces documentation confirms versioning is available, but that's the entirety of the data-protection surface.

This is a pattern that plays out across the whole DigitalOcean product lineup: native backup covers the easy part for most products and nothing at all for others. See what DigitalOcean native backup doesn't cover for the full picture across every product.

Why this gap matters

We back up DigitalOcean every day. The gap we see most teams underestimate is Spaces: they assume object storage is inherently durable and stop there. Durability protects against hardware failure. It does nothing for logical deletion, overwrites, or account-level events.

Versioning: close but not a backup

Spaces supports versioning, and it's genuinely useful. When enabled, every write to an object creates a new version. A delete creates a delete marker, leaving prior versions accessible. You can restore a previous version of a file or recover a deleted object as long as it was versioned before the deletion.

That's not a backup, and the distinction matters.

Versioning is off by default on DigitalOcean Spaces. If you haven't explicitly enabled it, a deleted object is gone immediately with no recovery path.
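Turning it on goes through the S3-compatible API rather than the control panel tab you might expect. A sketch with the AWS CLI, assuming your Spaces access key and secret are exported as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and your Space lives in nyc3 (swap in your own bucket name and region):

```shell
# Enable versioning on the Space (runs with Spaces credentials in the env)
aws s3api put-bucket-versioning \
  --bucket your-bucket-name \
  --versioning-configuration Status=Enabled \
  --endpoint-url https://nyc3.digitaloceanspaces.com

# Confirm it took effect; the response should show "Status": "Enabled"
aws s3api get-bucket-versioning \
  --bucket your-bucket-name \
  --endpoint-url https://nyc3.digitaloceanspaces.com
```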

Here's what versioning doesn't protect you from:

  • Account compromise. An attacker with your API key can delete all versions, not just current objects. AWS S3 has MFA delete to protect against this. Spaces does not.
  • Accidental bucket deletion. Deleting the bucket itself removes all versions with it.
  • Regional outage. Versions live in the same region as the Space itself. If that region goes down, your versions go down with it.
  • Unversioned history. Any object that existed before you enabled versioning has no prior versions. The protection starts from the moment you enable it, not retroactively.
  • Storage cost. Every version counts against your usage. A bucket that sees frequent overwrites grows fast.

For a more complete look at what happens when objects are deleted, see DigitalOcean Spaces accidental deletion.

Versioning is worth enabling. It's a cheap safety net for the most common mistake, overwriting a file. But it's a safety net, not a backup. Treat it as one layer of several.
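Restoring a prior version also goes through the S3 API: list the object's versions, then copy the one you want back on top of the current key. A sketch using standard S3 calls, which Spaces' versioning is assumed to honor (VERSION_ID is whatever list-object-versions returned, paths are placeholders):

```shell
# Find the version you want to bring back
aws s3api list-object-versions \
  --bucket your-bucket-name \
  --prefix path/to/file.txt \
  --endpoint-url https://nyc3.digitaloceanspaces.com

# Copy that version back as the new current version of the key
aws s3api copy-object \
  --bucket your-bucket-name \
  --key path/to/file.txt \
  --copy-source "your-bucket-name/path/to/file.txt?versionId=VERSION_ID" \
  --endpoint-url https://nyc3.digitaloceanspaces.com
```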

Mirroring a Space with rclone

rclone is a command-line tool for cloud storage. It speaks S3 natively, which means it talks to DigitalOcean Spaces directly through the S3-compatible endpoint. Install it from the rclone documentation for your operating system.

Configure your Spaces source with rclone config. You'll need:

  • Your Spaces access key and secret key (from the API page in the DO control panel)
  • The region endpoint, which follows the pattern <region>.digitaloceanspaces.com (for example nyc3.digitaloceanspaces.com)

Add a remote named do-spaces using the s3 provider type, with DigitalOceanSpaces as the provider, and point it at the endpoint for your region. Then configure a second remote for your destination bucket: a different Spaces region, AWS S3, Backblaze B2, or any S3-compatible store.
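The end result in rclone.conf looks something like this. The key values are placeholders, and the AWS destination is one example; any S3-compatible remote follows the same shape:

```ini
[do-spaces]
type = s3
provider = DigitalOceanSpaces
access_key_id = YOUR_SPACES_KEY
secret_access_key = YOUR_SPACES_SECRET
endpoint = nyc3.digitaloceanspaces.com
acl = private

[dest-remote]
type = s3
provider = AWS
access_key_id = YOUR_AWS_KEY
secret_access_key = YOUR_AWS_SECRET
region = us-east-1
```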

Once both remotes are in your rclone.conf, the sync command is:

rclone sync \
  do-spaces:your-bucket-name \
  dest-remote:destination-bucket-name \
  --progress \
  --transfers 8 \
  --checkers 16 \
  --s3-acl private

A few notes on the flags:

  • sync makes the destination match the source. Files in the destination that no longer exist in the source get deleted. If you want to keep deleted files in the destination, use copy instead.
  • --transfers 8 runs 8 file transfers in parallel. Adjust based on your object count and available bandwidth.
  • --checkers 16 runs 16 checksum comparisons in parallel. Useful for large buckets.
  • --s3-acl private ensures objects at the destination are not publicly accessible unless you explicitly set them otherwise. Check whether your destination requires a different ACL.

For a first sync on a large bucket, add --dry-run and review what rclone says it will do before letting it write.
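A sketch of that preview, plus a verification pass for after the first real sync, using the remote names from the examples above:

```shell
# Preview: report what sync would transfer and delete, write nothing
rclone sync do-spaces:your-bucket-name dest-remote:destination-bucket-name \
  --dry-run

# After the real sync, verify the destination matches the source.
# --one-way ignores files that exist only at the destination.
rclone check do-spaces:your-bucket-name dest-remote:destination-bucket-name \
  --one-way
```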

Syncing to AWS S3 or Backblaze B2

If you want to send your Spaces copy off-platform entirely, AWS S3 and Backblaze B2 are the two most common destinations. Both are S3-compatible, and DigitalOcean's endpoint speaks the same protocol AWS does.

You don't strictly need rclone here, but there's a catch with the AWS CLI: --endpoint-url applies to the whole command, so a single s3 sync can't read from Spaces and write to AWS at the same time. The practical pattern is two syncs through a local staging directory:

AWS_ACCESS_KEY_ID=<spaces-key> AWS_SECRET_ACCESS_KEY=<spaces-secret> \
  aws s3 sync \
  s3://your-do-bucket/ \
  ./spaces-staging/ \
  --endpoint-url https://nyc3.digitaloceanspaces.com

aws s3 sync \
  ./spaces-staging/ \
  s3://your-aws-destination-bucket/ \
  --region us-east-1

The first command runs with your Spaces credentials, and --endpoint-url redirects the CLI from AWS to your Spaces region (replace nyc3 with your Space's region code). The second runs with your normal AWS credentials, for example from your default profile. Don't be tempted to add --no-verify-ssl: the Spaces endpoints serve valid certificates, and disabling TLS verification exposes the transfer to interception.

For Backblaze B2, rclone is the cleaner path since B2 has its own auth model, though it also supports an S3-compatible API endpoint. Configure a b2 remote in rclone and use the same rclone sync pattern from the previous section, pointing at your B2 bucket.

Before you commit to a destination provider, check where your bucket lands physically. Backblaze B2 has US and EU regions. If you're subject to GDPR, make sure your off-site copy stays in the EU. Off-site doesn't help your compliance posture if you've just moved the data to a jurisdiction you can't use.

Choosing between rclone and the AWS CLI comes down to your environment. The AWS CLI is already installed in most Linux environments and works for AWS or AWS-compatible targets. rclone handles more providers (including Backblaze B2 natively) and gives you more control over transfer parallelism.

Automating the mirror on a schedule

A manual sync is better than nothing. An automated sync you can rely on is what you actually need.

Put the sync in a shell script on a server that is not inside the same DigitalOcean account as your Spaces bucket. A small VPS on a different provider, a GitHub Actions runner, or a dedicated backup machine all work. The point is that if your DigitalOcean account is compromised or suspended, the machine running your backup job is unaffected. If you're migrating to another provider and need to bring your Spaces data with you, a verified off-site copy is the safest starting point: see the pre-migration backup guide for the full checklist before you cut over.
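If a GitHub Actions runner is the route you take, the job can be sketched as a scheduled workflow. The secret names and the trick of defining rclone remotes through RCLONE_CONFIG_* environment variables are assumptions to adapt to your setup:

```yaml
name: spaces-backup
on:
  schedule:
    - cron: "0 3 * * *"   # daily at 03:00 UTC
  workflow_dispatch: {}    # allow manual runs
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - name: Install rclone
        run: curl -fsSL https://rclone.org/install.sh | sudo bash
      - name: Mirror Spaces to the destination bucket
        env:
          # rclone builds a "dospaces" remote from these variables
          RCLONE_CONFIG_DOSPACES_TYPE: s3
          RCLONE_CONFIG_DOSPACES_PROVIDER: DigitalOceanSpaces
          RCLONE_CONFIG_DOSPACES_ACCESS_KEY_ID: ${{ secrets.SPACES_KEY }}
          RCLONE_CONFIG_DOSPACES_SECRET_ACCESS_KEY: ${{ secrets.SPACES_SECRET }}
          RCLONE_CONFIG_DOSPACES_ENDPOINT: nyc3.digitaloceanspaces.com
          # "dest" remote: adjust for your destination provider
          RCLONE_CONFIG_DEST_TYPE: s3
          RCLONE_CONFIG_DEST_PROVIDER: AWS
          RCLONE_CONFIG_DEST_ACCESS_KEY_ID: ${{ secrets.AWS_KEY }}
          RCLONE_CONFIG_DEST_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET }}
          RCLONE_CONFIG_DEST_REGION: us-east-1
        run: |
          rclone sync dospaces:your-bucket-name dest:destination-bucket-name \
            --transfers 8 --checkers 16 --s3-acl private
```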

Here's a script you can drop into /opt/scripts/spaces-backup.sh:

#!/usr/bin/env bash
set -euo pipefail

TIMESTAMP=$(date +%Y-%m-%dT%H:%M:%S)
LOG_FILE="/var/log/spaces-backup.log"
SOURCE="do-spaces:your-bucket-name"
DEST="dest-remote:destination-bucket-name"

echo "[$TIMESTAMP] Starting sync: $SOURCE -> $DEST" >> "$LOG_FILE"

# Run rclone inside the if so a failure reaches the else branch
# instead of aborting the script immediately via set -e.
if rclone sync "$SOURCE" "$DEST" \
  --transfers 8 \
  --checkers 16 \
  --s3-acl private \
  --log-file "$LOG_FILE" \
  --log-level INFO; then
  echo "[$TIMESTAMP] Sync completed successfully." >> "$LOG_FILE"
else
  echo "[$TIMESTAMP] Sync FAILED. Check log above." >> "$LOG_FILE"
  exit 1
fi

Make it executable:

chmod +x /opt/scripts/spaces-backup.sh

Schedule it with cron. This runs the sync every day at 03:00 UTC and appends stdout/stderr to the log file:

0 3 * * * /opt/scripts/spaces-backup.sh >> /var/log/spaces-backup-cron.log 2>&1
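Cron has no overlap protection: if one sync takes longer than a day on a large bucket, the next run starts on top of it. Wrapping the job in flock (from util-linux) makes a late run skip instead of race. A sketch; the lock file path is an assumption, the script path is the one used above:

```shell
#!/usr/bin/env bash
# Try the lock non-blockingly; if a previous run still holds it, skip.
LOCKFILE=/tmp/spaces-backup.lock
if flock -n "$LOCKFILE" true; then
  echo "lock free"
  # In the crontab, the real entry wraps the script itself:
  # flock -n /tmp/spaces-backup.lock /opt/scripts/spaces-backup.sh
else
  echo "previous run still active, skipping"
fi
```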

Set RCLONE_CONFIG in the cron environment if your rclone.conf lives somewhere other than the default path:

0 3 * * * RCLONE_CONFIG=/home/ubuntu/.config/rclone/rclone.conf /opt/scripts/spaces-backup.sh >> /var/log/spaces-backup-cron.log 2>&1

Watch the log file after the first scheduled run. rclone exits non-zero on failure, and the script propagates that. Wire up an alerting mechanism (a dead-man's switch, a Slack webhook, a simple email-on-failure cron wrapper) so a failed sync doesn't go unnoticed.
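One limitation of a plain sync is that the destination only ever holds the latest state: a bad deletion mirrors across on the next run. A cheap variant, sketched here with the placeholder names used above, is to copy into a dated prefix so each run is a point-in-time snapshot (at the cost of extra storage):

```shell
#!/usr/bin/env bash
# Build a dated destination prefix so each run lands in its own folder.
DATED_DEST="dest-remote:destination-bucket-name/$(date -u +%Y-%m-%d)"
echo "$DATED_DEST"
# Use copy (not sync) so nothing is ever deleted from a dated prefix:
# rclone copy do-spaces:your-bucket-name "$DATED_DEST" --transfers 8
```

Pair this with a periodic cleanup of old prefixes, or the bucket grows without bound.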

This same pattern applies to the storage replication approach we've documented for SimpleBackups users, which removes the need to manage this script and the schedule yourself.

Same-host risk

Spaces has no backup of any kind. If you delete an object and versioning is off, it's gone. If the account is compromised, the entire bucket is exposed. Running your backup job from a separate account or provider closes this gap. See the off-site compliance guide for a fuller treatment.

What to do about metadata and ACLs

Object metadata and ACLs are easy to overlook in a sync operation and annoying to discover missing later.

Metadata. Spaces stores per-object metadata (Content-Type, Cache-Control, custom headers) alongside the object itself. rclone sync copies metadata by default when the destination supports it. AWS S3 preserves it. Backblaze B2 does too, via the S3-compatible API. Verify this on your first sync by checking a few objects at the destination and confirming their metadata headers match the source. Use rclone lsjson --metadata your-remote:your-bucket to inspect metadata on both sides.

ACLs. If your Spaces bucket has a mix of public and private objects, be deliberate about what you pass to --s3-acl. Setting --s3-acl private on sync protects your data at the destination but means publicly accessible objects in the source arrive as private. That's usually what you want for a backup. You don't want your backup bucket to be a publicly accessible mirror of your production storage.

Note that rclone doesn't copy per-object ACLs: whatever you pass to --s3-acl applies to everything it writes. For cross-account copies you may need --s3-acl bucket-owner-full-control; otherwise, drop the flag and rely on the destination bucket's default ACL policy, which is the safer option. Check your destination's default before assuming.

Lifecycle rules. Spaces supports lifecycle rules (expiry, transition). These rules do not transfer. If your production bucket automatically expires objects after 90 days, the destination bucket won't replicate that behavior unless you manually configure the same rule there. For a backup destination, you probably want to keep objects longer than the source lifecycle, so this is usually a feature, not a bug. Just be explicit about it.

To verify your restore works, try restoring an object from the destination bucket into a test prefix on a regular basis. A backup you've never tested is an assumption, not a guarantee. For a systematic approach to scheduling and monitoring those checks across your DigitalOcean resources, see automating DigitalOcean backup verification. For the full Spaces restore flow, see how to restore DigitalOcean Spaces.
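A minimal restore check, assuming the placeholder remotes from earlier and a sample object path of your choosing:

```shell
# Pull the same object from source and backup, then compare bytes.
rclone copy do-spaces:your-bucket-name/path/to/sample.txt /tmp/restore-test/source/
rclone copy dest-remote:destination-bucket-name/path/to/sample.txt /tmp/restore-test/backup/
diff /tmp/restore-test/source/sample.txt /tmp/restore-test/backup/sample.txt \
  && echo "restore test passed"
```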

What to do next

Pick one destination for your off-site copy and run rclone sync --dry-run tonight. See what it would transfer. That tells you immediately whether your credentials are correct, whether the destination bucket is reachable, and roughly how long the first sync will take. A dry run costs nothing and surfaces most configuration problems before they matter.

Once you're satisfied, add the cron entry and check the log file the next morning. If the sync completed cleanly, wire up an alert for failures and move on.

If scripting and scheduling all this yourself sounds like a second job, SimpleBackups handles DigitalOcean Spaces backups off-site with alerts when a run fails. See how it works →

FAQ

Does DigitalOcean back up Spaces?

No. DigitalOcean provides no native backup for Spaces buckets. There is no automated copy, no retention window, and no built-in cross-region replication. If you want a backup, you build it yourself using rclone, the AWS CLI, or a managed tool like SimpleBackups.

Can I enable versioning on DigitalOcean Spaces?

Yes. Versioning is available in DigitalOcean Spaces but is off by default. When enabled, every write creates a new version and deletes create a delete marker rather than removing the object immediately. This protects against accidental overwrites and some accidental deletions, but it is not a substitute for a full off-site backup because versioned objects are still in the same region and the same account.

How do I replicate a Space to another region?

DigitalOcean has no built-in cross-region replication for Spaces. The standard approach is to use rclone with two configured remotes (one for each region) and run rclone sync on a schedule. You can also use the AWS CLI with --endpoint-url pointing at the source Spaces region and a standard AWS S3 destination. Both approaches require an external machine to run the sync job.

What happens if I accidentally delete files from a Space?

If versioning is enabled, the objects are still there as older versions. You can restore them via the DigitalOcean control panel or the API. If versioning is off, the objects are gone immediately and there is no recovery path within DigitalOcean. Your only option is to restore from an external backup. This is why enabling versioning and maintaining an off-site copy are both necessary, not one or the other.

Can I use rclone with DigitalOcean Spaces?

Yes. DigitalOcean Spaces uses an S3-compatible API, and rclone supports it directly. Configure a remote with the s3 provider type, set the provider to DigitalOceanSpaces, and point the endpoint at your region (for example nyc3.digitaloceanspaces.com). From there, all standard rclone commands work: sync, copy, ls, lsjson, and others.


This article is part of The complete guide to DigitalOcean backup, an honest, practical reference from the team that backs up DigitalOcean every day.