SimpleBackups

Why off-site backup matters (and why same-host isn't enough)


Most teams enabling DigitalOcean backups believe they're covered. They see "Backup enabled" in the dashboard, the weekly snapshot runs, the retention clock starts. The assumption is: if something goes wrong, restore from backup. It is a reasonable assumption. It is also incomplete.

Every native DigitalOcean backup — Droplet backups, snapshots, volume snapshots, managed database backups — lives inside the same account as the resource it protects. One account event (a billing dispute, a fraud flag, an IAM error, a region outage) and the backup goes down with the production system. Not eventually. Simultaneously.

This article covers why that matters, what the 3-2-1 rule says to do about it, what compliance frameworks like SOC 2 and ISO 27001 actually require, and how to design a DigitalOcean backup strategy that holds up when things go wrong.

The goal is not to alarm you. It is to help you build something real.

Why trust this article

We back up DigitalOcean every day. The pattern we see is consistent: teams discover the gaps in native backup the hard way, usually at the moment they need the data back. The scenarios in this article are not hypotheticals. They are the failure modes we see repeat in support queues and post-mortems.

The same-host problem in plain English

When DigitalOcean creates a Droplet backup or a managed database snapshot, it stores that backup inside the DigitalOcean infrastructure, tied to your account. The official documentation for enabling Droplet backups confirms there is no option to direct native backups to external storage or a different account.

That sounds like a minor architectural detail. It is not.

The backup and the system it is protecting share the same threat surface. If anything compromises or blocks access to your DigitalOcean account, it compromises or blocks access to both simultaneously. In security terminology, the backup has no independence from the resource it exists to protect.

To understand the practical implications, consider what "same host" actually means in DigitalOcean's architecture:

  • Your Droplet runs in your account.
  • The backup of that Droplet is stored in your account.
  • The billing, the IAM permissions, the API keys, and the account status all apply to both.

This is different from storing a backup on a separate disk. This is storing the backup in a logically dependent namespace. If the namespace is compromised, both fail.
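You can see this dependency directly from the command line. A minimal sketch using doctl, the official DigitalOcean CLI (the `<droplet-id>` placeholder is yours to fill in; exact output columns may vary by doctl version):

```shell
# Both commands authenticate with the same API token that controls the
# Droplet itself -- one credential set covers production and its backups.
doctl compute droplet backups <droplet-id>

# Snapshots of Droplets live under the same account and token as well
doctl compute snapshot list --resource droplet
```

If that token is revoked, or the account it belongs to is suspended, both commands fail together, which is the whole point of this section.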

A separate article in this guide covers how DigitalOcean native backup works under the hood, including what it captures and how it is stored. For this piece, the key point is architectural: same-account storage means same-account risk.

Three scenarios where same-host backup fails

Understanding the risk abstractly is one thing. These three concrete scenarios show what it looks like in practice.

Scenario 1: Account suspension

DigitalOcean can suspend accounts for several reasons: billing disputes, fraud detection triggers, or abuse reports. When a suspension happens, you lose access to the entire account. Every resource in that account becomes inaccessible. That means your production Droplets, your managed databases, and all of your backups.

The support cycle for reinstating a suspended account can take hours or days. If the suspension came from a billing dispute — say, an expired payment method that triggered a balance flag — you may have advance warning. If it came from an automated fraud detection trigger or an abuse report, you typically do not.

If your backups are in the same account as your production systems, you cannot restore to anywhere during the suspension. You are waiting for the ticket to resolve while your application is offline.

Scenario 2: Accidental deletion and propagation

IAM misconfiguration is one of the most common sources of data loss in cloud infrastructure. An overpermissioned role, a runbook that targets the wrong resource, a disgruntled employee: these scenarios share a common outcome. If the principal with delete permissions has access to both production resources and backup resources, deletion can propagate to both.

DigitalOcean's permission model applies at the account and project level. If you or someone on your team has the ability to delete a Droplet, that same permission typically extends to the snapshots and backups of that Droplet. There is no enforced separation between "can delete production" and "cannot touch backups."
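A hedged illustration of what "no enforced separation" means in practice, again using doctl (placeholder IDs; do not run this against real resources):

```shell
# The same token that can do this...
doctl compute droplet delete <droplet-id> --force

# ...can also do this. There is no separate permission boundary
# protecting the snapshot from the credential that owns the Droplet.
doctl compute snapshot delete <snapshot-id> --force
```

Any principal, script, or attacker holding that token can erase both the data and its only copy in one session.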

This is the failure mode that catches teams who believe access control is their backup strategy. It is not. Backups stored under the same permission boundary as the data they protect inherit that boundary's weaknesses. What DigitalOcean native backup doesn't cover explores this gap in more depth, including the specific account-level operations that affect backup accessibility.

Scenario 3: Region-level outage

DigitalOcean data centers are organized by region: NYC1, NYC3, AMS3, SFO3, and so on. When you create a Droplet backup or snapshot, it is stored in the same region as the Droplet by default. You can transfer snapshots between regions manually, but it is not automatic.

DigitalOcean has experienced multi-hour outages affecting specific data centers. During a DC-level event, both the resource and its snapshots become unreachable. You cannot restore a NYC3 Droplet from a NYC3 snapshot during a NYC3 outage, because the snapshot is also in NYC3.

Cross-region snapshots help. They do not solve the account-dependency problem, but they do reduce exposure to region-level events. We cover the full cross-region backup strategy for DigitalOcean in a dedicated article on cross-region backup.
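As a sketch of the manual transfer mentioned above: Droplet snapshots are images under the hood, and doctl exposes a transfer action on them (the image ID and target region here are placeholders; verify the command against your doctl version):

```shell
# Copy a snapshot image from its home region to AMS3.
# --wait blocks until the transfer action completes.
doctl compute image-action transfer <image-id> --region ams3 --wait
```

Note that both copies remain reachable only through the same account credentials, which is why this mitigates region risk but not account risk.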

The 3-2-1 backup rule applied to DigitalOcean

The 3-2-1 rule is the oldest and most durable heuristic in backup strategy. It says: keep 3 copies of your data, on 2 different types of storage media, with 1 copy off-site. It predates cloud infrastructure, but it maps cleanly onto DigitalOcean.

Here is what the 3-2-1 rule looks like for a DigitalOcean workload:

Copy 1: Production. Your running Droplet, managed database, or Spaces bucket. This is the primary copy.

Copy 2: Native snapshot or backup. A DigitalOcean Droplet backup or managed database snapshot. This copy is stored in your account, in the same region. It satisfies the "second copy" requirement. It does not satisfy the "off-site" requirement.

Copy 3: Off-site copy. A backup stored in a different provider account or a different cloud storage account entirely. AWS S3, Backblaze B2, Wasabi, a separate DigitalOcean account you do not use for production: any of these qualifies. The critical requirement is that the access path to this copy is independent from the access path to production and Copy 2.

Most DigitalOcean setups stop at Copy 2. They have 2-1-0: two copies, one medium, zero off-site. The native backup runs, the retention window is covered, and the team feels protected. Until the account event that takes both copies down simultaneously.

The two media types requirement is also worth examining. DigitalOcean block storage and S3-compatible object storage are different storage media. If your Copy 2 is a block-storage-based snapshot and your Copy 3 is an object storage file in a separate account, you satisfy the media diversity requirement as well as the off-site requirement.

The cheapest off-site option is an S3-compatible bucket at a provider like Backblaze B2 or Wasabi. For most workloads, 30 days of daily database backups costs under $5 per month. The cost difference between "no off-site" and "compliant off-site" is often smaller than a single hour of lost engineering time during an incident.

What SOC 2 Type II actually requires for backup

SOC 2 Type II is an auditing standard that evaluates how a service organization manages customer data over time. Auditors review controls against the Trust Services Criteria defined by the AICPA. Two criteria are directly relevant to backup strategy.

CC9.1 covers risk mitigation. The criteria require that the organization identifies risks to the confidentiality, integrity, and availability of data, and implements controls to mitigate those risks. For production data, the risk of a backup failing because it shares the same access path as production is a real and documentable risk. An auditor reviewing your backup posture will ask how that risk is mitigated. "We have native backups enabled" is a partial answer. "Our backups are stored off-site in an independent account" is the answer that closes the finding.

A1.2 covers availability commitments. The criteria require that the organization has the processing capacity and data recovery capabilities to meet its availability commitments. A backup that cannot be accessed during the most likely recovery scenarios (account suspension, region outage) is a backup that fails the availability test.

Same-account backups satisfy "a backup exists." They do not satisfy "the backup is independent of the production environment." SOC 2 Type II auditors are experienced enough to ask whether the backup is accessible in the failure scenarios you have documented. If the answer is "it depends on the account being accessible," that is a finding.

This does not mean you need a 14-page disaster recovery plan to pass a SOC 2 audit. It means your backup posture needs to demonstrate independence, not just existence.

The practical implementation: take your DigitalOcean database backups and push them to a separate cloud storage account. Use server-side encryption. Document the retention policy. Test a restore. That combination closes the relevant SOC 2 findings in most engagements. See how SimpleBackups compares to native DigitalOcean backup for a side-by-side of what each approach covers against these criteria.

What ISO 27001 expects

ISO 27001 is an information security management standard. Its Annex A controls are a catalog of security measures that organizations implement and document. The control relevant here is A.12.3.1: Information backup.

The control requires that backup copies of information, software, and system images are taken and tested regularly in accordance with an agreed backup policy. The standard says backups should be stored separately from the primary systems to prevent damage affecting both. The word "separately" is doing significant work in that sentence.

ISO 27001 auditors and certification bodies interpret "separately" as physically and logically separate. A backup in the same DigitalOcean account as production is in the same logical namespace. Same account, same billing, same API access, same suspension risk. Most auditors will flag this as insufficient separation, particularly for organizations seeking certification rather than just self-assessment.

The separation requirement means a different account, not just a different region within the same account. A cross-region DigitalOcean snapshot is better than a same-region snapshot for availability reasons, but it does not satisfy A.12.3.1 if both copies are accessible via the same account credentials.

The practical path: store backups in object storage outside the DigitalOcean account. Document this as a control implementation in your Statement of Applicability. Annotate it with the tool you use, the destination bucket, the retention window, and the test restore schedule.

ISO 27001 requires evidence, not just claims. "We have a backup policy" is not evidence. "We have monthly restore test logs showing successful recovery from off-site backup" is evidence. Make sure your backup process generates audit-ready logs, and that those logs are retained somewhere an auditor can review them.
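What an audit-ready log entry could look like, as a minimal sketch; the file name and field names here are illustrative, not a standard:

```shell
# Append one structured line per restore test; retain the file as evidence.
LOG_FILE="restore-tests.log"
printf '%s restore-test source=offsite-bucket db=app-production status=success duration_min=14\n' \
  "$(date -u +%Y-%m-%dT%H:%M:%SZ)" >> "$LOG_FILE"
```

The specific fields matter less than consistency: a dated, append-only record of who restored what, from where, and whether it worked.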

What GDPR says about backup and recovery

GDPR's requirements for backup are less prescriptive than ISO 27001 but the underlying obligation is clear. Article 32 requires that controllers and processors implement technical and organizational measures to ensure a level of security appropriate to the risk, including "the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident."

That phrase, "restore the availability and access to personal data in a timely manner," is the test. A backup strategy that cannot restore access to personal data during an account suspension or region outage does not satisfy that requirement.

Same-host backups can fail together with production in exactly those scenarios. If your DigitalOcean account is suspended or a data center goes offline, you cannot restore access to personal data from a backup stored in the same account and region. That is a gap.

GDPR-compliant DigitalOcean backup covers the full compliance picture in more depth, including data residency, storage location requirements for EU data, and how to document your backup posture for GDPR accountability obligations. For this article, the key point is that GDPR's availability requirement implies backup independence, not just backup existence.

Framework     | Relevant control | What it requires                                                                           | Same-account backup satisfies it?
SOC 2 Type II | CC9.1, A1.2      | Off-site, independent backup; availability in documented failure scenarios                 | Partial (backup exists, not independent)
ISO 27001     | A.12.3.1         | Separate storage from primary systems; tested regularly                                    | No (same account = same logical namespace)
GDPR          | Article 32       | Ability to restore personal data in a timely manner after a physical or technical incident | No (same-account failure = simultaneous)

Designing an off-site strategy for DigitalOcean

The architecture for DigitalOcean off-site backup is not complex. The goal is to add one independent copy that is stored in a different account, accessible via different credentials, and not subject to the same account-level failure modes as your production systems.

Here is the pattern:

Step 1: Keep native backups. Continue running DigitalOcean's native Droplet backups and managed database snapshots. These are your Copy 2: fast, cheap, and good for quick restores from configuration changes, failed deployments, or minor data issues. Do not remove them.

Step 2: Add an automated export. For each critical resource, set up an automated job that exports the data to an S3-compatible bucket in a separate account. For managed databases, this means exporting a dump file: pg_dump or mysqldump, compressed and encrypted, sent to your off-site bucket. For Droplets running application data, this typically means exporting the relevant data directories, not a full server image. For Kubernetes workloads with persistent volumes, native DigitalOcean backup covers nothing at all — backing up DigitalOcean Kubernetes covers the Velero-based approach for getting DOKS persistent volume data into off-site storage.

Step 3: Use separate credentials. The credentials for the off-site bucket should be stored separately from your DigitalOcean API keys. If your DigitalOcean account is compromised, the attacker should not be able to reach your off-site backup using the same credential set. Store bucket access keys in a secrets manager that is not integrated with your DigitalOcean project.
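One way to keep the credential sets apart is a dedicated AWS CLI profile for the off-site bucket, stored outside anything that touches DigitalOcean. A sketch (profile name and environment variable names are illustrative):

```shell
# Create a named profile holding only the off-site bucket keys.
# These keys should never appear alongside DigitalOcean API tokens.
aws configure set aws_access_key_id     "${OFFSITE_ACCESS_KEY}" --profile offsite-backup
aws configure set aws_secret_access_key "${OFFSITE_SECRET_KEY}" --profile offsite-backup

# The backup job then references the profile explicitly:
aws s3 ls "s3://<off-site-bucket>/" --profile offsite-backup \
  --endpoint-url "${OFFSITE_S3_ENDPOINT}"
```

The design goal: compromising the DigitalOcean token must not hand over the off-site keys, and vice versa.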

Step 4: Test the restore. An untested backup is not a backup. Schedule a quarterly restore test for each critical resource. The restore should run from the off-site copy, not the native snapshot. Document the result: how long it took, whether the data was complete, whether the application came up cleanly. These test logs become your compliance evidence for SOC 2 and ISO 27001. For a repeatable, automated approach to this, see automating DigitalOcean backup verification.
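A restore test following the export pattern in this article could look like the following sketch. It assumes a dump produced by the script below; the bucket path, dump name, and table name are placeholders, and the scratch database is deliberately not production:

```shell
# Pull the latest encrypted dump from the off-site bucket
aws s3 cp "s3://<off-site-bucket>/postgres/<latest-dump>.gpg" /tmp/restore-test.dump.gpg \
  --endpoint-url "${OFFSITE_S3_ENDPOINT}"

# Decrypt with the same symmetric passphrase used at backup time
gpg --batch --yes --passphrase "${BACKUP_PASSPHRASE}" \
  --decrypt --output /tmp/restore-test.dump /tmp/restore-test.dump.gpg

# Restore into a throwaway database, never the live one
createdb restore_test
pg_restore --dbname=restore_test --no-acl --no-owner /tmp/restore-test.dump

# Spot-check a critical table before declaring the test a success
psql -d restore_test -c "SELECT count(*) FROM <critical-table>;"
```

Record the elapsed time and the spot-check result; that record is the compliance evidence mentioned above.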

# Example: export a DigitalOcean managed Postgres database and ship to S3-compatible off-site storage
# Replace values in angle brackets with your environment's specifics
# PGPASSWORD and BACKUP_PASSPHRASE are expected in the environment

set -euo pipefail

TIMESTAMP=$(date +%Y%m%d-%H%M%S)
DUMP_FILE="db-backup-${TIMESTAMP}.dump"

# Dump the database (custom format, restorable with pg_restore)
pg_dump \
  --host=<managed-db-host> \
  --port=25060 \
  --username=<db-user> \
  --format=custom \
  --no-acl \
  --no-owner \
  --file="/tmp/${DUMP_FILE}" \
  <db-name>

# Encrypt the dump (GPG symmetric, passphrase from env)
gpg --batch --yes --passphrase "${BACKUP_PASSPHRASE}" \
  --symmetric --cipher-algo AES256 \
  --output "/tmp/${DUMP_FILE}.gpg" \
  "/tmp/${DUMP_FILE}"

# Ship to off-site bucket (using AWS CLI against any S3-compatible endpoint)
aws s3 cp "/tmp/${DUMP_FILE}.gpg" \
  "s3://<off-site-bucket>/postgres/${DUMP_FILE}.gpg" \
  --endpoint-url "${OFFSITE_S3_ENDPOINT}"

# Clean up local copies
rm "/tmp/${DUMP_FILE}" "/tmp/${DUMP_FILE}.gpg"

The --endpoint-url flag lets you point the AWS CLI at any S3-compatible provider: Backblaze B2, Wasabi, Cloudflare R2, a second DigitalOcean Spaces account. Pick the one that fits your latency, cost, and compliance requirements.

This script runs well as a cron job on a small Droplet or in a GitHub Actions workflow. The key requirement is that it runs from an environment that has network access to your managed database endpoint and credentials for the off-site bucket.
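For the cron option, the entry is one line. The script path, schedule, and log file here are illustrative:

```shell
# Run the off-site export daily at 03:15, appending output to a log
# that doubles as a record of whether the job actually ran.
15 3 * * * /opt/backup/do-pg-offsite.sh >> /var/log/offsite-backup.log 2>&1
```

Whatever scheduler you use, route the output somewhere monitored: a silently failing backup job is the worst outcome this article describes.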

If you want to run this without maintaining a separate cron server, DigitalOcean Functions (serverless) can trigger the export on a schedule. You do not need a dedicated server for the backup job — you need a process that runs outside the resource being backed up.

The cost of off-site vs the cost of losing everything

Teams sometimes hesitate on off-site backup because it introduces a new line item. The number is usually small, but it is a visible cost and visible costs attract scrutiny.

Here is what the math actually looks like.

A 20 GB managed PostgreSQL database, backed up daily and compressed to roughly 30% of its uncompressed size, produces around 6 GB of backup data per day. At 30-day retention, that is 180 GB of storage. At Backblaze B2's listed rate of $0.006/GB/month at the time of writing, that is roughly $1.08 per month. Wasabi's pricing model lands in a similar range. AWS S3 Standard is higher, but still under $5/month for this workload.
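The arithmetic is easy to reproduce; the inputs below mirror the example figures in the text, and the per-GB price is an assumption to check against current provider pricing:

```shell
# Back-of-envelope off-site storage cost for the worked example above.
# price_per_gb is an assumed list price -- verify before relying on it.
awk 'BEGIN {
  db_gb = 20; compression = 0.30; retention_days = 30; price_per_gb = 0.006
  daily_gb  = db_gb * compression            # 6 GB per daily backup
  stored_gb = daily_gb * retention_days      # 180 GB at 30-day retention
  printf "daily=%.0fGB stored=%.0fGB cost=$%.2f/mo\n", daily_gb, stored_gb, stored_gb * price_per_gb
}'
# -> daily=6GB stored=180GB cost=$1.08/mo
```

Swap in your own database size and retention window to get a defensible number for the budget conversation.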

Compare that to the alternative. A four-hour production outage for a typical B2B SaaS team has costs that include: engineer time on incident response, support queue from customers, potential SLA breach penalties, and reputation damage that is harder to quantify but real. A data loss event is worse: you are not just recovering from outage, you are rebuilding state that cannot be regenerated.

We are not inventing catastrophic scenarios. We are naming the one risk that a $2/month object storage bucket eliminates.

The hesitation around backup cost is usually not about the number. It is about the setup friction: creating accounts at a new provider, configuring credentials, writing and maintaining the export script, scheduling the job, watching for failures. That friction is real. It is also exactly the friction that off-site backup tooling exists to remove.

The reason we built SimpleBackups for DigitalOcean is exactly the gap this article describes: native backup covers the easy part, and none of the hard part. If you want off-site, encrypted, tested backups for your Droplets, managed databases, and Spaces — without writing and maintaining the infrastructure yourself — that is what it does.

Practical close

The checklist for getting off-site backup right on DigitalOcean is short.

First, inventory your critical resources: which Droplets, managed databases, and Spaces buckets must be recoverable. Not everything needs the same level of protection. Start with the data you would lose sleep over.

Second, confirm your Copy 2: native backups should be enabled for Droplets (the add-on costs 20% of the Droplet price) and managed databases include daily backups in the plan. These are baseline. Verify they are running.
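Verifying the baseline can be a one-liner. A sketch with doctl (column names may differ slightly between doctl versions):

```shell
# Droplets with backups enabled list "backups" in their Features column,
# so this gives a quick inventory of which Droplets have Copy 2 at all.
doctl compute droplet list --format Name,Region,Features
```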

Third, add Copy 3: an off-site export to a bucket in a separate provider account. Use the script pattern above or a tool that does it for you. The key constraint is that the off-site copy is accessible via different credentials than your DigitalOcean account.

Fourth, schedule a restore test. Put it in the calendar for next month. Run it. Document it. This converts your backup from a feeling of safety into actual safety.

Fifth, if compliance is in scope: document the off-site control, map it to the relevant framework criteria, and make sure your restore test logs are retained. A control with no evidence is a control that does not exist in an audit.

You can build all of this in an afternoon. The value persists indefinitely.

FAQ

Does DigitalOcean backup satisfy SOC 2 requirements?

Partially. Native DO backups satisfy the "backup exists" criteria (CC9.1 risk mitigation) but do not satisfy the "backup is independent of the production environment" expectation. SOC 2 Type II auditors review whether backups are accessible during the failure scenarios you have documented. If both production and backup fail simultaneously under an account-level event, that is an audit finding. Off-site, independently-accessed copies are what close the gap.

What is the 3-2-1 backup rule?

The 3-2-1 rule requires three copies of your data, stored on two different types of storage media, with one copy stored off-site. Applied to DigitalOcean: Copy 1 is your production resource, Copy 2 is the native DO snapshot or backup in the same account, Copy 3 is an off-site export in a separate cloud provider account. Most teams have Copy 1 and Copy 2. Copy 3 is what the rule is actually about.

Can I store DigitalOcean backups in AWS S3?

Yes. DigitalOcean does not provide a native option to direct its backup process to external storage, but you can export your own backups using standard database dump tools (pg_dump, mysqldump) and ship the resulting files to any S3-compatible bucket. AWS S3, Backblaze B2, Wasabi, and Cloudflare R2 all work. The export needs to be orchestrated outside of DigitalOcean's native backup feature, either with a script, a cron job, or a tool like SimpleBackups.

What happens to my backups if my DO account is suspended?

Your backups become inaccessible at the same time as your production resources. Account suspension is an account-level event. It affects every resource and every backup in the account simultaneously. The only backups accessible during a DigitalOcean account suspension are those stored in a separate account at a different provider. This is the defining reason that same-account backup does not satisfy availability requirements in compliance frameworks.

Is same-account backup considered off-site?

No. "Off-site" in the 3-2-1 rule means stored in a location that is physically and logically independent from the primary location. Two resources in the same DigitalOcean account are in the same logical namespace, share the same account status, and fail under the same account-level events. "Off-site" requires a different account, and ideally a different provider. A different region within the same account reduces region-level outage risk but does not address same-account risk.


This article is part of The complete guide to DigitalOcean backup, an honest, practical reference from the team that backs up DigitalOcean every day.