use-cases / kill-the-staging-tax / hero
CONTAINERS · SNAPSHOTS · COST

Kill the staging-server tax

Most teams pay for production, then pay again for a staging stack that looks roughly like production. On Hoody, staging is a snapshot of the production container — branched on the same bare metal when someone needs it, frozen back to disk when they don't.

use-cases / kill-the-staging-tax / mechanism

Three calls. One snapshot. No duplicate stack.

Snapshots are cheap on Hoody because the storage layer is copy-on-write. The base image is referenced, not copied. Staging shares production's pages until something diverges — then only the delta is paid for.

01
STEP 01

Take a snapshot of prod

POST /containers/$PROD/snapshots with an alias. The base image stays referenced; only the metadata is new. The call returns a snapshot name within a second.

02
STEP 02

Branch staging from it

POST /containers/$PROD/copy with source_snapshot=prod-baseline. A new container starts on the same hardware, sharing pages with prod. Writes go to a delta — staging is only billed for what it changes.

03
STEP 03

Freeze when idle

Stop the staging container when QA is done. Disk persists, RAM and CPU drop to zero. Restore in 5–15 seconds when the next ticket lands. Drift is impossible because each branch starts from a known prod state.
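Step 03 as code, in a hedged sketch: the `/stop` and `/start` endpoint names and the `c-staging-01` container ID are assumptions, not documented API paths, and the `run` helper echoes each request as a dry run instead of sending it (drop `run` to execute for real).

```shell
STAGING="c-staging-01"                 # hypothetical staging container ID
API="https://api.hoody.com/api/v1"
run() { echo "$@"; }                   # dry run: print the request, don't send it

# QA is done: freeze. Disk persists, RAM and CPU drop to zero.
run curl -X POST "$API/containers/$STAGING/stop" \
  -H "Authorization: Bearer $TOKEN"

# Next ticket lands: thaw from the persisted disk in 5-15 s.
run curl -X POST "$API/containers/$STAGING/start" \
  -H "Authorization: Bearer $TOKEN"
```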

shell · against the Hoody Containers API
POST · snapshot + copy
# 1. snapshot production — alias it so humans can find it
curl -X POST https://api.hoody.com/api/v1/containers/$PROD/snapshots \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"alias":"prod-baseline","expiry":30}'

# 2. branch staging from that exact snapshot
curl -X POST https://api.hoody.com/api/v1/containers/$PROD/copy \
  -H "Authorization: Bearer $TOKEN" \
  -d "{\"target_project_id\":\"$STAGING\",\"source_snapshot\":\"prod-baseline\"}"

# → 200 OK · staging container starts in 5–15 s, paying for the delta only

Snapshots can carry an expiry in days; cleanup is automatic. Copies can choose a different target_project_id and target_server_id, so QA can live on a separate region or sub-account without changing the recipe.
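Those knobs combine into one recipe. A sketch under assumptions: the `qa-project` and `srv-eu-1` identifiers are placeholders, and the `run` helper echoes each request as a dry run instead of sending it.

```shell
PROD="c-prod-01"                       # hypothetical production container ID
API="https://api.hoody.com/api/v1"
run() { echo "$@"; }                   # dry run: print the request, don't send it

# Snapshot with a 7-day expiry; cleanup is automatic after that.
run curl -X POST "$API/containers/$PROD/snapshots" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"alias":"qa-baseline","expiry":7}'

# Branch QA onto a different server and sub-account (placeholder IDs).
run curl -X POST "$API/containers/$PROD/copy" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"target_project_id":"qa-project","target_server_id":"srv-eu-1","source_snapshot":"qa-baseline"}'
```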

use-cases / kill-the-staging-tax / powers

What dropping the duplicate unlocks

When staging is a branch instead of a parallel rental, several recurring nuisances stop existing. The bill is only the most visible one.

BUDGET

One stack, not three

Production, staging, and QA used to be three rentals you billed separately. Now they're one container plus two cheap branches that wake up when needed. The marginal environment costs a delta, not a duplicate.

FIDELITY

Drift becomes impossible

Each staging branch starts from a real production snapshot — same OS image, same packages, same data shape, same env vars. The 'works on staging, breaks on prod' bug class is closed by construction.

SPEED

Restore in seconds, not hours

PATCH the container against an older snapshot to roll a bad deploy back, or branch a fresh QA copy off last night's backup. No rsync, no DB dump, no 90-minute provision job — just a snapshot name.
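The rollback above, sketched as a dry run. The PATCH payload shape, a single `source_snapshot` key, is an assumption borrowed from the copy call's parameters rather than a documented schema; `run` prints the request instead of sending it.

```shell
PROD="c-prod-01"                       # hypothetical container ID
API="https://api.hoody.com/api/v1"
run() { echo "$@"; }                   # dry run: print the request, don't send it

# Roll the container back to a known-good snapshot by name.
# Payload shape is an assumption; check the Containers API reference.
run curl -X PATCH "$API/containers/$PROD" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"source_snapshot":"prod-baseline"}'
```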

use-cases / kill-the-staging-tax / ledger

The monthly ledger, with the duplicate removed

Same workload — production, staging, QA — counted two ways. Once as three full rentals, once as one container with two snapshot branches.

BEFORE · DUPLICATE STACK · $702 / mo

Two EC2 m5.large instances at $0.096/hr (730 hours), plus two db.t3.medium Multi-AZ RDS instances at $0.380/hr. Staging idle most of the week; the meter doesn't care.

DELTA · WHAT GETS PAID · delta only

Staging branches off a prod snapshot. Shared pages are referenced, not duplicated. Only the bytes a QA run actually writes are billed — usually a few hundred MB instead of 100 GB.

AFTER · ONE CONTAINER · $49 / mo

One Hoody container handles prod 24×7. Staging and QA wake up from a snapshot when work needs them, freeze back to disk when it doesn't. One bill, three environments, no drift.

AWS prices use public on-demand rates for us-east-1 EC2 m5.large and RDS db.t3.medium Multi-AZ as of early 2026. Hoody bare-metal entry pricing starts at $29/month and varies by spec, region, and rental duration; snapshots and storage are included in the server price. Numbers shown are a representative comparison, not a quote.

use-cases / kill-the-staging-tax / punchline

Staging used to be a duplicate of production. Now it's a snapshot of it.

BEFORE · TWO RENTALS, ONE PURPOSE
rent prod, rent staging, sync the two by hand
two EC2s · two RDSs · two monitors · one drifting copy

AFTER · ONE CONTAINER, MANY VIEWS
snapshot $PROD → copy with source_snapshot
shared base · delta-only branches · drift-free by construction
use-cases / kill-the-staging-tax / replaces

What this replaces

The standard ways teams pay the staging tax. Each one bills you for an environment that's idle most of the week or drifts away from prod by the time you need it.

  • AWS EC2 staging duplicates · Second VM billed 730 h/month for 12 h of use
  • parallel Heroku staging dynos · Per-dyno + per-add-on bill, idle most days
  • multiple VPS for env layers · Linear cost in environments — three boxes, three bills
  • custom drift-detection tools · Tooling to find the gap between staging and prod
  • "we don't have staging" risk · The bug ships because branching prod was too expensive
  • RDS staging replicas · Second database to keep in sync, on the meter 24×7
use-cases / kill-the-staging-tax / cta

Stop renting an environment that drifts. Branch one that can't.

use-cases / kill-the-staging-tax / related

Read the others