
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM (kernel samepage merging) and Btrfs deduplication keep the marginal cost of each additional container near zero.
Most teams pay for production, then pay again for a staging stack that looks roughly like production. On Hoody, staging is a snapshot of the production container — branched on the same bare metal when someone needs it, frozen back to disk when they don't.
Snapshots are cheap on Hoody because the storage layer is copy-on-write. The base image is referenced, not copied. Staging shares production's pages until something diverges — then only the delta is paid for.
POST /containers/$PROD/snapshots with an alias. The base image stays referenced; only the metadata is new. The call returns a snapshot name within a second.
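A minimal sketch of that call with curl; only the path comes from the text above, while the base URL, the auth header, and the exact alias field name are assumptions:
    # $HOODY_API and $HOODY_TOKEN are placeholders, not documented values.
    curl -X POST "$HOODY_API/containers/$PROD/snapshots" \
         -H "Authorization: Bearer $HOODY_TOKEN" \
         -H "Content-Type: application/json" \
         -d '{"alias": "prod-baseline"}'    # alias reused by the copy step below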
POST /containers/$PROD/copy with source_snapshot=prod-baseline. A new container starts on the same hardware, sharing pages with prod. Writes go to a delta — staging is only billed for what it changes.
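Sketched the same way; source_snapshot is taken from the text, while the name field and the rest of the request shape are assumptions:
    # Branch a staging container from the prod-baseline snapshot.
    curl -X POST "$HOODY_API/containers/$PROD/copy" \
         -H "Authorization: Bearer $HOODY_TOKEN" \
         -H "Content-Type: application/json" \
         -d '{"source_snapshot": "prod-baseline", "name": "staging"}'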
Stop the staging container when QA is done. Disk persists, RAM and CPU drop to zero. Restore in 5–15 seconds when the next ticket lands. Drift is impossible because each branch starts from a known prod state.
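Assuming start/stop endpoints in the same style (the text doesn't name them), the freeze-and-wake cycle might look like:
    # Freeze staging after QA: disk persists, RAM and CPU drop to zero.
    curl -X POST "$HOODY_API/containers/staging/stop" \
         -H "Authorization: Bearer $HOODY_TOKEN"
    # Wake it again when the next ticket lands (5-15 s per the restore figure above).
    curl -X POST "$HOODY_API/containers/staging/start" \
         -H "Authorization: Bearer $HOODY_TOKEN"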
Snapshots can carry an expiry in days; cleanup is automatic. Copies can choose a different target_project_id and target_server_id, so QA can live on a separate region or sub-account without changing the recipe.
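The same two calls with those knobs added; target_project_id and target_server_id are named in the text, while the expiry field name and all values shown are assumptions:
    # Snapshot that cleans itself up after a week.
    curl -X POST "$HOODY_API/containers/$PROD/snapshots" \
         -H "Authorization: Bearer $HOODY_TOKEN" \
         -H "Content-Type: application/json" \
         -d '{"alias": "nightly", "expiry_days": 7}'
    # Copy that lands in a separate QA project and region (placeholder IDs).
    curl -X POST "$HOODY_API/containers/$PROD/copy" \
         -H "Authorization: Bearer $HOODY_TOKEN" \
         -H "Content-Type: application/json" \
         -d '{"source_snapshot": "nightly", "target_project_id": "qa", "target_server_id": "eu-1"}'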
When staging is a branch instead of a parallel rental, several recurring nuisances stop existing. The bill is only the most visible one.
Production, staging, and QA used to be three rentals billed separately. Now they're one container plus two cheap branches that wake up when needed. The marginal environment costs a delta, not a duplicate.
Each staging branch starts from a real production snapshot — same OS image, same packages, same data shape, same env vars. The 'works on staging, breaks on prod' bug class is closed by construction.
PATCH the container against an older snapshot to roll a bad deploy back, or branch a fresh QA copy off last night's backup. No rsync, no DB dump, no 90-minute provision job — just a snapshot name.
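A hedged sketch of the rollback; the PATCH verb and path follow the text, while the request-body field name is an assumption:
    # Roll prod back to a known-good snapshot.
    curl -X PATCH "$HOODY_API/containers/$PROD" \
         -H "Authorization: Bearer $HOODY_TOKEN" \
         -H "Content-Type: application/json" \
         -d '{"restore_snapshot": "prod-baseline"}'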
Same workload — production, staging, QA — counted two ways. Once as three full rentals, once as one container with two snapshot branches.
Two EC2 m5.large instances at $0.096/hr, 730 hours a month each, plus two db.t3.medium Multi-AZ RDS instances at $0.380/hr. Staging sits idle most of the week; the meter doesn't care.
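At those list rates that works out to roughly 2 × $0.096 × 730 ≈ $140/month for the EC2 pair and 2 × $0.380 × 730 ≈ $555/month for the RDS pair, about $695/month before storage and data transfer.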
Staging branches off a prod snapshot. Shared pages are referenced, not duplicated. Only the bytes a QA run actually writes are billed — usually a few hundred MB instead of 100 GB.
One Hoody container handles prod 24×7. Staging and QA wake up from a snapshot when work needs them, freeze back to disk when it doesn't. One bill, three environments, no drift.
AWS prices use public on-demand rates for us-east-1 EC2 m5.large and RDS db.t3.medium Multi-AZ as of early 2026. Hoody bare-metal entry pricing starts at $29/month and varies by spec, region, and rental duration; snapshots and storage are included in the server price. Numbers shown are a representative comparison, not a quote.
Staging used to be a duplicate of production. Now it's a snapshot of it.
The standard ways teams pay the staging tax. Each one bills you for an environment that's idle most of the week or drifts away from prod by the time you need it.
Stop renting an environment that drifts. Branch one that can't.