
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM and BTRFS dedup make the marginal cost near zero.
On AWS, staging dies because every idle hour is a billable hour. On Hoody, idle containers eat disk and zero CPU — so the staging your reviewer touched three weeks ago is still there, with the exact state they left it in. The graveyard turns into a working set.
five containers · ~54 active days total · five environments still resolvable at a URL
Three states, one container row, one bill. The active state burns CPU. The idle state burns nothing. The wake state takes a few hundred milliseconds and your staging is back exactly the way you left it.
Your teammate is logged in, exercising the new endpoint, watching the dashboard. The container's processes are scheduled, its memory pages are hot, its CPU time is real. The flat-rate server is doing its job.
The container is suspended. Its filesystem still resolves, its disk delta still exists, its proxy domain still answers. KSM dedupes the RAM pages and BTRFS dedupes the disk blocks across containers on the same server, so the idle marginal cost is structurally near zero: it adds nothing to the flat-rate server price you already pay.
The first request that arrives wakes the container. The same container ID, the same env vars, the same volumes, the same SSH host. The state your reviewer left behind is the state that comes back. No restore script, no fresh provision, no day rebuilding what you deleted.
Hoody bills the server, flat-rate. The idle state is the rest of the container's life — and it is the state where every staging environment lives most of the time. KSM and BTRFS dedup mean idle containers add nothing to that server price.
Once idle is free, you stop making the decisions that staging was making for you.
The environment your reviewer used three weeks ago is still there, suspended, addressable by container ID. The CFO doesn't see it on the bill because it isn't on the bill. The conversation that used to end in 'nuke two of three' doesn't happen.
The reviewer pings the URL, the container wakes, their session resumes. No fresh provision, no seed data, no waiting for a Heroku dyno to come back from sleep. The previous afternoon's work is the next afternoon's starting point.
Last quarter's launch staging, the abandoned payments rebuild, the customer-specific demo from Q4 — they all stay alive at zero cost. When somebody asks 'do we still have that environment?' the answer is yes.
The line items on the AWS invoice for an always-on staging fleet, and what those line items collapse into when idle costs nothing.
The CFO doesn't ask about the three idle environments because they don't appear. The conversation about deleting them never starts.
Numbers come from the Hoody Containers API and the snapshot model — not from invented benchmarks.
An idle container adds no per-hour charge. You pay for the bare metal server — flat-rate. KSM and BTRFS dedup mean idle containers fold into the server you already rent.
Snapshots are content-addressed and stored as deltas. The base image is shared across every container that descended from it. Storage is included in the flat-rate server price — no separate per-delta charge.
GET /api/v1/containers/[id] resolves the suspended container. The first request that touches its proxy domain wakes it; the state it had when you stopped watching is the state that returns.
Per the Hoody Containers API: containers persist as rows with snapshot_count and last_used_snapshot fields. Snapshot retention defaults to your project's policy; expires_at is configurable per snapshot.
Staging gets to live, because letting it live no longer costs anything.
The standard always-on staging stack, plus the cron jobs and tribal knowledge that grow up around it. Every piece of that stack charges by the hour. Hoody bills the server, flat-rate; for staging environments that sit idle most of the time, the marginal cost is structurally nothing.
Stop deleting environments to save money. The graveyard is now a working set.