
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM and BTRFS dedup make the marginal cost near zero.
Snapshot your staging container once. Each new PR clones the snapshot into its own container with its own URL. The container wakes when a reviewer opens the link, naps when no one is watching, and gets destroyed by a one-line cron when the PR closes. Sixty branches, sixty URLs, and a flat bill.
Screenshot: the URL in the PR's status checks is the preview environment — one click, real container, real database.
Three calls. One cron line. The CI pipeline you already have triggers them in exactly the order you'd script by hand.
Pick the container that runs your staging stack — app, database, queues, fixtures. POST a snapshot and name it staging-base. Files, processes, and memory are captured. The snapshot is a copy-on-write starting point, not a tarball — clones share its pages instead of copying them.
Your CI gets the GitHub push webhook and POSTs to the containers API with source_snapshot=staging-base. A new container boots in seconds with the seeded database and the PR's branch checked out. The URL goes back as a status check.
A 5-minute cron entry walks merged PRs and DELETEs their containers — or your merge webhook does it inline. The container's disk delta is reclaimed, the URL is freed, and the container slot returns to the pool for the next PR.
Step 02 takes about as long as a yarn install. Step 03 is one HTTP call. Nothing else has to know that the PR's container existed.
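The step-03 cleanup can be sketched in a few lines. A minimal sketch under assumptions: the base URL, container IDs, and the `cleanup` helper are all hypothetical illustrations of the pattern, not real Hoody client code; the real cron entry would run a script like this every 5 minutes and issue one DELETE per stale container.

```python
# Sketch of the step-03 cleanup pass (hypothetical names throughout).
# Cron line:  */5 * * * *  /usr/local/bin/cleanup-previews

API = "https://api.hoody.example"  # placeholder base URL

def delete_url(pr_container_id: str) -> str:
    """DELETE target for a PR's preview container."""
    return f"{API}/api/v1/containers/{pr_container_id}"

def cleanup(open_prs: set, containers: dict) -> list:
    """Return DELETE URLs for containers whose PR is no longer open."""
    return [delete_url(cid) for pr, cid in containers.items() if pr not in open_prs]

# A PR map as the cron would see it: PR 7 merged, 8 and 9 still open.
urls = cleanup(open_prs={8, 9}, containers={7: "ctr-a1", 8: "ctr-b2", 9: "ctr-c3"})
print(urls)  # only PR 7's container is torn down
```

The merge webhook can call the same `delete_url` inline; the cron pass only mops up PRs that closed while no webhook fired.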
Three real endpoints from the Hoody Containers and Snapshots APIs. Drop them into the GitHub Actions step you already have.
POST /api/v1/containers/[staging_id]/snapshots
Body: { "alias": "staging-base", "expiry": 30 }. Returns a snapshot name like snap-20260501-093000. Run this once per main-branch deploy — every PR clone descends from the most recent capture.
POST /api/v1/projects/[project_id]/containers
Body picks server_id and a container_image; pass environment_vars to inject the PR number, branch ref, and database name. The container boots from your snapshot's filesystem, not from scratch — caches and seed data are already there.
DELETE /api/v1/containers/[pr_container_id]
One call. The container shuts down and its disk delta is reclaimed; nothing else has to be torn down. A cron entry on a 5-minute cadence handles the PRs that closed while no one was watching.
Endpoints from the Hoody Container Snapshots API and Containers API. Snapshot expiry is in days; container creation accepts environment_vars, ssh_public_key, autostart, ramdisk, and realm_ids — see the docs for the full request schema.
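Strung together, the three requests compose like this. A sketch, not client code: the base URL and every ID are placeholders, and each function only builds the method, path, and JSON body described above; the actual send goes through curl or your HTTP client, with the full request schema in the docs.

```python
import json

API = "https://api.hoody.example"  # placeholder base URL

def snapshot(staging_id, alias, expiry_days):
    # 01: capture the staging container (expiry is in days)
    return ("POST", f"{API}/api/v1/containers/{staging_id}/snapshots",
            json.dumps({"alias": alias, "expiry": expiry_days}))

def create(project_id, server_id, image, snapshot_alias, env):
    # 02: boot a PR container from the snapshot's filesystem
    body = {"server_id": server_id, "container_image": image,
            "source_snapshot": snapshot_alias, "environment_vars": env}
    return ("POST", f"{API}/api/v1/projects/{project_id}/containers", json.dumps(body))

def destroy(pr_container_id):
    # 03: one call; the disk delta is reclaimed
    return ("DELETE", f"{API}/api/v1/containers/{pr_container_id}", None)

method, url, body = create("proj-1", "srv-9", "hoody/staging", "staging-base",
                           {"PR_NUMBER": "42", "BRANCH": "fix/login"})
print(method, url)
```

Your CI step calls `create` on the push webhook and posts `url`'s response back as the status check; the cron pass calls `destroy`.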
The math stops gating reviewer behaviour. Three habits that were too expensive at $40 a preview show up by themselves once the per-PR cost is rounding error.
Nobody checks out the branch locally to repro the bug. They open the URL, click the broken thing, leave a screenshot in the PR. The review loop runs on what the code actually does, not what the diff suggests it does.
The fifty PRs no one is currently reviewing cost zero CPU and zero RAM. They share the staging-base snapshot's pages on disk, so even their footprint is mostly the delta. The bill is bounded by the box, not by the count of open PRs.
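The disk math behind that claim, with illustrative numbers: a 20 GB staging-base and roughly 300 MB of writes per clone are assumptions for the sketch, not measured figures.

```python
# Copy-on-write disk footprint: clones share the snapshot's pages,
# so total usage is one base image plus per-clone deltas.
BASE_GB = 20.0   # assumed size of the staging-base snapshot
DELTA_GB = 0.3   # assumed writable delta per PR clone

def footprint_gb(clones: int) -> float:
    return BASE_GB + clones * DELTA_GB

naive_gb = (1 + 60) * BASE_GB  # what 60 full copies plus staging would cost
cow_gb = footprint_gb(60)
print(cow_gb, naive_gb)  # 38.0 vs 1220.0
```

The gap is why the 30th or 60th preview doesn't move the bill: disk grows with the deltas, not with the clone count times the base image.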
Your designer, your support engineer, your sales lead — anyone with the URL can poke at the PR. They were never going to git checkout a branch. With a link, they actually look at the change before it lands.
A team opening 30 PRs a month. The before number is the standard preview-environment bill. The after number is one Hoody bare-metal box that fits all 30 plus your staging.
$480/mo
Per-seat Pro pricing plus build minutes plus bandwidth on a 6-engineer team running 30 PR deploys a month. Most teams cap previews at the active 10 because the next 20 cost real money.
$30/mo
One mid-tier server from the Hoody marketplace runs staging plus 30 PR clones plus your CI cache. Add the next sixty for $0 — copy-on-write means each clone is the snapshot's pages plus a delta.
Vercel Pro list pricing is $20/seat/month plus usage; Hoody bare-metal entry pricing starts at $29/month and varies by spec, region, and rental duration. Container density depends on workload — typical web apps pack tens to hundreds; large stateful services need more headroom.
Preview environments stop being a budget item. They become the default.
Preview-environment products price per seat, per minute, or per running container. Hoody prices per server — the 30th preview costs the same as the 1st.
Snapshot once. Clone per PR. Destroy on merge. The reviewer never feels the seam.