
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM and BTRFS dedup make the marginal cost near zero.
Eight hundred isolated containers across three bare-metal servers. Each customer gets their own filesystem, their own URL, their own kernel namespace — one flat-rate server bill, no per-tenant meter. The honest architecture is no longer the expensive one.
fleet ops dashboard · 812 tenants on 3 bare-metal nodes · one flat-rate bill, no per-tenant meter
Your billing webhook hits a Hoody Exec script. The script copies a fresh-customer container from the template snapshot, the new tenant lands on its own URL, and the fleet dashboard increments by one. Three HTTP calls, no orchestrator.
A serverless V8 isolate. The webhook URL is just a TypeScript file in scripts/1/. No Express, no server config, no container of its own.
BTRFS copy-on-write — each new container consumes only the delta from the template on the rented server. Firewall and network rules clone with the snapshot. Lands on whichever fleet server has headroom.
The signed authorize endpoint mints a one-hour container_claim. Your app redirects the customer into their own sandbox. Total signup time: under sixty seconds.
The whole pipeline is three HTTP calls. No Kubernetes operator, no namespace YAML, no cluster admin. The fleet adds tenants the same way a hash table adds entries — except every entry is a real Linux container.
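Here is what that webhook script can look like. A minimal sketch, not an official SDK: the export-default fetch handler shape, the HOODY_API base URL, the /api/v1/authorize path, the "tenant-base" template name, and every JSON field name are assumptions for illustration. Only the container copy and delete paths and the one-hour container_claim come from this page.

```ts
// scripts/1/provision.ts: a minimal sketch of the billing webhook script.
// Assumptions, flagged loudly: the fetch-style handler shape, HOODY_API,
// HOODY_TOKEN, the /api/v1/authorize path, the template name, and all JSON
// field names. Only POST .../containers/{id}/copy, the one-hour
// container_claim, and DELETE /api/v1/containers/{id} appear in the text.
const HOODY_API = "https://fleet.example.com"; // assumed base URL
const HOODY_TOKEN = "…";                       // assumed secret binding
const TEMPLATE = "tenant-base";                // assumed template container id

export default {
  async fetch(req: Request): Promise<Response> {
    // Call 1 (inbound): the billing webhook lands here.
    const event = await req.json();
    if (event.type !== "checkout.completed") return new Response("ignored");

    // Call 2: copy-on-write clone of the template container.
    const copied = await fetch(
      `${HOODY_API}/api/v1/containers/${TEMPLATE}/copy`,
      { method: "POST", headers: { Authorization: `Bearer ${HOODY_TOKEN}` } },
    );
    const { container_id, url } = await copied.json(); // the tenant's own URL

    // Call 3: mint a one-hour container_claim from the signed authorize endpoint.
    const auth = await fetch(`${HOODY_API}/api/v1/authorize`, {
      method: "POST",
      headers: { Authorization: `Bearer ${HOODY_TOKEN}` },
      body: JSON.stringify({ container_id, ttl_seconds: 3600 }),
    });
    const { container_claim } = await auth.json();

    // Your app redirects the customer into their own sandbox.
    return Response.redirect(`${url}/?claim=${container_claim}`, 302);
  },
};
```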
Competitors charge per tenant. Hoody bills per server. Once the billing unit swaps from tenant to box, the per-tenant figure shrinks as you add density, and the total bill stays flat as you grow.
Adding the next hundred tenants doesn't change the bill; it changes the divisor. KSM merges identical memory pages across containers, and BTRFS copy-on-write keeps base-image bytes shared on disk. Each new container consumes only its delta from the template, and the bill stays at the flat server rate.
Hoody server pricing is marketplace-driven and varies by region, spec, vendor, and rental duration; listings start at $29/month. The example fleet uses three nodes. Competitor estimates are illustrative ranges taken from public pricing for comparable per-tenant compute. Density assumes typical SaaS workloads: tenants that idle most of the day. Heavy databases or AI workloads need more headroom per container.
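To make the divisor concrete, here is the arithmetic as a sketch. The $99-per-node figure is assumed for illustration, not a quoted price; the node and tenant counts come from the example fleet above.

```ts
// Illustrative arithmetic only. $99/node is an assumed figure, not a quote;
// real marketplace listings start at $29/month and vary by region and spec.
const nodes = 3;
const perNodeMonthly = 99;                  // USD, assumed flat rate
const tenants = 812;                        // fleet size from the dashboard
const perTenant = (nodes * perNodeMonthly) / tenants;
console.log(`$${perTenant.toFixed(2)}/tenant/month`); // ≈ $0.37, falling as tenants grow
```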
Once isolation is cheap, the architecture stops compromising. The features your CFO used to veto become defaults.
Stripe webhook → Hoody Exec → POST /containers/$TEMPLATE/copy. The new tenant boots from the same snapshot every other tenant booted from. Identical baseline, isolated future. No tenant_id columns to thread, no shared row to forget.
DELETE /api/v1/containers/$CID. The filesystem goes, the SQLite goes, the cron jobs go, the audit log goes — because they all lived in one place. No "DELETE … WHERE tenant_id … plus 12 other tables you forgot."
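Offboarding, sketched under the same assumptions as the provisioning script; only the DELETE path comes from the text.

```ts
// A sketch of offboarding: one call removes the whole tenant.
const HOODY_API = "https://fleet.example.com"; // assumed, as above
const HOODY_TOKEN = "…";                       // assumed secret binding

async function offboard(cid: string): Promise<void> {
  const res = await fetch(`${HOODY_API}/api/v1/containers/${cid}`, {
    method: "DELETE",
    headers: { Authorization: `Bearer ${HOODY_TOKEN}` },
  });
  if (!res.ok) throw new Error(`offboard failed: ${res.status}`);
  // Filesystem, SQLite, cron jobs, audit log: all gone with the container.
  // No tenant_id sweep across a dozen tables; the container was the boundary.
}
```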
A customer's runaway script hits its container's CPU and RAM quotas. The 811 other containers on the fleet don't notice. No noisy-neighbor audits, no shared lock table, no shared connection pool — kernel namespaces do the isolation work the application layer used to fake.
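The enforcement layer under quotas like these is plain cgroup v2. memory.max and cpu.max are real kernel interface files; the per-container path and the specific limits below are illustrative, and presumably the platform writes them for you rather than you writing them by hand.

```ts
// What a kernel-level quota looks like at the cgroup v2 layer.
// The path and values are illustrative; the interface files are real.
import { writeFileSync } from "node:fs";

const cgroup = "/sys/fs/cgroup/tenant-042";         // assumed per-container cgroup
writeFileSync(`${cgroup}/memory.max`, "536870912"); // hard cap: 512 MiB of RAM
writeFileSync(`${cgroup}/cpu.max`, "50000 100000"); // half a core: 50ms per 100ms period
// A runaway loop inside this cgroup throttles itself; the other containers
// on the node keep their full share of CPU and memory.
```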
Per-tenant isolation used to cost per-tenant. Now it costs per-server.
Per-tenant isolation has historically meant either a clever WHERE clause or a per-tenant bill. Container-per-customer at fleet scale displaces both:
Eight hundred isolated tenants on the same servers your laptop replaces. The honest architecture is finally the affordable one.