
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. Kernel same-page merging (KSM) and BTRFS deduplication push the marginal cost of each additional container toward zero.
When a customer signs up, one API call provisions their isolated environment. No tenant_id columns, no namespace YAML — just a POST that returns a container URL in seconds.
Here is that flow, step by step.
Each POST to /api/v1/projects/{id}/containers spins up an isolated environment. One call, one tenant, one URL handed back to your app.
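As a sketch, that call can be built by a small helper. The endpoint path is the one quoted above; the JSON body shape (a `template` field) is an assumption for illustration, not confirmed Hoody API:

```python
import json

# Hypothetical helper: builds the provisioning request for the endpoint
# shown above. The body shape ({"template": ...}) is an assumption.
def provision_request(project_id: str, template: str) -> tuple[str, str, str]:
    path = f"/api/v1/projects/{project_id}/containers"
    body = json.dumps({"template": template})
    return ("POST", path, body)

method, path, body = provision_request("proj_42", "saas-base")
print(method, path)  # POST /api/v1/projects/proj_42/containers
```

One call per tenant; everything else in this section hangs off the URL that comes back.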
Your Stripe (or any billing) webhook hits a Hoody Exec script. No Express, no server config — just a file in scripts/.
The new container has its own filesystem, its own SQLite database, and its own ramdisk. Tenant A literally cannot see tenant B's data.
The response includes a container URL. Your app redirects the user into their own sandbox within the same signup flow.
Container network and firewall rules are copied from your template. Every new tenant starts from the same security baseline.
Stop the container and it costs nothing. BTRFS keeps only the delta from your template — disk stays cheap even at scale.
One DELETE call removes the container and all of that tenant's data. GDPR offboarding is not a script; it is a single HTTP call.
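Offboarding, sketched the same way. The `DELETE /containers/{id}` path follows the endpoint summary quoted later in this piece; the base URL is a placeholder and auth headers are omitted:

```python
import urllib.request

def offboard_request(container_id: str) -> urllib.request.Request:
    # One DELETE removes the container and everything inside it.
    # Path per this article's API summary; base URL and auth are assumptions.
    return urllib.request.Request(
        f"https://api.example-hoody.test/containers/{container_id}",
        method="DELETE",
    )

req = offboard_request("c_123")
print(req.get_method(), req.full_url)
```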
The whole flow is one webhook handler. No Kubernetes operator, no namespace YAML, no cluster admin. Three HTTP calls: webhook in, container out, URL to user.
The traditional choices were a column on every table or a fleet of VMs you could not afford. Hoody is a third shape: containers cheap enough to give one to every customer.
Multi-tenancy stops being an architecture problem. It becomes a `cp` command.
POST /containers/$TEMPLATE/copy
DELETE /containers/$CID
PATCH /containers/$CID [ env_vars ]

Per-tenant isolation has historically meant either a clever WHERE clause or an expensive cluster. Container-per-customer displaces the usual workarounds:
Idle customers cost nothing. Active ones scale on demand. The whole thing runs on a $49 bare-metal server until you have hundreds of paying users.