use-cases / per-customer-sandboxes-fleet-scale / hero
CONTAINERS · MULTI-TENANT SAAS · FLEET SCALE

Per-customer sandboxes at fleet scale

Eight hundred isolated containers across three bare-metal servers. Each customer gets their own filesystem, their own URL, their own kernel namespace — one flat-rate server bill, no per-tenant meter. The honest architecture is no longer the expensive one.

Read the fleet docs
use-cases / per-customer-sandboxes-fleet-scale / mechanism

How a signup turns into one of 812 sandboxes

Your billing webhook hits a Hoody Exec script. The script copies a fresh-customer container from the template snapshot, the new tenant lands on its own URL, and the fleet dashboard increments by one. Three HTTP calls, no orchestrator.

01 · WEBHOOK

Stripe calls your exec endpoint

POST /api/v1/exec/scripts/1/webhooks/signup

A serverless V8 isolate. The webhook URL is just a TypeScript file in scripts/1/. No Express, no server config, no container of its own.
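As a sketch, a handler like this could live in scripts/1/. The event shape, the template id, and the planCopy helper are all assumptions for illustration, not Hoody's documented API; only the copy endpoint comes from the page:

```typescript
// Hypothetical sketch of scripts/1/webhooks/signup.ts.
// Everything except the /copy path is an assumption for illustration.

interface SignupEvent {
  type: string;       // e.g. "checkout.session.completed" (Stripe-style)
  customerId: string; // customer id pulled from the webhook body
}

interface CopyRequest {
  method: "POST";
  path: string;  // the container-copy endpoint for the template
  label: string; // tag the clone with the paying customer
}

const TEMPLATE = "tpl_fresh_customer"; // assumed template container id

// Translate a billing webhook into the single copy call described above.
export function planCopy(event: SignupEvent): CopyRequest | null {
  if (event.type !== "checkout.session.completed") return null; // ignore other events
  return {
    method: "POST",
    path: `/api/v1/containers/${TEMPLATE}/copy`,
    label: `tenant-${event.customerId}`,
  };
}
```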

02 · COPY

Script clones the customer template

POST /api/v1/containers/$TEMPLATE/copy

BTRFS copy-on-write — each new container consumes only the delta from the template on the rented server. Firewall and network rules clone with the snapshot. Lands on whichever fleet server has headroom.
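The placement rule ("whichever fleet server has headroom") can be sketched as a least-loaded pick. The node names echo the example fleet; the capacity numbers and pickNode helper are illustrative assumptions:

```typescript
// Illustrative placement sketch: route the new clone to the fleet node
// with the most free slots. Capacities are made-up numbers.

interface FleetNode {
  name: string;     // e.g. "eu-1"
  tenants: number;  // containers currently on the box
  capacity: number; // rough tenant budget for the box's RAM/CPU
}

export function pickNode(fleet: FleetNode[]): FleetNode {
  // Highest headroom wins; ties fall back to array order.
  return fleet.reduce((best, n) =>
    n.capacity - n.tenants > best.capacity - best.tenants ? n : best);
}
```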

03 · ROUTE

New URL handed back to the user

https://$PROJECT-$CID-...containers.hoody.com

The signed authorize endpoint mints a one-hour container_claim. Your app redirects the customer into their own sandbox. Total signup time: under sixty seconds.
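A minimal sketch of the claim your app hands back. Only the one-hour lifetime comes from the text above; the field names and the mintClaim helper are assumptions, not the real token format:

```typescript
// Hypothetical shape of a container_claim. Real claims are signed by the
// authorize endpoint; only the one-hour expiry is taken from the page.

interface ContainerClaim {
  containerId: string;
  issuedAt: number;  // unix seconds
  expiresAt: number; // issuedAt + one hour
}

export function mintClaim(containerId: string, now: number): ContainerClaim {
  return { containerId, issuedAt: now, expiresAt: now + 3600 };
}
```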

The whole pipeline is three HTTP calls. No Kubernetes operator, no namespace YAML, no cluster admin. The fleet adds tenants the same way a hash table adds entries — except every entry is a real Linux container.

use-cases / per-customer-sandboxes-fleet-scale / economics

The math that makes fleet-scale isolation cheap

Per-tenant stacks (Fargate tasks, Kubernetes namespaces, dedicated pods) charge per tenant. Hoody bills per server. Once the billing unit swaps from tenant to box, the per-tenant figure shrinks as you add density, and the curve flattens as you grow.

FLEET LEDGER · 812 TENANTS
# three bare-metal nodes, marketplace pricing
3 flat-rate servers · one monthly bill
# blended across eu-1, us-1, ap-1
812 tenants (287 + 304 + 221)
# the cost-per-tenant collapses
bill ÷ 812 = per-tenant cost, shrinking as density grows

Adding the next hundred tenants doesn't change the bill — it changes the divisor. KSM dedups identical memory pages across containers; BTRFS copy-on-write keeps base-image bytes shared on the server. Each new container uses only the delta from the template; billing stays at the flat-rate server.
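The divisor argument is just arithmetic; here it is made concrete. The dollar figures in the test are placeholders, not Hoody pricing:

```typescript
// Worked version of the ledger math: a flat bill divided by tenant count.
// The bill is fixed by the hardware; density only moves the divisor.

export function costPerTenant(monthlyBill: number, tenants: number): number {
  return monthlyBill / tenants;
}
```

Adding a hundred tenants to the same boxes leaves monthlyBill untouched and raises tenants, so the quotient can only fall.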

PER TENANT · OTHER STACKS
  • AWS FARGATE PER TENANT · vCPU + memory billed per task, even idle
    $8–25
  • K8S NAMESPACE PER TENANT · cluster overhead amortized across namespaces
    $3–10
  • DEDICATED TENANT POD · reserved RAM + CPU, paid hot or cold
    $5–15
  • HOODY · CONTAINER PER TENANT · one server price ÷ tenant density, bound by the box, not the count
    flat rate

Hoody server pricing is marketplace-driven: servers start at $29/month and vary by region, spec, vendor, and duration. The example fleet uses three nodes. Competitor estimates are illustrative ranges drawn from public pricing for comparable per-tenant compute. Density assumes typical SaaS workloads, tenants that idle most of the day; heavy databases or AI workloads need more headroom per container.

use-cases / per-customer-sandboxes-fleet-scale / powers

What container-per-tenant unlocks at this price

Once isolation is cheap, the architecture stops compromising. The features your CFO used to veto become defaults.

ONBOARDING

Every new customer is a `cp` away

Stripe webhook → Hoody Exec → POST /api/v1/containers/$TEMPLATE/copy. The new tenant boots from the same snapshot every other tenant booted from. Identical baseline, isolated future. No tenant_id columns to thread, no shared row to forget.

OFFBOARDING

GDPR delete is one HTTP call

DELETE /api/v1/containers/$CID. The filesystem goes, the SQLite goes, the cron jobs go, the audit log goes — because they all lived in one place. No "DELETE … WHERE tenant_id … plus 12 other tables you forgot."
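A toy contrast makes the point: in shared multi-tenancy, deletion is one pass per table you remembered; with container-per-tenant, the container is the unit of deletion. The table names and both helpers below are illustrative, not a real schema or API:

```typescript
// Toy contrast between row-scoped delete and container-scoped delete.

type SharedDb = Record<string, { tenantId: string }[]>; // table name -> rows

// Shared multi-tenancy: one filter per table, and every table must be remembered.
export function deleteSharedTenant(db: SharedDb, tenantId: string): void {
  for (const table of Object.keys(db)) {
    db[table] = db[table].filter((r) => r.tenantId !== tenantId);
  }
}

// Container-per-tenant: the whole container is the unit of deletion.
export function deleteContainerTenant(fleet: Map<string, unknown>, cid: string): void {
  fleet.delete(cid); // filesystem, SQLite, cron, audit log all go with it
}
```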

BLAST RADIUS

One tenant's bug stays inside one tenant

A customer's runaway script hits its container's CPU and RAM quotas. The 811 other containers on the fleet don't notice. No noisy-neighbor audits, no shared lock table, no shared connection pool — kernel namespaces do the isolation work the application layer used to fake.
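The blast-radius claim reduces to a per-container ceiling. A toy model, with illustrative numbers: the quota caps the runaway tenant, and no other container's budget enters the calculation.

```typescript
// Toy model of per-container quotas: the kernel enforces the ceiling,
// so one tenant's demand never reads or writes another tenant's budget.

export function clampUsage(requestedMb: number, quotaMb: number): number {
  return Math.min(requestedMb, quotaMb);
}
```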

use-cases / per-customer-sandboxes-fleet-scale / punchline

Per-tenant isolation used to cost per-tenant. Now it costs per-server.

USED TO BE
$3–25 / tenant · Fargate, namespace, or dedicated pod
NOW
one flat bill · 812 sandboxes, 3 bare-metal boxes, no per-tenant meter
use-cases / per-customer-sandboxes-fleet-scale / replaces

What this replaces

Per-tenant isolation has historically meant either a clever WHERE clause or a per-tenant bill. Container-per-customer at fleet scale displaces both:

  • AWS Fargate per tenant · vCPU + RAM billed per task, hot or cold
  • Kubernetes per-namespace · cluster + control-plane overhead per tenant
  • Shared multi-tenancy with tenant_id filtering · one forgotten WHERE leaks customer data
  • Postgres row-level security overhead · policy on every table, audit on every query
  • Dedicated tenant pods · reserved compute paid whether used or not
use-cases / per-customer-sandboxes-fleet-scale / cta

Eight hundred isolated tenants on three flat-rate servers, each priced in laptop territory. The honest architecture is finally the affordable one.

use-cases / per-customer-sandboxes-fleet-scale / related

Read the others