One server. Unlimited containers. $0 marginal cost.
The fundamental unit of infrastructure cost changes from per-environment to per-server. Once the server is paid for, spinning up another container is free. Experiments, branch environments, throwaway demos, per-client isolation — all become the default rather than a budget line.
Bare metal · KSM density · BTRFS dedup · 100% resource utilization · physical isolation included
From per-environment billing to per-server capacity.
The VPS model treats every environment as a recurring cost. The bare-metal-plus-containers model treats the server as a one-time capacity decision. Once you stop paying per-environment, you stop thinking in environments.
VPS — per-environment billing
- Dev, staging, prod = three line items
- Branch environment = budget conversation
- Personal sandbox = pay even when idle
- Client work = one VPS per client, every month
Hoody — per-server capacity
- One server rental covers every environment you can fit
- Branch environment = one API call, free
- Personal sandbox = spin up, use, discard
- Client work = one container per client on the same server
A concrete worked example.
The 100x Foundation doc describes a solo founder running 12 SaaS products at 3–5 containers per product. Here's the same math side-by-side. Exact numbers depend on server provider and workload — this is the shape, not the price sheet.
| Line item | Traditional VPS | Hoody |
|---|---|---|
| Server cost | $40/container × 60 = $2,400/mo | $100/mo bare-metal server |
| Adding container #61 | +$40/mo every month forever | +$0 if within server capacity |
| Idle containers | Full price anyway | Near-zero marginal footprint via KSM + BTRFS dedup |
| Dedicated hardware | Enterprise tier, ~$200–1000/mo | Included — server IS the hardware |
Costs are illustrative and depend on server provider, workload, and how much density your containers actually share. The economic shape — zero marginal cost, shared capacity, idle-is-free — holds across providers.
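The shape of that math in a few lines of shell, using the illustrative rates from the table (swap in your own provider's numbers):

```shell
#!/bin/sh
# Worked example from the table: 60 containers, illustrative rates.
containers=60
vps_rate=40        # $/container/month on a traditional VPS
server_rate=100    # $/month for one bare-metal server

vps_total=$((containers * vps_rate))
echo "VPS:   \$${vps_total}/mo, plus \$${vps_rate}/mo forever for each new container"
echo "Hoody: \$${server_rate}/mo, plus \$0 for container $((containers + 1))"
```

The crossover point is wherever `containers * vps_rate` exceeds the server rental; at these rates it arrives before the third container.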
When containers are free, experiments become the default.
Traditional infrastructure makes experimenting a conscious decision with a budget cost attached. Hoody makes experimenting the path of least resistance. This quietly changes how developers and agents work.
Per-branch environments
Every git branch gets a container. Ten open branches = ten containers = same bill as one branch.
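As a sketch, a git post-receive hook could map each pushed branch to a container. The `hoody` CLI invocation below is a hypothetical placeholder, not a documented tool; only the name-mangling function actually runs:

```shell
#!/bin/sh
# Derive a container name from a git ref (hypothetical naming scheme).
branch_env_name() {
  ref="${1#refs/heads/}"                        # e.g. feature/Login_v2
  echo "env-$(echo "$ref" | tr '/_' '--' | tr 'A-Z' 'a-z')"
}

# In a real post-receive hook (the hoody CLI is an assumption):
# while read old new ref; do
#   hoody container spawn --name "$(branch_env_name "$ref")" --image prod-snapshot
# done

branch_env_name refs/heads/feature/Login_v2     # env-feature-login-v2
```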
Parallel hypothesis testing
An AI agent trying 10 different approaches spawns 10 containers. Whichever wins is kept; the rest are deleted.
Staging that matches prod exactly
Not an approximation. Same image, same config, same snapshot source — at zero extra cost.
On-demand client demos
Spin up a demo container for a sales call. Delete after. No line item on the monthly bill.
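A minimal lifecycle sketch for that spin-up/discard loop. The API base and paths are assumptions (no public Hoody endpoint is documented here), so the functions simply print the requests they would make:

```shell
#!/bin/sh
# Hypothetical API base; every path below is illustrative.
API="https://api.hoody.example/v1"

spawn_demo() { echo "POST   $API/containers  name=$1 image=demo"; }
drop_demo()  { echo "DELETE $API/containers/$1"; }

spawn_demo client-acme     # before the sales call
drop_demo  client-acme     # after it: no line item left behind
```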
100% of the server you paid for. No noisy neighbors. No idle tax.
On a traditional VPS, you pay for dedicated resources that sit mostly idle. On a shared bare-metal box with KSM + BTRFS dedup, your containers consume only the resources they actually need. Full utilization is available when the workload demands it; nothing is wasted when it doesn't.
CPU: all cores, all containers
The Linux scheduler gives the whole machine to whichever container needs it. No per-container vCPU cap.
RAM: deduplicated via KSM
Common pages shared across containers. 60 containers might use less RAM than 10 VPS instances.
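You can read the kernel's KSM counters straight from sysfs on any Linux host (the paths are standard kernel interfaces; the counters stay at zero unless KSM is enabled):

```shell
#!/bin/sh
ksm=/sys/kernel/mm/ksm
if [ -r "$ksm/pages_sharing" ]; then
  shared=$(cat "$ksm/pages_shared")    # deduplicated pages kept once
  sharing=$(cat "$ksm/pages_sharing")  # references folded into them = pages saved
  page_kb=$(( $(getconf PAGESIZE) / 1024 ))
  echo "KSM is saving ~$(( sharing * page_kb )) KiB of RAM"
else
  echo "KSM interface not exposed on this kernel"
fi
```

A high `pages_sharing`-to-`pages_shared` ratio is the density signal: many containers mapping the same few physical pages.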
Disk: deduplicated via BTRFS
Same base image across containers = shared blocks. Storage grows with divergence, not container count.
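Reflink copies make the shared-blocks behavior visible. With GNU cp, `--reflink=auto` shares extents on BTRFS (or XFS) and silently falls back to a plain copy elsewhere:

```shell
#!/bin/sh
dir=$(mktemp -d)
head -c 1048576 /dev/urandom > "$dir/base.img"   # stand-in for a base image

# On BTRFS this clone shares all extents with base.img: ~0 extra bytes
# until the clone diverges. On ext4 it degrades to a full copy.
cp --reflink=auto "$dir/base.img" "$dir/clone.img"

cmp -s "$dir/base.img" "$dir/clone.img" && identical=1
echo "clone is byte-identical: ${identical:-0}"
rm -r "$dir"
```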
Network: no per-container quota
Your server's bandwidth is your pool. Allocate however your workload dictates.
Bare metal is the baseline, not the enterprise tier.
On a public cloud VPS, you're one tenant on a hypervisor shared with strangers. Side-channel attacks like Spectre and Meltdown become cross-tenant risks precisely because of that sharing. On Hoody, the server is yours. No shared hypervisor; no Spectre-class attack surface from other customers on the same hardware.
No shared hypervisor
Your containers share the host with each other — isolated via LXC + Firecracker. They never share a host with strangers.
Compliance isolation included
Customer data residency, HIPAA-adjacent isolation, PCI scope reduction — all follow from server ownership.
You control the hardware
Rent from OVH, Hetzner, Equinix, your own colo. Hoody runs its containers on your chosen metal.
When this model doesn't pay off.
To be honest about the cases where the economics don't flip: the per-server model shines when you can use the density. It doesn't shine for workloads that want one giant isolated container.
One-massive-container workloads
If you need a single container with dozens of CPUs and hundreds of GB of RAM, you're paying for exclusive hardware anyway. VPS at that tier may be competitive.
Highly spiky traffic
A container pegged at 100% CPU, or a fleet whose spikes all land at once, leaves no headroom for neighbors. Density-based math assumes some diversity in workload so peaks don't align.
Zero-ops requirements
If you cannot manage a bare-metal server at all, even with Hoody's tooling, managed Kubernetes or Fly.io will fit better. Some teams want no hardware decisions ever.
Edge-latency requirements
Need 20 global POPs for a CDN workload? Rent 20 Hoody servers, or use an edge-specialist provider. One bare-metal box is one geographic point.
Stop paying for environments. Pay for capacity once.
Rent a server. Spawn as many containers as you'll use. The bill stops growing with your workflow.
See also — /platform/control-plane for wallet and server rental APIs, /methods/efficiency-security for KSM + BTRFS details.