
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. Kernel samepage merging (KSM) and Btrfs deduplication push the marginal cost of each additional container toward zero.
Your SaaS lets each customer schedule their own report generation. The naive design is one shared scheduler, customer IDs in the job payload, fingers crossed nobody starves anyone else. The Hoody design gives every tenant their own container and their own hoody-cron service.
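To make the contrast concrete, here is an illustrative sketch (neither payload is quoted from a real scheduler): the shared design threads a customer ID through every job, while the Hoody design addresses the tenant's own container and drops the ID entirely.
# Naive shared scheduler: every job payload carries a tenant ID (illustrative)
{ "tenant_id": "acme-corp", "schedule": "0 9 * * *", "command": "/usr/local/bin/digest.sh" }

# Hoody: the container is the tenant, so the hostname carries the isolation
POST acme-cron.hoody.com/users/root/entries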
Three lifecycle states, one HTTP API: POST provisions entries, cron ticks run them, DELETE suspends them. Each tenant's cron lives in its own container, with no shared queue and no noisy-neighbor risk.
Each customer container exposes the hoody-cron HTTP API. Provision with POST, verify with GET, suspend with DELETE. No shared queue, no priority lane, no scheduler config to redeploy.
# POST managed entry for acme-corp tenant
POST acme-cron.hoody.com/users/root/entries
Content-Type: application/json
{
  "schedule": "0 9 * * *",
  "command": "/usr/local/bin/digest.sh",
  "comment": "daily digest",
  "enabled": true
}

HTTP/1.1 201 Created
Content-Type: application/json
{
  "id": "7d3f2a1b-8c4e-4f9a-b2d5",
  "schedule": "0 9 * * *",
  "schedule_human": "At 09:00",
  "enabled": true,
  "user": "root"
}

The request above is the exact API call your control plane makes. The managed entries API uses UUIDs so you can target individual jobs without replacing the whole crontab. Per-user isolation means nothing about acme-corp's schedule is visible to globex-saas.
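To round out the lifecycle described above (provision with POST, verify with GET, suspend with DELETE), here is a sketch of the other two calls. The collection path mirrors the POST example; the entry-scoped DELETE path and the response bodies are assumptions for illustration, not verbatim API output.
# GET to verify acme-corp's managed entries
GET acme-cron.hoody.com/users/root/entries

HTTP/1.1 200 OK
Content-Type: application/json
[
  {
    "id": "7d3f2a1b-8c4e-4f9a-b2d5",
    "schedule": "0 9 * * *",
    "schedule_human": "At 09:00",
    "enabled": true,
    "user": "root"
  }
]

# DELETE to suspend a single entry by its UUID (entry-scoped path assumed)
DELETE acme-cron.hoody.com/users/root/entries/7d3f2a1b-8c4e-4f9a-b2d5

HTTP/1.1 204 No Content

Because the UUID scopes the call to one entry, suspending acme-corp's digest never touches the rest of their crontab, let alone anyone else's.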
One flat-rate server. Sixty tenant containers. The math is brutally simple.
When initech-inc's scrape.js hangs, acme-corp's 9am digest still fires. Different crontabs, different process trees, different filesystems.
POST a new entry and the tenant's hoody-cron service picks it up immediately. No central scheduler to reload, no broadcast to send.
When globex-saas asks why their 6pm rollup ran twice, you read one container's log — not a shared scheduler grep across nine machines.
Three axes where the old design taxes your team and the Hoody design just doesn't.
The old column is what every team writes the first time they ship multi-tenant scheduling. The new column is what you ship when the platform gives every tenant their own container by default.
What a single bare-metal Hoody box does when every customer gets their own crontab.
Sixty customer containers on one bare-metal node, each with its own hoody-cron service running. No shared scheduler to bottleneck.
From PUT request to first tick of the new schedule, observed across a fleet of 60 containers on a typical 64-core node; the call itself is sketched below.
There is literally no shared queue, priority lane, or scheduler thread that two tenants compete for. Isolation is the substrate.
Capacity numbers are typical observed values on a 64-core / 256GB bare-metal node running standard Hoody container density. Actual capacity depends on per-tenant CPU and memory budgets and the work each cron job does. The zero in cross-tenant queues is structural, not a benchmark.
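The PUT-to-first-tick figure refers to updating the schedule on an existing entry. As a hedged sketch (the entry-scoped path and the response body are assumptions, not documented API output), a schedule change looks like this:
# PUT an updated schedule onto an existing entry (path shape assumed)
PUT acme-cron.hoody.com/users/root/entries/7d3f2a1b-8c4e-4f9a-b2d5
Content-Type: application/json
{
  "schedule": "0 18 * * *",
  "command": "/usr/local/bin/digest.sh",
  "comment": "evening digest",
  "enabled": true
}

HTTP/1.1 200 OK
Content-Type: application/json
{
  "id": "7d3f2a1b-8c4e-4f9a-b2d5",
  "schedule": "0 18 * * *",
  "schedule_human": "At 18:00",
  "enabled": true,
  "user": "root"
}

Only that tenant's hoody-cron service has to notice the change; nothing is reloaded or broadcast anywhere else.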
One customer's cron can't starve another's because they aren't on the same crontab.
The architectures teams build to share one crontab across tenants. Hoody puts each tenant in their own crontab: no router, no fairness queue, no noisy neighbor.
Stop writing tenant_id everywhere. Give every customer their own container and let cron do what cron has always done, in isolation.