
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM and Btrfs deduplication make the marginal cost of each additional container near zero.
Your customers type the cron expression. You POST it to their container's crontab. There's no shared queue to fair-share, no minimum interval to enforce, no support ticket about "why my job didn't run on peak Monday."
A real customer-facing settings page — schedules edited by the tenant, parsed by their container.
Your settings page renders an input field. Their tenant container exposes a Cron API. Submit forwards one POST. No global scheduler, no per-tenant filtering logic, no "max 100 jobs across all customers" cap.
// the form submit relays the customer's expression unchanged
POST https://acme-cron.hoody.com/users/root/entries
Content-Type: application/json

{
  "schedule": "0 9 * * 1-5",
  "command": "/jobs/sync_crm.sh",
  "comment": "Sync Salesforce contacts",
  "enabled": true
}

HTTP/1.1 201 Created
Content-Type: application/json

{
  "id": "sch_8a3f1c",
  "schedule": "0 9 * * 1-5",
  "next_run": "2026-05-04T09:00:00Z",
  "enabled": true
}

// the schedule is now in this tenant's crontab and nowhere else

The Hoody Cron service runs inside each tenant container, with managed entries and per-user isolation. The schedule lives where the work runs.
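A minimal sketch of that relay, assuming an Express-style Node backend with the built-in fetch; the per-tenant hostname lookup and the settings route are illustrative, and only the /users/root/entries path comes from the exchange above.

// settings-page backend: relay the tenant's cron entry to their own container
// sketch only; containerBase and the route shape are assumptions, not Hoody APIs
import express from "express";

const app = express();
app.use(express.json());

// hypothetical map from tenant id to that tenant's container hostname
const containerBase = (tenantId: string) => `https://${tenantId}-cron.hoody.com`;

app.post("/settings/:tenantId/schedules", async (req, res) => {
  const { schedule, command, comment } = req.body;
  // forward unchanged: no validation, no global queue, no tier checks;
  // the tenant container's cron parses (and rejects) the expression itself
  const upstream = await fetch(
    `${containerBase(req.params.tenantId)}/users/root/entries`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ schedule, command, comment, enabled: true }),
    }
  );
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);

The whole feature is that one forward; everything in the response, the id and the next_run, was computed inside the tenant's own container.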
When the schedule lives next to the work, the rules of multi-tenant scheduling change. The constraints are local. The blast radius is local. The features are local.
There's no global thread pool to fight over. Your most demanding tenant runs every minute and never starves your quiet ones — they're not on the same crontab.
You stop being the gatekeeper of "is */1 * * * * allowed for your tier?" Their container, their cron, their CPU bill. Your support inbox empties out.
Snapshot the tenant container and you snapshot the crontab. Roll back, restore, fork — the schedules travel with the container. No external scheduler state to sync.
The difference shows up in three places: the customer's experience, your support load, and the engineering surface area.
The shared-scheduler version of this feature is a sea of caveats. The BYO version is a five-field input box.
Three numbers that change the day you stop running a global queue. Each maps to a feature you no longer have to write or operate.
No more tier-gated minimum interval, no max-jobs-per-tenant, no fair-share knobs. The container is the limit.
Customer types, you forward, the container parses. The settings page submit is a single REST call, not an orchestration.
minute · hour · day-of-month · month · day-of-week. Plus the common macros (@hourly, @daily). Standard POSIX fields, not your DSL.
Numbers reflect the BYO container-bound model — actual cron entries scale with each container's CPU and the customer's plan.
The customer's cron expression is the customer's, not yours to validate against a global queue.
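If the input box should still catch obvious typos before the round trip, a shape check is as far as it needs to go; the container's cron stays the authoritative parser. A sketch, where the five-field-or-macro test is an assumption about what the UI treats as plausibly cron, not a Hoody rule:

// optional front-end nicety: reject obviously malformed input before the POST
// the tenant container's cron is still the authoritative parser; this is UX only
const CRON_MACROS = new Set([
  "@hourly", "@daily", "@weekly", "@monthly", "@yearly", "@reboot",
]);

function looksLikeCron(expr: string): boolean {
  const trimmed = expr.trim();
  if (CRON_MACROS.has(trimmed)) return true;
  // five whitespace-separated fields: minute hour day-of-month month day-of-week
  return trimmed.split(/\s+/).length === 5;
}

// looksLikeCron("0 9 * * 1-5") === true; looksLikeCron("@daily") === true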
The infrastructure pieces a BYO container-bound cron quietly retires.
Stop being the gatekeeper of someone else's schedule. Hand them the cron field, hand the work to their container.