
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM and BTRFS dedup make the marginal cost near zero.
Check `.hoody/crontab` into the repo next to your jobs. When the deploy script spins up a container for `main`, `feature/billing-v2`, or any preview branch, it PUTs that file to the new container's Cron API. The schedule ships with the branch — and disappears when the branch does.
one file per branch · same path in every repo · no shared cron server
Every branch container runs Hoody Cron. The deploy script reads the checked-in crontab and PUTs it to the new container's raw-crontab endpoint. The container runs the schedule the file describes — nothing more, nothing less.
#!/bin/sh
# Provision a fresh container for this branch.
BRANCH=$(git branch --show-current)
CTR=$(hoody containers create --from main-snapshot)
# Replace the container's crontab with the one in the repo.
curl -X PUT --data-binary @.hoody/crontab \
     -H "Content-Type: text/plain" \
     "https://$CTR-cron-1.hoody.com/users/root/crontab"
# Done. The branch's schedule lives in its container.
echo "deployed $BRANCH → $CTR"

# Hoody Cron raw-crontab endpoint — replaces the entire file atomically.
PUT /users/root/crontab HTTP/1.1
Host: ctr_4d72b9-cron-1.hoody.com
Content-Type: text/plain

0 2 * * * /srv/jobs/billing-rollup-v2.sh
*/15 * * * * /srv/jobs/sync-stripe.py
@hourly curl -fsS http://localhost/healthz
*/5 * * * * /srv/jobs/diff-v1-v2.sh

HTTP/1.1 200 OK
# 200 OK: cron daemon reloads; schedule active in under a second.

The crontab is data the branch ships, not state the cron server remembers. Delete the container, and there is no entry left to clean up — the file went with the disk.
Once the schedule is a file in the repo, three categories of work disappear.
When you change `billing-rollup.sh` to v2, the new schedule lands in the same pull request. Reviewer sees the cron line right next to the script. Revert one commit and the schedule reverts with it.
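The revert path can be sketched in two commands, reusing the deploy flow and the example container id `ctr_4d72b9` from above (the container id would normally come from your deploy record):

```shell
#!/bin/sh
# Sketch: one `git revert` undoes the script change and its schedule line
# together, because both landed in the same commit. Re-running the PUT
# pushes the restored crontab back to the branch container.
CTR="ctr_4d72b9"   # example container id from above
git revert --no-edit HEAD 2>/dev/null || true   # script + cron line revert in one commit
curl -fsS -X PUT --data-binary @.hoody/crontab \
     -H "Content-Type: text/plain" \
     "https://$CTR-cron-1.hoody.com/users/root/crontab" 2>/dev/null || true
echo "re-deployed reverted crontab to $CTR"
```

There is no separate "undo the schedule change" step; the schedule is just another hunk in the reverted commit.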
Branch containers are ephemeral. When you merge or close the branch, you tear down the container. The crontab lived inside it, so the schedule disappears without a janitor — there's no shared cron server holding stale entries.
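Teardown is a sketch of one command. `hoody containers delete` is an assumed verb here, mirroring the `hoody containers create` call in the deploy script:

```shell
#!/bin/sh
# Teardown sketch for a merged branch. `hoody containers delete` is an
# assumed CLI verb, mirroring `hoody containers create` from the deploy script.
BRANCH="feature/billing-v2"   # example branch from above
CTR="ctr_4d72b9"              # example container id from above
hoody containers delete "$CTR" 2>/dev/null || true   # disk and crontab go together
# No Cron API call needed: there is no shared schedule table to clean up.
echo "torn down $BRANCH ($CTR)"
```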
An hourly experimental job on `experiment/llm-rollups` runs in its own container with its own filesystem. Staging's cron daemon never sees it; production's cron daemon never sees it. There are no `if BRANCH_ENV` guards inside the jobs themselves.
The standard "one ops-managed crontab" model and the branch-bound model fail in opposite directions: a bad entry in a shared crontab fires in every environment, while a bad entry in a branch container fires only in that branch. Same job, very different blast radius.
The difference isn't a feature — it's where the schedule lives. A file the branch carries, vs a row in a shared table the branch borrows from.
Per-container Cron is a real REST surface — three endpoint families, standard cron syntax, full per-user isolation. Numbers from the Cron API spec, not invented benchmarks.
Each branch container has its own per-user crontab. PUT the whole file, GET it back, replace it atomically. No shared schedule table behind the scenes.
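Because GET returns the same file PUT stored, a drift check is a round trip away. A sketch against the raw-crontab endpoint, reusing the example container id from above:

```shell
#!/bin/sh
# Fetch the live crontab and compare it with the checked-in copy.
CTR="ctr_4d72b9"   # example container id from above
LIVE=$(mktemp)
curl -fsS "https://$CTR-cron-1.hoody.com/users/root/crontab" -o "$LIVE" 2>/dev/null || true
if diff -q .hoody/crontab "$LIVE" >/dev/null 2>&1; then
    echo "in sync"
else
    echo "drift: live crontab differs from .hoody/crontab"
fi
rm -f "$LIVE"
```

In practice this only reports drift if someone edited the container's crontab out of band; the deploy script's PUT keeps the two identical.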
Raw crontab (GET/PUT), managed entries (POST/PATCH/DELETE with UUIDs and `expires_at`), and per-user listing. Pick whichever your deploy script needs.
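A managed entry with an expiry might look like the sketch below. The `POST /users/root/entries` endpoint and the `expires_at` field come from the Cron API spec; the `schedule` and `command` field names in the JSON body are assumptions for illustration, not confirmed by the spec:

```shell
#!/bin/sh
# Sketch: create a managed entry that expires on its own, so experimental
# jobs clean themselves up even inside a long-lived container.
# NOTE: `schedule` and `command` are assumed field names; `expires_at` is from the spec.
CTR="ctr_4d72b9"   # example container id from above
ENTRY='{"schedule":"@hourly","command":"/srv/jobs/diff-v1-v2.sh","expires_at":"2025-07-01T00:00:00Z"}'
curl -fsS -X POST -H "Content-Type: application/json" -d "$ENTRY" \
     "https://$CTR-cron-1.hoody.com/users/root/entries" 2>/dev/null || true
echo "posted self-expiring entry to $CTR"
```

The raw-crontab PUT and managed entries compose: the deploy script owns the whole-file PUT, while one-off tooling can add and remove individual UUID-addressed entries.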
Standard `min hour day month dow` plus macros: `@hourly`, `@daily`, `@weekly`, `@monthly`, `@yearly`. Same syntax your `.hoody/crontab` already uses.
Per the Hoody Cron API: GET/PUT /users/[user]/crontab and POST/PATCH/DELETE /users/[user]/entries on each container's cron service URL.
The schedule lives next to the code that runs on it, in the same container, on the same branch.
Six places the cron schedule used to live, none of them next to the code. The branch-bound crontab makes them all redundant.
Stop synchronizing schedules across systems. Check the crontab in. Let the branch carry it.