CRON · GIT · PER-BRANCH SCHEDULES

A crontab per branch, deployed with the code

Check `.hoody/crontab` into the repo next to your jobs. When the deploy script spins up a container for `main`, `feature/billing-v2`, or any preview branch, it PUTs that file to the new container's Cron API. The schedule ships with the branch — and disappears when the branch does.

Read the Cron API

How `.hoody/crontab` becomes a real schedule

Every branch container runs Hoody Cron. The deploy script reads the checked-in crontab and PUTs it to the new container's raw-crontab endpoint. The container runs the schedule the file describes — nothing more, nothing less.

deploy.sh · pushed by CI
shell · client
#!/bin/sh
set -eu
# Provision a fresh container for this branch.
BRANCH=$(git branch --show-current)
CTR=$(hoody containers create --from main-snapshot)

# Replace the container's crontab with the one in the repo.
curl -X PUT --data-binary @.hoody/crontab \
  -H "Content-Type: text/plain" \
  https://$CTR-cron-1.hoody.com/users/root/crontab

# Done. The branch's schedule lives in its container.
echo "deployed $BRANCH → $CTR"
PUT /users/root/crontab
cron · server
# Hoody Cron raw-crontab endpoint — replaces the entire file atomically.
PUT /users/root/crontab HTTP/1.1
Host: ctr_4d72b9-cron-1.hoody.com
Content-Type: text/plain

0 2 * * * /srv/jobs/billing-rollup-v2.sh
*/15 * * * * /srv/jobs/sync-stripe.py
@hourly curl -fsS http://localhost/healthz
*/5 * * * * /srv/jobs/diff-v1-v2.sh

HTTP/1.1 200 OK
# 200 OK: cron daemon reloads, schedule active in under a second.

The crontab is data the branch ships, not state the cron server remembers. Delete the container, and there is no entry left to clean up — the file went with the disk.
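Because the schedule is plain text in the repo, CI can lint it before the deploy script PUTs it. A minimal sketch, where the `lint_crontab` helper and its regex are illustrative assumptions (it accepts only numeric fields plus the macros the spec lists), not part of the Cron API:

```shell
# lint_crontab FILE: succeed only if every active line starts with five
# cron fields or a supported @macro, followed by a command.
# Illustrative pre-deploy check; fields are numeric-only in this sketch.
lint_crontab() {
  pattern='^(([0-9*/,-]+[[:space:]]+){5}|@(hourly|daily|weekly|monthly|yearly)[[:space:]]+).+'
  ! grep -Ev '^[[:space:]]*(#|$)' "$1" | grep -Evq "$pattern"
}
```

Wired into deploy.sh as `lint_crontab .hoody/crontab || exit 1` before the curl PUT, a malformed line fails the deploy instead of silently shipping a schedule the daemon rejects.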


Three things you stop doing

Once the schedule is a file in the repo, three categories of work disappear.

VERSIONING

Schedules ride on the same diff as the code

When you change `billing-rollup.sh` to v2, the new schedule lands in the same pull request. The reviewer sees the cron line right next to the script. Revert one commit and the schedule reverts with it.
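Concretely, a hypothetical PR bumping the rollup job to v2 touches the script and the schedule in the same review (paths and times are illustrative):

```diff
--- a/.hoody/crontab
+++ b/.hoody/crontab
@@ -1,2 +1,2 @@
-0 2 * * * /srv/jobs/billing-rollup.sh
+0 2 * * * /srv/jobs/billing-rollup-v2.sh
 */15 * * * * /srv/jobs/sync-stripe.py
```

That hunk sits in the diff alongside the new `/srv/jobs/billing-rollup-v2.sh` itself, so approving one approves both.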

TEARDOWN

Delete the branch, the cron goes too

Branch containers are ephemeral. When you merge or close the branch, you tear down the container. The crontab lived inside it, so the schedule disappears without a janitor — there's no shared cron server holding stale entries.

ISOLATION

The experiment can't fire in staging

An hourly experimental job on `experiment/llm-rollups` runs in its own container with its own filesystem. Staging's cron daemon never sees it; production's cron daemon never sees it. There are no `if BRANCH_ENV` guards inside the jobs themselves.
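Concretely, the experiment branch just carries a different `.hoody/crontab` than `main` does (job paths here are illustrative):

```
# .hoody/crontab on experiment/llm-rollups: this file exists only on
# this branch, so only this branch's container ever runs these jobs.
@hourly /srv/jobs/llm-rollup-experiment.sh
*/5 * * * * /srv/jobs/diff-v1-v2.sh
```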


Shared crontab vs branch crontab

The standard "one ops-managed crontab" model and the branch-bound model fail in opposite directions. Same job, very different blast radius.

| Axis | Shared crontab | Branch crontab |
| --- | --- | --- |
| Source of truth | ops wiki + ansible role · schedule lives in a different repo than the script | `.hoody/crontab` in the repo · next to the script the cron line invokes |
| Adding a job | merge code → ping ops → SSH cron host · two systems must agree, manually | edit the file, push, deploy · one diff, one merge, one deploy |
| Branch isolation | `if [ "$ENV" = staging ]; then …` · every job knows about every environment | container per branch · no env flag inside the script |
| Cleanup | remember to remove the line · stale entries pile up for years | branch deleted = cron deleted · filesystem is gone, schedule is gone |
| Experiments | production crontab is the only one · any test risks firing in prod | spike branch = spike crontab · fires only in its container |

The difference isn't a feature — it's where the schedule lives. A file the branch carries, vs a row in a shared table the branch borrows from.


What Hoody Cron actually gives you

Per-container Cron is a real REST surface — three endpoint families, standard cron syntax, full per-user isolation. Numbers from the Cron API spec, not invented benchmarks.

  1. 1 CRONTAB PER CONTAINER

    Each branch container has its own per-user crontab. PUT the whole file, GET it back, replace it atomically. No shared schedule table behind the scenes.

  2. 3 ENDPOINT FAMILIES

    Raw crontab (GET/PUT), managed entries (POST/PATCH/DELETE with UUIDs and `expires_at`), and per-user listing. Pick whichever your deploy script needs.

  3. 5-FIELD CRON EXPRESSIONS

    Standard `min hour day month dow` plus macros: `@hourly`, `@daily`, `@weekly`, `@monthly`, `@yearly`. Same syntax your `.hoody/crontab` already uses.

Per the Hoody Cron API: GET/PUT /users/[user]/crontab and POST/PATCH/DELETE /users/[user]/entries on each container's cron service URL.
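The managed-entries family fits jobs that should outlive a single PUT cycle but not the sprint. A hedged sketch of creating a temporary entry; the JSON field names (`schedule`, `command`, `expires_at`) are assumptions based on the spec's mention of UUIDs and `expires_at`, not a confirmed schema:

```shell
# Create a managed entry the cron service expires on its own.
# Field names below are assumed; check the Cron API reference
# for the exact request schema before relying on them.
payload='{
  "schedule": "*/5 * * * *",
  "command": "/srv/jobs/diff-v1-v2.sh",
  "expires_at": "2025-07-01T00:00:00Z"
}'
# curl -X POST -H "Content-Type: application/json" \
#   -d "$payload" \
#   "https://$CTR-cron-1.hoody.com/users/root/entries"
```

Unlike the raw-crontab PUT, this path returns a UUID per entry, so a later PATCH or DELETE can target one job without replacing the whole file.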


The schedule lives next to the code that runs on it, in the same container, on the same branch.

before · two systems to remember
WHAT YOU USED TO HAVE: schedule in ops/ansible · code in app/ · they never agreed. Merge the PR, then file a ticket to update the cron host.

after · one file in the repo
WHAT YOU HAVE NOW: `PUT @.hoody/crontab` → `cron-1.[branch].hoody.com`. One PUT, one container, one schedule, one branch.
Read the Cron API

What this replaces

Six places the cron schedule used to live, none of them next to the code. The branch-bound crontab makes them all redundant.

  • shared production crontab: One file on a cron host that every team had to coordinate around
  • manual cron config sync: Ansible role / Puppet manifest applied out-of-band from your code merges
  • GitHub Actions schedules pinned to main: Schedules tied to the default branch, invisible to feature work and previews
  • "remember to update cron when you merge": A human checklist item, the only thing standing between you and a stale entry
  • separate cron config repo: A second repo whose only job was to lag behind the one with the actual code
  • Atlas/Liquibase scheduled migrations: Migration tools doing schedule duty because there was nowhere better for it to live

Stop synchronizing schedules across systems. Check the crontab in. Let the branch carry it.

Read the Cron docs