
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM (kernel samepage merging) and Btrfs deduplication keep the marginal cost of each additional container near zero.
Every night at 1am, a cron entry curls an exec URL. The script behind that URL runs your rollup SQL against the sqlite URL and writes the daily table back. No Airflow, no Postgres metadata DB, no DAG file, no scheduler dashboard with 14 widgets, no on-call rotation for the orchestrator itself.
SELECT date(created_at) AS d,
       count(*) AS n
FROM events
WHERE created_at >= date('now')
GROUP BY 1 -- the whole pipeline
The dashboard is the orchestrator. One URL. One schedule.

The whole pipeline is one cron entry pointing at one exec URL. The cron entry is a POST to /users/root/entries. The exec URL is a small script that opens the sqlite URL, runs the rollup SQL, and returns the new rows. That is the entire DAG.
// every night at 01:00 UTC
POST /users/root/entries
{
  "schedule": "0 1 * * *",
  "command": "curl -fsS https://exec.containers.hoody.com/scripts/rollup/run"
}

// the entire pipeline body
import { Database } from "bun:sqlite";
const db = new Database("events.db");
db.run(`INSERT OR REPLACE INTO rollup_daily
        SELECT date(created_at), count(*)
        FROM events GROUP BY 1;`);
return { ok: true, rows: db.query("SELECT * FROM rollup_daily").all() };

If the rollup fails, the cron logs say so. If you need to backfill yesterday, you curl the exec URL by hand with a date parameter. There is no second system to learn, no scheduler database to keep alive, no DAG file to commit. The orchestrator is a cron entry pointing at a URL.
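A backfill is the same request with a date attached; there is nothing new to learn. A sketch of the by-hand call, assuming the script reads the ?date parameter described below:

curl -fsS "https://exec.containers.hoody.com/scripts/rollup/run?date=2026-04-30"

The response is the recomputed rows, the same JSON the nightly run logs.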
The Hoody Kit ships the scheduler, the runtime, and the storage as plain HTTP services. The pipeline is the curl call between them — nothing else.
hoody-cron stores schedules as resources at /users/root/entries. No Postgres metadata DB to back up, no scheduler container to keep healthy, no DAG repository to deploy. POST a row, and the run fires.
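Registering the schedule is one request against that resource. A sketch, assuming hoody-cron is reachable at a host like cron.containers.hoody.com (the hostname is illustrative; the path and body are the ones shown above):

curl -fsS -X POST https://cron.containers.hoody.com/users/root/entries \
  -H 'content-type: application/json' \
  -d '{"schedule": "0 1 * * *", "command": "curl -fsS https://exec.containers.hoody.com/scripts/rollup/run"}'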
hoody-exec runs the rollup script on demand at exec.containers.hoody.com/scripts/rollup/run. cron curls it, gets a 200, logs the response. No worker queue, no broker, no pickled task graph.
Every exec call returns the new rows as JSON and is logged by cron with status, timestamp, and stdout. Backfills, failures, and reruns all live in the same two URLs — nothing extra to ship to a log aggregator.
The pipeline is two URLs and a date param. Rerunning yesterday is the same shape as the nightly run, just with ?date=2026-04-30 on the exec URL. No replay UI, no scheduler quirks.
If the 1am run returned a non-2xx, the entry's last-run record on hoody-cron shows the exit code and the captured response body. No separate alerting service to wire up — GET the entry and read it.
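In practice that check is one GET. A sketch with an illustrative entry id and an assumed response shape (Hoody's actual field names may differ):

curl -fsS https://cron.containers.hoody.com/users/root/entries/rollup
# assumed shape: {"schedule": "0 1 * * *", "last_run": {"status": 502, "at": "...", "body": "..."}}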
The script accepts a date parameter. Pass yesterday's date and it recomputes that day's rollup row, replacing the broken one with an INSERT OR REPLACE. One command, no DAG re-trigger UI.
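A sketch of what that date handling can look like inside the script body, assuming the exec runtime hands the body the incoming Request as request and that rollup_daily keys on its day column (both are assumptions, not documented Hoody behavior):

// same body as above, plus the ?date backfill path
import { Database } from "bun:sqlite";

const db = new Database("events.db");
db.run(`CREATE TABLE IF NOT EXISTS rollup_daily (
  d TEXT PRIMARY KEY, -- the PRIMARY KEY is what lets OR REPLACE overwrite a day
  n INTEGER NOT NULL
);`);

// ?date=YYYY-MM-DD recomputes that one day; default is today
const day = new URL(request.url).searchParams.get("date")
  ?? new Date().toISOString().slice(0, 10);

db.query(`INSERT OR REPLACE INTO rollup_daily
  SELECT date(created_at), count(*)
  FROM events
  WHERE date(created_at) = ?
  GROUP BY 1;`).run(day);

return { ok: true, rows: db.query("SELECT * FROM rollup_daily WHERE d = ?").all(day) };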
exec returns the freshly written rollup row as JSON. Diff it against what you expected, then move on. Nothing else to check — the dashboard URL serves the same table you just wrote.
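With the script above, the payload you diff is small. An illustrative response (values made up, shape taken from the return statement above):

{ "ok": true, "rows": [{ "d": "2026-04-30", "n": 1742 }] }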
Three numbers describe the entire system. Compare them with what an Airflow deployment looks like in your repo today.
5 fields: minute, hour, day-of-month, month, day-of-week. That is the full configuration surface for when a run fires.
2 requests: one POST to register the schedule, one GET that runs the script. That is the entire deployable pipeline.
0 processes: no scheduler process to keep alive, no metadata database, no worker pool. Hoody Kit holds the schedules and runs the script.
These numbers describe the cron + exec model on Hoody Kit. Your existing pipeline likely has more moving parts; that is the point of the comparison.
The orchestrator is a cron entry pointing at a URL.
The orchestration layer collapses into a one-line cron. The DAG lives in your script.
Stop running an orchestrator. Run a cron entry.