use-cases / daily-rollups-no-orchestrator / hero
DAILY ROLLUPS · NO ORCHESTRATOR

Daily rollups without an orchestrator

Every night at 1am, a cron entry curls an exec URL. The script runs your rollup SQL against the sqlite URL and writes the daily table back. No Airflow, no Postgres metadata DB, no DAG file, no scheduler dashboard with 14 widgets, no on-call rotation for the orchestrator itself.

Read the docs
use-cases / daily-rollups-no-orchestrator / mechanism

Two URLs and a five-field schedule

The whole pipeline is one cron entry pointing at one exec URL. The cron entry is a POST to /users/root/entries. The exec URL is a small script that opens the sqlite URL, runs the rollup SQL, and returns the new rows. That is the entire DAG.

POST /users/root/entries
THE SCHEDULE
// every night at 01:00 UTC
POST /users/root/entries
{
  "schedule": "0 1 * * *",
  "command":  "curl -fsS https://exec.containers.hoody.com/scripts/rollup/run"
}
1AM · CURL
exec.containers.hoody.com/scripts/rollup/run
THE PIPELINE
// the entire pipeline body
import { Database } from "bun:sqlite";

const db = new Database("events.db");
db.run(`INSERT OR REPLACE INTO rollup_daily
         SELECT date(created_at), count(*)
         FROM   events GROUP BY 1;`);

return { ok: true, rows: db.query("SELECT * FROM rollup_daily").all() };

If the rollup fails, the cron logs say so. If you need to backfill yesterday, you curl the exec URL by hand with a date parameter. There is no second system to learn, no scheduler database to keep alive, no DAG file to commit. The orchestrator is a cron entry pointing at a URL.
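The `?date=` handling isn't shown in the pipeline body above; here is a minimal sketch of how the script might resolve it (the helper name `parseTargetDate` is ours, not part of hoody-exec):

```typescript
// Hypothetical sketch: resolve which day the rollup should cover.
// An explicit ?date=YYYY-MM-DD wins; otherwise default to yesterday
// in UTC, which is what the 01:00 nightly run wants.
function parseTargetDate(url: string): string {
  const param = new URL(url).searchParams.get("date");
  if (param && /^\d{4}-\d{2}-\d{2}$/.test(param)) return param;
  const yesterday = new Date(Date.now() - 24 * 60 * 60 * 1000);
  return yesterday.toISOString().slice(0, 10); // "YYYY-MM-DD"
}
```

The same function serves both the nightly run (no parameter) and a by-hand backfill, which is what keeps the two paths the same shape.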

use-cases / daily-rollups-no-orchestrator / powers

What you didn't have to build

The Hoody Kit ships the scheduler, the runtime, and the storage as plain HTTP services. The pipeline is the curl call between them — nothing else.

NO ORCHESTRATOR DAEMON

The scheduler is an HTTP entry

hoody-cron stores schedules as resources at /users/root/entries. No Postgres metadata DB to back up, no scheduler container to keep healthy, no DAG repository to deploy. POST a row, and the run fires.

NO RUNTIME WIRING

The script is a URL

hoody-exec runs the rollup script on demand at exec.containers.hoody.com/scripts/rollup/run. cron curls it, gets a 200, logs the response. No worker queue, no broker, no pickled task graph.

NO LOGS PIPELINE

Run history is the response body

Every exec call returns the new rows as JSON and is logged by cron with status, timestamp, and stdout. Backfills, failures, and reruns all live in the same two URLs — nothing extra to ship to a log aggregator.

use-cases / daily-rollups-no-orchestrator / backfill

If a run fails — or you need to backfill

The pipeline is two URLs and a date param. Rerunning yesterday is the same shape as the nightly run, just with ?date=2026-04-30 on the exec URL. No replay UI, no scheduler quirks.

  1. 01 · DETECT

    cron logs say it failed

    If the 1am run returned a non-2xx, the entry's last-run record on hoody-cron shows the exit code and the captured response body. No separate alerting service to wire up — GET the entry and read it.

  2. 02 · BACKFILL

    curl the exec URL with ?date=

    The script accepts a date parameter. Pass yesterday's date and it recomputes that day's rollup row, replacing the broken one with an INSERT OR REPLACE. One command, no DAG re-trigger UI.

  3. 03 · VERIFY

    the response is the new row

    exec returns the freshly written rollup row as JSON. Diff it against what you expected, then move on. Nothing else to check — the dashboard URL serves the same table you just wrote.

// rerun yesterday's rollup by hand
curl -fsS "https://exec.containers.hoody.com/scripts/rollup/run?date=2026-04-30"
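The reason a rerun is safe to repeat is the replace-by-day write. A hypothetical in-memory sketch of that semantics, with names of our own choosing, mirroring SQLite's INSERT OR REPLACE on a day-keyed table:

```typescript
// Hypothetical sketch of why reruns are safe: writing a day's rollup
// overwrites that day's row instead of appending a duplicate.
type Event = { created_at: string }; // ISO timestamp, e.g. "2026-04-30T13:07:00Z"
const rollupDaily = new Map<string, number>(); // day -> count, standing in for rollup_daily

function rollupDay(events: Event[], day: string): number {
  const count = events.filter((e) => e.created_at.startsWith(day)).length;
  rollupDaily.set(day, count); // replace semantics: a rerun updates, never duplicates
  return count;
}
```

Run it twice for the same day and the table still holds one row for that day, with the latest count.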
use-cases / daily-rollups-no-orchestrator / capacity

The shape of the pipeline

Three numbers describe the entire system. Compare them with what an Airflow deployment looks like in your repo today.

  1. FIELDS · CRON SCHEDULE · 5

    minute, hour, day-of-month, month, day-of-week. That is the full configuration surface for when a run fires.

  2. URLS · ENTRY + EXEC · 2

    one POST to register the schedule, one GET that runs the script. That is the entire deployable pipeline.

  3. DAEMONS · NO ORCHESTRATOR · 0

    no scheduler process to keep alive, no metadata database, no worker pool. Hoody Kit holds the schedules and runs the script.

These numbers describe the cron + exec model on Hoody Kit. Your existing pipeline likely has more moving parts; that is the point of the comparison.
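The five-field surface is small enough to name in a few lines. A sketch (illustrative only, not the hoody-cron parser):

```typescript
// Illustrative only: name the five fields of a cron schedule.
// This splits the expression; it does not validate ranges or steps.
function cronFields(expr: string): Record<string, string> {
  const [minute, hour, dayOfMonth, month, dayOfWeek, ...rest] = expr.trim().split(/\s+/);
  if (rest.length > 0 || dayOfWeek === undefined) {
    throw new Error("expected exactly five fields");
  }
  return { minute, hour, dayOfMonth, month, dayOfWeek };
}
```

For the nightly entry, `cronFields("0 1 * * *")` names minute 0 of hour 1, every day: the 01:00 UTC run from the schedule above.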

use-cases / daily-rollups-no-orchestrator / punchline

The orchestrator is a cron entry pointing at a URL.

BEFORE · A SCHEDULER STACK
airflow webserver + scheduler + worker + postgres + dags/rollup_daily.py
FOUR PROCESSES, A METADATA DB, AND A REPO OF DAG FILES

AFTER · ONE LINE OF CRON
0 1 * * * curl -fsS https://exec.containers.hoody.com/scripts/rollup/run
ONE LINE. ONE URL. ONE SCHEDULE.
Read the docs
use-cases / daily-rollups-no-orchestrator / replaces

What this replaces

The orchestration layer collapses into a one-line cron. The DAG lives in your script.

  • Apache Airflow · A Postgres + Redis + scheduler + worker for one query
  • Prefect · Cloud account, agent install, flow registration
  • Dagster · Pythonic, but still a service you run
  • Luigi · A graph engine to schedule your nightly SQL
  • GitHub Actions schedules · Pinned to main, no per-tenant context
  • dbt Cloud · A SaaS to wrap a CLI to wrap a SELECT
  • Custom Python schedulers · A while-loop and a try/except, called "robust"
use-cases / daily-rollups-no-orchestrator / cta

Stop running an orchestrator. Run a cron entry.

Read the docs
use-cases / daily-rollups-no-orchestrator / related

Read the others