use-cases / three-cadences-one-container / hero
CRON · ONE CRONTAB · ONE CONTAINER

Three cadences, one container, on a flat-rate server

Hourly browser scrape, daily SQLite digest, weekly file archive. Three rhythms nest cleanly in one crontab — they're just three lines pointing at three scripts. No scheduler service, no job queue, no worker pool.

Read the cron docs
billing
this month
flat-rate server / month
  • runs inside one server
  • three cadences
  • zero scheduler services
[chart: container cpu, −24h → now · idle baseline ~4% · spikes only at the 3 trigger times, flat between them]
use-cases / three-cadences-one-container / mechanism

One PUT writes the whole schedule

The Hoody Cron service exposes the raw crontab as a REST resource. PUT the file once and the kernel runs it forever. Three lines, three scripts — each a one-liner that already speaks HTTP.

request · /users/root/crontab
PUT · raw crontab
# Replace the entire crontab in one call.
PUT /users/root/crontab
Content-Type: text/plain

@hourly  bash /scripts/scrape.sh
0 9 * * * bash /scripts/digest.sh
0 0 * * 0 bash /scripts/archive.sh

HTTP/1.1 204 No Content
the kernel does the rest
scripts · /scripts/*.sh
exec · three bodies
# scrape.sh — every hour, capture a screenshot and record it in sqlite
# (the sqlite3 CLI takes no ? placeholders, so save the file and insert
#  literal values; the column names here are illustrative)
shot="/data/shots/$(date +%s).png"
curl -sS https://browser.containers.hoody.com/screenshot \
  --data-urlencode "url=https://store.hoody.com/p/123" -o "$shot"
sqlite3 /data/prices.db \
  "INSERT INTO rows(url, path, ts) VALUES ('p/123', '$shot', datetime('now'));"

# digest.sh — at 9am, compute deltas and pipe the digest
sqlite3 /data/prices.db < /scripts/digest.sql \
  > /tmp/digest.txt && curl -T /tmp/digest.txt \
  https://pipe.hoody.com/api/v1/pipe/digest

# archive.sh — sunday at midnight, dump and store
sqlite3 /data/prices.db ".dump" | curl -T - \
  https://files.containers.hoody.com/archives/$(date +%Y-w%V).sql

Three scripts. Three URLs they already know how to call. One PUT request to install the schedule. There is no scheduler service in front of this — the kernel's crond reads the file you wrote and runs it.
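Installing the schedule is a single request against that file. A minimal sketch: build the three-line crontab locally, then PUT it in one call — the API hostname in the commented curl is an assumption, not from the docs.

```shell
# Build the three-line crontab locally.
cat > /tmp/crontab.txt <<'EOF'
@hourly  bash /scripts/scrape.sh
0 9 * * * bash /scripts/digest.sh
0 0 * * 0 bash /scripts/archive.sh
EOF

# Then PUT it in one call (hostname is an assumption):
# curl -X PUT "https://cron.hoody.com/users/root/crontab" \
#   -H "Content-Type: text/plain" --data-binary @/tmp/crontab.txt

grep -c '/scripts/' /tmp/crontab.txt   # three lines, three scripts
```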

use-cases / three-cadences-one-container / cadences

Three rhythms, three scripts

Each cadence has a single 5-field expression and a single shell line behind it. None of them needs to know about the other two — they just share a disk and a clock.
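A quick local sanity check of the three expressions — each is either a macro the API accepts or a standard 5-field expression; this is a shell-side sketch, not an API call.

```shell
# Classify each schedule: macro, or count the fields.
for expr in '@hourly' '0 9 * * *' '0 0 * * 0'; do
  case "$expr" in
    @hourly|@daily|@weekly) echo "macro: $expr" ;;
    *) [ "$(echo "$expr" | wc -w)" -eq 5 ] && echo "5-field: $expr" ;;
  esac
done
```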

HOURLY · SCRAPE

Pull competitor pages into SQLite

hoody-browser screenshots a list of product URLs. Each row goes straight into a SQLite table on the container's volume. No scrape worker pool — the cron line is the worker pool.

@hourly bash /scripts/scrape.sh
DAILY · DIGEST

Compute deltas, push a digest

At 9am the digest script reads the last 24 hours of rows, computes price deltas, and curls the digest to a pipe URL. Your inbox / dashboard reads from the same pipe.

0 9 * * * bash /scripts/digest.sh
WEEKLY · ARCHIVE

Dump the week to a files URL

Sunday at midnight the archive script `.dump`s SQLite, names the file by ISO week, and PUTs it to hoody-files. Old rows get pruned. The volume stays small forever.

0 0 * * 0 bash /scripts/archive.sh
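The archive key comes from `date +%Y-w%V` in archive.sh: the ISO week number gives Sunday's dump a stable, sortable name. A one-line sketch of what that expands to:

```shell
# ISO-week archive name, as used by archive.sh (e.g. archives/2024-w23.sql)
name="archives/$(date +%Y-w%V).sql"
echo "$name"
```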
use-cases / three-cadences-one-container / powers

What three lines unlock

Three cadences in one container is not a hack — it's the natural shape of cron. The platform already gave you a scheduler; you just stopped paying three times for it.

STORAGE

All three scripts share one disk

The hourly scrape writes the rows the daily digest reads. The daily digest writes the deltas the weekly archive dumps. There is no IPC between them — they're just three processes on the same volume.
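The same handoff, sketched with plain files instead of SQLite — the real scripts share one database on the volume; the /tmp paths and CSV format here are illustrative only.

```shell
# hourly: scrape writes a row to the shared disk
echo 'p/123,12.50' > /tmp/rows.csv
# daily: digest reads the same file, no IPC involved
cut -d, -f2 /tmp/rows.csv > /tmp/digest.txt
# weekly: archive rotates the data away, keeping the volume small
mv /tmp/rows.csv "/tmp/archive-$(date +%V).csv"
cat /tmp/digest.txt
```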

OPERATIONS

One container to restart, not three

When you redeploy, you redeploy one image. When you check logs, you tail one log file. When the disk fills, it fills once. The blast radius of any cadence is the same as any other.

ECONOMICS

One flat-rate server, no scheduler tier

Lambda and EventBridge bill per invocation; ECS Scheduled Tasks bill for the always-on cluster. On Hoody, this runs inside the flat-rate server you already pay for. Three cadences cost no more than one.

use-cases / three-cadences-one-container / operations

How you actually use it

The crontab is a file. The file has a URL. Anything you'd do to the file, you can do over HTTP.

  1. ADD A FOURTH

    POST /users/root/entries

    Create a managed entry with a UUID and an optional comment. The API injects the line into the crontab for you and gives you a handle to enable, disable, or delete it later.

  2. DISABLE WITHOUT DELETING

    PATCH enabled: false

    Pause a cadence during an incident without losing its definition. Flip it back on when the incident closes. The line stays in the file, commented as managed-disabled.

  3. READ THE FILE

    GET /users/root/crontab

    Get the raw crontab back at any time, including all managed entries. Diff it against your repo. Pipe it into version control. Cron is a file, and now the file is a URL.

Endpoints from the Hoody Cron API: managed-entry CRUD plus full raw-crontab read/write per user. Standard 5-field expressions and macros (@hourly, @daily, @weekly).
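A sketch of what a managed-entry call might look like. The field names beyond the documented expression/comment, the `report.sh` script, and the `$CRON_API` hostname are assumptions for illustration.

```shell
# Hypothetical payload for POST /users/root/entries (field names assumed).
cat > /tmp/entry.json <<'EOF'
{"expression": "0 12 * * *", "command": "bash /scripts/report.sh", "comment": "fourth cadence"}
EOF

# curl -X POST "$CRON_API/users/root/entries" \
#   -H 'Content-Type: application/json' --data @/tmp/entry.json
# curl -X PATCH "$CRON_API/users/root/entries/<uuid>" --data '{"enabled": false}'

grep -o '"expression"' /tmp/entry.json
```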

use-cases / three-cadences-one-container / economics

What you're not paying for

Three numbers from the actual mechanics. They come from the Hoody Cron API guarantees and the flat-rate server model, not invented benchmarks.

  1. SERVER

    All three cadences run inside the same flat-rate server. Entry server starts at $29/month; extra cron lines add no extra charge.

  2. CRON LINES

    One @hourly, one daily-at-9, one weekly-on-Sunday. Three lines in /users/root/crontab. The whole orchestrator fits in one PUT request.

  3. EXTRA SERVICES · 0

    No Lambda, no EventBridge, no Sidekiq, no Airflow scheduler, no ECS scheduled task definition. The HTTP API for cron IS the scheduler.

Per the Hoody Cron API: managed entries via JSON CRUD, raw-crontab read/write, auto-expiration via expires_at, and per-user crontab isolation. Macros @hourly / @daily / @weekly accepted alongside 5-field expressions.

use-cases / three-cadences-one-container / punchline

Three cadences, three cron lines, one container on a flat-rate server starting at $29/month.

HOURLY · @hourly bash /scripts/scrape.sh · competitor prices → sqlite
DAILY · 0 9 * * * bash /scripts/digest.sh · 9am — roll up deltas
WEEKLY · 0 0 * * 0 bash /scripts/archive.sh · sunday — sql dump to files
before · three lambdas, three bills
after · one flat-rate server, many cron lines
Read the cron docs
use-cases / three-cadences-one-container / replaces

What this replaces

Three Lambdas, three GitHub Actions, three ECS scheduled tasks — the standard reach-for-it stacks for three cadences. Each one charges you per cadence or invocation; Hoody charges for the server.

  • three AWS Lambda functions · Per-invocation billing for what is just three shell scripts on a disk
  • three GitHub Actions schedules · A whole CI runner spun up for a 5-second SQLite query
  • three Sidekiq workers · A Redis-backed worker pool for jobs that share no state but disk
  • three serverless functions / three bills · Three deploys, three logs, three pricing meters for the same logic
  • multi-service orchestration (Airflow, Step Functions) · A DAG engine for a graph that has zero edges between its three nodes
  • three ECS scheduled tasks · Three task definitions, three IAM roles, three CloudWatch rules
use-cases / three-cadences-one-container / cta

Stop renting a scheduler. Write the schedule into a file. The container already runs cron — three lines later, you've shipped the whole pipeline.

Read the cron docs
use-cases / three-cadences-one-container / related

Read the others