
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM and BTRFS dedup make the marginal cost near zero.
Hourly browser scrape, daily SQLite digest, weekly file archive. Three rhythms nest cleanly in one crontab — they're just three five-field lines pointing at three scripts. No scheduler service, no job queue, no worker pool.
one crontab · three cadences · same container
The Hoody Cron service exposes the raw crontab as a REST resource. PUT the file once and crond runs it forever. Three lines, three scripts — each a one-liner that already speaks HTTP.
# Replace the entire crontab in one call.
PUT /users/root/crontab
Content-Type: text/plain
@hourly bash /scripts/scrape.sh
0 9 * * * bash /scripts/digest.sh
0 0 * * 0 bash /scripts/archive.sh
HTTP/1.1 204 No Content

# scrape.sh — every hour, save a screenshot and log a row
# (the sqlite3 CLI has no `?` binding, so values are inlined)
SHOT="/data/shots/$(date +%s).png"
curl -sS https://browser.containers.hoody.com/screenshot \
  --data-urlencode "url=https://store.hoody.com/p/123" -o "$SHOT"
sqlite3 /data/prices.db "INSERT INTO rows VALUES \
  (strftime('%s','now'), 'https://store.hoody.com/p/123', '$SHOT');"
# digest.sh — at 9am, compute deltas and pipe the digest
sqlite3 /data/prices.db < /scripts/digest.sql \
> /tmp/digest.txt && curl -T /tmp/digest.txt \
https://pipe.hoody.com/api/v1/pipe/digest
# archive.sh — sunday at midnight, dump and store
sqlite3 /data/prices.db ".dump" | curl -T - \
https://files.containers.hoody.com/archives/$(date +%Y-w%V).sql

Three scripts. Three URLs they already know how to call. One PUT request to install the schedule. There is no scheduler service in front of this — the container's crond reads the file you wrote and runs it.
Each cadence has a single 5-field expression and a single shell line behind it. None of them needs to know about the other two — they just share a disk and a clock.
hoody-browser screenshots a list of product URLs. Each row goes straight into a SQLite table on the container's volume. No scrape worker pool — the cron line is the worker pool.
At 9am the digest script reads the last 24 hours of rows, computes price deltas, and curls the digest to a pipe URL. Your inbox / dashboard reads from the same pipe.
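The digest.sql the article references isn't shown, so here is a self-contained sketch of the delta step. The schema (ts, url, price_cents) is an assumption; this version runs anywhere sqlite3 is on the PATH, and in the container the database would be /data/prices.db instead of a temp file:

```shell
# Sketch of the 9am delta query over an assumed (ts, url, price_cents)
# schema -- the article never shows the real one.
DB=$(mktemp)
sqlite3 "$DB" <<'SQL'
CREATE TABLE rows (ts INTEGER, url TEXT, price_cents INTEGER);
INSERT INTO rows VALUES (1000, 'p/123', 1999);
INSERT INTO rows VALUES (4600, 'p/123', 1799);
SQL
# LAG() pairs each price with the previous scrape of the same url;
# the newest row therefore carries the latest delta.
DELTA=$(sqlite3 "$DB" \
  "SELECT url, price_cents - LAG(price_cents)
     OVER (PARTITION BY url ORDER BY ts)
   FROM rows ORDER BY ts DESC LIMIT 1;")
echo "$DELTA"
rm -f "$DB"
```

Pipe that output into the `curl -T` call from digest.sh and the pipeline is unchanged.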
Sunday at midnight the archive script `.dump`s SQLite, names the file by ISO week, and PUTs it to hoody-files. Old rows get pruned. The volume stays small forever.
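The pruning step can live at the end of archive.sh, after the dump is safely stored. A self-contained sketch, with an assumed 30-day retention window and the same assumed rows table; the article specifies only that old rows get pruned:

```shell
# Prune sketch: drop rows older than 30 days once the weekly dump
# is uploaded. The retention window and schema are assumptions.
DB=$(mktemp)
sqlite3 "$DB" "CREATE TABLE rows (ts INTEGER, url TEXT);
  INSERT INTO rows VALUES (0, 'old');
  INSERT INTO rows VALUES (strftime('%s','now'), 'new');"
sqlite3 "$DB" "DELETE FROM rows WHERE ts < strftime('%s','now') - 30*86400;
  VACUUM;"   # VACUUM hands the freed pages back to the filesystem
KEPT=$(sqlite3 "$DB" "SELECT count(*) FROM rows;")
echo "$KEPT"   # 1 -- only the recent row survives
rm -f "$DB"
```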
Three cadences in one container is not a hack — it's the natural shape of cron. The platform already gave you a scheduler; you just stopped paying three times for it.
The hourly scrape writes the rows the daily digest reads. The daily digest writes the deltas the weekly archive dumps. There is no IPC between them — they're just three processes on the same volume.
When you redeploy, you redeploy one image. When you check logs, you tail one log file. When the disk fills, it fills once. The blast radius of any cadence is the same as any other.
Lambda and EventBridge bill per invocation; ECS Scheduled Tasks bill for the always-on cluster. On Hoody, this runs inside the flat-rate server you already pay for. Three cadences cost no more than one.
The crontab is a file. The file has a URL. Anything you'd do to the file, you can do over HTTP.
Create a managed entry with a UUID and an optional comment. The API injects the line into the crontab for you and gives you a handle to enable, disable, or delete it later.
Pause a cadence during an incident without losing its definition. Flip it back on when the incident closes. The line stays in the file, commented as managed-disabled.
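Sketched as HTTP, that lifecycle looks something like this. The article only shows the raw-crontab route, so the /entries path and the JSON field names here are illustrative assumptions, not the documented API:

```
# Create a managed entry (illustrative path and fields).
POST /users/root/cron/entries
Content-Type: application/json

{"expression": "0 9 * * *",
 "command": "bash /scripts/digest.sh",
 "comment": "daily price digest"}

HTTP/1.1 201 Created
{"id": "<uuid>", "enabled": true}

# Pause it during an incident; the line stays in the crontab,
# commented as managed-disabled.
PATCH /users/root/cron/entries/<uuid>
{"enabled": false}
```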
Get the raw crontab back at any time, including all managed entries. Diff it against your repo. Pipe it into version control. Cron is a file, and now the file is a URL.
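Reading it back is the same resource in the other direction; the path is the one from the PUT example above, host omitted:

```
GET /users/root/crontab

HTTP/1.1 200 OK
Content-Type: text/plain

@hourly bash /scripts/scrape.sh
0 9 * * * bash /scripts/digest.sh
0 0 * * 0 bash /scripts/archive.sh
```

Pipe that body into `diff` against the copy in your repo and the drift check is one line of shell.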
Endpoints from the Hoody Cron API: managed-entry CRUD plus full raw-crontab read/write per user. Standard 5-field expressions and macros (@hourly, @daily, @weekly).
Three numbers from the actual mechanics. They come from the Hoody Cron API guarantees and the flat-rate server model, not from invented benchmarks.
All three cadences run inside the same flat-rate server. Entry server starts at $29/month; extra cron lines add no extra charge.
One @hourly, one daily-at-9, one weekly-on-Sunday. Three lines in /users/root/crontab. The whole orchestrator fits in one PUT request.
No Lambda, no EventBridge, no Sidekiq, no Airflow scheduler, no ECS scheduled task definition. The HTTP API for cron IS the scheduler.
Per the Hoody Cron API: managed entries via JSON CRUD, raw-crontab read/write, auto-expiration via expires_at, and per-user crontab isolation. Macros @hourly / @daily / @weekly accepted alongside 5-field expressions.
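A hedged sketch of what an auto-expiring entry could look like. Only the expires_at behavior comes from the API description; the route, field names, timestamp, and the watch.sh script are all illustrative:

```
# Illustrative: a temporary watcher that the API deletes for you
# once expires_at passes -- no cleanup cron for your cron.
POST /users/root/cron/entries
Content-Type: application/json

{"expression": "*/5 * * * *",
 "command": "bash /scripts/watch.sh",
 "expires_at": "2025-01-31T00:00:00Z"}
```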
Three cadences, three cron lines, one container on a flat-rate server starting at $29/month.
Three Lambdas, three GitHub Actions, three ECS scheduled tasks — the stacks teams usually reach for when they have three cadences. Each charges per cadence or per invocation; Hoody charges for the server.
Stop renting a scheduler. Write the schedule into a file. The container already runs cron — three lines later, you've shipped the whole pipeline.