
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. Kernel samepage merging (KSM) and Btrfs deduplication make the marginal cost of each additional container near zero.
Every Monday at 9am, one cron entry wakes a single container. The script renders the digest once and writes it to a pipe URL with ?n=200. Two hundred curl loops — one per subscriber — pull the same bytes in parallel and hand them to SMTP. The fan-out lives in the substrate, not in your code.
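The whole run fits in a few lines of shell. This is a sketch, not confirmed Hoody API: the host, the pipe name digest-monday, and the render stub are illustrative placeholders; only the ?n=200 fan-out parameter and the render-once-then-upload shape come from the flow described here.

```shell
#!/bin/sh
# digest.sh -- sketch of the Monday run. Host and pipe name are
# illustrative placeholders; only the shape is the point.

# Build the fan-out pipe URL from base host, pipe name, receiver count.
pipe_url() {
  printf '%s/pipe/%s?n=%s' "$1" "$2" "$3"
}

# Placeholder render step: the real script pulls the week's data
# and writes the finished digest body to stdout, exactly once.
render_digest() {
  printf '<html>digest body</html>\n'
}

# Guarded so the file can be sourced without touching the network.
if [ "${1:-}" = "run" ]; then
  render_digest |
    curl -fsS -T - "$(pipe_url "${HOODY:-https://hoody.example}" digest-monday 200)"
fi
```

The script holds no subscriber list and no delivery state; everything past the upload belongs to the substrate.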
Your Monday digest — 4 things worth opening this week
Bonds rallied on softer payrolls; the curve un-inverted at the front end. We flag two names whose earnings the market is mispricing.
Two charts: weekly net-flows into AI infra ETFs, and a CPI breakdown that disagrees with the headline.
Reading list: a long piece on private credit, and a sharp note on why the rate cycle is shorter than 2008's.
→ Open the full digest
0 9 * * 1 bash /scripts/digest.sh
ONE WAKE · ONE RENDER · TWO HUNDRED PARALLEL PULLS
The Hoody Cron API drops a 5-field crontab line into a managed entry. The line runs an exec script that renders the digest once and pushes it onto a pipe path with n=200. Two hundred subscriber loops pull the same path in parallel — the server holds nothing, and a slow reader can't block the rest.
The cron didn't get more complex. The fan-out got moved into the substrate — the pipe holds nothing, the script renders once, and the loop is just SMTP at the edge. No queue, no retry table, no campaign-tool seat.
The naive design loops 200 SMTP sends in series, takes 11 minutes, and double-delivers when it crashes halfway. The pipe shape gets you parallelism, idempotency, and a smaller container — for free.
The digest is built exactly once. Two hundred curl loops pull the same bytes simultaneously. A 4-second run replaces an 11-minute serial loop — the pipe applies backpressure to slow readers without blocking the rest.
There is no campaign-state table to consult. If the run dies before all 200 connect, the pipe TTL evicts the unfinished half and the next cron tick re-renders. No double-delivery, no half-sent batch to reconcile.
The script wakes once a week, runs four seconds, and the container goes back to idle. You pay for the four seconds — not for an always-on campaign service, not for a per-recipient SES bill, not for a Mailchimp seat.
Same 200 recipients, same digest body. Only the shape of the run changes: minutes of serial SMTP become seconds of parallel HTTP.
Wall-clock time from cron tick to last delivery. The pipe streams to all 200 receivers in parallel; the bottleneck becomes the slowest subscriber's SMTP, not the loop.
The digest body is computed once. The pipe forwards the same bytes to every receiver — no template re-render per recipient, no per-recipient billing, no per-recipient cache.
The Hoody Pipe API caps n at 256. A weekly digest at 200 sits comfortably under the ceiling — and a slow reader applies backpressure but doesn't block the others.
Limits per the Hoody Pipe API: receiver count 1–256, 5-minute pipe TTL waiting for connections, 1000 active transfers server-wide. The cron entry itself is one row in /users/root/entries with schedule, command, and an optional expires_at.
Four moments. Each one is a single HTTP call you'd be making by hand. Cron is the alarm clock; exec is the renderer; pipe is the wire; the loop is the only thing the agent writes.
The managed entry on /users/root/entries fires. Schedule: 0 9 * * 1. Command: bash /scripts/digest.sh. The crontab itself is a single JSON record — not an Airflow DAG, not a workflow service.
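Creating that record is one HTTP call. Only the path /users/root/entries and the fields schedule, command, and expires_at appear in the text; the POST verb, the host, and the exact JSON shape are assumptions in this sketch.

```shell
#!/bin/sh
# Sketch: write the one-row crontab record. Host, verb, and JSON
# shape are assumptions; the path and field names come from above.
entry_json() {
  printf '{"schedule":"%s","command":"%s"}' "$1" "$2"
}

# Guarded so the file can be sourced without touching the network.
if [ "${1:-}" = "run" ]; then
  entry_json '0 9 * * 1' 'bash /scripts/digest.sh' |
    curl -fsS -X POST -H 'Content-Type: application/json' \
      -d @- "${HOODY:-https://hoody.example}/users/root/entries"
fi
```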
The exec script pulls the week's data, renders the markdown, converts to HTML, and writes the body to stdout. One render, one payload — no per-recipient mail-merge loop.
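A minimal sketch of that render step, with stand-in data and template functions; every name here is hypothetical, and the real script would also convert the markdown to HTML (e.g. via pandoc) before writing it out.

```shell
#!/bin/sh
# One render, one payload: everything funnels to a single stdout.
fetch_week_data() {
  printf 'payrolls: softer\n'      # stand-in for the week's data pull
}

build_markdown() {
  printf '# Your Monday digest\n\n'
  cat                              # fold the fetched data straight in
}

render_digest() {
  # No per-recipient loop here -- the body is built exactly once.
  fetch_week_data | build_markdown
}

render_digest
```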
The script pipes stdout into curl -T - against pipe/digest-monday?n=200. The pipe holds the upload until 200 receivers connect, then streams the body to all of them in parallel.
Two hundred loops curl the same path and hand the body to their subscriber's SMTP. The slow ones get backpressure. The fast ones finish in milliseconds. The whole run is over in seconds.
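One way to realize those edge loops, assuming a subscribers.txt list and a local sendmail handoff (both hypothetical): each delivery pulls the same pipe path, and the shell runs them in parallel.

```shell
#!/bin/sh
# Edge fan-out sketch. The pipe URL and the sendmail handoff are
# illustrative; the shape -- N parallel pulls of one path -- is real.

# fanout FILE CMD... -- run CMD once per line of FILE, in parallel,
# passing the line as the last argument; wait for all to finish.
fanout() {
  file="$1"; shift
  while IFS= read -r line; do
    "$@" "$line" &
  done < "$file"
  wait
}

# Pull the shared bytes and hand them to one subscriber's SMTP.
deliver_one() {
  curl -fsS "${HOODY:-https://hoody.example}/pipe/digest-monday" |
    sendmail "$1"
}

# Usage (not run here): fanout subscribers.txt deliver_one
```

Slow subscribers stall only their own pull; the pipe's backpressure keeps the rest streaming.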
One cron entry, one container, two hundred recipients.
The standard tools you reach for when you want to send the same email to a list. Each one charges a service tier for what is, in the end, one render and one fan-out HTTP loop.
Monday at 9 used to mean a worker grinding through SMTP. Now it means one cron tick, one container, and a pipe that does the rest.