use-cases / daily-digest-fan-out / hero
CRON · EXEC · PIPE FAN-OUT

A scheduled digest that fans out to 200 inboxes

Every Monday at 9am, one cron entry wakes a single container. The script renders the digest once and writes it to a pipe URL with ?n=200. Two hundred curl loops — one per subscriber — pull the same bytes in parallel and hand them to SMTP. The fan-out lives in the substrate, not in your code.

Read the cron docs
use-cases / daily-digest-fan-out / mechanism

Cron, exec, pipe — three calls and you're done

The Hoody Cron API drops a 5-field crontab line into a managed entry. The line runs an exec script that renders the digest once and pushes it onto a pipe path with n=200. Two hundred subscriber loops pull the same path in parallel — the server holds nothing, and a slow reader can't block the rest.

cron · entries
POST · schedule
# Monday 09:00 — managed cron entry
POST /users/root/entries
# Body sent to /users/root/entries
{
  "schedule": "0 9 * * 1",
  "command": "bash /scripts/digest.sh"
}
exec · digest.sh
PUT · sender
# Render once — markdown → HTML
digest=$(render-digest.py)
# Push the bytes onto the pipe path
echo "$digest" | curl -T - "https://pipe.hoody.com/api/v1/pipe/digest-monday?n=200"
# Pipe blocks until 200 receivers connect, then streams
pipe · subscribers
GET · receivers
# 200 lightweight curl loops, one per subscriber
while read -r addr; do
  curl -s "https://pipe.hoody.com/api/v1/pipe/digest-monday?n=200" \
    | smtp-send "$addr" &
done < subscribers.txt
wait  # block until every subscriber loop finishes
# All 200 streamed in parallel — backpressure handles the slow ones
[INFO] Transfer complete.

The cron didn't get more complex. The fan-out got moved into the substrate — the pipe holds nothing, the script renders once, and the loop is just SMTP at the edge. No queue, no retry table, no campaign-tool seat.

use-cases / daily-digest-fan-out / powers

Why HTTP fan-out beats SMTP fan-out

The naive design loops 200 SMTP sends in series, takes 11 minutes, and double-delivers when it crashes halfway. The pipe shape gets you parallelism, idempotency, and a smaller container — for free.

PARALLELISM

Two hundred receivers, one render

The digest is built exactly once. Two hundred curl loops pull the same bytes simultaneously. A 4-second run replaces an 11-minute serial loop — the pipe applies backpressure to slow readers without blocking the rest.
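The 11-minute figure is back-of-envelope arithmetic, and it's worth seeing it on paper. A quick sketch — the ~3.3s per-send latency here is an assumed average, not a measured number:

```shell
# Back-of-envelope for the serial baseline: 200 sequential SMTP sends.
# per_send_secs is a hypothetical average handshake + delivery time.
recipients=200
per_send_secs=3.3
serial_secs=$(awk -v n="$recipients" -v t="$per_send_secs" 'BEGIN { printf "%d", n * t }')
echo "serial loop: ${serial_secs}s (~$((serial_secs / 60)) minutes)"
# → serial loop: 660s (~11 minutes)
```

The parallel run doesn't sum those latencies — it takes the maximum of them, which is why wall-clock collapses from minutes to the slowest single receiver.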

IDEMPOTENCY

No mid-flight crash to clean up

There is no campaign-state table to consult. If the run dies before all 200 connect, the pipe TTL evicts the unfinished half and the next cron tick re-renders. No double-delivery, no half-sent batch to reconcile.

ECONOMICS

One container, asleep all week

The script wakes once a week, runs four seconds, and the container goes back to idle. You pay for the four seconds — not for an always-on campaign service, not for a per-recipient SES bill, not for a Mailchimp seat.

use-cases / daily-digest-fan-out / timing

What changes when the wire does the fan-out

Same 200 recipients, same digest body. The shape of the run is what moves — from minutes-of-serial-SMTP to seconds-of-parallel-HTTP.

  1. RUN DURATION · 4.2s

    Wall-clock time from cron tick to last delivery. The pipe streams to all 200 receivers in parallel; the bottleneck becomes the slowest subscriber's SMTP, not the loop.

  2. RENDER COUNT · 1

    The digest body is computed once. The pipe forwards the same bytes to every receiver — no template re-render per recipient, no per-recipient billing, no per-recipient cache.

  3. RECEIVERS PER PATH · 200

    The Hoody Pipe API caps n at 256. A weekly digest at 200 sits comfortably under the ceiling — and a slow reader applies backpressure but doesn't block the others.

Limits per the Hoody Pipe API: receiver count 1–256, 5-minute pipe TTL waiting for connections, 1000 active transfers server-wide. The cron entry itself is one row in /users/root/entries with schedule, command, and an optional expires_at.
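Put together from the fields named here, the managed entry might look like the record below — a sketch only, with an illustrative expires_at value; the field names come from this page, not the full API reference:

```json
{
  "schedule": "0 9 * * 1",
  "command": "bash /scripts/digest.sh",
  "expires_at": "2026-01-05T09:00:00Z"
}
```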

use-cases / daily-digest-fan-out / steps

How the run unfolds, Monday at 9am

Four moments. Each one is a single HTTP call you'd be making by hand. Cron is the alarm clock; exec is the renderer; pipe is the wire; the loop is the only thing the agent writes.

    01
    09:00:00

    Cron tick

    The managed entry on /users/root/entries fires. Schedule: 0 9 * * 1. Command: bash /scripts/digest.sh. The crontab itself is a single JSON record — not an Airflow DAG, not a workflow service.

    02
    09:00:00

    Render once

    The exec script pulls the week's data, renders the markdown, converts to HTML, and writes the body to stdout. One render, one payload — no per-recipient mail-merge loop.

    03
    09:00:00

    PUT pipe ?n=200

    The script pipes stdout into curl -T - against pipe/digest-monday?n=200. The pipe holds the upload until 200 receivers connect, then streams the body to all of them in parallel.
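The blocking handoff can be felt locally with a plain POSIX FIFO — a one-receiver analogue of the pipe path. This is a sketch of the semantics only; the real pipe multiplexes the same bytes to n receivers over HTTP:

```shell
# FIFO analogue of PUT-then-stream: the writer blocks on open()
# until a reader attaches, then the bytes flow straight through.
tmp=$(mktemp -d)
mkfifo "$tmp/digest"
( echo "weekly digest body" > "$tmp/digest" ) &   # writer blocks here until a reader opens
cat "$tmp/digest"                                  # reader attaches; transfer completes
wait
rm -r "$tmp"
# → weekly digest body
```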

    04
    09:00:04

    200 SMTPs

    Two hundred loops curl the same path and hand the body to their subscriber's SMTP. The slow ones get backpressure. The fast ones finish in milliseconds. The whole run is over in seconds.

use-cases / daily-digest-fan-out / punchline

One cron entry, one container, two hundred recipients.

the way you used to do it → the way the substrate does it

BEFORE · SERIAL SMTP WORKER
for sub in 200: smtp.send(render(sub))
11 minutes · half-delivered on crash · per-recipient bill

AFTER · ONE PIPE PATH
render | curl -T - pipe/digest?n=200
4 seconds · idempotent · one wake-up bill
Read the pipe spec
use-cases / daily-digest-fan-out / replaces

What this replaces

The standard reach-for-it tools when you want to send the same email to a list. Each one charges you a service tier for what is, in the end, one render and a fan-out HTTP loop.

  • SendGrid scheduled campaigns: per-email pricing for a payload your script already produced
  • Mailchimp daily digests: a whole campaign UI and an audience seat for one weekly send
  • Custom mail-merge cron jobs: a serial loop, a retry table, and a half-sent-batch postmortem
  • AWS SES + Lambda scheduled batch: a queue, a worker, an IAM role, and a CloudWatch alarm to babysit
  • Resend with batched API calls: per-recipient API spend for a body that didn't change between sends
  • Customer.io drip campaigns: a segmentation engine for a list you already keep in a text file
use-cases / daily-digest-fan-out / cta

Monday at 9 used to mean a worker grinding through SMTP. Now it means one cron tick, one container, and a pipe that does the rest.

Read the cron guide
use-cases / daily-digest-fan-out / related

Read the others