use-cases / recurring-webhook-replay / hero
CRON · FILES · EXEC

Replay this morning's webhooks at the same time tomorrow

Point your real Stripe webhook at a hoody-files URL for thirty minutes. The directory now holds fourteen JSON files — every payload that hit production, byte for byte. One cron entry runs an exec script that POSTs them back at staging at 9am, Monday to Friday. The schedule expires next Saturday and deletes itself.

Read the Cron API
use-cases / recurring-webhook-replay / pipeline

Capture once, persist as files, replay on a schedule

The whole flow is three URLs. Production traffic arrives on the capture URL. Files land in hoody-files. Cron walks the folder and POSTs the bodies back at staging. There is no broker, no queue, no replay service — only a directory and a schedule.

PIPELINE · no broker · no queue
01 · CAPTURE

Production webhook becomes a PUT

Set your Stripe / Intercom / GitHub webhook URL to a hoody-files path. Each event arrives as a PUT and lands as a JSON file named with its timestamp. The directory is the recording.

02 · PERSIST

A folder per day, addressable as URL

Files persist on disk; every file has its own URL. Browse the directory in a browser, list it via the API, or shell into it with curl. The recording is a fact you can cat, scp, or version.
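Because each filename embeds its timestamp, a plain lexical sort of the folder is already chronological order. A quick local sketch, where the /tmp paths stand in for the hoody-files directory:

```shell
# Stand-in for a day's capture folder; filenames embed HH-MM-SS timestamps.
mkdir -p /tmp/webhooks/2026-05-03
printf '{"type":"invoice.paid"}'     > /tmp/webhooks/2026-05-03/stripe-08-15-22.json
printf '{"type":"charge.succeeded"}' > /tmp/webhooks/2026-05-03/stripe-08-02-10.json

# Lexical order is time order, so a plain glob or ls walks the morning in sequence.
ls /tmp/webhooks/2026-05-03/
```

No index, no manifest: the shell's own sort order is the replay order.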

03 · REPLAY

One cron entry walks the folder

POST a managed cron entry with schedule 0 9 * * 1-5 and command bash /scripts/replay.sh /webhooks/2026-05-03. The script lists the directory and POSTs each file back to staging in timestamp order.

Capture and replay are the same protocol on different days. The thing that recorded the bytes is the thing that plays them back. There is no JSONL parser, no sidecar, no recording format you have to learn — files in a folder, in time order.
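The loop inside the replay script can be sketched in a few lines. This is a minimal sketch, not the documented script: the staging target, the STAGING_URL variable, and the replay_dir function name are all assumptions.

```shell
# Sketch of the loop in /scripts/replay.sh.
# STAGING_URL and its default are hypothetical; point it at your staging handler.
replay_dir() {
  local dir="$1"
  local staging="${STAGING_URL:-https://staging.example.com/webhooks}"  # assumed target
  local f
  # Timestamp-named files (stripe-HH-MM-SS.json) glob in time order.
  for f in "$dir"/*.json; do
    # POST each captured body back, byte for byte.
    curl -sS -X POST "$staging" \
      -H 'Content-Type: application/json' \
      --data-binary @"$f"
  done
}
```

The script never parses the payloads; it only walks the directory and forwards bytes, which is what keeps the replay faithful.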

use-cases / recurring-webhook-replay / mechanism

Two POSTs and a folder

Capture is one PUT per webhook event. Replay is one POST to the cron API. Hoody Files holds the recording; Hoody Cron walks it on a schedule; hoody-exec runs the bash script that does the POSTing. Three services, no glue between them.

stripe → hoody-files
PUT · capture
# point your real webhook at hoody-files
curl -X PUT \
  https://files.containers.hoody.com/webhooks/2026-05-03/stripe-08-15-22.json \
  --data-binary @-

# 30 minutes later, the directory holds 14 files
HTTP/1.1 201 Created
webhooks/2026-05-03/stripe-08-15-22.json
schedule the replay
POST · cron entry
# one cron entry replays the morning at 9am, Mon-Fri
curl -X POST \
  https://cron.containers.hoody.com/api/v1/cron/users/me/entries \
  -H 'Content-Type: application/json' \
  -d '{"schedule":"0 9 * * 1-5","command":"bash /scripts/replay.sh /webhooks/2026-05-03","expires_at":"2026-05-10T09:00:00Z"}'

HTTP/1.1 201 Created
{ "id":"f0a8", "schedule":"0 9 * * 1-5", "expires_at":"2026-05-10T09:00:00Z" }

The capture side runs once on a Friday morning. The replay side runs every weekday until next Saturday, when the cron entry's expires_at field deletes the schedule. You wrote one PUT URL into your webhook config, and one POST into the cron API — that's the whole load test.

use-cases / recurring-webhook-replay / powers

What this gives you that a load-test fixture can't

Synthetic traffic is whatever you imagined the request looked like. Captured traffic is what actually arrived. Same field names, same edge cases, same surprises.

FIDELITY

Real payload shapes, not your guess

The recording captures the exact JSON Stripe sent — including every nullable field, every unexpected event type, every customer_id format you forgot. Your handler meets the same payloads it failed on yesterday.

TIMING

Same time-of-day pressure as production

Cron expression 0 9 * * 1-5 lands the replay at the hour your real users actually use the system. The handler under test sees morning rush against the same caches, the same cron neighbors, the same noisy DB.

REPEATABILITY

Replay until the bug is gone

The folder is immutable; the cron runs every weekday until expires_at. If the handler still breaks on Tuesday's run, you fix it and let Wednesday's run prove it. Same input every time — the handler is the only thing that changes.

use-cases / recurring-webhook-replay / capacity

What the schedule promises

Numbers come from the Hoody Cron managed entries API and the standard cron expression spec — not from invented benchmarks.

  1. FIELDS PER ENTRY · 5

    Standard 5-field cron expression — minute, hour, day-of-month, month, day-of-week. The same syntax you used in 1985 still schedules the replay in 2026.

  2. ENTRIES PER PAGE · 200

    GET /users/[user]/entries pages up to 200 managed entries at a time. Sixty-three replay schedules per environment is well within budget.

  3. POST TO CREATE · 1

    Create the recurring replay with one POST /users/me/entries — schedule, command, expires_at. PATCH later to mute it; DELETE to retire it; expires_at retires it for you.

Limits per the Hoody Cron Managed Entries API: standard 5-field cron expressions plus @daily / @hourly macros, pagination up to 200 entries per page, expires_at is optional and auto-disables the entry past the deadline.
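The create / mute / retire lifecycle can be wrapped in two tiny helpers. A hedged sketch: the per-entry path ("$CRON_API/<id>") and the {"enabled":false} mute body are assumptions about the managed-entries API, inferred from the PATCH/DELETE verbs named above, not documented fields.

```shell
# Hypothetical lifecycle helpers for a managed cron entry.
# Entry-path shape and the "enabled" field are assumptions, not documented API.
CRON_API="https://cron.containers.hoody.com/api/v1/cron/users/me/entries"

mute_entry() {
  # PATCH to mute a schedule without deleting it (field name assumed).
  curl -X PATCH "$CRON_API/$1" \
    -H 'Content-Type: application/json' \
    -d '{"enabled":false}'
}

retire_entry() {
  # DELETE to retire the schedule early, before expires_at does it for you.
  curl -X DELETE "$CRON_API/$1"
}
```

In practice expires_at usually makes retire_entry unnecessary: the entry from the example above deletes itself on 2026-05-10.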

use-cases / recurring-webhook-replay / punchline

Production traffic, recorded once, replayed on a schedule.

captured · Friday 08:00
replayed · Monday through Friday 09:00
WHAT THE OLD LOAD TEST LOOKED LIKE
k6 script · faker · imagined payload shapes
your guess at what Stripe sends · re-guessed every quarter
WHAT IT LOOKS LIKE NOW
PUT files/webhooks/[day] · POST cron/entries 0 9 * * 1-5
real bytes that hit production · replayed by one schedule
Read the Cron API
use-cases / recurring-webhook-replay / replaces

What this replaces

The standard tools for replaying webhook traffic — recorders, replay services, scheduled mocks. Each one is a SaaS, a sidecar, or a script you babysit. The hoody-files + hoody-cron pair is none of those.

  • ngrok webhook recording · A SaaS plan to record requests you could write to disk
  • Hookdeck event replays · An event-routing service for what is just files in a folder
  • custom replay scripts · Glue you write once and forget how to schedule
  • Postman scheduled mocks · A monitor seat to fire HTTP at your own staging
  • mocking servers in test envs · A second backend to maintain alongside the real one
  • manual webhook fuzzing · You sitting in a terminal at 9am, pasting payloads
use-cases / recurring-webhook-replay / cta

Capture friday's traffic. Schedule next week's replay. Let the cron entry expire itself when the experiment is done.

Read the Cron API
use-cases / recurring-webhook-replay / related

Read the others