
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM and BTRFS dedup make the marginal cost near zero.
Point your real Stripe webhook at a hoody-files URL for thirty minutes. The directory now holds fourteen JSON files — every payload that hit production, byte for byte. One cron entry runs an exec script that POSTs them back to staging at 9am, Monday to Friday. The schedule expires next Saturday and deletes itself.
{ "id": "evt_3OHk8ZJs2k9aXq1vQ", "type": "payment_intent.succeeded", "created": 1714723522, "data": { "object": { "id": "pi_3OHk8ZJs2k9aXq1v0K7rT4mB", "amount": 4900, "currency": "usd", "status": "succeeded" } } }
captured from your real webhook · replayed by one cron entry
The whole flow is three URLs. Production traffic arrives on the capture URL. Files land in hoody-files. Cron walks the folder and POSTs the bodies back to staging. There is no broker, no queue, no replay service — only a directory and a schedule.
Set your Stripe / Intercom / GitHub webhook URL to a hoody-files path. Each event arrives as a PUT and lands as a JSON file named with its timestamp. The directory is the recording.
Files persist on disk; every file has its own URL. Browse the directory in a browser, list it via the API, or fetch any file with curl. The recording is a fact you can cat, scp, or version.
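A minimal sketch of both access paths, assuming the directory URL itself returns a listing (the exact listing format isn't shown here, so treat the first call's response shape as illustrative):

# list the day's capture directory
curl https://files.containers.hoody.com/webhooks/2026-05-03/

# fetch one captured payload, byte for byte
curl https://files.containers.hoody.com/webhooks/2026-05-03/stripe-08-15-22.json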
POST a managed cron entry with schedule 0 9 * * 1-5 and command bash /scripts/replay.sh /webhooks/2026-05-03. The script lists the directory and POSTs each file back to staging in timestamp order.
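The script itself isn't shown above, so here is a plausible sketch. STAGING_URL and the Content-Type header are assumptions; the only contract the page states is "list the directory, POST each body, oldest first":

#!/usr/bin/env bash
# replay.sh -- walk a capture directory and POST each file back to staging.
# Usage: replay.sh /webhooks/2026-05-03
# STAGING_URL is a hypothetical variable; point it at your staging handler.
set -euo pipefail
shopt -s nullglob

DIR="${1:?usage: replay.sh <capture-dir>}"
: "${STAGING_URL:=https://staging.example.com/webhooks/stripe}"

# Filenames embed timestamps (stripe-08-15-22.json), so the shell's
# lexically sorted glob already walks the morning in arrival order.
for f in "$DIR"/*.json; do
  curl -sS -X POST "$STAGING_URL" \
       -H 'Content-Type: application/json' \
       --data-binary @"$f"
done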
Capture and replay are the same protocol on different days. The thing that recorded the bytes is the thing that plays them back. There is no JSONL parser, no sidecar, no recording format you have to learn — files in a folder, in time order.
Capture is one PUT per webhook event. Replay is one POST to the cron API. Hoody Files holds the recording; Hoody Cron walks it on a schedule; hoody-exec runs the bash script that does the POSTing. Three services, no glue between them.
# point your real webhook at hoody-files
curl -X PUT \
  https://files.containers.hoody.com/webhooks/2026-05-03/stripe-08-15-22.json \
  --data-binary @-

HTTP/1.1 201 Created
webhooks/2026-05-03/stripe-08-15-22.json

# 30 minutes later, the directory holds 14 files
# one cron entry replays the morning at 9am, Mon-Fri
curl -X POST \
  https://cron.containers.hoody.com/api/v1/cron/users/me/entries \
  -d '{"schedule":"0 9 * * 1-5","command":"bash /scripts/replay.sh /webhooks/2026-05-03","expires_at":"2026-05-10T09:00:00Z"}'

HTTP/1.1 201 Created
{ "id":"f0a8", "schedule":"0 9 * * 1-5", "expires_at":"2026-05-10T09:00:00Z" }
The capture side runs once, on a Friday morning. The replay side runs every weekday until next Saturday, when the cron entry's expires_at field deletes the schedule. You wrote one PUT URL into your webhook config, and one POST into the cron API — that's the whole load test.
Synthetic traffic is whatever you imagined the request looked like. Captured traffic is what actually arrived. Same field names, same edge cases, same surprises.
The recording captures the exact JSON Stripe sent — including every nullable field, every unexpected event type, every customer_id format you forgot. Your handler meets the same payloads it failed on yesterday.
Cron expression 0 9 * * 1-5 lands the replay at the hour your real users actually use the system. The handler under test sees morning rush against the same caches, the same cron neighbors, the same noisy DB.
The folder is immutable; the cron runs every weekday until expires_at. If the handler still breaks on Tuesday's run, you fix it and let Wednesday's run prove it. Same input every time — the handler is the only thing that changes.
Numbers come from the Hoody Cron managed entries API and the standard cron expression spec — not from invented benchmarks.
Standard 5-field cron expression — minute, hour, day-of-month, month, day-of-week. The same syntax you used in 1985 still schedules the replay in 2026.
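Read column by column, the replay expression from above is:

# 0 9 * * 1-5
# │ │ │ │ └── day-of-week: 1-5, Monday through Friday
# │ │ │ └──── month: every month
# │ │ └────── day-of-month: every day
# │ └──────── hour: 9
# └────────── minute: 0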
GET /users/[user]/entries pages up to 200 managed entries at a time. Sixty-three replay schedules per environment fit well within that budget.
Create the recurring replay with one POST /users/me/entries — schedule, command, expires_at. PATCH later to mute it, DELETE to retire it, or let expires_at retire it for you. A sketch of the lifecycle follows.
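Sketched against the same base URL as the create call earlier; the PATCH body's enabled field is an assumed name, not one confirmed by the API excerpt here:

# list your managed entries (up to 200 per page)
curl https://cron.containers.hoody.com/api/v1/cron/users/me/entries

# mute the replay without deleting it ("enabled" is a hypothetical field)
curl -X PATCH \
  https://cron.containers.hoody.com/api/v1/cron/users/me/entries/f0a8 \
  -d '{"enabled":false}'

# retire it outright
curl -X DELETE \
  https://cron.containers.hoody.com/api/v1/cron/users/me/entries/f0a8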
Limits per the Hoody Cron Managed Entries API: standard 5-field cron expressions plus @daily / @hourly macros; pagination up to 200 entries per page; expires_at is optional and auto-disables the entry past its deadline.
Production traffic, recorded once, replayed on a schedule.
The standard tools for replaying webhook traffic — recorders, replay services, scheduled mocks. Each one is a SaaS, a sidecar, or a script you babysit. The hoody-files + hoody-cron pair is none of those.
Capture Friday's traffic. Schedule next week's replay. Let the cron entry expire itself when the experiment is done.