
It's 2pm. The 7am incident is being post-mortemed. Six engineers want to walk through the exact log sequence the on-call SRE saw at the time. You stream the snapshot through one Hoody Pipe URL with ?n=8. Everyone watches the cascade fire on their own terminal at the same moment — no screenshots, no scrolling out of sync, no Zoom recording.
Take the morning's incident-time log file from your hoody-files snapshot. Stream it through a Hoody Pipe path with ?n=8. Eight readers curl the same path. The pipe waits until everyone is connected, then the bytes move through once at the rate you set — every reader sees the same line at the same moment.
# The 7am incident is captured in incident-2026-05-04.log
# (snapshotted from /var/log/app at 07:25 by the on-call SRE).
# Replay it through a pipe path with ?n=8 — the server waits
# until eight readers connect, then the bytes move through once.
# pv -L 50k rate-limits the replay to a readable 50KB/s.
cat incident-2026-05-04.log \
| pv -L 50k \
| curl -T - \
"https://prod-pipe.containers.hoody.com/api/v1/pipe/replay?n=8"
# [INFO] Waiting for 8 receiver(s) to connect...
# [INFO] Streaming to 8 receiver(s) at 50.0 KB/s

# Each engineer in the post-mortem call runs the same line.
# They block until everyone has joined, then the cascade scrolls
# past their terminal at the exact rate the SRE saw at 07:23.
curl "https://prod-pipe.containers.hoody.com/api/v1/pipe/replay?n=8"
# 07:23:14 INFO POST /v1/checkout u_28f
# 07:23:15 WARN stripe latency 2.4s
# 07:23:16 ERR 500 stripe timeout
# 07:23:17 ··· auto-rollback armed
# ...the whole cascade, in order, on every terminal at once.

Two pieces of the documented Pipe API: PUT /api/v1/pipe/[path] on the sender, GET /api/v1/pipe/[path] on every reader, both keyed by the same n. The server forwards the sender's Content-Type, holds the connection up to a 5-minute TTL while it waits for readers, and applies backpressure if any single reader is slow. The replay rate is set entirely by the sender — pv, dd, or any rate-limiter you trust.
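If the readers' tooling cares about the media type, set it explicitly on the sender. A minimal sketch of the same upload with the header spelled out — curl -T issues the PUT, and text/plain is simply our choice for a log file, not anything the API requires:

# Same sender pipeline as above, with an explicit Content-Type;
# the pipe forwards whatever the sender sets to every reader.
pv -L 50k incident-2026-05-04.log \
| curl -T - -H "Content-Type: text/plain" \
  "https://prod-pipe.containers.hoody.com/api/v1/pipe/replay?n=8"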
A scrolling stream changes the conversation. People stop arguing about what happened and start watching what happened. Three properties of the pipe make this work.
n=N is documented in the Pipe API: every reader joining the same path with the same n receives an identical fan-out copy. Eight engineers all see the same line scroll past at the same instant — no one is ahead, no one is squinting at someone else's screenshare.
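Identical means byte-identical, and that is easy to check. A sketch with two capture-to-file readers taking two of the eight seats from one terminal:

# Two readers, same path, same n, capturing to files. When the replay
# ends, the captures diff clean because both got the same byte stream.
curl -s "https://prod-pipe.containers.hoody.com/api/v1/pipe/replay?n=8" -o reader-a.log &
curl -s "https://prod-pipe.containers.hoody.com/api/v1/pipe/replay?n=8" -o reader-b.log
wait
diff reader-a.log reader-b.log && echo "byte-identical fan-out copies"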
Real prod logs scroll faster than humans can absorb. pv -L 50k throttles the replay to a readable pace; the pipe carries whatever rate the sender chooses. You can pause the post-mortem by ctrl-Z'ing the sender and resume by fg — every reader's terminal pauses with you.
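Picking the number for pv -L is plain arithmetic: rate equals snapshot size divided by how long you want the replay to run. A sketch that targets a ten-minute walkthrough; the 600 seconds is our choice, not something the pipe enforces:

# Size the rate from the snapshot: bytes / seconds of replay time.
bytes=$(wc -c < incident-2026-05-04.log)
rate=$(( bytes / 600 ))               # ~10-minute replay
echo "replaying at ${rate} bytes/s"
pv -L "$rate" incident-2026-05-04.log \
| curl -T - "https://prod-pipe.containers.hoody.com/api/v1/pipe/replay?n=8"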
The pipe stores zero bytes. When the cat finishes or the SRE ctrl-C's the sender, the path closes — no leftover endpoint exposed to the internet, no transcript to retention-manage. Run it again from the snapshot for whoever joined the call late.
Four beats from incident-time log to shared post-mortem playback. Nothing here is custom infrastructure — the snapshot lives in hoody-files, the replay rides one Pipe URL.
On-call SRE copies /var/log/app at 07:25 into a hoody-files bucket. The file is the source of truth for everything that happened in the cascade window.
Lead writes a Hoody Pipe URL with ?n=8 (eight seats: the six engineers plus the lead and the SRE, each on their own terminal) and pastes it into the post-mortem channel. Receivers can connect first; the pipe holds the slot for up to 5 minutes.
The SRE pipes the snapshot through pv -L 50k into the URL. The server waits until eight curls are connected, then the bytes move through once in lockstep; the cascade fires on eight terminals at the same instant.
The director joins late. Re-run the same line. The pipe is a path, not a place — there's nothing to seek, nothing to rewind, nothing stored on the server. Just press play again.
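If the re-run only needs to reach the director, the same snapshot can go through a fresh one-seat pipe. A sketch, where replay-director is an arbitrary path name we made up and n=1 sits inside the documented 1–256 range:

# Sender: same snapshot, same rate, new path, one reader expected.
cat incident-2026-05-04.log | pv -L 50k | curl -T - \
  "https://prod-pipe.containers.hoody.com/api/v1/pipe/replay-director?n=1"
# Director: one curl, one seat, the whole cascade at 07:23 pace.
curl "https://prod-pipe.containers.hoody.com/api/v1/pipe/replay-director?n=1"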
From the documented Pipe API spec. Limits and behaviors that turn a single URL into a post-mortem theater.
The documented cap on n is 256 readers per pipe. Your post-mortem call will not run out of seats — the pipe scales to a whole org.
The pipe is direct-streamed end-to-end. The replay leaves no trace on the server when the sender disconnects.
Receivers can connect before the sender; the pipe holds the slot up to 5 minutes for late-joiners (sketched below).
Source: Hoody Pipe API — limits documented for /api/v1/pipe/[path], the n parameter (1–256), and the 5-minute unestablished-pipe TTL.
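The connect-in-any-order behavior is easy to see from a single terminal. A sketch with a one-seat pipe on an arbitrary path (order-demo is our name, not the API's), where the reader attaches a full minute before the sender shows up, well inside the 5-minute window:

# Reader first: the GET blocks against the not-yet-established pipe.
curl "https://prod-pipe.containers.hoody.com/api/v1/pipe/order-demo?n=1" &
sleep 60    # the pipe holds the seat for up to 5 minutes
# Sender arrives later; the waiting reader starts scrolling immediately.
cat incident-2026-05-04.log | pv -L 50k | curl -T - \
  "https://prod-pipe.containers.hoody.com/api/v1/pipe/order-demo?n=1"
wait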
The post-mortem isn't a doc. It's a stream everyone watches together.
Think of the tools and rituals you currently invoke to walk a team through an incident timeline: each one stores something, charges per seat, or loses the timing. The pipe is one URL with a shared playhead.
The cascade fires on eight terminals at once. The conversation changes. People stop arguing about what happened and start watching what happened.