use-cases / replay-this-mornings-incident / hero
PIPE · INCIDENT REPLAY

Replay this morning's incident to the whole team

It's 2pm. The 7am incident is being post-mortemed. Six engineers want to walk through the exact log sequence the on-call SRE saw at the time. You stream the snapshot through one Hoody Pipe URL with ?n=8. Everyone watches the cascade fire on their own terminal at the same moment — no screenshots, no scrolling out of sync, no Zoom recording.

Read the pipe API
use-cases / replay-this-mornings-incident / mechanism

One snapshot. One pipe. Six terminals in lockstep.

Take the morning's incident-time log file from your hoody-files snapshot. Stream it through a Hoody Pipe path with ?n=8. Eight readers curl the same path. The pipe waits until everyone is connected, then the bytes move through once at the rate you set — every reader sees the same line at the same moment.

post-mortem.host · sender
PUT · replay
# The 7am incident is captured in incident-2026-05-04.log
# (snapshotted from /var/log/app at 07:25 by the on-call SRE).
# Replay it through a pipe path with ?n=8 — the server waits
# until eight readers connect, then the bytes move through once.
# pv -L 50k rate-limits the replay to a readable 50KB/s.

cat incident-2026-05-04.log \
  | pv -L 50k \
  | curl -T - \
      "https://prod-pipe.containers.hoody.com/api/v1/pipe/replay?n=8"

# [INFO] Waiting for 8 receiver(s) to connect...
# [INFO] Streaming to 8 receiver(s) at 50.0 KB/s
all 8 readers connected · pipe holds nothing · bytes move through once
alex / ben / chen / dani / ev / fox / two more · readers
GET · lockstep
# Each engineer in the post-mortem call runs the same line.
# They block until everyone has joined, then the cascade scrolls
# past their terminal, in order, at the sender's 50 KB/s pace.

curl "https://prod-pipe.containers.hoody.com/api/v1/pipe/replay?n=8"

# 07:23:14 INFO POST /v1/checkout u_28f
# 07:23:15 WARN stripe latency 2.4s
# 07:23:16 ERR  500 stripe timeout
# 07:23:17 ···  auto-rollback armed
# ...the whole cascade, in order, on every terminal at once.

Two pieces of the documented Pipe API: PUT /api/v1/pipe/[path] on the sender, GET /api/v1/pipe/[path] on every reader, both keyed by the same n. The server forwards the sender's Content-Type, holds the connection up to a 5-minute TTL while it waits for readers, and applies backpressure if any single reader is slow. The replay rate is set entirely by the sender — pv, dd, or any rate-limiter you trust.
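
A minimal two-seat sketch of the same handshake, on a throwaway path (demo is arbitrary; any path segment works). The sender labels the stream text/plain; -D - on each reader prints the forwarded header before the bytes arrive:

# terminals 1 and 2 (readers): block until the pipe is established
curl -sD - "https://prod-pipe.containers.hoody.com/api/v1/pipe/demo?n=2"

# terminal 3 (sender): the Content-Type header travels with the stream
pv -L 50k incident-2026-05-04.log \
  | curl -T - -H "Content-Type: text/plain" \
      "https://prod-pipe.containers.hoody.com/api/v1/pipe/demo?n=2"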

use-cases / replay-this-mornings-incident / powers

What watching together does that a doc can't

A scrolling stream changes the conversation. People stop arguing about what happened and start watching what happened. Three properties of the pipe make this work.

MULTIPLAYER BY DEFAULT

Six terminals on the same playhead

n=N is documented in the Pipe API: every reader joining the same path with the same n receives an identical fan-out copy. All eight readers see the same line scroll past at the same instant — no one is ahead, no one is squinting at someone else's screenshare.
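
A quick smoke test of that fan-out, assuming the same URL as above: eight background curls stand in for eight engineers, and identical checksums confirm identical copies.

# eight readers in the background, one capture file each
for i in $(seq 1 8); do
  curl -s "https://prod-pipe.containers.hoody.com/api/v1/pipe/replay?n=8" \
    -o "reader-$i.log" &
done

# run the sender's PUT from the mechanism section, then:
wait
md5sum reader-*.log   # eight identical hashes, eight identical copies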

RATE-LIMITED BY THE SENDER

Replay slow enough to read

Real prod logs scroll faster than humans can absorb. pv -L 50k throttles the replay to a readable pace; the pipe carries whatever rate the sender chooses. You can pause the post-mortem by ctrl-Z'ing the sender and resume with fg — every reader's terminal pauses with you.
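
The same pause works non-interactively, assuming a single pv on the host: SIGSTOP and SIGCONT do what ctrl-Z and fg do at the keyboard. pv stops reading, backpressure stalls the pipe, and every reader's terminal freezes once in-flight bytes drain.

# pause the replay on all eight terminals at once
kill -STOP "$(pgrep -x pv)"

# resume exactly where it stopped
kill -CONT "$(pgrep -x pv)"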

EPHEMERAL BY DEFAULT

Replay ends, pipe vanishes

The pipe stores zero bytes. When the cat finishes or the SRE ctrl-C's the sender, the path closes — no leftover endpoint exposed to the internet, no transcript to retention-manage. Run it again from the snapshot for whoever joined the call late.
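
Because nothing persists, a latecomer replay is just a fresh run from the snapshot, with n sized to whoever is watching. A one-viewer rerun, assuming the same file and path:

pv -L 50k incident-2026-05-04.log \
  | curl -T - \
      "https://prod-pipe.containers.hoody.com/api/v1/pipe/replay?n=1"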

use-cases / replay-this-mornings-incident / session

How a replay session runs

Four beats from incident-time log to shared post-mortem playback, sketched as one script after the list. Nothing here is custom infrastructure — the snapshot lives in hoody-files, the replay rides one Pipe URL.

  1. 07:25

    Snapshot the logs

    On-call SRE copies /var/log/app at 07:25 into a hoody-files bucket. The file is the source of truth for everything that happened in the cascade window.

  2. 13:55

    Open the meeting room

    Lead writes a Hoody Pipe URL with ?n=8 (six engineers plus two spare seats) and pastes it into the post-mortem channel. Receivers can connect first — the pipe holds the slot for up to 5 minutes, and nothing streams until all eight are in.

  3. 14:00

    Press play

    The SRE pipes the snapshot through pv -L 50k into the URL. The server waits until eight curls are connected, then the bytes move through once in lockstep — the cascade fires on every terminal at the same instant.

  4. 14:18

    Replay for latecomers

    The director joins late. Re-run the same sender line, with n sized to whoever's watching. The pipe is a path, not a place — there's nothing to seek, nothing to rewind, nothing stored on the server. Just press play again.
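
The four beats as one sketch. The bucket mount point and file names are illustrative assumptions; only the pipe URL shape comes from the documented API.

# 07:25, beat 1: snapshot the cascade window
# (assumes the hoody-files bucket is mounted at /mnt/hoody-files)
cp /var/log/app/app.log /mnt/hoody-files/incident-2026-05-04.log

# 13:55, beat 2: one URL in the post-mortem channel; readers may join now
BASE="https://prod-pipe.containers.hoody.com/api/v1/pipe/replay"

# 14:00, beat 3: press play; blocks until all eight curls are connected
pv -L 50k /mnt/hoody-files/incident-2026-05-04.log | curl -T - "$BASE?n=8"

# 14:18, beat 4: latecomer replay, sized to a single viewer
pv -L 50k /mnt/hoody-files/incident-2026-05-04.log | curl -T - "$BASE?n=1"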

use-cases / replay-this-mornings-incident / capacity

How big does the room get?

From the documented Pipe API spec. Limits and behaviors that turn a single URL into a post-mortem theater.

  1. READERS PER PATH · 256

    Documented cap on n. Your post-mortem call will not run out of seats — the pipe scales to a whole org.

  2. BYTES STORED · 0

    The pipe is direct-streamed end-to-end. The replay leaves no trace on the server when the sender disconnects.

  3. JOIN WINDOW · 5 min

    Receivers can connect before the sender; the pipe holds the slot for up to 5 minutes for late-joiners.

Source: Hoody Pipe API — limits documented for /api/v1/pipe/[path], n parameter (1–256), and unestablished pipe TTL.

use-cases / replay-this-mornings-incident / punchline

The post-mortem isn't a doc. It's a stream everyone watches together.

no scrolling out of sync · no screenshare squinting
one URL · eight curls · one playhead
POST-MORTEM · yesterday → today
  • ARTIFACT: Confluence doc with screenshots → https://prod-pipe.../pipe/replay
  • SYNC: everyone scrolls at their own speed → lockstep playback for n=8
  • RETENTION: PDF lives on for years → ctrl-C and the pipe is gone
  • REPLAY: schedule a second Zoom → rerun the same curl
Read the pipe API
use-cases / replay-this-mornings-incident / replaces

What this replaces

The set of tools and rituals you currently invoke to walk a team through an incident timeline. Each one stores something, charges per seat, or loses the timing. The pipe is one URL with a shared playhead.

  • Confluence post-mortem doc: static page with stale screenshots
  • Notion incident report: bullet list, no synchronized timing
  • Screenshot pasting in Slack: loses the cascade order and timing
  • Manual replay-the-logs sessions: one person scrolls, the rest squint
  • Recorded Zoom timeline: everyone scrubs at their own pace
  • Datadog notebook annotations: per-seat license, no shared playhead
use-cases / replay-this-mornings-incident / cta

The cascade fires on six terminals at once. The conversation changes. People stop arguing about what happened and start watching what happened.

Read the pipe API
use-cases / replay-this-mornings-incident / related

Read the others