
You're running an eight-hour migration. Five people want a status update without consuming a receiver slot or interrupting the stream. Append ?progress to the pipe URL. Anyone who opens it gets a live HTML dashboard: bytes transferred, current speed, ETA, state transitions.
?progress is a side-channel read. It never claims a receiver slot, never creates backpressure, never touches the bytes. The migration runs at full bandwidth regardless of how many people are watching.
She pastes the pipe URL with ?progress into her phone browser. The HTML dashboard appears instantly — state: waiting, 0% — no install, no login, no receiver slot consumed.
SSE pushes a state: streaming event. The progress bar snaps to 22%, bytes tick up, MB/s settles at 118. The dashboard updates itself every 250 ms without a single page reload.
She closes the tab. Her spectator connection drops. The migration doesn't notice — it was never in the data path. The sender and its one real receiver carry on.
She reopens the URL at sunrise. The dashboard shows a done event: 7.6 GB transferred, 8h 2m, no errors. Server-side state survives the refresh — latecomers always see the final line.
She forwards the URL to the team Slack. Three engineers open it and see the same done state. No status thread to close, no Grafana panel to un-star. One URL, five witnesses, zero interruptions.
# 1. Sender — eight-hour migration. Same as always.
tar czf - /var/lib/postgres | curl -T - "$PIPE/api/v1/pipe/migration"
# 2. Receiver — the only client that matters for backpressure.
curl "$PIPE/api/v1/pipe/migration" | tar xzf - -C /restore
# 3. Boss opens the URL on her phone. HTML dashboard. No setup.
# => https://pipe.hoody.com/api/v1/pipe/migration?progress
# 4. You want SSE for a Slack bot? Same URL, different Accept.
curl -N -H "Accept: text/event-stream" \
  "$PIPE/api/v1/pipe/migration?progress" \
  | grep -E '^event: (progress|state|done)'
# event: state
# data: {"state":"streaming","receivers":1}
# event: progress
# data: {"bytes":5046464512,"mbps":118,"etaSec":840}
# event: done
# data: {"bytes":8160000000,"durationSec":28800}

Three SSE event types: state for transitions (idle → waiting → streaming → complete), progress every 250 ms while bytes flow (bytes, mbps, etaSec), done once at the end with final stats. Up to fifty spectators per path, each with a five-minute connect window.
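If you want the numbers in a terminal rather than the dashboard, the event stream is plain text and easy to post-process. A minimal sketch, assuming GNU sed and jq are installed and that the data payloads match the samples above:

# Strip the SSE framing, keep only progress payloads, print a one-line status per tick.
curl -sN -H "Accept: text/event-stream" "$PIPE/api/v1/pipe/migration?progress" \
  | sed -un 's/^data: //p' \
  | jq --unbuffered -r 'select(.etaSec != null) | "\(.bytes) bytes, \(.mbps) MB/s, ETA \(.etaSec)s"'

Swap the filter for select(.durationSec != null) and the same pipeline prints only the final done line.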
?progress is a side-channel. Boss, coworker, external client, on-call — they all open the same URL. None of them affect the transfer. All of them see the same live state.
The boss bookmarks the URL on her phone and checks twice a day: HTML dashboard, no login.
A coworker curls the SSE into a Slack webhook as a one-liner status bot (sketch below): text/event-stream, same URL.
An external client embeds it in a public status page that polls every 30 s: 0 receiver slots, full live state.
The on-call wires the done event to PagerDuty and gets pinged when it finishes: event: done, one-shot trigger.
Watching the migration is its own URL. The migration doesn't notice.
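Of these, the Slack bot is the easiest to sketch. A minimal one-liner, assuming a standard Slack incoming-webhook URL in $SLACK_WEBHOOK (the variable name and message text are placeholders, not part of the pipe API):

# Block until the stream emits its single done event, then post once to Slack.
curl -sN -H "Accept: text/event-stream" "$PIPE/api/v1/pipe/migration?progress" \
  | grep -q '^event: done' \
  && curl -s -X POST -H "Content-Type: application/json" \
       -d '{"text":"migration pipe: done"}' "$SLACK_WEBHOOK"

The PagerDuty and status-page cases follow the same shape: swap the second curl for whatever endpoint you already page or publish through.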
Every team has a way to answer 'how far along is it?' Most of those ways cost a service to run, a dashboard to wire, or a chat channel to babysit. A query parameter on the pipe URL costs none of that.
Send the URL. Stop sending updates.