
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM and BTRFS dedup make the marginal cost near zero.
A fire happens. Three engineers want the same logs at the same time. One sender pipes tail -F into a Hoody Pipe URL. Anyone with the link runs curl and sees the bytes stream past in real time. No bastion, no agent install, no dashboard seat.
On the prod container, one line pipes tail straight into a Hoody Pipe URL. Receivers GET the same path. The pipe holds nothing — bytes stream through. The sender blocks until the number of receivers requested by n have connected; up to 256 readers can join the same path.
# On the production container — one line.
# tail -F follows new lines forever; curl -T - PUTs stdin
# straight into a pipe path. ?n=4 says "wait for 4 readers".
tail -F /var/log/app/*.log \
| curl -T - \
"https://prod-pipe.containers.hoody.com/api/v1/pipe/live?n=4"
# [INFO] Waiting for 4 receiver(s) to connect...
# [INFO] Streaming to 4 receiver(s)...

# Any engineer with the URL — same command, same path.
# The response body IS the live stdout of the sender.
# Up to 256 readers can join. SSE progress is available too.
curl "https://prod-pipe.containers.hoody.com/api/v1/pipe/live?n=4"
# 200 GET /v1/orders/8421 · 18ms
# POST /v1/checkout user=u_28f payload=ok
# 500 POST /v1/checkout · stripe timeout
# retrying charge attempt=2/3

Two pieces of the documented Pipe API: PUT /api/v1/pipe/[path] on the sender, GET /api/v1/pipe/[path] on every reader, both keyed by the same n. The server forwards the sender's Content-Type, holds the connection for up to a 5-minute TTL while it waits for readers, and applies backpressure if any single reader is slow.
A log URL behaves differently from a Datadog seat. It's read by URL, not by login. It vanishes when the sender stops. And it scales to a whole incident channel.
n=N is documented in the Pipe API: every reader joining the same path with the same n receives an identical fan-out copy. SREs, on-call, the founder watching from a phone — all tail the same stream at once.
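The fan-out behavior can be felt locally without touching the Hoody API at all. This is only an analogy: tee duplicating one stream into identical copies is what the pipe does for every reader on the same path.

```shell
# Local analogy of the pipe's fan-out (no Hoody server involved):
# one writer, several readers, every reader gets the same bytes.
printf 'GET /v1/orders/8421\nPOST /v1/checkout\n' \
  | tee /tmp/reader_a.log /tmp/reader_b.log > /dev/null

# Both "readers" hold byte-identical copies of the stream.
diff /tmp/reader_a.log /tmp/reader_b.log && echo "identical copies"
```

The pipe does the same duplication over HTTP, keyed by path and n, instead of over local files.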
There is nothing to install on the readers' side. Anything that speaks HTTP — curl, fetch, a browser tab, a Slack incident channel previewing the URL — is a valid log tail. The bytes are the response body.
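Because the reader side is plain HTTP, any Unix pipeline works on the stream. In an incident you would feed `curl -sN <pipe URL>` into a filter like the one below; here the sample lines from the transcript above stand in for the live stream.

```shell
# Filter a live tail down to errors. With a real pipe, replace the
# printf with: curl -sN "https://.../api/v1/pipe/live?n=4"
printf '%s\n' \
  '200 GET /v1/orders/8421 · 18ms' \
  '500 POST /v1/checkout · stripe timeout' \
  'retrying charge attempt=2/3' \
| grep --line-buffered '^500 '
# → 500 POST /v1/checkout · stripe timeout
```

`--line-buffered` matters only on the live stream: it flushes each match immediately instead of waiting for a full buffer.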
When the sender disconnects, the pipe vanishes. No retention to configure, no log volume to clean up, no leftover endpoint exposed to the internet. The URL was a path, not a place — and the path closes when the fire ends.
From the Pipe API spec. Limits and behaviors that make a URL feel like infrastructure instead of a toy.
n is capped at 256 readers per path. Your incident channel will not run out of seats.
The pipe is direct-streamed end-to-end. No intermediate disk, no retention to manage.
Receivers can connect before the sender; the server holds the slot for up to 5 minutes.
Source: Hoody Pipe API — limits documented for /api/v1/pipe/[path], n parameter, and unestablished pipe TTL.
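That reader-before-sender ordering has a familiar local cousin. Again an analogy rather than the API itself: a FIFO reader can connect before any sender exists and simply blocks, the way the pipe server holds an early receiver's slot.

```shell
# Local analogy of the 5-minute hold: the reader arrives first,
# blocks, and is unblocked when the sender finally shows up.
dir=$(mktemp -d)
mkfifo "$dir/pipe"

cat "$dir/pipe" > "$dir/received.log" &    # reader connects first, blocks
echo 'sender arrives late' > "$dir/pipe"   # sender connects, reader unblocks
wait

cat "$dir/received.log"
# → sender arrives late
```

The pipe adds what a FIFO can't: a TTL on the unestablished side, fan-out to many readers, and reachability from anywhere that speaks HTTP.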
Logs aren't a place anymore. They're a path.
Count the tools and rituals you currently invoke to get three engineers staring at the same prod log: each one charges per seat, per agent, or per dashboard. The pipe is one URL.
The fire ends. You ctrl-C the sender. The pipe vanishes. There is nothing to clean up.