
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM and BTRFS dedup make the marginal cost near zero.
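Whether KSM is actually merging pages is visible in sysfs. A quick host-side check (Linux only; the sysfs path is standard, the numbers depend on the machine):

```shell
# pages_shared: distinct pages KSM keeps; pages_sharing: how many mapped
# copies point at them. A high sharing/shared ratio means heavy dedup.
ksm=/sys/kernel/mm/ksm
for f in pages_shared pages_sharing; do
  printf '%s: %s\n' "$f" "$(cat "$ksm/$f" 2>/dev/null || echo n/a)"
done
```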
Your matrix CI fans out across thirty test runners, and each one needs the same 800 MB image. Stream the tarball into one pipe path with ?n=30: all thirty workers curl the same URL, the bytes go through once, the server holds nothing, and there are no registry credentials to rotate.
# stream the image once
tar c ./image.tar | curl -T - https://pipe.hoody.com/build-12af?n=30

One sender on the left. Thirty GET receivers on the right. The pipe waits until everyone is listening, then the bytes move through once.
The pipe is a fan-out router with no disk. The sender's POST to /api/v1/pipe/[path]?n=30 blocks until thirty receivers connect to the same URL with the same n. Then the bytes flow from the build container straight through to every runner, simultaneously, at the speed of the slowest receiver.
tar c | curl -T - https://pipe/.../build?n=30
The build container pipes the tarball straight into curl. No file written, no registry pushed.
POST /api/v1/pipe/[path]?n=30
The server holds the sender until all thirty receivers connect. Mismatched n returns 400. Pre-connected receivers are fine.
curl https://pipe/.../build?n=30 | tar x
Each runner gets identical bytes. Backpressure flows from the slowest receiver, not from the sender's bandwidth.
Nothing persists. Nothing is cached. The pipe brokers the connection, then steps out of the way. When the slowest runner finishes, the transfer finishes — and the URL is gone.
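The one-to-many copy is easy to picture locally: tee does the same broadcast for processes on one machine, minus the rendezvous and the HTTP. A sketch of the semantics, not the Hoody implementation:

```shell
# Local analogy for the fan-out: one stream in, identical copies out.
# The pipe does this across the network, to n HTTP receivers.
dir=$(mktemp -d)
printf 'image-bytes' | tee "$dir/r1" "$dir/r2" "$dir/r3" > /dev/null
cmp -s "$dir/r1" "$dir/r2" && cmp -s "$dir/r2" "$dir/r3" && echo "all receivers identical"
# prints: all receivers identical
```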
Naive: thirty registry pulls of the same 800 MB tarball, thirty cold caches, thirty network round-trips. Pipe: one egress, one transfer, the slowest receiver sets the pace.
12s: One egress at line rate. The slowest receiver sets the pace, but no one re-downloads.
1× / build: Bytes leave the builder once, fan out at the pipe. No S3 GET fees, no Docker Hub pulls.
0 bytes: The pipe holds nothing on disk. No registry to clean up, no cache key to invalidate.
Wall-time figure assumes a 30-way matrix on the same regional network as the build container; cross-region transfers gate on inter-region bandwidth, not the pipe.
Once the build is one URL and thirty curls, a stack of CI scaffolding goes away. No artifact storage to age out. No registry credentials to rotate. No cache action to debug.
Backpressure is built into the pipe. The fast workers don't waste a registry round-trip waiting for the slow one — they wait at the pipe, then drink at the same rate. No one re-downloads.
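The hold-until-connected behavior is the same rendezvous an OS FIFO gives a single sender and receiver: the writer blocks until someone opens the read end, then the bytes move at the reader's pace. A local sketch of that rendezvous for n=1 (no Hoody involved):

```shell
# A named FIFO blocks the writer until a reader opens the other end --
# the same wait-then-stream rendezvous the pipe does over HTTP.
dir=$(mktemp -d)
mkfifo "$dir/p"
( printf 'payload' > "$dir/p"; touch "$dir/sent" ) &   # sender blocks on open
sleep 0.2
before=$([ -f "$dir/sent" ] && echo done || echo waiting)
got=$(cat "$dir/p")                                    # receiver connects; bytes move
wait
after=$([ -f "$dir/sent" ] && echo done || echo waiting)
echo "before=$before got=$got after=$after"
# prints: before=waiting got=payload after=done
rm -rf "$dir"
```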
Nothing is pushed to a registry, so nothing has to authenticate to one. The URL itself is the credential — short-lived, scoped to one transfer, evicted when the build finishes.
Bytes leave the builder once. The pipe broadcasts. You pay one egress per build instead of thirty registry pulls per matrix run.
The pipe is per-build, not per-key. There is no GitHub Actions cache to mis-hit, no buildx layer mystery, no stale tarball from last week's main.
Same pattern handles node_modules, .pnpm-store, target/, the wheel cache, the dataset shard. If it streams, it fans out.
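The only coordination the matrix needs is agreeing on the path. Hashing the lockfile is one way to get a path every runner can compute independently; the host, path scheme, and key length below are illustrative, not a Hoody convention:

```shell
# Derive a per-build pipe path from the dependency lockfile.
# Stand-in content shown; in CI this would be e.g.:
#   KEY=$(sha256sum pnpm-lock.yaml | cut -c1-12)
KEY=$(printf 'pnpm-lock-contents' | sha256sum | cut -c1-12)
URL="https://pipe.hoody.com/pnpm-store-$KEY?n=30"
# sender:   tar c .pnpm-store | curl -T - "$URL"
# receiver: curl "$URL" | tar x
echo "$URL"
```

Because the key is derived from content, a changed lockfile yields a fresh URL, and an unchanged one lands every runner on the same transfer.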
One sender. Thirty receivers. Zero S3 bills.
A 30-way push that took ninety seconds and an S3 hit takes twelve seconds and a single egress. No one re-downloads. No registry credentials get rotated. The URL evicts itself when the matrix finishes.
A matrix-CI flow normally assembles a registry, a cache action, a mirror, and a custom upload step. The pipe folds them into one URL.
Stop pushing the same tarball thirty times. Push it once. Let thirty curls share the stream.