
A 200GB Postgres backup in Frankfurt. A fresh box in Singapore. You skip the S3 round-trip. One curl pipes pg_dump to a Hoody pipe URL; one curl on the other side streams it straight into psql. Bytes are in flight, never at rest.
200GB · zero hops · zero hold
Hoody Pipe is a named path on an HTTP server. The sender PUTs a stream to it, the receiver GETs the same path, and the server splices the two together. Nothing is written to disk; the pipe holds zero bytes by design.
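To see the primitive in isolation before the real migration, here is a minimal two-terminal sketch; hoody.example stands in for wherever your Hoody endpoint lives, and the path name is arbitrary:

    # terminal A: PUT stdin into a pipe path
    echo hi | curl -T - https://hoody.example/api/v1/pipe/hello

    # terminal B: GET the same path and receive the bytes
    curl https://hoody.example/api/v1/pipe/hello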
From the source box, pg_dump | gzip | curl -T - to the pipe URL. The PUT body streams as fast as TCP backpressure allows. The server holds the connection until a receiver shows up on the same path.
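Spelled out as a sketch (hoody.example and mydb are placeholders; /api/v1/pipe/migration is the path used throughout this walkthrough):

    # source box (Frankfurt): stream the dump straight into the pipe, nothing touches local disk
    pg_dump mydb | gzip | curl -T - https://hoody.example/api/v1/pipe/migration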
PUT /api/v1/pipe/migration
When the receiver's GET lands on the same path, Hoody splices the upload's bytes directly into the download's response. No buffer, no on-disk staging, no async commit — just a direct stream between two HTTP sockets.
0 bytes on disk
From the destination box, curl GETs the path and pipes the response through gunzip | psql. The receiver-side stream finishes the second the sender's last byte lands. No retry, no manifest, no cleanup.
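The receiving side, with the same placeholder host and a placeholder target database:

    # destination box (Singapore): restore while the bytes are still arriving
    curl -s https://hoody.example/api/v1/pipe/migration | gunzip | psql targetdb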
GET /api/v1/pipe/migration
Connection ordering doesn't matter — the receiver can curl first and block until the sender connects (or vice versa), up to the 5-minute pipe TTL. Backpressure flows end-to-end: a slow psql throttles the curl on the source. There's no queue to overflow because there's no queue.
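One way to watch the backpressure claim hold, assuming pv is installed on the destination box: cap the restore side and the sender slows to match.

    # destination box: artificially limit the stream to ~10 MiB/s
    curl -s https://hoody.example/api/v1/pipe/migration | pv -qL 10m | gunzip | psql targetdb
    # the pg_dump | curl on the source box now drains at the same rate; nothing piles up in between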
These are not pseudocode. Open two terminals on the two servers, run one each, and watch a 200GB backup leave one cloud and land in another.
PUT (curl -T) is preferred because it's how curl wants to upload a stream. POST works identically — same path, same status messages. Use ?n=N on both sides if you need to fan out the same dump to many receivers.
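A fan-out sketch as I read that option, with the placeholder host and two destination boxes:

    # source box: declare two receivers for one upload
    pg_dump mydb | gzip | curl -T - 'https://hoody.example/api/v1/pipe/migration?n=2'

    # each destination box runs the same GET
    curl -s 'https://hoody.example/api/v1/pipe/migration?n=2' | gunzip | psql targetdb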
A third laptop opens the same pipe URL with ?progress and gets a real-time SSE feed of bytes-per-second, ETA, and connected receivers. Spectating doesn't consume a receiver slot — fifty teammates can watch the migration without changing the n value or interfering with the transfer.
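A spectator sketch; ?progress is the parameter described above, while the fields named in the comment are what I'd expect from such a feed rather than a documented schema:

    # any third machine: watch without consuming a receiver slot
    curl -sN 'https://hoody.example/api/v1/pipe/migration?progress'
    # prints a server-sent-event stream (bytes moved, rate, ETA, receiver count) until the pipe closes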
The S3 round-trip looks simple on a whiteboard. In production it's a stack of moving parts that all charge by the second. The pipe collapses the entire stack into the transport itself.
S3, GCS, Azure Blob — the round-trip exists only because there was no other place to park the bytes. The pipe is the path. There is no bucket to provision, lifecycle-rule, or scrub afterwards.
Egress on the upload, egress on the download — twice. With the pipe the bytes leave Frankfurt and arrive in Singapore in one hop. You're paying for the seconds the connection was open, not for storage you'll delete tomorrow.
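For contrast, the round-trip this replaces, sketched with the AWS CLI and a placeholder bucket; both commands stream too, but the bytes still stop in the bucket, and the transfer is billed leaving Frankfurt and again leaving the bucket:

    # Frankfurt: park the dump in a bucket
    pg_dump mydb | gzip | aws s3 cp - s3://my-temp-bucket/migration.sql.gz

    # Singapore: pull it back out, then remember to delete the object
    aws s3 cp s3://my-temp-bucket/migration.sql.gz - | gunzip | psql targetdb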
Your monitoring already understands HTTP. So does your VPN, your firewall, your audit log. No new IAM identity, no new SDK, no new failure mode — it's a curl command.
Speed is bounded by the slower link end-to-end (Frankfurt egress, Singapore ingress, your TCP window). Hoody's pipe holds zero bytes — there is no server-side storage; backpressure flows directly between the two endpoints.
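If you want to see where that bound lands on a given run, a small sketch assuming pv is installed on the source box:

    # source box: pv shows the live rate of the compressed stream entering curl
    pg_dump mydb | gzip | pv | curl -T - https://hoody.example/api/v1/pipe/migration
    # with zero server-side buffering, that rate is the end-to-end rate of the whole transfer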
Two terminals, one URL, no third storage layer.
The whole migration is the same shape as cat file | wc -l. The fact that the two ends of the pipe happen to live in different data centres is an implementation detail of the URL.
The bucket, the lifecycle rule, the scrub job afterwards: all of it exists only because nobody had an HTTP path that streams. The pipe collapses the entire data-migration stack into one curl on each side.
Skip the bucket. The transport is the URL.