
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM and BTRFS dedup make the marginal cost near zero.
Service A pipes its output to a path. Service B pulls from the same path. The pipe routes bytes between the two containers in real time. No Redis, no consumer group, no broker daemon to babysit.
POST in. GET out. The middle hop is a URL.
Two HTTP verbs and one path. Either side can connect first; the pipe waits up to five minutes for the counterpart and then streams the bytes through. No queue, no offset, no group.
Producer offline? Consumer GETs and waits. Consumer offline? Producer PUTs and waits. The pipe holds the connection up to five minutes until the counterpart arrives.
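The rendezvous behavior can be tried locally with a POSIX named pipe, which likewise blocks whichever side arrives first. This is a local analogy only; the Hoody pipe adds HTTP transport and the five-minute cap on the wait:

```shell
# Local analogy: a named pipe blocks whichever side connects first.
mkfifo /tmp/jobs.fifo

# Producer arrives first and waits for a reader to attach.
seq 1 3 > /tmp/jobs.fifo &

# Consumer attaches; bytes stream straight through, nothing queued on disk.
cat /tmp/jobs.fifo

rm /tmp/jobs.fifo
```

Swap the order and start cat first, and the consumer waits instead, exactly as the paragraph above describes.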
When the consumer is slow, the kernel slows the producer's stream. No queue depth to monitor, no high-watermark to tune. The socket is the buffer.
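The same kernel flow control is easy to see locally: an unbounded producer runs only as fast as its reader drains it, and stops when the reader goes away. A local sketch, not the Hoody API itself:

```shell
# 'yes' would emit lines forever, but pipe flow control throttles it
# to the reader's pace; when head exits after three lines, the
# producer gets SIGPIPE and stops. The pipe was the buffer.
yes 'job' | head -n 3
```

No queue ever formed; the producer simply never got ahead of the consumer by more than the kernel's pipe buffer.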
The honest trade-off: nothing is persisted. For durable queueing, this isn't it. For fast container-to-container fan-out, the broker just disappeared.
# Container A — producer streams jobs to the pipe
service-a | curl -T - https://api.hoody.com/api/v1/pipe/jobs
# Container B — consumer reads from the same path
curl https://api.hoody.com/api/v1/pipe/jobs | service-b
# Either side can connect first.
# The pipe holds the connection up to 5 minutes
# until the counterpart arrives, then streams through.

PUT (or POST) sends. GET receives. Bytes never land on disk; they cross the wire from producer to consumer with the pipe forwarding in flight.
When a logger or a metrics collector needs the same events, raise the ?n reader count and add another curl. No broker config, no consumer group, no auth secret to rotate. The new reader just exists.
One slow reader applies backpressure to the producer; it does not block the others. Up to 256 readers per path.
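A sketch of the fan-out shape. The exact semantics of the ?n reader-count parameter are assumed from the text; the runnable part below uses tee as a local stand-in for the pipe's multi-reader path:

```shell
# On Hoody this would be (assumed form, per the text's "?n"):
#   producer: service-a | curl -T - 'https://api.hoody.com/api/v1/pipe/events?n=3'
#   readers:  three separate  curl https://api.hoody.com/api/v1/pipe/events
# Local stand-in: tee copies one producer stream to three readers.
printf 'event-1\nevent-2\n' | tee /tmp/worker.log /tmp/logger.log > /tmp/metrics.log

cat /tmp/logger.log   # the new reader got the full stream
```

Unlike tee, the real pipe lets each reader attach and detach independently; adding a logger is just one more GET on the same URL.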
The broker existed because two containers couldn't talk directly. With the pipe, they can. Everything the broker added — auth, clients, ops — drops away.
Nothing to deploy, monitor, or upgrade. The pipe is the platform; the URL is the only API.
One Hoody token, scoped per project. No per-broker username, password, or ACL file.
Anything that speaks HTTP can produce or consume — bash, Python, Go, a phone, a browser. No client library.
Messages live in flight, not at rest. No disk to fill, no retention policy, no GDPR question about queued data.
TCP slows the producer when the slowest consumer falls behind. No lag dashboards because there is no lag — just stream rate.
Producer and consumer never see each other's IPs. They share a URL. Move either to another host without rewiring.
The broker is the URL. The URL is the broker.
The middle layer collapses. What used to be a stateful daemon with credentials, clients, and a runbook is now a path. The architecture diagram has one fewer box.
one URL, one curl, one HTTP verb
Every tool teams reach for when one container needs to hand bytes to another adds a daemon, a config, and an on-call rotation. The pipe charges none of that.
Stand up Redis to talk between two containers? Or share a URL.