use-cases / inter-container-ipc-without-the-broker / hero
PIPE · CONTAINER-TO-CONTAINER

Inter-container IPC without the broker

Service A pipes its output to a path. Service B pulls from the same path. The pipe routes bytes between the two containers in real time. No Redis, no consumer group, no broker daemon to babysit.

Read the pipe docs
use-cases / inter-container-ipc-without-the-broker / mechanism

How the pipe replaces the broker

Two HTTP verbs and one path. Either side can connect first; the pipe waits up to five minutes for the counterpart and then streams the bytes through. No queue, no offset, no group.

01 · HANDSHAKE

Either side can connect first

Producer offline? Consumer GETs and waits. Consumer offline? Producer PUTs and waits. The pipe holds the connection up to five minutes until the counterpart arrives.

02 · BACKPRESSURE

The TCP connection is the queue

When the consumer is slow, the kernel slows the producer's stream. No queue depth to monitor, no high-watermark to tune. The socket is the buffer.
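The same blocking semantics can be sketched locally with a POSIX FIFO, a rough stand-in for the pipe's TCP socket (paths and job names here are illustrative, not part of the pipe API):

```shell
# A local FIFO shows the same idea: no data at rest,
# bytes cross from writer to reader through a kernel buffer.
mkfifo /tmp/jobs.pipe

# Consumer connects first and waits for the producer.
cat /tmp/jobs.pipe > /tmp/jobs.out &

# Producer connects second; if the reader stalls, this write
# stalls with it. The kernel is the only buffer.
printf 'job-1\njob-2\njob-3\n' > /tmp/jobs.pipe
wait
rm /tmp/jobs.pipe
```

Swap the FIFO path for the pipe URL and the kernel doing the throttling is the TCP stack instead of a local buffer; the producer still slows down exactly when the consumer does.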

03 · STATELESS

Forgets messages once delivered

The honest trade-off: nothing is persisted. For durable queueing, this isn't it. For fast container-to-container fan-out, the broker just disappeared.

producer.sh · consumer.sh
# Container A — producer streams jobs to the pipe
service-a | curl -T - https://api.hoody.com/api/v1/pipe/jobs

# Container B — consumer reads from the same path
curl https://api.hoody.com/api/v1/pipe/jobs | service-b

# Either side can connect first.
# The pipe holds the connection up to 5 minutes
# until the counterpart arrives, then streams through.

PUT (or POST) sends. GET receives. Bytes do not land on disk anywhere — they cross the wire from producer to consumer with the pipe forwarding in flight.

use-cases / inter-container-ipc-without-the-broker / fanout

Add a third reader. Then a fourth.

When a logger or a metrics collector needs the same events, raise ?n and add a curl. No broker config, no consumer group, no auth secret to rotate. The new reader just exists.

pipe/jobs · readers?n=4 · ALL FAN-OUT
  • + ADDED
    consumer-a · logger · metrics · audit
    curl ?n=4 — joins the fan-out
  • = STABLE
    producer
    service-a | curl -T - ?n=4
  • − REMOVED
    redis · consumer-group.yml · broker.conf
    no broker, no config

One slow reader applies backpressure to the producer; it does not block the others. Up to 256 readers per path.
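The duplicate-every-byte semantics of the fan-out can be previewed locally with `tee` and a few FIFOs before wiring anything to the pipe URL (the reader names and paths below are illustrative; on the pipe itself the `?n` query parameter does the duplication):

```shell
# Three local readers standing in for consumer, logger, metrics.
mkfifo /tmp/r1 /tmp/r2 /tmp/r3
cat /tmp/r1 > /tmp/consumer.out &
cat /tmp/r2 > /tmp/logger.out   &
cat /tmp/r3 > /tmp/metrics.out  &

# One producer; tee copies every byte to every reader,
# the way ?n=3 would on a single pipe path.
printf 'evt-1\nevt-2\n' | tee /tmp/r1 /tmp/r2 > /tmp/r3
wait
rm /tmp/r1 /tmp/r2 /tmp/r3
```

Each reader receives the full stream, not a share of it; that is the fan-out contract the pipe makes for up to 256 readers.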

use-cases / inter-container-ipc-without-the-broker / advantages

What the URL gives you that the broker took

The broker existed because two containers couldn't talk directly. With the pipe, they can. Everything the broker added — auth, clients, ops — drops away.

  • No daemon to operate

    Nothing to deploy, monitor, or upgrade. The pipe is the platform; the URL is the only API.

  • Bearer auth, not broker creds

    One Hoody token, scoped per project. No per-broker username, password, or ACL file.

  • Same wire, any language

    Anything that speaks HTTP can produce or consume — bash, Python, Go, a phone, a browser. No client library.

  • No persisted state

    Messages live in flight, not at rest. No disk to fill, no retention policy, no GDPR question about queued data.

  • Backpressure is the protocol

    TCP slows the producer when the slowest consumer falls behind. No lag dashboards because there is no lag — just stream rate.

  • Containers stay decoupled

    Producer and consumer never see each other's IPs. They share a URL. Move either to another host without rewiring.
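A sketch of how the bearer token slots into the curl calls above; `HOODY_TOKEN` is a placeholder name, not an official variable, and the exact header shape is the standard HTTP bearer scheme rather than anything pipe-specific:

```shell
# One project-scoped token is the only credential; there is
# no broker username, password, or ACL file to manage.
HOODY_TOKEN="<your-project-token>"
AUTH="Authorization: Bearer $HOODY_TOKEN"

# Producer and consumer pass the same header:
#   service-a | curl -T - -H "$AUTH" https://api.hoody.com/api/v1/pipe/jobs
#               curl      -H "$AUTH" https://api.hoody.com/api/v1/pipe/jobs | service-b
echo "$AUTH"
```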

use-cases / inter-container-ipc-without-the-broker / punchline

The broker is the URL. The URL is the broker.

The middle layer collapses. What used to be a stateful daemon with credentials, clients, and a runbook is now a path. The architecture diagram has one fewer box.

WAS THERE
  • redis cluster · auth, replicas, failover
  • consumer-group.yml · offsets, partitions
  • broker SDK in every service · client lib per language
IS THERE
/api/v1/pipe/jobs

one URL, one curl, one HTTP verb

Read the pipe docs
use-cases / inter-container-ipc-without-the-broker / replaces

What this replaces

The infrastructure teams reach for when one container needs to hand bytes to another. Each one adds a daemon, a config, and an on-call rotation. The pipe costs none of that.

  • Redis pub/sub · Daemon, auth, clients per language
  • RabbitMQ · Cluster to operate for in-flight bytes
  • NATS · Another protocol, another sidecar
  • ZeroMQ · Library in every service, no URL
  • Custom routing daemons · Bespoke service to keep alive
  • gRPC streaming services · Schema, codegen, mutual TLS overhead
  • Apache Kafka brokers · Storage layer for messages you don't keep
use-cases / inter-container-ipc-without-the-broker / cta

Stand up Redis to talk between two containers? Or share a URL.

See the pipe API
use-cases / inter-container-ipc-without-the-broker / related

Read the others