
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM (kernel same-page merging) and BTRFS deduplication make the marginal cost of each additional container near zero.
You want a panel of models to review the same input. Five containers, five different providers, five opinions. An orchestration framework would invent a hundred lines to coordinate this; here you don't write them. Each agent runs in its own container with its own model handle and streams its verdict to its own pipe path. The judge process curls all five paths in parallel — the pipe holds each verdict until the judge connects, then streams the bytes through. No message bus, no orchestrator framework, no callback graph.
Five containers boot in parallel, each with a different model handle. The judge POSTs the prompt to every container's agent endpoint at once.
While the model thinks, the agent pipes its verdict into pipe/agent-N with PUT. The pipe holds the bytes — no disk, no broker — until somebody curls them out.
GET pipe/agent-1 through pipe/agent-5 with one curl each. The pipe routes each agent's bytes to the judge as soon as both ends connect.
The judge reads JSON off each path, counts the votes, returns the majority. Five containers, one HTTP path each, no SDK between them.
#!/usr/bin/env bash
set -euo pipefail
PROMPT='review this PR for security issues'
AGENTS=(claude-sonnet gpt-4o gemini llama mixtral)
BASE=https://pipe.hoody.com/api/v1/pipe
# Each container streams its verdict into its own path.
# Five paths fan-in to one judge — no broker, no SDK.
for i in "${!AGENTS[@]}"; do
  N=$((i + 1))
  curl -s -X POST "https://agent-$N.hoody.com/v1/run" \
    -d "{\"prompt\": \"$PROMPT\", \"sink\": \"$BASE/agent-$N\"}" &
done
# Collect the verdicts. Each GET blocks until its agent's bytes
# arrive — the pipe itself is the synchronization point.
VERDICTS=()
for i in $(seq 1 "${#AGENTS[@]}"); do
  VERDICTS+=("$(curl -s "$BASE/agent-$i")")
done
wait
# Tally — majority wins.
printf '%s\n' "${VERDICTS[@]}" \
  | jq -r .verdict \
  | sort | uniq -c | sort -rn | head -1

PUT pushes each verdict up. GET pulls each one down. The pipe is the wire — bytes move from agent to judge as soon as both connect, with backpressure handled per-path. To add a sixth agent you boot a sixth container and append one name to the AGENTS array.
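The hold-until-read behavior is the same rendezvous a POSIX FIFO gives you on a single machine. A minimal local sketch of the semantics — a stand-in for illustration, not the hosted pipe API:

```shell
# Local analogy for the pipe's rendezvous: a FIFO stores nothing on disk;
# the writer blocks until a reader attaches, then the bytes stream through.
dir=$(mktemp -d)
mkfifo "$dir/agent-1"

# "Agent" side (PUT stand-in): write the verdict, block until the judge reads.
echo '{"verdict":"approve"}' > "$dir/agent-1" &

# "Judge" side (GET stand-in): connect and drain the bytes.
VERDICT=$(cat "$dir/agent-1")
wait
rm -r "$dir"
echo "$VERDICT"
```

The hosted pipe does the same handshake over HTTP: the agent's PUT parks until the judge's GET connects, then the verdict streams across.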
The mechanism is one shape — five containers, five paths, one judge — but the value reads differently depending on what you're trying to defeat.
You don't need an orchestration framework to send a prompt to five APIs and average the answers. A bash loop with five curls already does this. The agent ensemble was always five HTTP calls in a trench coat.
Add an agent: another container, another pipe path, one more line in the judge's parallel fetch. Drop one: kill the container, remove the line. There's nothing to reconfigure — no message bus, no callback graph, no schema migration.
Cheap models stream first; expensive ones only finish when consensus is unclear. Because each agent is a separate container with its own pipe, you can short-circuit the panel as soon as three agree — no shared state, no abort RPC, just close the pipes.
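That short-circuit can be sketched as a streaming tally, assuming verdicts arrive one line at a time; the function name `tally_until` and the 3-of-5 threshold are illustrative, not part of any API:

```shell
# Stop counting as soon as any verdict reaches the threshold — the remaining
# agents can simply be killed and their pipes closed, no coordination RPC.
tally_until() {
  threshold=$1
  counts=$(mktemp)
  while read -r v; do
    echo "$v" >> "$counts"
    if [ "$(grep -cxF "$v" "$counts")" -ge "$threshold" ]; then
      echo "$v"
      rm -f "$counts"
      return 0
    fi
  done
  rm -f "$counts"
  return 1   # stream ended before any verdict reached the threshold
}

# Example: consensus lands on the fourth verdict; the fifth is never needed.
printf 'approve\nreject\napprove\napprove\nreject\n' | tally_until 3
```

In the ensemble, stdin would be the verdicts read off the pipe paths in arrival order, so cheap fast models get counted first and expensive ones only matter when the early votes split.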
Each agent gets a path. The path is the wire.
Five model providers, five containers, five HTTP paths — and a judge that's twenty lines of bash. The orchestration framework was always pretending HTTP didn't exist.
The judge reads each path, counts the votes, returns the majority. To add an agent, boot a container and add a path. There's no message bus to reconfigure.
Every framework here ships its own concept of "agent" plus a vendor-specific way for one agent to talk to another. The pipe collapses that surface to HTTP — a path per agent, curl in both directions.
You don't need an orchestrator. You need five containers and five pipe paths. The judge is twenty lines of bash.