
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM and BTRFS dedup make the marginal cost near zero.
Twenty agents each push their metrics to a pipe URL with curl -T. Your dashboard reads those same URLs with ?progress and renders the SSE streams straight into the page. No InfluxDB, no Prometheus, no scrape interval. Just a wire.
no Prometheus, no InfluxDB, no metrics service — just SSE on a pipe
Every agent curls its line into its own pipe path. The dashboard's browser opens an EventSource on each path with ?progress. The Hoody Pipe server holds nothing — bytes that arrive on one side leave on the other.
#!/bin/sh
# Agent monitor loop — one line per second.
while true; do
  cpu=$(top -bn1 | awk '/Cpu/ {print $2}')
  mem=$(free | awk '/Mem:/ {printf "%.0f", $3/$2*100}')
  line="cpu=$cpu mem=$mem qps=$(cat /tmp/qps) ts=$(date +%s)"
  echo "$line" | curl -sS -T - "https://pipe.hoody.com/api/v1/pipe/metrics-$AGENT_ID"
  sleep 1
done
// One <script> in one HTML file. No backend.
const tiles = document.querySelectorAll('[data-agent]');
tiles.forEach((tile) => {
  const id = tile.dataset.agent;
  // ?progress turns the pipe path into an SSE stream.
  const sse = new EventSource(
    `https://pipe.hoody.com/api/v1/pipe/metrics-${id}?progress`,
  );
  sse.addEventListener('metric', (e) => {
    // Agent lines look like "cpu=12.3 mem=41 qps=87 ts=1700000000".
    const fields = Object.fromEntries(
      e.data.trim().split(/\s+/).map((kv) => kv.split('=')),
    );
    tile.querySelector('.cpu').textContent = `${fields.cpu}%`;
    tile.querySelector('.mem').textContent = `${fields.mem}%`;
    tile.querySelector('.qps').textContent = fields.qps;
  });
});

Agents curl. The browser EventSources. The pipe forwards. There is nothing in between to scale, restart, or pay for. Close the dashboard and the streams end. Open it again and you see live data within the second.
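The two awk extractions in the agent loop can be sanity-checked offline by feeding them canned top and free output, no running system required. The sample numbers here are invented stand-ins:

```shell
# Canned one-line samples of top/free output (values invented for the check).
cpu=$(printf '%%Cpu(s): 12.3 us\n' | awk '/Cpu/ {print $2}')
mem=$(printf 'Mem: 1000 370 630\n' | awk '/Mem:/ {printf "%.0f", $3/$2*100}')
echo "cpu=$cpu mem=$mem"   # prints: cpu=12.3 mem=37
```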
What you give up by deleting the backend, you get back as something simpler.
There is no scrape interval to wait for. The agent's last write is the dashboard's current frame. The pipe forwards directly — no intermediate flush.
No retention policy because there is no storage. No disk to fill, no compaction window, no time-series index to corrupt. The metric exists while a reader is watching.
The dashboard is an HTML file you can host anywhere — or open from disk. There is no agent to install, no daemon to run, no DataDog seat to provision. The pipe URL is the entire stack.
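That single file needs only the tiles the script queries. A minimal sketch, assuming two agents named web-1 and web-2: the data-agent attributes and the .cpu/.mem/.qps class names are the ones the script selects on, everything else is placeholder:

```html
<!doctype html>
<meta charset="utf-8">
<title>Agents</title>
<div data-agent="web-1">
  <span class="cpu">0%</span> <span class="mem">0%</span> <span class="qps">0</span>
</div>
<div data-agent="web-2">
  <span class="cpu">0%</span> <span class="mem">0%</span> <span class="qps">0</span>
</div>
<script>
  /* the EventSource snippet from earlier goes here, inline */
</script>
```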
The standard agent-to-dashboard stack has four moving parts. The pipe model has zero. Same wire, half a screen of curl.
When you skip the database, the things you used to manage stop existing. There is no retention policy on a wire.
A pipe path is small but real infrastructure. Numbers come from the Hoody Pipe API guarantees, not from invented benchmarks.
Up to 256 dashboards or curl tails can subscribe to the same path with ?n. The slowest reader applies backpressure but never blocks the others.
Up to 50 ?progress SSE viewers per path. They don't consume a receiver slot — your dashboard tabs and your terminal can watch in parallel.
The server doesn't write to disk. Bytes that arrive on the sender side leave on the reader side. There is no flush window between them.
Limits per the Hoody Pipe API: receiver count 1–256, progress spectators capped at 50 per path, 30-minute progress connection TTL, 30-second post-transfer linger.
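The zero-storage forwarding described above behaves like a POSIX FIFO: the writer hands bytes straight to the reader, and nothing is buffered to disk in between. A local sketch of that semantics (the metric line is a made-up sample):

```shell
# A named FIFO as a local model of the pipe: the writer blocks until a
# reader opens the other end, then the bytes cross without touching disk.
fifo=$(mktemp -u)
mkfifo "$fifo"
( echo "cpu=12 mem=40 qps=87 ts=1700000000" > "$fifo" ) &
read -r line < "$fifo"   # the reader sees the sender's bytes immediately
echo "$line"
rm -f "$fifo"
```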
The dashboard didn't query a database. The bytes just arrived.
Prometheus, InfluxDB, and DataDog are the standard reach-for-it tools when you want a metrics dashboard. Each one charges you a database and a daemon. The pipe charges you neither.
Stop scraping. Stop storing. Watch the wire — and when you stop watching, the wire is empty.