LLMs already know HTTP. That's the whole substrate.
Every Hoody capability is an HTTP call. An LLM trained on the public internet already knows curl, JSON, and REST conventions. No SDK bridge. No MCP adapter to write. The agent can operate the platform on its first reasoning turn.
HTTP everywhere · 100+ endpoints per container · peer architecture · agents spawning agents
The protocol LLMs were already trained on.
Every GPT, Claude, Gemini, and Llama has read millions of HTTP examples during pretraining — curl commands, REST endpoints, JSON bodies, status codes. They know the grammar natively. Hoody exposes the platform through that grammar so nothing has to be bridged.
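What that looks like on the wire, as a minimal sketch. The container URL, the token variable, and the /terminal/exec path are illustrative placeholders, not the real API surface; the shape of the call is the point.

  # Hypothetical first call: run a command in a container.
  # URL, token variable, and endpoint path are placeholders.
  curl -s -X POST "https://container-a.example.hoody.dev/terminal/exec" \
    -H "Authorization: Bearer $HOODY_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"command": "ls -la /workspace"}'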
No SDK generation step
Hoody clients don't need an SDK. The LLM reads the OpenAPI spec or just the docs, then emits curl. Works with every model — today and tomorrow.
MCP adapters become optional
The MCP client (/platform/mcp) extends the agent's toolset with third-party servers. But the Hoody platform itself is reachable via plain HTTP — no translation layer.
OpenAPI is the contract
Every Kit service exposes /openapi.json. Agents read the spec, know the schemas, and call the endpoints. Self-documenting by construction.
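The discovery step as a sketch, using the /openapi.json path named above. The base URL is a placeholder; the jq filter just prints the advertised paths.

  # Enumerate the endpoints a Kit service advertises.
  curl -s "https://container-a.example.hoody.dev/openapi.json" \
    -H "Authorization: Bearer $HOODY_TOKEN" | jq -r '.paths | keys[]'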
Human-readable debugging
Agent made a mistake? Look at the curl it generated. The trace is the same format a human operator would write by hand.
Every container gives an agent a hundred tools on arrival.
The Kit services expose 100+ documented HTTP endpoints per container: terminal execution, file operations, SQL queries, browser automation, script deployment, display screenshots, notifications. An agent that connects to a Hoody container has immediate programmatic access to everything a human developer has; a few representative calls are sketched after the list.
Run commands, stream output, capture sessions, export history
Read, write, search, compress, transfer across 60+ cloud backends
Deploy scripts as HTTP APIs, execute with auth, cache results
Query, mutate, introspect schemas, concurrent-safe writes
Navigate, screenshot, interact, scrape, fill forms
Screenshot X11, list windows, observe GUI state
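A few of those calls as sketches, one per service. Every endpoint name and payload here is a hypothetical stand-in; the real names live in each service's /openapi.json.

  # Hypothetical paths; consult /openapi.json for the real ones.
  # Files: write a document into the workspace.
  curl -s -X PUT "$BASE/files/workspace/notes.md" \
    -H "Authorization: Bearer $TOKEN" --data-binary @notes.md
  # SQLite: run a query.
  curl -s -X POST "$BASE/sqlite/query" \
    -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
    -d '{"sql": "SELECT count(*) FROM users"}'
  # Display: capture the current screen.
  curl -s "$BASE/display/screenshot" \
    -H "Authorization: Bearer $TOKEN" -o screen.png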
Containers are peers. Agents talk to each other directly.
There is no central orchestrator. An agent in container A can open container B's URL, read its files, run its terminal, query its SQLite — as long as it has the URL and the auth. Multi-agent systems emerge from HTTP, not from a coordination layer on top.
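Concretely, and assuming agent A holds container B's URL plus a valid token (the files path is a placeholder):

  # From inside container A: read a file that lives in container B.
  curl -s "https://container-b.example.hoody.dev/files/workspace/report.md" \
    -H "Authorization: Bearer $B_TOKEN"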
Agent cascade
Agent A triggers agent B via HTTP. B does specialized work, returns. Hierarchy forms from function, not from architecture.
Fan-out parallelism
Agent A spawns 10 copies of a task across 10 containers. Each is an independent agent operating in isolation.
Review chain
Agent A drafts code. Agent B reviews by opening A's files. Agent C runs tests by POSTing to exec (sketched below).
Specialist agents
One agent dedicated to filesystem work. Another to database. Another to browser. They talk over HTTP, not in-process.
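The review chain from above as a sketch, with hypothetical endpoint names throughout. Nothing coordinates B and C except the HTTP calls they make into A's container.

  # Agent B reads agent A's draft.
  curl -s "$CONTAINER_A/files/workspace/draft.py" \
    -H "Authorization: Bearer $TOKEN"
  # Agent C runs the tests in A's container.
  curl -s -X POST "$CONTAINER_A/terminal/exec" \
    -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
    -d '{"command": "pytest -q"}'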
The agent doesn't suggest. It ships.
Legacy agent setups stop at 'tell the human what to run.' Hoody agents run the command themselves. The failure path is a snapshot rollback, not a refusal to act, and that shifts what an agent is actually useful for (the full loop is sketched after the steps below).
Reason
Agent reads, thinks, plans — same as any LLM loop.
Act
Agent POSTs to terminal / exec / files / sqlite. The command runs in production.
Observe
Agent reads the response, the exit code, the filesystem change, the stderr. Real-world feedback.
Recover
Something broke? PATCH to a snapshot (/methods/data-state) rolls back. Safe to take risks.
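The act-observe-recover half of the loop as a shell sketch. /methods/data-state is the path named above; the exec endpoint, the response field names, and the rollback payload are assumptions.

  # Act: run the risky command (hypothetical exec endpoint).
  result=$(curl -s -X POST "$BASE/terminal/exec" \
    -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
    -d '{"command": "./migrate.sh"}')
  # Observe: check the real exit code (field name is an assumption).
  if [ "$(echo "$result" | jq -r '.exit_code')" != "0" ]; then
    # Recover: roll back to the pre-migration snapshot
    # (payload shape is an assumption).
    curl -s -X PATCH "$BASE/methods/data-state" \
      -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
      -d '{"action": "rollback", "snapshot": "pre-migration"}'
  fi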
Agents create containers. Containers run agents.
Because container creation is a single HTTP POST, an agent can create new containers on demand. The multi-agent system is emergent; no orchestrator needed. Containers can be short-lived scratch workers, long-running specialists, or anything in between (the fan-out is sketched after the steps below).
Agent decides to parallelize
The task calls for testing 10 hypothesis branches. The agent recognizes the fan-out opportunity.
POST /projects/ID/containers × 10
Ten new containers, each with its own agent instance.
Dispatch
Each worker gets a different branch. Agents work in parallel, reporting back to the parent container.
Teardown
Winner keeps running. Losers get DELETEd. Cost: near zero per throwaway container.
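The whole fan-out as a sketch. POST /projects/ID/containers and the DELETE teardown are named above; the request body and the ID handling are assumptions.

  # Spawn ten workers, one hypothesis branch each.
  for i in $(seq 1 10); do
    curl -s -X POST "$API/projects/$PROJECT_ID/containers" \
      -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
      -d "{\"name\": \"branch-$i\"}"    # body shape is an assumption
  done
  # Teardown: a losing branch is one DELETE away.
  curl -s -X DELETE "$API/projects/$PROJECT_ID/containers/$LOSER_ID" \
    -H "Authorization: Bearer $TOKEN"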
Every HTTP call is future training data.
When all operations are HTTP requests, they are all captured as proxy logs by default. A user's workflow becomes a structured dataset of (prompt, action, outcome) tuples — without any instrumentation. This is what makes the platform AI-native, not just AI-compatible.
Every action in proxy logs
Proxy logs capture method, path, body (optional), response. Every operation is traceable.
Structured by default
JSON requests and responses are already the right shape for supervised fine-tuning or RLHF pipelines; one conversion is sketched below.
Your data, your training
Logs stay on your bare metal. Hoody sees none of it unless you export.
Self-building infrastructure
MITM scripts can generate documentation, optimize endpoints, or predict the most likely next request from the log history.
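The conversion mentioned above as a one-line sketch, assuming a JSON-lines proxy log with method, path, body, and response fields (the real log schema may differ):

  # Turn proxy logs into (action, outcome) training pairs.
  jq -c '{action: {method, path, body}, outcome: .response}' proxy.log > pairs.jsonl

Pair each record with the prompt that produced it and you have the (prompt, action, outcome) tuples described above.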
Give an LLM curl and it already knows how to use it.
Hand your agent a Hoody container URL and a token. It can now execute commands, write files, query databases, spawn more containers — all from the protocol it was trained on.
See also — /platform/ai-gateway for the model side, /platform/mcp for external tool integration, /kit/agent for the agent runtime.