Your agent, plugged into any MCP server.
Hoody Agent ships with a production-ready Model Context Protocol client. Connect GitHub, Slack, Jira, custom APIs, or any MCP-compatible server — tools are discovered at runtime and merged with the agent's built-in capabilities.
Local stdio + remote StreamableHTTP / SSE · OAuth for authenticated servers · destructive operations require confirmation
# POST against your agent container
$ curl -X POST https://PROJECT-CONTAINER-agent-1.SERVER.containers.hoody.com/api/v1/agent/mcp/servers \
-H "Content-Type: application/json" \
-d '{ "name": "linear", "enabled": true }'
<< 201 Created
{ "id": "srv_...", "name": "linear", "status": "connecting" }
The agent calls listTools() on every connected server at startup. New tools appear in the next reasoning turn.
One POST. Tools appear.
The agent calls listTools() on every connected MCP server. Whatever the server returns — filesystem, Slack messaging, Jira tickets, custom APIs — merges with the agent's native HTTP capabilities for the next reasoning step.
1. Register a server
POST to /api/v1/agent/mcp/servers with a name and enabled flag. Hoody resolves and starts the connection.
2. Discover tools
The MCP client calls listTools() on the server. The schema is dynamic — no pre-declared manifest needed.
3. Merge with native tools
Discovered tools join the agent's built-in toolset. The same prompt can hit a GitHub tool, a filesystem tool, and the agent's own container-control tools.
4. Invoke with safety
Destructive operations — file writes, external API mutations — require explicit confirmation. The agent never silently mutates state.
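The four steps above can be sketched as a minimal client. Everything here is illustrative: the mock server class, the tool namespacing, and the merge logic are hypothetical stand-ins, not Hoody's actual implementation — a real client registers via POST /api/v1/agent/mcp/servers and speaks MCP over the wire.

```python
# Hypothetical sketch of the register -> discover -> merge flow.
# Server objects and tool schemas are mocked in memory.

NATIVE_TOOLS = {"http_request": {"description": "Agent's built-in HTTP tool"}}

class MockMCPServer:
    """Stands in for a registered, connected MCP server."""
    def __init__(self, name, tools):
        self.name, self.tools = name, tools

    def list_tools(self):
        # Step 2: dynamic discovery — no pre-declared manifest.
        return self.tools

def merge_toolsets(servers):
    # Step 3: discovered tools join the agent's native toolset,
    # namespaced by server name to avoid collisions.
    merged = dict(NATIVE_TOOLS)
    for srv in servers:
        for tool_name, schema in srv.list_tools().items():
            merged[f"{srv.name}.{tool_name}"] = schema
    return merged

linear = MockMCPServer("linear", {"create_issue": {"description": "Create a Linear issue"}})
toolset = merge_toolsets([linear])
# Native and discovered tools are now both visible to the next reasoning turn.
```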
POST /api/v1/agent/mcp/servers
Four capabilities that matter.
The MCP client is small but opinionated. Every feature exists because the agent has to stay reliable while the tool surface keeps expanding.
Dynamic tool discovery
No fixed list. The agent asks each connected server what it offers, so the tool catalogue evolves without redeploying anything.
Local + remote protocols
stdio for local servers that run next to the agent, StreamableHTTP + SSE for remote servers across the network. One client, both worlds.
OAuth authentication
Full OAuth flow for authenticated remote servers — connect a GitHub MCP server, a Google Drive MCP server, anything that gates tools behind a user identity.
Safety-first invocation
Destructive tool calls require confirmation before execution. The agent pauses, surfaces the action, and waits for a signal before touching state.
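One way to picture that pause-and-confirm behavior — the classification set and the callback shape are assumptions for illustration, not Hoody's real API:

```python
# Sketch of a safety-first invocation gate: destructive calls wait
# for an explicit signal before touching state.

DESTRUCTIVE = {"write_file", "run_shell", "delete_record"}  # assumed classification

def invoke(tool, args, confirm):
    """confirm is a callback that surfaces the action and waits for approval."""
    if tool in DESTRUCTIVE and not confirm(tool, args):
        # The agent pauses; nothing executes without a signal.
        return {"status": "rejected", "tool": tool}
    # A real client would send the MCP callTool message here.
    return {"status": "executed", "tool": tool}

# A read runs immediately; a write is blocked when approval is withheld.
read_result = invoke("read_file", {"path": "/tmp/x"}, confirm=lambda t, a: False)
write_result = invoke("write_file", {"path": "/tmp/x"}, confirm=lambda t, a: False)
```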
Any MCP client plugs in.
Hoody Agent speaks MCP to external servers — drop a server URL or stdio command into the agent and its tools are yours on the next turn. Connect via the official MCP registry, the community catalog, or your own internal tooling.
Claude Desktop
Anthropic's reference MCP client — drop the server config and Hoody's tools appear in Claude's tool picker.
Cursor
AI-native IDE with full MCP support — access files, terminal, exec, everything through one config.
Cline
Autonomous coding agent in VS Code — reads and writes across Hoody containers via MCP.
Windsurf
Codeium's agentic IDE — MCP tools join its Cascade flows directly.
Continue.dev
Open-source AI coding assistant — MCP-first integration, bring your own model.
Hoody Agent + DIY
Write your own MCP client against the same protocol — or use hoody-agent's built-in client.
{
"mcpServers": {
"hoody": {
"url": "https://CONTAINER.SERVER.containers.hoody.com/mcp",
"auth": { "type": "oauth" }
}
}
}
The protocol is open — every client above reaches the same tool surface, no vendor adapters.
One tool call, end to end.
An agent asks for tools, picks one, calls it, reads the result. All four steps are the MCP protocol. Nothing else.
Discover
Agent calls listTools() on every connected MCP server. Each server returns its tool schema — no client-side catalogue to maintain.
Choose
The model reasons over tool descriptions and picks one that fits the current step. No manual routing logic.
Invoke
Call goes out as an MCP message. Hoody's client handles transport (stdio / HTTP / SSE) and auth transparently.
Reflect
Typed JSON response comes back in. The agent reads it and continues — the successful tool call becomes just another reasoning input.
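The four steps map onto a small loop. This sketch mocks the server side, and `pick_tool` is a stand-in for the model's reasoning over tool descriptions — none of these helper names come from Hoody's API:

```python
# Minimal discover -> choose -> invoke -> reflect loop against a mocked server.

def list_tools():
    # Discover: the server returns its tool schema at runtime.
    return [
        {"name": "read_file", "schema": {"path": "string"}},
        {"name": "run_shell", "schema": {"cmd": "string"}},
    ]

def pick_tool(tools, goal):
    # Choose: stand-in for the model selecting a tool that fits the step.
    return next(t for t in tools if t["name"] in goal)

def call_tool(name, args):
    # Invoke: transport (stdio / HTTP / SSE) and auth are the client's job.
    if name == "read_file":
        return {"content": "# Project readme"}
    return {"error": "unknown tool"}

tools = list_tools()
chosen = pick_tool(tools, goal="read_file at /hoody/storage/readme.md")
result = call_tool(chosen["name"], {"path": "/hoody/storage/readme.md"})
# Reflect: the typed result feeds the next reasoning turn.
```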
→ listTools()
← [ { name: "read_file", schema: {...} },
{ name: "write_file", schema: {...} },
{ name: "run_shell", schema: {...} }, ... ]
→ callTool("read_file", { path: "/hoody/storage/readme.md" })
← { content: "# Project readme\n\nSee ..." }
→ callTool("run_shell", { cmd: "pytest tests/" })
⟳ requires confirmation — destructive-op gate
← { exitCode: 0, stdout: "... 42 passed ..." }
Agents don't ship destructive ops silently.
MCP exposes sharp tools — file writes, shell execution, API mutations. Hoody's client enforces four gates so the agent keeps that capability without becoming a loose cannon.
Destructive-op confirmation
Any tool call that writes, deletes, or executes external side effects pauses for explicit approval. The agent surfaces a clear prompt; the user approves it or reviews the diff first.
OAuth for remote servers
Remote MCP servers authenticate via full OAuth flow — GitHub, Slack, Jira, any identity-gated tool. Tokens stay on your server; agents never see raw credentials.
Per-server allowlist
Every MCP server is explicitly registered via POST /agent/mcp/servers. Unapproved servers can't connect. Revoke with DELETE and its tools vanish on the next agent turn.
Complete audit trail
Every tool call (discover, invoke, result) is logged through Hoody Proxy. Reviewable, exportable, tamper-evident.
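The allowlist and audit gates can be pictured together. This is a schematic only — the in-memory registry and log shapes are hypothetical, while real registration and revocation go through POST and DELETE on /api/v1/agent/mcp/servers:

```python
# Sketch: per-server allowlist plus an append-only audit trail.

allowlist = set()   # servers explicitly registered by the user
audit_log = []      # every register / revoke / invoke / blocked event

def register(name):
    allowlist.add(name)
    audit_log.append(("register", name))

def revoke(name):
    # After revocation, the server's tools vanish on the next agent turn.
    allowlist.discard(name)
    audit_log.append(("revoke", name))

def invoke(server, tool):
    if server not in allowlist:
        audit_log.append(("blocked", server, tool))
        raise PermissionError(f"{server} is not a registered MCP server")
    audit_log.append(("invoke", server, tool))
    return "ok"

register("linear")
invoke("linear", "create_issue")  # allowed: server is on the allowlist
revoke("linear")                  # subsequent calls to "linear" are blocked
```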
Plug into the MCP world.
Any MCP-compatible server — from the official registry, the community, or your own internal tooling — connects the same way. Drop a server URL or stdio command into the agent and its tools are yours on the next turn.
Messages, channels, DM the team
Tickets, sprints, status updates
Official reference server (local files)
Read-only queries over a real DB
Web search without scraping
Headless browser automation
Write an MCP server around any REST / gRPC surface you already run
Examples you can connect today. New MCP servers appear every week; nothing in this list is hard-coded on Hoody's side.
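Wrapping an existing REST surface is mostly dispatch. The sketch below mirrors the shape of MCP's tools/list and tools/call methods only — a production server would use an official MCP SDK and a real transport (stdio or HTTP), and `fetch_order` stands in for whatever API you already run:

```python
# Schematic of an MCP-style server wrapping an internal REST endpoint.

def fetch_order(order_id):
    # Stand-in for a call to the orders API you already operate.
    return {"id": order_id, "status": "shipped"}

TOOLS = {
    "get_order": {
        "description": "Fetch an order from the internal orders API",
        "inputSchema": {"type": "object",
                        "properties": {"order_id": {"type": "string"}}},
        "handler": lambda args: fetch_order(args["order_id"]),
    }
}

def handle(request):
    # Dispatch the two MCP methods this sketch supports.
    if request["method"] == "tools/list":
        return [{"name": n, "inputSchema": t["inputSchema"]} for n, t in TOOLS.items()]
    if request["method"] == "tools/call":
        tool = TOOLS[request["params"]["name"]]
        return tool["handler"](request["params"]["arguments"])

order = handle({"method": "tools/call",
                "params": {"name": "get_order", "arguments": {"order_id": "o_1"}}})
```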
One POST away from more tools.
Spin up a Hoody Agent container, POST the MCP server you want, and it's live on the next reasoning turn. No SDK install, no agent restart, no extra infrastructure.
See also — /platform/ai-gateway for the model layer, /platform/control-plane for the governance API, /kit/agent for the Agent runtime.