use-cases / agents-spawn-agents / hero
AGENT · EXEC · CONTAINERS

AI agents that spawn other AI agents

A research agent receives a complex prompt. Instead of doing everything itself, it posts to the containers API and gets back three sub-agent URLs. Each sub-agent can spawn its own sub-agents. None of them know about an orchestrator. They just call each other over HTTP.

Read the agent docs
[Diagram: research-agent fans out to scrape, summarize, and fact-check; scrape spawns fetch-arxiv and fetch-news, summarize spawns summary-1 and summary-2, fact-check spawns verify-claim-1 through verify-claim-3]
use-cases / agents-spawn-agents / layers

Three layers, one protocol

Every node in the tree is a Hoody container with a hoody-agent service. Parents and children only know each other by URL. There is no central planner watching from above.

PARENT

The research agent

Receives the prompt. Decides which sub-tasks to fan out. POSTs to the containers API to spawn one new container per sub-agent, then calls each one's /api/v1/agent/tasks endpoint with the slice of work it owns.

SUB-AGENT

Scrape, summarize, fact-check

Each runs in its own container with its own filesystem, terminal, and SQLite. None can corrupt the others. Any of them can decide it needs help and spawn its own children the same way the parent did. The protocol is symmetric.

GRANDCHILD

Per-claim verifiers

The fact-checker sees five claims, spawns five containers in parallel, fires five HTTP requests, awaits five responses. Same pattern, one level deeper. Idle containers cost nothing, so deep trees are economically free.
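The per-claim fan-out is the same Promise.all pattern the parent uses. A minimal sketch of the shape — `fanOut` and the injected `spawn` parameter are illustrative names, not Hoody APIs; a real agent would pass its own spawnChild-style call:

```typescript
// Sketch: spawn one verifier per claim, all in parallel.
// `spawn` stands in for the spawnChild-style call a real agent would make.
async function fanOut<T>(
  claims: string[],
  spawn: (name: string) => Promise<T>,
): Promise<T[]> {
  // One container name per claim; all spawns run concurrently.
  return Promise.all(claims.map((_, i) => spawn(`verify-claim-${i + 1}`)));
}
```

Each spawned verifier gets its own container and could recurse again with the exact same helper — that is the symmetry the section describes.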

use-cases / agents-spawn-agents / mechanism

Spawning a sub-agent is one HTTP call

There is no agent runtime to install, no message bus to configure, no shared state to synchronise. The parent agent uses the same Containers API a human would: POST to create, then call the child's URL.

research-agent / spawn-children.ts
// Parent agent (running in its own Hoody container).
// Spawn three sub-agents, hand each one a slice of work,
// await their results in parallel.

const HOODY = 'https://api.hoody.com';

// PROJECT, SERVER, and TOKEN come from the parent container's own config.
declare const PROJECT: string, SERVER: string, TOKEN: string;

async function spawnChild(name: string) {
  // 1. Create a fresh container with the agent service running.
  const res = await fetch(HOODY + '/api/v1/projects/' + PROJECT + '/containers', {
    method: 'POST',
    headers: {
      authorization: 'Bearer ' + TOKEN,
      'content-type': 'application/json',
    },
    body: JSON.stringify({
      name,
      server_id: SERVER,
      container_image: 'debian-12',
      hoody_kit: true,
      ai: true,
    }),
  });
  const child = (await res.json()).data;
  // 2. Hit the child's agent URL like any other HTTP service.
  const agentUrl = 'https://' + PROJECT + '-' + child.id + '-agent-1.' + SERVER + '.containers.hoody.com';
  return { id: child.id, agentUrl };
}

const [scrape, summarize, factCheck] = await Promise.all([
  spawnChild('scrape'),
  spawnChild('summarize'),
  spawnChild('fact-check'),
]);

// 3. Dispatch tasks. Each child runs autonomously,
//    can spawn its own children the same way.
//    The briefs are composed elsewhere by the parent.
declare const scrapeBrief: string, summarizeBrief: string, factCheckBrief: string;

const results = await Promise.all([
  fetch(scrape.agentUrl + '/api/v1/agent/tasks', { method: 'POST', body: JSON.stringify({ description: scrapeBrief }) }),
  fetch(summarize.agentUrl + '/api/v1/agent/tasks', { method: 'POST', body: JSON.stringify({ description: summarizeBrief }) }),
  fetch(factCheck.agentUrl + '/api/v1/agent/tasks', { method: 'POST', body: JSON.stringify({ description: factCheckBrief }) }),
]);

Both endpoints are documented Hoody APIs: POST /api/v1/projects/$PID/containers spawns a new container with the agent service preinstalled; POST $CHILD_URL/api/v1/agent/tasks dispatches a task to it. The fact-checker can run the exact same code to spawn its own per-claim verifiers — the recursion is just HTTP.
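The child's agent URL in the example above follows one predictable pattern. A small helper, sketched under the assumption that the `$PROJECT-$ID-agent-1.$SERVER.containers.hoody.com` shape shown above is stable (`agentUrlFor` is an illustrative name, not a Hoody API):

```typescript
// Build a child's public agent URL from the pieces used in the example above.
function agentUrlFor(project: string, childId: string, server: string): string {
  return `https://${project}-${childId}-agent-1.${server}.containers.hoody.com`;
}
```

Because the URL is derivable from the create-container response alone, a parent needs no registry or discovery step before dispatching work.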

use-cases / agents-spawn-agents / compare

No orchestrator framework needed

Every agent framework is a thin layer on top of in-process function calls. They all assume agents share memory, share Python, share a single runtime. Hoody assumes the opposite: agents are separate computers, and HTTP is already the protocol they speak.

WHAT YOU INSTALL TODAY
  • LangGraph: DAG you wire in Python
  • CrewAI: in-process “crew” objects
  • AutoGen: message bus + role config
  • LangChain agents: tool wrappers + executors
  • Haystack pipelines: graph spec per workflow
  • Custom orchestrator: queue, retries, glue you maintain
WHAT REPLACES IT: ONE PRIMITIVE

POST /api/v1/projects/$PID/containers

fetch(child.url + '/api/v1/agent/tasks')

Two HTTP calls per child. The parent doesn’t import a framework, doesn’t register a tool, doesn’t share state. Every sub-agent is a peer with a URL — same shape as the parent, same shape as you would call any other HTTP service.
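The second of those two calls can be wrapped in a few lines. A hedged sketch — the JSON content-type header is my assumption, and `fetchImpl` is injectable purely so the request shape can be inspected without a live child; only the `/api/v1/agent/tasks` path comes from the page:

```typescript
// Sketch: dispatch one task to a child agent over plain HTTP.
async function dispatchTask(
  agentUrl: string,
  description: string,
  fetchImpl: typeof fetch = fetch,
): Promise<Response> {
  return fetchImpl(`${agentUrl}/api/v1/agent/tasks`, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ description }),
  });
}
```

There is nothing framework-shaped here: the "tool registration" step is just building an HTTP request.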

use-cases / agents-spawn-agents / punchline

Each agent is a peer with a URL. They call each other the way any HTTP service calls any other HTTP service.

There is no orchestrator. There is no message bus. There is no shared state to synchronise. The only thing that limits how many agents you run in parallel is how much you want to spend — and idle containers cost nothing.

no framework · just HTTP
use-cases / agents-spawn-agents / replaces

What this replaces

The orchestration frameworks and ephemeral-runtime services people reach for when one agent isn’t enough. Each one solves a slice of the problem. The HTTP-native primitive replaces the slice and the framework wrapped around it.

  • LangChain: Tool wrappers, executors, glue per provider
  • LangGraph: DAG of agents you assemble in Python
  • CrewAI: In-process crew objects, single runtime
  • AutoGen: Message bus + role configuration
  • E2B sandboxes: Ephemeral exec without a persistent home
  • Modal / Replicate runners: Per-call cold start, no long-lived URL
  • Custom orchestrators: Queues, retries, observability you maintain
use-cases / agents-spawn-agents / cta

Stop wiring agent graphs. Spawn one. Let it spawn the rest.

Read the agent docs
use-cases / agents-spawn-agents / related

Read the others