The lifecycle layer. One API.
api.hoody.com is where containers come into existence. Create, pause, snapshot, copy across servers, realm-isolate, bill — all HTTP.
100+ endpoints. Per-service OpenAPI specs. Realm-scoped tokens. Pause, snapshot, copy, sync — all on the same surface.
# create a container
$ curl -X POST https://api.hoody.com/api/v1/projects/ID/containers \
-H "Authorization: Bearer hdy_..." \
-H "Content-Type: application/json" \
-d '{"name":"dev-box","container_image":"debian/13","cpu":2,"memory":4}'
<< 201 Created
{ "id": "67e89abc…", "status": "creating", "urls": ["terminal-1…", "files…", "display-1…", …] }
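The same call is easy to script. A minimal Python sketch that assembles the request the curl example sends — the project ID and token here are placeholders, and actually sending the request (with `urllib`, `requests`, or a generated client) is left to the caller:

```python
import json

API_BASE = "https://api.hoody.com/api/v1"  # base URL from the example above

def build_create_request(project_id, token, name, image, cpu, memory):
    """Assemble URL, headers, and JSON body for container creation.

    Mirrors the curl call above; field names come straight from its -d
    payload. Sending the request is deliberately left out.
    """
    url = f"{API_BASE}/projects/{project_id}/containers"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "name": name,
        "container_image": image,
        "cpu": cpu,
        "memory": memory,
    })
    return url, headers, body

url, headers, body = build_create_request(
    "my-project", "hdy_example", "dev-box", "debian/13", cpu=2, memory=4)
```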
Six states. Explicit transitions.
Containers move through a defined state machine. Every transition has an endpoint, and each transition is a recorded event in the container's history.
- creating · provisioning filesystem and resources
- running · live and serving its URLs
- stopped · powered off, storage billed
- paused · RAM frozen, instant resume
- error · errored during an operation
- copying · async duplicate in progress
POST /containers/ID/start
POST /containers/ID/stop
POST /containers/ID/force-stop
POST /containers/ID/restart
POST /containers/ID/pause
POST /containers/ID/resume
POST /containers/ID/network/start
POST /containers/ID/network/stop
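The transitions above can be sketched as a small state machine. The state names here are inferred from the descriptions and from the `"status": "creating"` field in the create response — only "creating" is confirmed by this page, so treat the rest as illustrative:

```python
# Illustrative transition table for the six container states described
# above. State names other than "creating" are assumptions inferred from
# the state descriptions; the actions match the lifecycle endpoints.
TRANSITIONS = {
    ("stopped", "start"): "running",
    ("running", "stop"): "stopped",
    ("running", "force-stop"): "stopped",
    ("running", "restart"): "running",
    ("running", "pause"): "paused",
    ("paused", "resume"): "running",
}

def apply(state, action):
    """Return the next state, or raise if the transition is not defined."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"illegal transition: {action} from {state}")
```

Every key in the table corresponds to one of the POST endpoints listed above; anything not in the table is an illegal transition, matching "explicit transitions" in the copy.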
Pause running work. Snapshot entire machines.
Freeze RAM with pause/resume. Capture filesystem, processes, and memory with stateful snapshots. Restore a container to a prior moment without re-initializing anything.
Container sleep, not shutdown
- RAM state frozen in place; open files and DB connection pools kept
- Pause an LLM mid-inference — resume without re-warming context
- Pause an automation job overnight — resume in the morning
- POST /containers/ID/pause · POST /containers/ID/resume
Git for computers
- Create while running = stateful (filesystem + processes + memory)
- Create while stopped = stateless (filesystem only)
- Snapshot alias up to 100 chars, optional auto-expiry in days
- POST /containers/ID/snapshots · PATCH /containers/ID/snapshots/NAME to restore
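A sketch of building the snapshot request body, with the 100-character alias limit enforced client-side. The limit and the optional expiry-in-days come from the list above; the field names `alias` and `expiry_days` are assumptions for illustration, not confirmed API fields:

```python
import json

def snapshot_payload(alias, expiry_days=None):
    """Build a snapshot request body.

    The 100-character alias limit is stated in the docs above; the field
    names "alias" and "expiry_days" are illustrative assumptions.
    """
    if len(alias) > 100:
        raise ValueError("snapshot alias is limited to 100 characters")
    body = {"alias": alias}
    if expiry_days is not None:
        body["expiry_days"] = expiry_days  # optional auto-expiry in days
    return json.dumps(body)
```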
Branch a container. Keep it in sync.
Copy the full state of a container — snapshot history included — to another server or region. Sync propagates incremental changes from source to copy.
- source: container-67e89abc… · server: node-us-east-1
- copy: container-abc1234d… · server: node-eu-west-1
- target_project_id for cross-project copy
- source_snapshot to copy a specific state
Typical time: 3–5 min same-server (50GB), 10–15 min cross-server (50GB)
Sync overwrites local changes on the copy. It is not bidirectional — source changes flow to copy, never back.
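The one-way semantics can be made concrete with a toy model: sync pushes the source's paths onto the copy, overwriting any local edits to those paths, and nothing ever flows back. (Whether paths deleted on the source are also removed from the copy isn't specified here, so this sketch simply leaves copy-only paths alone.)

```python
def sync(source, copy):
    """Toy model of one-way incremental sync: path -> content dicts.

    Paths present on the source overwrite the copy's versions; local
    edits to synced paths are lost. Copy-only paths are left untouched
    in this sketch, and nothing flows from copy back to source.
    """
    merged = dict(copy)
    merged.update(source)  # source wins on every shared path
    return merged

source = {"/app/main.py": "v2", "/app/new.py": "v1"}
copy = {"/app/main.py": "v1-local-edit", "/tmp/scratch": "x"}
result = sync(source, copy)
```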
Isolation and billing are API primitives
Realms scope the API host itself. The wallet split keeps a compromised workload from draining your infrastructure budget.
API-level multi-tenancy
- AI agents in realm A literally cannot see realm B resources
- Tokens are scoped to specific realms via realm_ids
- Realm isolation sits above project RBAC — it changes the API host
General and AI balances
- Compromised AI workload cannot drain infra budget
- Stripe (PCI-compliant), 100+ cryptos (NOWPayments), bank transfer
- Auto-generated PDF invoices · paginated transaction history
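The wallet split is easy to model: general funds can move to the AI balance, but there is no path back, so an AI workload can only ever spend what was explicitly transferred. A toy sketch of that invariant (the class and method names are illustrative, not API names):

```python
class Wallet:
    """Toy model of the two-balance wallet.

    General funds flow one way into the AI balance; there is no reverse
    transfer, so a compromised AI workload is capped at whatever was
    deliberately moved over. Names here are illustrative only.
    """

    def __init__(self, general=0.0, ai=0.0):
        self.general = general
        self.ai = ai

    def transfer_to_ai(self, amount):
        """The only funding path for AI spend: general -> AI, one way."""
        if amount <= 0 or amount > self.general:
            raise ValueError("invalid transfer amount")
        self.general -= amount
        self.ai += amount

    def spend_ai(self, amount):
        """AI workloads draw from the AI balance only, never general."""
        if amount > self.ai:
            raise ValueError("insufficient AI balance")
        self.ai -= amount

w = Wallet(general=100.0)
w.transfer_to_ai(20.0)
w.spend_ai(5.0)
```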
The full API surface
Grouped by resource. Every endpoint has an OpenAPI spec.
Auth & Tokens
25+ endpoints
OAuth, signup, JWT login/refresh/logout, 2FA with backup codes, long-lived hdy_ tokens with CIDR IP whitelist and realm scoping
Containers & Lifecycle
20+ endpoints
CRUD, stats, start/stop/pause/resume/restart, firewall rules, network start/stop
Snapshots, Copy & Sync
10+ endpoints
Snapshot CRUD with alias and expiry, async cross-server copy, incremental one-way sync
Wallet & Billing
20+ endpoints
Balances, transactions, payment methods (Stripe + crypto + bank), payments, invoices with PDF, general→AI transfers
Realms & Projects
5+ endpoints
Realm listing with optional usage stats, project CRUD, project-scoped container listing
Proxy aliases · permissions · logs
31 endpoints
Served by the Control Plane but documented on /platform/proxy — aliases (6), permissions (17), logs (8)
Each service publishes its own OpenAPI spec (e.g. Hoody SQLite, Hoody Terminal, Hoody Cron) — generate typed clients in any language.
What traditional orchestration asks you to stitch
Terraform, kubectl, AWS SDK, and Docker daemon APIs each cover part of what the Control Plane covers in one surface. Here is an honest split.
| Capability | Control Plane | Traditional stack |
|---|---|---|
| Container CRUD | POST /projects/ID/containers | Terraform provider + module + apply |
| Pause RAM state | POST /containers/ID/pause · /resume | No direct analog (closest: VMware suspend) |
| Snapshot running + RAM | POST /containers/ID/snapshots | Custom scripts + VM snapshots |
| Cross-server copy | POST /containers/ID/copy + /sync | rsync + manual bootstrap + re-register |
| Multi-tenant API isolation | realm_ids[] · REALM.api.hoody.com | RBAC layers + namespace discipline |
| Unified billing + LLM credits | general → AI transfer (one-way) | Stripe + separate LLM provider bill |
| Signed API responses | X-Hoody-Signature (ED25519) | TLS only (content unsigned) |
| Typed client from spec | GET /api/v1/openapi.json | Write your own SDK |
When you need Terraform's plan/apply model, kubectl's scheduling, or AWS's managed services, use those. The Control Plane earns its place when containers are the primitive and you want the entire lifecycle — including pause and snapshot — in one API.
Your first container takes one POST.
Get an API token, choose an image, and your container has every Kit service URL before the HTTP 201 lands.
See also — /platform/proxy for how those URLs route and authenticate.