container / hero
CONTAINERS

A Computer. Not an Image.

Full Debian 13 machines that boot in seconds, persist by default, and speak HTTP natively. Every capability of Linux, addressable by URL.

Not a packaging artifact. A machine you live in.

DEBIAN 13 TRIXIE · SYSTEMD INIT · 14 HTTP SERVICES · KERNEL 7.0.2-HOODY · CoW SNAPSHOTS · SECONDS TO BOOT
Read the docs
container / urls
ONE POST

One POST. Fourteen live URLs.

Every Hoody container comes with 14 built-in HTTP services. The moment the container spawns, you have 14 live endpoints — no setup, no deployment, no glue.

CONTAINERS LIVE INSIDE PROJECTS · ONE POST → 14 HTTP URLS + AN SSH HOST

api.hoody.com
$ curl -X POST \
    https://api.hoody.com/api/v1/projects/$PROJECT/containers \
    -H "Authorization: Bearer $HOODY_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"server_id": "node-us", "hoody_kit": true, "dev_kit": true}'
response
{
  "statusCode": 201,
  "message": "Container created successfully",
  "data": {
    "id": "890abcdef12345678901cdef",
    "status": "provisioning",
    "hoody_kit": true,
    "dev_kit": true
  }
}
# 30 seconds later — 14 live URLs:
https://…890abc-terminal-1.node-us.containers.hoody.com
https://…890abc-display-1.node-us.containers.hoody.com
https://…890abc-files.node-us.containers.hoody.com
https://…890abc-exec-1.node-us.containers.hoody.com
https://…890abc-sqlite-1.node-us.containers.hoody.com
https://…890abc-browser-1.node-us.containers.hoody.com
https://…890abc-agent-1.node-us.containers.hoody.com
https://…890abc-code-1.node-us.containers.hoody.com
https://…890abc-curl-1.node-us.containers.hoody.com
https://…890abc-n-1.node-us.containers.hoody.com
https://…890abc-daemon-1.node-us.containers.hoody.com
https://…890abc-cron-1.node-us.containers.hoody.com
https://…890abc-pipe-1.node-us.containers.hoody.com
https://…890abc-workspaces.node-us.containers.hoody.com
Want two? Add -2, -3 to any service slug. Same container, different sessions. All isolated. All at their own URL. Multiplayer by architecture.
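As a sketch of the slug pattern, using the hypothetical short ID and node from the example above, bumping the numeric suffix yields the URL of a second, isolated session:

```shell
# Assumed URL pattern from the example above:
#   <id>-<service>-<n>.<node>.containers.hoody.com
CONTAINER_ID="890abc"   # hypothetical short ID
NODE="node-us"

# Session 1 and session 2 of the same terminal service:
# same container, separate isolated sessions.
for SESSION in 1 2; do
  echo "https://${CONTAINER_ID}-terminal-${SESSION}.${NODE}.containers.hoody.com"
done
```

Open either URL in a browser, or hit it with curl, to attach a separate session.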
container / computer
DIFFERENT CATEGORY

Not even the same tool.

Docker and VMs solved different problems. Hoody is a third thing — a computer you address with a URL.

Docker is packaging. Hoody is computing.

Docker containers ship software. Hoody containers run software and persist the computer it runs on — systemd, filesystem, network stack, 14 HTTP services.

VMs are pets. Containers are URLs.

VMs assume SSH and manual config. Hoody containers expose every capability over HTTPS — a URL your team, your curl, and your AI all speak natively.

One kernel, hundreds of machines.

Hypervisor VMs: 5-20 per server. Hoody containers: hundreds. Kernel-level isolation with VM-grade boundaries, no VM-grade overhead.

Docker solved packaging. Hoody solved computing.

(You can still run Docker inside a Hoody container.)

container / isolation
ISOLATION

Kernel-enforced. Not application-level.

Each container is a security boundary, not a process boundary. Breaking out requires a kernel exploit, not an application bug.

Own filesystem

Containers cannot see each other's files. No shared volumes by default. Each has its own root, its own /home, its own /etc.

Own network

Own IP address, own routing table, own DNS. Containers talk over HTTP URLs — the same protocol any two computers on the internet use.

Own processes

PID namespaces ensure a compromised container cannot see or signal its neighbors. No kill -9 across the boundary.

Hardened kernel 7.0.2-hoody

Seccomp filters plus restricted syscalls. VM-grade isolation at kernel level. Firecracker micro-VMs available when you need even more.

FOUNDATION

Bare metal you actually own.

Every container runs on hardware you rent from the marketplace — not a cloud provider's shared infrastructure. No shared hypervisor, no noisy neighbors, no tenant-of-a-tenant. The isolation model only works when the machine underneath is yours.

FIREWALL

Ingress + egress. Per container.

Every container ships with its own firewall. Allow or drop by port, protocol, and source range. No shared bridge, no noisy neighbors, no surprises.

ingress.json — allow HTTPS inbound
POST /api/v1/containers/CONTAINER_ID/firewall/ingress
{
  "action": "allow",
  "protocol": "tcp",
  "description": "Allow HTTPS traffic",
  "destination_port": "443",
  "source": "0.0.0.0/0"
}
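Egress rules would follow the same shape. A hedged sketch, assuming an `/firewall/egress` endpoint analogous to the ingress one above and a `destination` field mirroring `source`; verify both against the docs:

```shell
# Hypothetical egress rule: allow outbound HTTPS only.
# The /firewall/egress path and the "destination" field are assumptions
# by analogy with the ingress example; check the docs for the exact shape.
cat > egress.json <<'EOF'
{
  "action": "allow",
  "protocol": "tcp",
  "description": "Allow outbound HTTPS only",
  "destination_port": "443",
  "destination": "0.0.0.0/0"
}
EOF

# curl -X POST \
#   "https://api.hoody.com/api/v1/containers/$CONTAINER_ID/firewall/egress" \
#   -H "Authorization: Bearer $HOODY_TOKEN" \
#   -H "Content-Type: application/json" \
#   --data @egress.json
```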
container / lifecycle
LIFECYCLE

Operate them like code.

Containers transition between states via HTTP. Start, pause, resume, snapshot, copy, sync — every verb is a POST.

CREATING

Being provisioned

RUNNING

All services available

PAUSED

Frozen in RAM, ~1s resume

STOPPED

Halted, filesystem preserved

COPYING

Duplicating, new ID, new URLs

FAILED

Diagnose or delete

state transition endpoints

POST /api/v1/containers/CONTAINER_ID/start

POST /api/v1/containers/CONTAINER_ID/stop

POST /api/v1/containers/CONTAINER_ID/force-stop

POST /api/v1/containers/CONTAINER_ID/restart

POST /api/v1/containers/CONTAINER_ID/pause

POST /api/v1/containers/CONTAINER_ID/resume

POST /api/v1/containers/CONTAINER_ID/copy

POST /api/v1/containers/CONTAINER_ID/sync

POST /api/v1/containers/CONTAINER_ID/snapshots
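Because every lifecycle verb shares one URL shape, a tiny shell helper covers the whole list. The `hoody_post` name is illustrative and `$HOODY_TOKEN` is assumed to hold a valid API token:

```shell
# Illustrative helper: every lifecycle verb is POST /api/v1/containers/<id>/<verb>
hoody_post() {
  # $1 = container ID, $2 = verb (start|stop|pause|resume|copy|sync|snapshots|...)
  echo "POST https://api.hoody.com/api/v1/containers/$1/$2"
  # Real call (needs a valid token):
  # curl -s -X POST "https://api.hoody.com/api/v1/containers/$1/$2" \
  #   -H "Authorization: Bearer $HOODY_TOKEN"
}

hoody_post 890abcdef12345678901cdef pause
hoody_post 890abcdef12345678901cdef resume
```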

AUTOSTART

Back on after any reboot.

Containers restart automatically on server boot, after maintenance, and after power recovery. On by default — your machine comes back on its own.

COPY · SYNC

Clone it. Sync it.

Copy creates an independent duplicate — new ID, new SSH key, new URLs, all previous snapshots included. Sync pulls changes from the source. Cross-region DR is CoW-efficient. Clone a dev environment for a new teammate in one POST.
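A sketch of the clone-for-a-teammate flow using the `/copy` and `/sync` endpoints from the lifecycle list. The `.data.id` field path assumes the copy response mirrors the create response shown earlier; verify against the docs:

```shell
# Sketch: clone a dev container, then later pull fresh changes from the source.
SOURCE="890abcdef12345678901cdef"   # container ID from the create example
COPY_ENDPOINT="https://api.hoody.com/api/v1/containers/$SOURCE/copy"
echo "$COPY_ENDPOINT"

# NEW_ID=$(curl -s -X POST "$COPY_ENDPOINT" \
#   -H "Authorization: Bearer $HOODY_TOKEN" | jq -r '.data.id')
# The clone gets a new ID, new SSH key, new URLs; snapshots come along.

# Later, pull changes from the source into the clone:
# curl -s -X POST "https://api.hoody.com/api/v1/containers/$NEW_ID/sync" \
#   -H "Authorization: Bearer $HOODY_TOKEN"
```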

container / economics
ECONOMICS

Infinite containers.

KSM and BTRFS share the parts that are identical. You only pay for what differs.

Traditional VMs consume fixed resources whether you use them or not. A 2-core, 4GB VM costs the same idle as under load.

Hoody containers use KSM (Kernel Samepage Merging) and BTRFS deduplication to share identical memory pages and storage blocks across containers. One hundred containers running the same base Debian do not consume 100x the memory. They share what is identical and pay only for what differs.

DEV

Development containers cost nearly nothing when idle.

TEST

Test runners share base OS memory with production.

STAGING

Environments are not a budget conversation.

PER-FEATURE

A container per branch becomes practical, not extravagant.

You are not rationing computers. You are spawning them.

SNAPSHOTS

Git gave version control to code. Hoody gives version control to entire computers.

1-5 seconds to capture, regardless of container size. Copy-on-write at the filesystem level — ten snapshots of a 50 GB container cost 50 GB plus the changes, not 500 GB. Branch like git. Unlimited.
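The CoW arithmetic above can be made concrete. A minimal sketch, assuming (hypothetically) 2 GB of changed blocks per snapshot:

```shell
# Copy-on-write cost vs. naive full copies, per the paragraph above.
BASE_GB=50; SNAPSHOTS=10; DELTA_GB=2   # 2 GB delta per snapshot is an assumption

NAIVE=$((BASE_GB * SNAPSHOTS))          # ten full copies
COW=$((BASE_GB + SNAPSHOTS * DELTA_GB)) # one base plus ten deltas
echo "naive: ${NAIVE} GB, CoW: ${COW} GB"
# -> naive: 500 GB, CoW: 70 GB
```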

BASE IMAGES

DEBIAN · UBUNTU · ALPINE · FEDORA · AMD64 · ARM64 · ARMHF

Base OS templates only — the image gives you the distribution. hoody_kit and dev_kit add the 14 HTTP services on top (both default true; set either to false for a stock OS).
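Turning both kits off reuses the create call from the first section; only the two flags change. How the base image itself is selected is not shown here, so this payload is a partial sketch:

```shell
# Stock OS, no HTTP service kits: both flags off.
# server_id, hoody_kit, and dev_kit are from the create example at the top
# of the page; the image-selection field is omitted (check the docs).
cat > create.json <<'EOF'
{
  "server_id": "node-us",
  "hoody_kit": false,
  "dev_kit": false
}
EOF

# curl -X POST "https://api.hoody.com/api/v1/projects/$PROJECT/containers" \
#   -H "Authorization: Bearer $HOODY_TOKEN" \
#   -H "Content-Type: application/json" \
#   --data @create.json
```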

container / use-cases
USE CASES

Spawn one for anything.

When a fresh Linux computer costs seconds and near-zero marginal resources, the question isn't 'should we use one?' — it's 'why haven't we spawned another?'

Dev environments

A fresh Debian in seconds with 14 URLs and an SSH host. Persists across reboots. Your team's laptop setup, standardised and accessible from anywhere.

AI sandboxes

Every agent gets its own. Isolated blast radius for AI-generated code. Snapshot before each run, revert in 1s if the agent trashes it.

Per-feature branches

Spin a container per PR. Share the URL. Let reviewers poke at it live. KSM keeps the cost flat no matter how many branches are open.

Staging clones

Copy production to staging in one POST — new ID, new URLs, same state. Cross-region disaster recovery is CoW-efficient by default.

E2E tests

Parallel test runners, each in its own Linux. Real browser, real filesystem, real network. No mocks, no shared bridge, no flake.

Teaching & demos

Share a URL — students are in the same terminal. Multiplayer by architecture, no VPN, no SSH keys to distribute.

container / ai
AI-NATIVE

Your AI already speaks HTTP.

Hoody containers speak HTTP. The integration is the protocol — no SDK to install, no adapter, no wrapper. Give the agent a URL and it can operate a Linux machine.

agent.hoody.com
# give Claude the container URL
$ curl -X POST \
    https://890abc-agent-1.node-us.containers.hoody.com/api/v1/agent/invoke \
    -d '{"task":"install postgres and create an app database"}'
# the agent can:
# - exec in the shell
# - read/write any file
# - query SQLite, drive the browser
# - snapshot before each tool call
# - spawn sub-agents in their own containers
NO SDK REQUIRED

The integration is the protocol. Any agent that speaks HTTP can SSH, read files, exec commands, query SQLite, drive the browser — every capability is an endpoint.

SMALL BLAST RADIUS

Isolated from your host. Snapshot before each tool call. Revert in 1 second if the agent writes garbage. No root of your laptop required.

100+ TOOLS, ONE URL

The agent service exposes the whole container's capability surface. @hoody.com is a Skill any AI with a browser can pick up — zero integration work.

SPAWN SUB-AGENTS

Each sub-agent gets its own container. Branch like git, merge what works, throw away what didn't. Parallel work, zero shared state, per-agent audit trail.

HOODY · /CONTAINER/AI

The AI doesn't need to learn your API. It already speaks HTTP.

container / start
GET STARTED

Stop renting VMs. Start spawning computers.

One abstraction. One mental model. One URL pattern.

Read the docs

Transparent usage-based pricing — see /pricing.