
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM and BTRFS dedup make the marginal cost near zero.
A managed cron entry fires @hourly. It POSTs a snapshot with the alias auto-h$(date +%H). The aliases cycle from auto-h00 through auto-h23: after the first day, each new snapshot overwrites yesterday's at the same hour, so you always hold the last 24 hours of state at hourly granularity.
{ "name": "hourly-2026-05-04-13", "alias": "auto-h13", "created_at": "13:00:04Z", "size": 1331691520 }
The elevator stops at 24 floors: yesterday's hour is overwritten by today's.
An @hourly managed entry curls the snapshots URL with alias auto-h$(date +%H). The alias collides intentionally: at hour 13 tomorrow, auto-h13 from today is replaced. Twenty-four named slots, rotated automatically.
```shell
# Hoody Cron: schedule one hourly snapshot.
curl -X POST \
  cron.containers.hoody.com/users/root/entries \
  -H "Content-Type: application/json" \
  -d '{
    "schedule": "@hourly",
    "command": "curl -X POST $SNAP_URL -d \"{\\\"alias\\\":\\\"auto-h$(date +%H)\\\"}\"",
    "comment": "rolling 24h snapshot"
  }'
```
```shell
# At 13:00 the cron runs; this is the request it sends:
curl -X POST \
  api.hoody.com/api/v1/containers/$ID/snapshots \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"alias": "auto-h13"}'

# Response:
# → 200 OK · hourly-2026-05-04-13 created in 6s
```
There is no retention policy and no janitor — the alias auto-h13 is reused every 24 hours, which is what makes the window roll. The Hoody Snapshots API supports an optional alias field on POST /api/v1/containers/[id]/snapshots; reusing it is the entire mechanism.
Four steps, all of them inside a single curl. From cron tick to snapshot in seconds.
Each tick takes seconds. The alias is the rotation primitive — by reusing the same name 24 hours later, the snapshot at that floor is replaced in place.
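The replacement-in-place claim is easy to check locally. In this sketch a bash associative array stands in for the snapshot store, keyed by alias; no Hoody API is involved:

```shell
# Simulate 48 hourly ticks (two full days) against a store keyed by alias.
declare -A store
for tick in $(seq 0 47); do
  hour=$(( tick % 24 ))
  slot=$(printf 'auto-h%02d' "$hour")
  store[$slot]="tick-$tick"   # reusing the alias replaces the slot in place
done

echo "slots: ${#store[@]}"                 # slots: 24, never 48
echo "auto-h05 holds ${store[auto-h05]}"   # tick-29: day two overwrote day one
```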
What you give up by deleting the backup runbook, you get back as something cheaper and more honest.
Snapshots are inert on disk; they burn no CPU or RAM while sitting there. You're paying for storage of 24 copies of the container's diff, not for a backup service that runs around the clock.
When something goes wrong at 14:14, you restore auto-h13 and you're back at 13:00 — a minute before the issue started. Hourly is fine-grained enough for production rollback and coarse enough not to drown the ledger.
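Picking the floor to restore is modular arithmetic on the wall clock. restore_slot below is an illustrative helper that names the previous hour's snapshot, not an API call:

```shell
# Illustrative: given the time an incident was noticed (HH:MM), name the
# previous hour's snapshot, wrapping midnight back to yesterday's auto-h23.
restore_slot() {
  local hour=${1%%:*}                        # "14:14" -> "14"
  printf 'auto-h%02d\n' $(( (10#$hour + 23) % 24 ))
}

restore_slot 14:14   # auto-h13
restore_slot 00:30   # auto-h23: yesterday's last floor
```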
There is no lifecycle policy to write, no S3 bucket to provision, no annual runbook review. The naming convention is the retention rule. The fixed alias set is the audit.
Twenty-four snapshots of a typical container, retained at hourly granularity. Numbers come from the Hoody Snapshots API and a representative 1.2 GB diff per hour.
Each hour is a named slot. After day one, every new snapshot overwrites yesterday's at the same hour — the count never grows.
One managed entry, schedule @hourly, command curls the snapshots URL with alias auto-h$(date +%H). That is the entire rotation.
No prune job, no expires_at policy, no lifecycle config. The alias collision rotates the window in place; nothing accumulates.
Per the Hoody Container Snapshots API: POST /api/v1/containers/[id]/snapshots accepts an optional alias (max 100 chars) and an optional expiry in days. This page assumes containers' default snapshot pricing and a representative ~1.2 GB diff per hourly capture; your sizes will vary with the workload.
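The 100-character alias cap is the only constraint the rotation has to respect, and auto-hNN clears it by a wide margin. A minimal client-side guard for that documented limit (valid_alias is an illustrative helper, not part of the API):

```shell
# Client-side guard for the documented alias constraint: non-empty, max 100 chars.
valid_alias() {
  [ -n "$1" ] && [ "${#1}" -le 100 ]
}

valid_alias "auto-h$(date +%H)" && echo "alias ok"
valid_alias "$(printf 'x%.0s' $(seq 1 101))" || echo "alias too long"
```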
Your time machine has 24 floors and the elevator is a curl.
The standard reach-for-it tools for hourly point-in-time recovery each charge you something: a service to run or a retention policy to maintain. The cron + alias model charges you neither.
Delete the backup runbook. Schedule the @hourly. The last 24 hours of your container exist as 24 named floors — and the elevator is a single curl.