
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM and BTRFS dedup make the marginal cost near zero.
Provision a managed cron entry the day the cleanup begins. Set expires_at a few days past the last expected run. The script slices through the work nightly, then DELETEs itself when there's nothing left. No calendar reminder, no zombie crontab, no annual cleanup-the-cleanups review.
# tail of cleanup-stale-uploads.sh
removed=$(find /uploads -mtime +90 -delete -print | wc -l)
echo "$removed files removed"

# self-aware tail: nothing left → retire
if [ -z "$(ls /uploads)" ]; then
  curl -fsS -X DELETE \
    "$CRON_URL/entries/$ENTRY_ID"
fi
The self-aware tail. The cron entry is the last line of its own script.
Day 1 cleared 247 stale files. Day 7 cleared none — and fired DELETE on its own entry.
Three discrete phases. Each transition is mechanical, none requires a human to remember. The entry knows when its work is done and when its calendar slot is over.
Schedule @daily, command points at a slicer script, expires_at is set just past the last expected run. The deadline is in the entry — not in a Notion doc, not in a Slack thread.
Each run deletes a slice of stale data so the database isn't hammered. Day 1 might clear 247 files; day 6 just 1. The pace is bounded by what's actually left.
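A minimal sketch of such a slicer, assuming a flat upload directory and a per-run cap; UPLOAD_DIR and BATCH are illustrative names, not part of the Hoody API:

```shell
#!/bin/sh
# Sketch of a bounded nightly slice. BATCH caps the work per run so no
# single night hammers the filesystem; the rest waits for tomorrow.
UPLOAD_DIR="${UPLOAD_DIR:-/tmp/uploads-demo}"
BATCH=1000

mkdir -p "$UPLOAD_DIR"
# List stale files (older than 90 days) but delete at most $BATCH of them.
removed=$(find "$UPLOAD_DIR" -type f -mtime +90 -print \
  | head -n "$BATCH" \
  | xargs -r rm -f -v \
  | wc -l | tr -d ' ')
echo "$removed files removed"
```

On day 1 this clears a full batch; on the last day it clears a handful, and the tail of the script decides whether to retire the entry.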
The script's last block checks whether the target is empty. If yes, it fires DELETE /entries/[self]. If that path somehow never runs, expires_at retires the entry a couple of days later.
Two independent triggers — the script's own check and the API's expires_at — converge on the same outcome: a crontab line that doesn't outlive its purpose.
Hoody Cron is a JSON-CRUD wrapper around the system crontab. POST creates the entry; DELETE removes it; expires_at is the safety net. The script that runs nightly is the one that knows when it's done — so it's the one that calls DELETE.
# day 0 — provision the cleanup
curl -X POST \
  https://cron.containers.hoody.com/users/me/entries \
  -H "Content-Type: application/json" \
  -d '{"schedule":"@daily","command":"/srv/jobs/cleanup-stale-uploads.sh","expires_at":"2026-05-05T00:00:00Z"}'

# response
HTTP/1.1 201 Created
{ "id":"f3a1", "expires_at":"2026-05-05T00:00:00Z", "enabled":true }
# inside the cron command itself
if [ -z "$(ls /uploads)" ]; then
  curl -X DELETE \
    "$CRON_URL/entries/$ENTRY_ID"
fi

# response
HTTP/1.1 204 No Content
# entry f3a1 was here. f3a1 deleted itself.
$ENTRY_ID is the id returned by the POST; the script can read it from a file written at provision time, or from $HOODY_ENTRY_ID at runtime. Either way, the cron entry deletes the cron entry.
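One way to wire that up, using the id from the article's 201 response; the file path is an illustrative choice, and jq is an assumed dependency for the real provision step:

```shell
#!/bin/sh
# Sketch: persist the entry id at provision time, read it back at run time.
# A real day-0 step would capture the POST response, roughly:
#   curl -fsS -X POST "$CRON_URL/entries" ... | jq -r '.id' > "$ID_FILE"
ID_FILE=/tmp/cleanup-stale-uploads.entry-id
printf 'f3a1\n' > "$ID_FILE"   # stand-in for the id in the POST response

# Inside the nightly script, just before the self-delete:
ENTRY_ID=$(cat "$ID_FILE")
echo "would DELETE $CRON_URL/entries/$ENTRY_ID"
```

Writing the id to a file keeps the script self-contained: it needs no API query to find out who it is before deleting itself.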
It's not the deletion that matters. It's that nobody has to remember any of this exists three months from now.
@daily runs every 24 hours. The script deletes a slice of stale data — a few thousand files, a few thousand rows — and exits. The database stays calm; the load curve looks like nothing happened.
expires_at is in the entry as JSON. When it fires, the line is removed from the system crontab. Three engineers from now nobody is paging through 200 lines wondering what cleanup-stale-uploads-v3 still does.
The script DELETEs itself the night the work is done. If a bug skips that path, expires_at retires the entry a couple of days later. Two independent mechanisms; one of them will fire.
Each managed entry is a row of JSON the API injects into the system crontab. Scaling is bounded by what cron itself can hold, not by Hoody.
@daily is the canonical cleanup rhythm. If you need more frequent passes you can use 5-field expressions all the way down to * * * * * — minute resolution.
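The macro-to-field mapping is standard cron semantics, not Hoody-specific; a quick reference sketch:

```shell
#!/bin/sh
# Standard cron macro equivalents in 5-field form.
expand() {
  case "$1" in
    @hourly)  echo "0 * * * *" ;;
    @daily)   echo "0 0 * * *" ;;
    @weekly)  echo "0 0 * * 0" ;;
    @monthly) echo "0 0 1 * *" ;;
    @yearly)  echo "0 0 1 1 *" ;;
    *)        echo "$1" ;;       # already a 5-field expression
  esac
}
expand @daily            # → 0 0 * * *
expand "*/15 * * * *"    # passes through: every 15 minutes
```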
An ISO-8601 timestamp on the entry. When it passes, the API removes the line on the next sweep. The cleanup never lingers past its own deadline.
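One useful property here: ISO-8601 UTC timestamps sort lexicographically, so expiry can be decided with a plain string compare. A hypothetical sketch of that check (the sweep's internals are not documented in this article):

```shell
#!/bin/bash
# Hypothetical sketch of the expiry check behind the API's sweep.
# ISO-8601 UTC timestamps compare correctly as plain strings, so no
# date parsing is needed.
expires_at="2026-05-05T00:00:00Z"
now="2026-05-06T12:00:00Z"   # fixed "current time" to keep the sketch deterministic

if [[ "$now" > "$expires_at" ]]; then
  status="expired"   # the sweep would remove the crontab line here
else
  status="live"
fi
echo "entry is $status"
```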
DELETE /users/[user]/entries/[id] from inside the running command works because the cron daemon doesn't lock its own crontab; the API applies the removal safely on its next sweep.
Standard 5-field cron expressions plus macros (@hourly, @daily, @weekly, @monthly, @yearly). Per-user isolation; each system user gets its own crontab. The Hoody Kit's Cron page documents both managed entries and raw crontab access if you need the older shape.
The cleanup runs nightly until the thing being cleaned is gone.
Anywhere a cleanup task is supposed to vanish on its own: this is the pattern that replaces the calendar reminder, the zombie crontab line, and the annual cleanup-the-cleanups review.
Provision the cleanup. Set its retirement date. Walk away.