
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM page merging and Btrfs deduplication keep the marginal cost of each additional container near zero.
Chasing a flaky bug? Dump the process tree every minute for 48 hours, then never again. POST a managed cron entry with expires_at set, and the schedule has a half-life — no reminder, no cleanup PR, no stale entry six months later.
{ "schedule": "* * * * *", "command": "pgrep auth | tee -a tree.log", "expires_at": "2026-05-06T11:14:00Z" }
POST /users/me/entries with expires_at — the entry runs every minute, then removes itself at the deadline
Three moments. Create the entry with a deadline. The schedule runs on its own. At expires_at, the entry deletes itself — and your crontab is back to what it was before you started debugging.
Send a managed cron entry to the API with schedule, command, and an expires_at timestamp 48 hours out. You get back an id and a confirmation that the entry is enabled.
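The deadline is just a timestamp you compute client-side before sending the request. A minimal sketch of building that 48-hour request body, assuming GNU date on Linux (on macOS, `date -u -v+48H` is the equivalent):

```shell
# Compute an RFC 3339 UTC timestamp 48 hours from now (GNU date assumed).
EXPIRES_AT=$(date -u -d '+48 hours' +%Y-%m-%dT%H:%M:%SZ)

# Assemble the request body with the computed deadline baked in.
BODY="{\"schedule\":\"* * * * *\",\"command\":\"pgrep auth | tee -a tree.log\",\"expires_at\":\"$EXPIRES_AT\"}"
echo "$BODY"

# Then POST it, e.g.:
#   curl -X POST https://cron.containers.hoody.com/api/v1/cron/users/me/entries \
#     -H "Content-Type: application/json" -d "$BODY"
```

Computing the deadline rather than hardcoding it means the same one-liner works every time you start a new debugging session.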
The entry executes at every tick of its cron expression — every minute, every hour, whatever you set. Identical behavior to a permanent entry, with one quiet difference.
When the wall clock crosses expires_at, the entry is removed. No final run, no zombie row, no manual cleanup. GET /entries returns the list it would have without you.
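The enforcement rule itself is simple. This is not Hoody's actual implementation — just a sketch of the comparison the service performs at each sweep: any entry whose expires_at is at or before the current wall-clock time is removed, everything else is untouched.

```shell
# Sketch of an expiry sweep (illustration only, not the real service code).
# Input: "id expires_at" pairs; entries past their deadline are marked expired.
now=$(date -u +%s)
result=""
while read -r id expires_at; do
  # compare the entry's deadline (as epoch seconds) against the current time
  if [ "$(date -u -d "$expires_at" +%s)" -le "$now" ]; then
    result="$result expired:$id"   # the real service would delete the row here
  else
    result="$result active:$id"
  fi
done <<'EOF'
e7d3 2020-01-01T00:00:00Z
a1f2 2099-01-01T00:00:00Z
EOF
echo "$result"
```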
No cleanup script. No calendar reminder. No team-wide "who owns this?" thread six months from now. The entry knew when it was supposed to die and it did.
Create the entry with a POST. Verify it's gone with a GET 49 hours later. The whole mechanism is two HTTP calls and a timestamp — no cron daemon to SSH into, no /etc/crontab to edit.
# create a self-deleting cron entry
curl -X POST \
  https://cron.containers.hoody.com/api/v1/cron/users/me/entries \
  -H "Content-Type: application/json" \
  -d '{"schedule":"* * * * *","command":"pgrep auth | tee -a tree.log","expires_at":"2026-05-06T11:14:00Z"}'

# response
HTTP/1.1 201 Created
{ "id":"e7d3", "expires_at":"2026-05-06T11:14:00Z", "enabled":true }
# 49 hours later — list is back to normal
curl https://cron.containers.hoody.com/api/v1/cron/users/me/entries

HTTP/1.1 200 OK
[
  { "id":"a1f2", "expires_at":null, ... },
  { "id":"c4b9", "expires_at":null, ... },
  { "id":"9b21", "expires_at":null, ... }
]
# e7d3 was here. e7d3 deleted itself.
The expires_at field is the contract. You don't have to remember to clean up because remembering isn't part of the protocol — the deadline is.
Once the schedule has an expiration date, three things stop being your problem: drift, oversight, and audit fatigue. The crontab stays clean by default.
Every "I'll just temporarily…" entry has a deadline baked in. The crontab self-prunes — no quarterly cleanup sweep, no stale rows nobody wants to delete because nobody knows what they did.
You don't have to remember to remove the entry. You don't have to set a calendar reminder. You don't have to file a cleanup PR. The deadline is the reminder — and it always fires.
The entry is gone but the runs aren't. Every execution still has its log line, exit code, and timestamp — so the trail of "this ran for 48 hours and then stopped" is fully reconstructable after the fact.
Self-expiring entries cost the same as permanent ones. Stack as many as you need — the API was built for the case where every debugger on the team has three or four temporary jobs running at once.
Standard cron expressions, down to one-minute resolution. The expires_at field accepts any RFC 3339 timestamp.
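A malformed timestamp is the one way to lose the self-cleanup guarantee, so it can be worth a client-side sanity check before POSTing. A hedged sketch — the API accepts any RFC 3339 timestamp, while this pattern covers only the common UTC "Z" form (no offsets, no fractional seconds):

```shell
# Quick client-side check for the RFC 3339 UTC "Z" form only.
# The API itself accepts the full RFC 3339 grammar; this is just a guard rail.
is_rfc3339_utc() {
  echo "$1" | grep -Eq '^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z$'
}

is_rfc3339_utc "2026-05-06T11:14:00Z" && echo "ok"
is_rfc3339_utc "next tuesday" || echo "rejected"
```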
Plenty of room for a team of debuggers each running a handful of temporary probes alongside the permanent jobs.
No DELETE call to remember. No "clean up old crons" ticket on the backlog. The entry handles its own end of life.
Limits scale with the cron service tier on your account. Logs are retained per the standard Hoody Cron retention window after the entry itself has expired.
Temporary work shouldn't leave permanent crontab entries.
The patterns developers reach for when they need a one-shot cron line. Each one leaves a trail. expires_at sweeps it.
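For comparison, here is the cleanup step those manual patterns depend on: someone remembering, 48 hours later, to filter the temporary line back out of the crontab (typically via `crontab -l | grep -v ... | crontab -`). The crontab contents below are illustrative:

```shell
# A crontab with one permanent job and one temporary debugging probe.
# (Paths and jobs are made up for illustration.)
crontab_before='0 * * * * /opt/backup.sh
* * * * * pgrep auth | tee -a tree.log'

# The manual cleanup step expires_at makes unnecessary:
crontab_after=$(echo "$crontab_before" | grep -v 'tree.log')
echo "$crontab_after"
```

If that step is forgotten, the probe runs forever; with expires_at, the filtering happens on schedule whether or not anyone remembers.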