
Most monitoring watches what happened. You need something that watches what didn't. Two cron entries, one that beats and one that listens for the absence of a beat, and the page finds you on a beach.
two cron entries · zero new services · the alert finds you when nothing happens
The job you already had keeps doing its work. After it finishes, it adds one curl: a heartbeat row to a notifications endpoint. A second cron entry runs on its own cadence and checks for silence — if no fresh beat, it pages your phone. The job's success is silent. Its absence is loud.
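A minimal sketch of that extra curl, assuming a bearer token and a heartbeat URL (both illustrative; the source only specifies that the beat is a POST to a notifications endpoint):

```shell
# Illustrative crontab entry: run the job you already had, and only on
# success append one heartbeat POST. Token and URL are assumptions.
0 2 * * * /usr/local/bin/backup.sh && curl -fsS -X POST \
  -H "Authorization: Bearer $HOODY_TOKEN" \
  https://hoody.example/api/notifications/heartbeats/backup-nightly
```

The `&&` matters: a failed job posts no beat, which is exactly the silence the watcher is listening for.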
Two POSTs to /users/root/entries with five-field expressions. The first runs after each scheduled job and posts its heartbeat. The second runs on its own cadence, asks the notifications endpoint whether the last beat is fresh enough, and triggers the page if not. No queue, no agent, no daemon — just two crontab lines that already had to exist.
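A sketch of those two entry-creation calls. The path `/users/root/entries` and the 5-field expressions are from the source; the host, field names, and commands are illustrative:

```shell
# Worker entry: the nightly job plus its heartbeat POST.
curl -X POST https://hoody.example/api/users/root/entries \
  -H "Content-Type: application/json" \
  -d '{"schedule": "0 2 * * *",
       "command": "/usr/local/bin/backup.sh && /usr/local/bin/beat.sh backup"}'

# Watcher entry: its own cadence, checking for silence.
curl -X POST https://hoody.example/api/users/root/entries \
  -H "Content-Type: application/json" \
  -d '{"schedule": "*/15 * * * *",
       "command": "/usr/local/bin/check_beat.sh backup"}'
```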
Most monitoring tools watch the success path: they alert when something happens. This shape alerts when nothing does — and that's the case the silent jobs always lose.
If the worker process never starts — the box rebooted, the script was deleted, a quota expired — there is nothing to log and nothing to alert on. The watcher cron runs anyway and notices that the heartbeat row is stale. The thing that catches the silent crash is precisely the thing that does not depend on it.
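The staleness decision itself is one comparison. A sketch of the watcher's core, with the last-beat timestamp passed in as a plain epoch value (in practice it would be fetched from the notifications endpoint, which is an assumption about that API):

```shell
#!/bin/sh
# Watcher sketch: is the last heartbeat older than the freshness window?

beat_is_stale() {
  # $1 = epoch of last beat, $2 = epoch now, $3 = max age in seconds
  [ $(( $2 - $1 )) -gt "$3" ]
}

# 90 minutes of slack for an hourly beat: one missed run plus jitter.
MAX_AGE=5400
NOW=1700000000
LAST_BEAT=1699990000     # 10000 s ago: one beat missed

if beat_is_stale "$LAST_BEAT" "$NOW" "$MAX_AGE"; then
  echo "STALE"           # prints "STALE"; the real watcher would fire the page here
else
  echo "FRESH"
fi
```

Sizing `MAX_AGE` to a bit more than one full worker interval avoids paging on ordinary jitter while still catching a single missed run.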
The monitor is one extra crontab line, not a Healthchecks.io account or a CloudWatch alarm. It's bound to the same container as the work, expires with `expires_at` if you want it to, and reads from the same notifications API the rest of your stack already uses.
The notifications endpoint fans the page across push, SMS, and email — the channels you already trust. You don't watch the dashboard. The dashboard watches itself, and finds you on the beach in Bali only when the silence has gone on too long.
The mechanism is plain Hoody Cron and Hoody Notifications. The numbers come from the documented API surface, not from a demo runtime.
Standard 5-field expressions plus macros — `@hourly`, `@daily`, `@weekly`, `@monthly`, `@yearly`. The watcher and the worker can have completely different cadences.
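Decoupled cadences in crontab form, mixing a macro with a 5-field expression (commands are illustrative):

```shell
# Worker runs once a day; watcher polls for the beat every ten minutes.
@daily        /usr/local/bin/export.sh && /usr/local/bin/beat.sh export
*/10 * * * *  /usr/local/bin/check_beat.sh export
```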
Managed entries support `expires_at`, so a temporary heartbeat (a one-week migration window, say) cleans itself up. The watcher disappears with the work.
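A self-expiring watcher for that one-week window might look like this; `expires_at` is from the source, while the other body fields and the timestamp format are assumptions:

```shell
# Watcher that deletes itself when the migration window closes.
curl -X POST https://hoody.example/api/users/root/entries \
  -H "Content-Type: application/json" \
  -d '{"schedule": "*/30 * * * *",
       "command": "/usr/local/bin/check_beat.sh migration",
       "expires_at": "2025-07-08T00:00:00Z"}'
```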
Each container gets its own crontab. The heartbeat for one tenant cannot mute the watcher for another, and disabling a job is a single PATCH `enabled: false`.
Limits per the Hoody Cron API: 5-field expressions plus the `@hourly`/`@daily`/`@weekly`/`@monthly`/`@yearly` macros, optional `expires_at` on managed entries, per-user crontab isolation, enable/disable via PATCH.
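The single-PATCH mute, sketched; the entry id and path shape are assumptions, `enabled: false` is from the source:

```shell
# Silence one tenant's watcher during planned downtime, touching nothing else.
curl -X PATCH https://hoody.example/api/users/root/entries/42 \
  -H "Content-Type: application/json" \
  -d '{"enabled": false}'
```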
Silence is now an alert.
These are the tools you normally reach for when you want a cron monitor with paging. Each one is a separate account, a separate bill, a separate API. Two crontab lines and the notifications endpoint you already have do the same job.
Stop watching the success path. Watch the absence of success — it's the only place the silent failures live.