use-cases / cron-per-customer / hero
Multi-tenant SaaS / Per-customer scheduling

A separate crontab for every customer, automatically

Your SaaS lets each customer schedule their own report generation. The naive design is one shared scheduler, customer IDs in the job payload, fingers crossed nobody starves anyone else. The Hoody design gives every tenant their own container and their own hoody-cron service.

Read the docs

Three lifecycle states, one HTTP API. PROVISION adds entries, cron ticks run them, DELETE suspends. Each tenant's cron lives in its own container — no shared queue, no noisy-neighbor risk.

use-cases / cron-per-customer / mechanism

Three API calls drive the full tenant lifecycle

Each customer container exposes the hoody-cron HTTP API. Provision with POST, verify with GET, suspend with DELETE. No shared queue, no priority lane, no scheduler config to redeploy.

POST a managed entry — creates a cron job with a UUID, an enabled state, and a human-readable schedule_human field
POST /provision
request
# POST managed entry for acme-corp tenant
POST acme-cron.hoody.com/users/root/entries
Content-Type: application/json

{
  "schedule": "0 9 * * *",
  "command": "/usr/local/bin/digest.sh",
  "comment": "daily digest",
  "enabled": true
}
response
HTTP/1.1 201 Created
Content-Type: application/json

{
  "id": "7d3f2a1b-8c4e-4f9a-b2d5",
  "schedule": "0 9 * * *",
  "schedule_human": "At 09:00",
  "enabled": true,
  "user": "root"
}
201 Created. The entry ID is returned for future PATCH or DELETE calls. schedule_human confirms the expression was parsed correctly.

Each tab shows the exact API call your control plane makes. The managed entries API uses UUIDs so you can target individual jobs without replacing the whole crontab. Per-tenant container isolation means nothing about acme-corp's schedule is visible to globex-saas.
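If you are wiring this into a control plane, the full lifecycle is three requests. A minimal sketch in Python, assuming the acme-cron.hoody.com hostname pattern from the tab above and a /users/root/entries/{id} path for reading, patching, and deleting individual entries (the per-entry paths are an assumption; check the cron API docs for the exact routes):

# Lifecycle sketch for one tenant's hoody-cron service.
# Assumed: per-tenant host acme-cron.hoody.com and per-entry paths
# /users/root/entries/{id} for GET / PATCH / DELETE (illustrative only).
import requests

BASE = "https://acme-cron.hoody.com/users/root"

# PROVISION: create the managed entry shown in the tab above
entry = requests.post(f"{BASE}/entries", json={
    "schedule": "0 9 * * *",
    "command": "/usr/local/bin/digest.sh",
    "comment": "daily digest",
    "enabled": True,
}).json()
entry_id = entry["id"]

# VERIFY: read it back; schedule_human confirms the expression parsed
print(requests.get(f"{BASE}/entries/{entry_id}").json()["schedule_human"])

# SUSPEND: disable without losing the entry, or remove it outright
requests.patch(f"{BASE}/entries/{entry_id}", json={"enabled": False})
requests.delete(f"{BASE}/entries/{entry_id}")

Swap the hostname per tenant and the same three calls drive every container in the fleet.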

use-cases / cron-per-customer / powers

What the billing model makes obvious

One flat-rate server. Sixty tenant containers. The math is brutally simple.

Fleet billing breakdown: per-tenant cost = server cost ÷ tenants
Your tenants: acme-corp (SM), globex-saas (MD), initech-inc (LG), + 57 more
Flat-rate server: $29/mo. One bare-metal node, 60 containers, bill stays flat.
Per-tenant cost: <$0.49. Drops as you add more tenants.
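The arithmetic is short enough to sketch, using the $29/mo and 60-tenant figures above (your node price and container density will differ):

# Per-tenant cost on a flat-rate node: the numerator stays fixed, the divisor grows.
server_per_month = 29.00   # flat-rate bare-metal node, $/mo (figure from above)
tenants = 60               # one container + hoody-cron service per tenant

per_tenant = server_per_month / tenants
print(f"${per_tenant:.2f}/tenant/mo")   # 29 / 60 comes to about $0.48, under the $0.49 shown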

Noisy-neighbor incidents disappear

When initech-inc's scrape.js hangs, acme-corp's 9am digest still fires. Different crontabs, different process trees, different filesystems.

Schedule changes propagate instantly

POST a new entry and the tenant's hoody-cron service picks it up immediately. No central scheduler to reload, no broadcast to send.

Per-tenant logs, one container

When globex-saas asks why their 6pm rollup ran twice, you read one container's log — not a shared scheduler grep across nine machines.
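What that audit looks like from the control plane, as a sketch: the GET <container>/cron/log shape comes from the comparison table below, and the per-tenant hostname is an assumption, so treat the exact URL as illustrative.

# One tenant's question, one container's log. No cross-machine grep.
# Assumed: a per-tenant host and a /cron/log path (illustrative; see the cron API docs).
import requests

log = requests.get("https://globex-cron.hoody.com/cron/log").text

# Pull the 6pm rollup runs globex-saas is asking about
for line in log.splitlines():
    if "rollup" in line:
        print(line)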

use-cases / cron-per-customer / compare

Shared scheduler vs container-bound crontab

Three axes where the old design taxes your team and the Hoody design just doesn't.

Axis: Isolation
  Shared scheduler: tenant_id in job payload. One bad row, every tenant's queue blocks.
  Container-bound: separate /etc/crontab per container. Hangs are local. Always.
Axis: Provisioning
  Shared scheduler: INSERT INTO scheduled_jobs. Migration coupling, schema lock.
  Container-bound: PUT /users/root/crontab. Single HTTP call, atomic replace.
Axis: Audit
  Shared scheduler: grep tenant_id=42 logs/*. 9 machines, 1 log file each.
  Container-bound: GET ctr_8a3f1c/cron/log. One container, one log, one truth.

The shared-scheduler side is what every team writes the first time they ship multi-tenant scheduling. The container-bound side is what you ship when the platform gives every tenant their own container by default.
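The container-bound side of the Provisioning axis is one call. A minimal sketch of the atomic replace, assuming the tenant hostname pattern used above and a plain-text crontab body:

# Atomic replace of a tenant's whole crontab: one PUT, no migration, no schema lock.
import requests

crontab = """\
# acme-corp managed crontab
0 9 * * *   /usr/local/bin/digest.sh   # daily digest
0 18 * * *  /usr/local/bin/rollup.sh   # evening rollup
"""

resp = requests.put(
    "https://acme-cron.hoody.com/users/root/crontab",
    data=crontab,
    headers={"Content-Type": "text/plain"},  # content type assumed; check the docs
)
print(resp.status_code)  # a 2xx means the whole crontab was swapped in one call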

use-cases / cron-per-customer / capacity

Capacity at the edges

What a single bare-metal Hoody box does when every customer gets their own crontab.

  1. Tenants per box: 60

    Sixty customer containers on one bare-metal node, each with its own hoody-cron service running. No shared scheduler to bottleneck.

  2. Schedule propagation: <1s

    From PUT request to first tick of the new schedule, observed across a fleet of 60 containers on a typical 64-core node.

  3. Cross-tenant queues: 0

    There is literally no shared queue, priority lane, or scheduler thread that two tenants compete for. Isolation is the substrate.

Capacity numbers are typical observed values on a 64-core / 256GB bare-metal node running standard Hoody container density. Actual capacity depends on per-tenant CPU and memory budgets and the work each cron job does. The zero in cross-tenant queues is structural, not a benchmark.

use-cases / cron-per-customer / punchline

One customer's cron can't starve another's because they aren't on the same crontab.

Before / shared scheduler: scheduled_jobs WHERE tenant_id = 42. One row in a table everyone reads from.
After / container-bound: PUT acme-cron.hoody.com/users/root/crontab. One HTTP call, one container, one crontab.
Read the cron API
use-cases / cron-per-customer / replaces

What this replaces

The architectures teams build to share one crontab across tenants. Hoody puts each tenant in their own crontab — no router, no fairness queue, no noisy neighbor.

  • Shared multi-tenant crontabs: one bad regex starves 400 customers
  • Custom tenant isolation: a scheduler with tenant_id on every row
  • Postgres pg_cron: database-bound; one upgrade and everyone breaks
  • Quartz scheduler with filters: a JVM and a sharded queue per region
  • Sidekiq tenant queues: twelve queues, twelve config files
  • Kubernetes CronJobs per tenant: a namespace, an RBAC role, a YAML, a pager
use-cases / cron-per-customer / cta

Stop writing tenant_id everywhere. Give every customer their own container and let cron do what cron has always done, in isolation.

Read the docs
use-cases / cron-per-customer / related

Read the others