
Sixty containers on one server
One bare-metal box runs dozens to hundreds of Hoody containers. KSM and Btrfs deduplication keep the marginal cost of each additional container near zero.
Most CI pipelines burn money on cache traffic — push artifacts to S3, pull them back next job, pay for storage, pay for egress, pay again when runners shift region. On Hoody the cache is a folder on the same bare metal that runs your build container. Push a tarball with curl. Pull it with curl. The bytes never leave the box.
the same 47.2 GB of cache · two different invoices
The whole CI cache is three commands and one cleanup job. PUT to write a tarball. GET to read it. find -atime to prune. There is no fourth piece — no IAM policy, no bucket lifecycle, no signed URL ceremony.
After the install step, the runner streams node_modules through tar | zstd into a single PUT against /files/cache. Hoody writes the body to disk as one binary blob. No multipart, no part-uploader, no SDK.
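A minimal sketch of that push step, assuming the runner reaches the Hoody box at a hypothetical hoody.example host and authenticates with a token in $HOODY_TOKEN (both names are illustrative, not the documented scheme):

```bash
# Key the cache on the lockfile so a dependency change produces a new entry.
CACHE_KEY="node-modules-$(sha256sum package-lock.json | cut -c1-16).tar.zst"

# Stream node_modules through tar | zstd straight into one PUT.
tar -cf - node_modules \
  | zstd -T0 \
  | curl -sSf -T - \
      -H "Authorization: Bearer $HOODY_TOKEN" \
      "https://hoody.example/api/v1/files/cache/$CACHE_KEY"
```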
The next job's first step is one curl. The body comes off NVMe at line rate because the cache lives on the same physical box as the runner — no egress hop, no cross-AZ pull, no CloudFront edge.
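The matching restore, with the same hypothetical host and token; a miss is just a cold build, not a failure:

```bash
# Fetch the cache entry; -f makes a 404 (cold cache) exit non-zero instead of
# saving an error body, so the job can fall through to a clean build.
if curl -sSf -H "Authorization: Bearer $HOODY_TOKEN" \
     -o /tmp/cache.tar.zst \
     "https://hoody.example/api/v1/files/cache/$CACHE_KEY"; then
  zstd -dc /tmp/cache.tar.zst | tar -xf -
else
  echo "cache miss, building from scratch"
fi
```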
Hoody Cron fires once a night. find /files/cache -atime +30 -delete throws out anything no job has read in a month. No retention policy, no Glacier tier, no lifecycle JSON to maintain.
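Sketched as the nightly job, assuming the cache directory is visible at /files/cache on the box; -type f is added so only files, not the directory tree, get deleted, and -atime relies on access-time updates being enabled on the mount:

```bash
# Nightly prune: drop any cache file whose last read is more than 30 days old.
find /files/cache -type f -atime +30 -delete

# Optionally clear out directories the prune left empty.
find /files/cache -mindepth 1 -type d -empty -delete
```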
PUT writes. GET reads. find prunes. The Hoody Files API is the cache server, the cleanup engine, and the audit log — all behind the same /files/[path] URL.
Pushing the cache out to a separate vendor used to make sense when storage was scarce. On a bare-metal container, it just adds a vendor.
S3 charges three meters: storage, egress, and per-request. Hoody Files is included in the flat-rate server price — the disk you already rent is the disk the cache sits on. The bytes never cross a billing boundary.
Reads come off the same physical box that runs the build. There is no S3 endpoint to resolve, no TLS handshake to a region, no rate limit on prefix throughput. A 1.4 GB Rust target unpacks in seconds.
Your runner and your cache live on the same instance, billed on the same invoice, debugged with the same SSH session. When you turn the container off, the cache is the disk image — back online the moment you boot it.
A typical mid-size CI footprint moves about 1.4 TB of cache traffic per month. On AWS that traffic builds its own line item; on Hoody the cache runs inside the flat-rate server you already pay for.
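As a rough sketch of the arithmetic, using illustrative AWS list prices (they vary by region, tier, and date) and the 47.2 GB working set from above:

```bash
# Back-of-the-envelope S3 bill for the traffic above.
# Rates are illustrative list prices at time of writing; check current pricing.
awk 'BEGIN {
  egress_gb  = 1400;  egress_rate  = 0.09    # $/GB internet egress
  stored_gb  = 47.2;  storage_rate = 0.023   # $/GB-month, S3 Standard
  printf "egress:  $%.2f/month\n", egress_gb * egress_rate
  printf "storage: $%.2f/month\n", stored_gb * storage_rate
}'
# On Hoody the same bytes never cross a billing boundary: $0 marginal.
```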
When the cache lives on the box that runs the build, the meter S3 was running has nothing to read. The line item doesn't move because there is no transaction to bill.
Hoody Files isn't a thin wrapper — it's a real persistent backend with hashing, history, range reads, and an audit journal. The CI cache uses a thin slice of what's actually exposed.
PUT to write, GET to read, HEAD for ETag and Content-Length, ?hash for SHA256, ?stat for metadata. The cache is the same endpoint family that powers logs, builds, and shared artifacts.
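A few of those calls side by side, with the same hypothetical host and token as the earlier sketches; response bodies are omitted since their exact shape isn't shown here:

```bash
# CACHE_KEY as defined in the push sketch above.
BASE="https://hoody.example/api/v1/files/cache/$CACHE_KEY"
AUTH="Authorization: Bearer $HOODY_TOKEN"

curl -sI -H "$AUTH" "$BASE"          # HEAD: ETag and Content-Length, no body
curl -s  -H "$AUTH" "$BASE?hash"     # SHA256 of the stored blob
curl -s  -H "$AUTH" "$BASE?stat"     # size, timestamps, and other metadata
```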
Every write goes through the file journal. Pull yesterday's cache by timestamp or by per-path revision number — debugging a flake stops requiring a separate snapshot tool.
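Reaching back into that journal from the consumer side, using the ?at, ?revision, and ?history parameters listed in the note below; the timestamp format and revision numbering shown here are assumptions, not documented values:

```bash
# CACHE_KEY as defined in the push sketch above.
BASE="https://hoody.example/api/v1/files/cache/$CACHE_KEY"
AUTH="Authorization: Bearer $HOODY_TOKEN"

# List the per-path revision history for this cache entry.
curl -s -H "$AUTH" "$BASE?history"

# Pull the entry as it looked at a point in time (timestamp format assumed).
curl -s -H "$AUTH" -o yesterday.tar.zst "$BASE?at=2025-06-01T00:00:00Z"

# Or pin an exact revision number for a bisect.
curl -s -H "$AUTH" -o rev-3.tar.zst "$BASE?revision=3"
```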
If the cache really does need to live in S3, B2, or a Drive folder, mount it as a backend and keep the same /files/[path] URL. The runner code never changes — the cache just moves.
Endpoints and parameters above reflect the published Hoody Files API surface: `GET/PUT/HEAD/PATCH /api/v1/files/[path]`, the `?hash`/`?stat`/`?at`/`?revision`/`?history` query parameters, and the file journal endpoints under `/api/v1/journal`.
Your CI cache stops being a separate vendor. It's a folder on the box you already rent.
The standard reach-for-it cache backends each charge you a vendor relationship, an egress bill, or a per-build fee. /files charges you none of those.
Stop renting a cache from a second cloud. Write the tarball to the disk you already pay for, and curl it back.