
GitHub Actions Cache Limits Changed: Do This Now

On November 20, 2025, GitHub quietly removed the hard 10 GB ceiling on Actions cache. You can now raise per‑repo cache limits and pay only for the overage. That’s great for pnpm/node_modules, Docker layers, and Bazel caches—but it can also spike your bill if you don’t tune keys, retention, and budgets. Here’s a pragmatic, production‑tested playbook for when to increase the limit, how billing actually works, and the guardrails to put in place before you flip the switch.
Published: Nov 28, 2025 · Category: Web development · Read time: 9 min

GitHub Actions cache just got a meaningful upgrade: as of November 20, 2025, repositories can exceed the long‑standing 10 GB per‑repo cap using a pay‑as‑you‑go model, with new knobs for eviction and retention. For many teams, that unlocks faster builds and fewer cache misses. For others, it’s a surprise line item unless you set budgets and policies first. Let’s walk through what changed, how billing actually works, and a safe rollout plan for your org.

Illustration of a CI usage dashboard with cache peaks

What changed on November 20, 2025?

GitHub announced that repos can raise their GitHub Actions cache cap beyond 10 GB and pay only for usage over the included amount. Admins at the enterprise, organization, or repository level can set a higher cache size and a retention limit (days). If you keep the defaults—10 GB and seven days—nothing changes. If you raise the limit, overage is billed, and if a budget you set for this SKU is hit, cache becomes read‑only until the next cycle. (github.blog)

This lands alongside a policy update GitHub delayed earlier this fall: cache eviction checks moved from roughly once every 24 hours to once every hour in November 2025. That more aggressive cadence can surface thrashing if your workflows churn lots of similar caches in a day. (github.blog)

How GitHub Actions cache billing works now

Here’s the thing: storage billing is nuanced, and this is where many teams slip. You still get 10 GB of cache per repository included with your plan. If—and only if—you configure a higher limit, you pay for the amount above 10 GB based on hourly peak usage. In other words, GitHub samples your cache footprint each hour; anything above 10 GB in that hour is metered. Budgets can cap spend, and once reached, cache over the included amount flips to read‑only until the new billing period. (docs.github.com)

Practical example: if a repo uses 8 GB most of the month and spikes to 18 GB during weekly release builds, you’re only billed for the delta (about 8 GB) during those spike hours—provided you set the repo limit above 10 GB. The included 10 GB remains free. (docs.github.com)
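Under that hourly-peak model, the back-of-the-envelope math looks like this (the per-GB-hour price depends on your plan, so treat the rate as a placeholder and pull the current number from GitHub's pricing page):

\[
\text{overage GB-hours} = \sum_{h \in \text{billing period}} \max\bigl(0,\ \text{peak}_h - 10\ \text{GB}\bigr),
\qquad
\text{cache overage cost} \approx \text{overage GB-hours} \times \text{price per GB-hour}
\]

In the example above, only the release-build hours contribute roughly 8 GB each to the sum; every hour the repo sits at 8 GB contributes nothing.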

Yes, you can exceed 10 GB—should you?

It depends on your workload. Monorepos with large dependency trees (Bazel, Gradle, pnpm) and CI pipelines that rebuild Docker images frequently tend to benefit. Raising the cap reduces eviction churn between parallel branches and cuts cold‑start times for weekly release trains. Conversely, small services with modest dependencies rarely see a payoff; keeping the default avoids accidental spend and encourages lean caches.

Two signals that raising the limit is smart: frequent cache misses on mainline builds right after hotfixes, and recurring job logs showing “restoring cache… miss” followed by long install steps even though nothing changed. If you see either pattern, controlled experimentation is worth it.
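If you want those misses to surface without log spelunking, the cache action's `cache-hit` output can be turned into a visible warning. A minimal sketch (the step id, path, and key are illustrative for an npm project):

```yaml
- name: Restore npm cache
  id: npm-cache
  uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}

# cache-hit is 'true' only on an exact primary-key match, so this also
# catches restores that fell back to restore-keys.
- name: Surface cache misses as workflow warnings
  if: steps.npm-cache.outputs.cache-hit != 'true'
  run: echo "::warning::npm cache miss on ${{ github.ref_name }} - expect a slow install step"
```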

The new controls: where to set them

You now have two policies to tune per repo (and optionally enforce at org/enterprise): a cache size eviction limit (GB) and a cache retention limit (days). These settings cascade—enterprise caps become org maximums, which become repo maximums. From a repo, navigate to Settings → Actions → General to set them. (docs.github.com)

If you’re rolling this out across many repos, start with a conservative enterprise cap (for example, 15 GB, 14 days), then permit repos to request higher limits case‑by‑case with a short form and a success metric (e.g., median build time reduction).

How the hourly eviction change affects you

In November 2025, GitHub shifted eviction checks to hourly. If your repos constantly hit the limit, you may see more aggressive eviction, which can feel like “flapping” caches. That’s another situation where a modest limit increase steadies the system—and where retuning keys to reduce duplication pays off. (github.blog)

Billing and reporting got cleaner, too

GitHub has been consolidating billing views and APIs. Reporting for metered products (including Actions cache) now lives in a unified usage view and exportable reports. If your finance scripts still hit the old product‑specific billing endpoints for Actions or storage, you’ll need to migrate—they’ve been closed in favor of the consolidated usage endpoint. (docs.github.com)

Playbook: increase cache safely in 7 steps

Let’s get practical. Here’s the rollout we use with clients when we bump the GitHub Actions cache limit.

1) Find your hotspots

Open the Metered Usage view for your org and sort by Actions Storage. Pull a 30‑day report to see which repos spike and when (release day vs. daily builds). Note any hour‑by‑hour peaks. (docs.github.com)
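If you'd rather pull numbers than screenshots, the Actions cache API reports the current active cache size per repository. Here's a minimal scheduled-workflow sketch; it assumes the `/orgs/{org}/actions/cache/usage-by-repository` endpoint, and `YOUR_ORG` plus the `ORG_READ_TOKEN` secret are placeholders you'd swap for your own:

```yaml
name: cache-usage-report
on:
  schedule:
    - cron: "0 6 * * 1"   # Mondays, 06:00 UTC
  workflow_dispatch:

jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - name: List active cache size per repository
        env:
          GH_TOKEN: ${{ secrets.ORG_READ_TOKEN }}   # token with org-level read access
        run: |
          gh api --paginate /orgs/YOUR_ORG/actions/cache/usage-by-repository \
            --jq '.repository_cache_usages[]
                  | "\(.full_name): \(.active_caches_size_in_bytes / 1073741824) GiB"'
```

This complements the billing report with a live view of what's actually sitting in cache right now.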

2) Set a conservative cap and retention

At the org level, set a cache size eviction limit just above your observed peak (e.g., 15 GB) and a retention limit that matches your branch cadence (e.g., 7–14 days). Then set repo‑level caps for your top three offenders. Document the rationale in each repo’s README under a “CI strategy” section. (docs.github.com)

3) Retune keys to avoid duplicate caches

Use tighter cache keys so you’re not persisting megabytes of unchanged content for minor lockfile bumps. For Node, key off the package manager lock and OS; for multi‑arch Docker, consider separating layer caches by base image tag. Watch for key patterns that create a new cache every branch push—especially risky with hourly eviction checks in place. (github.com)
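For a pnpm project, that might look like the following (the store path is the Linux default; run `pnpm store path` locally to confirm yours):

```yaml
- name: Cache pnpm store
  uses: actions/cache@v4
  with:
    path: ~/.local/share/pnpm/store
    key: ${{ runner.os }}-pnpm-${{ hashFiles('**/pnpm-lock.yaml') }}
    # Minor lockfile bumps fall back to the latest store for this OS
    # instead of minting a brand-new multi-GB cache on every branch push.
    restore-keys: |
      ${{ runner.os }}-pnpm-
```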

4) Split monolith caches

Break a single 12 GB cache into focused caches by tool or layer (e.g., pnpm store vs. Cypress binaries). Smaller caches are less likely to be evicted wholesale and restore faster across jobs.
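Concretely, give heavyweight binaries their own cache step next to the dependency-store cache from step 3 (the `~/.cache/Cypress` path is the Linux default; adjust for your runner image):

```yaml
- name: Cache Cypress binary
  uses: actions/cache@v4
  with:
    path: ~/.cache/Cypress
    key: ${{ runner.os }}-cypress-${{ hashFiles('**/pnpm-lock.yaml') }}
    restore-keys: |
      ${{ runner.os }}-cypress-
```

If the dependency cache gets evicted, the Cypress binary survives, and vice versa.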

5) Put budgets and alerts on the new SKU

Set a budget for Actions cache overage at the enterprise level. Start low, watch how the read-only behavior kicks in on staging repos when the budget is hit, and tune. Add a Slack alert when spend crosses 50% of the period's budget so engineering can respond before you throttle production repos. (docs.github.com)
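Budgets and spend alerts live in the billing UI, but a cheap extra guardrail is a scheduled watchdog that pings Slack when a repo's active cache size creeps toward the cap. A sketch, assuming the `/repos/{owner}/{repo}/actions/cache/usage` endpoint; the 13 GB threshold and the `SLACK_WEBHOOK_URL` secret are placeholders:

```yaml
name: cache-size-watchdog
on:
  schedule:
    - cron: "0 * * * *"   # hourly, matching GitHub's eviction cadence

jobs:
  check:
    runs-on: ubuntu-latest
    permissions:
      actions: read
    steps:
      - name: Alert when active cache size crosses the threshold
        env:
          GH_TOKEN: ${{ github.token }}
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
        run: |
          SIZE_GB=$(gh api /repos/${{ github.repository }}/actions/cache/usage \
            --jq '.active_caches_size_in_bytes / 1073741824')
          if awk -v s="$SIZE_GB" 'BEGIN { exit !(s > 13) }'; then
            curl -sS -X POST -H 'Content-type: application/json' \
              --data "{\"text\":\"Actions cache for ${{ github.repository }} is at ${SIZE_GB} GiB\"}" \
              "$SLACK_WEBHOOK_URL"
          fi
```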

6) Measure the payoff

Track median and p95 job duration for the impacted workflows for two weeks. If a 5–10% speedup saves more developer hours than the incremental storage cost, keep the higher cap. If not, roll back.
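One low-effort way to collect those numbers is the gh CLI, either locally or as a one-off job. This sketch assumes `ci.yml` as the workflow filename and the JSON field names `gh run list` exposes today (double-check them against your gh version); durations are wall-clock, including queue time:

```yaml
- name: Dump durations (minutes) of the last 50 successful runs
  env:
    GH_TOKEN: ${{ github.token }}
  run: |
    gh run list -R ${{ github.repository }} --workflow ci.yml --limit 50 \
      --json createdAt,updatedAt,conclusion \
      --jq '[ .[] | select(.conclusion == "success")
              | ((.updatedAt | fromdateiso8601) - (.createdAt | fromdateiso8601)) / 60 ]
            | sort'
```

Read p50/p95 off the sorted list or pipe it into whatever dashboard you already use.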

7) Have a rollback switch

Keep a one‑liner runbook: reset the repo cache cap to 10 GB, purge with a maintenance workflow, and re‑evaluate keys. If you need a deeper refresh on cost controls across your stack, our cost optimization sprints cover CI, hosting, and data transfer holistically.
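The purge half of that runbook can be a tiny manually triggered workflow; this sketch assumes the gh CLI's `cache delete` subcommand, which ships preinstalled on GitHub-hosted runners:

```yaml
name: purge-actions-cache
on:
  workflow_dispatch:

jobs:
  purge:
    runs-on: ubuntu-latest
    permissions:
      actions: write   # needed to delete caches
    steps:
      - name: Delete all caches for this repo
        env:
          GH_TOKEN: ${{ github.token }}
        run: gh cache delete --all --repo ${{ github.repository }}
```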

Diagram of CI pipeline stages with caching

People also ask

Does raising the cache limit always speed up builds?

No. If your builds rarely miss cache, raising the limit won’t matter. Gains show up when parallel branches and release trains evict each other’s caches, or when you cache large, expensive artifacts (think Android SDKs, Chrome headless, prebuilt toolchains). Run an A/B test for a sprint to validate.

What happens if we hit our cache budget mid‑month?

GitHub marks cache over the included amount as read-only until the next billing cycle. Restores still work, but new writes above the included 10 GB aren't accepted, so plan a small buffer into budgets to avoid surprises near release day. (github.blog)

How high can we set the per‑repo cap?

In Enterprise Cloud, the UI allows large per‑repo limits and enforces any enterprise or org caps you set; document guidance for teams so they don’t request 100 GB “just in case.” Keep caps close to observed peaks and revisit quarterly. (docs.github.com)

Are public repositories ever charged?

Standard GitHub-hosted runners are free for public repos, and the 10 GB of cache per repository is included at no cost. If you explicitly raise the limit, the repo owner is billed for overage above 10 GB, measured hourly. Most public OSS projects should keep the default cap. (docs.github.com)

Cost/benefit math you can explain to finance

Actions cache is billed on hourly peak usage above the included 10 GB per repo. That means short, predictable spikes are cheaper than sustained high usage. If a repo’s median job drops from 14 minutes to 11 minutes after raising the cap, and you run that workflow 2,000 times a month, you’ve reclaimed roughly 100 developer hours. Compare that to the incremental storage overage during spikes; if your peak hours sit around 14–18 GB and everything else is under 10 GB, the cost is modest relative to time saved. Keep your usage exports handy for month‑end reviews with finance. (docs.github.com)
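The developer-time side of that example works out as:

\[
\frac{(14 - 11)\ \text{minutes}}{60\ \text{minutes/hour}} \times 2000\ \text{runs per month} = 100\ \text{developer-hours per month}
\]

Price those hours at your blended engineering rate and put them next to the spike-hour overage from the formula earlier; that's the comparison finance actually cares about.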

Edge cases and gotchas

Hourly eviction can punish “chatty” cache strategies—like including timestamps in keys. If you see avalanche evictions, simplify keys and use restore‑keys for minor bumps. (github.blog)
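The anti-pattern and its fix, side by side (keys are illustrative):

```yaml
# Anti-pattern: a unique key per run means every job writes a brand-new cache.
key: deps-${{ github.run_id }}

# Better: a stable content-based key plus restore-keys for near-misses.
key: deps-${{ runner.os }}-${{ hashFiles('**/pnpm-lock.yaml') }}
restore-keys: |
  deps-${{ runner.os }}-
```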

Don’t confuse artifacts and caches. Artifacts have separate retention defaults and billing; caches are meant for dependencies you plan to reuse, not long-term logs or reports. Review artifact retention separately (90 days by default; public repos can set 1–90 days, private repos up to 400). (docs.github.com)
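When you do need to keep build outputs, set artifact retention explicitly on the upload step instead of stuffing them into the cache:

```yaml
- name: Upload test report
  uses: actions/upload-artifact@v4
  with:
    name: test-report
    path: reports/
    retention-days: 14   # overrides the repo default, within the repo/org maximum
```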

Legacy billing scripts that query deprecated Actions endpoints won’t show cache overage correctly. Move to the consolidated usage endpoint and update internal dashboards now. (github.blog)
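For reference, pulling the consolidated report from a script looks roughly like this. It's a sketch that assumes the enhanced-billing usage endpoint at `/organizations/{org}/settings/billing/usage` and its `usageItems` field names; verify both against the current REST docs (and your token's billing permissions) before wiring it into finance dashboards. `YOUR_ORG` and `ORG_BILLING_TOKEN` are placeholders:

```yaml
- name: Pull this month's metered Actions usage
  env:
    GH_TOKEN: ${{ secrets.ORG_BILLING_TOKEN }}   # token with org billing read access
  run: |
    gh api "/organizations/YOUR_ORG/settings/billing/usage?year=2025&month=11" \
      --jq '.usageItems[] | select(.product == "actions") | {sku, quantity, netAmount}'
```

Look for the cache storage SKU in the output to isolate the new overage line.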

What to do next

Here’s a quick plan you can run this week:

  • Export 30 days of metered usage; flag repos with repeated cache spikes above 10 GB.
  • Set an org cap (15 GB) and retention (7–14 days); create a request template for higher caps. (docs.github.com)
  • Pick two repos, raise caps, and retune keys; track p50/p95 job durations for two weeks.
  • Enable a budget and alert at 50%/80% for the Actions cache SKU to test read‑only behavior safely. (docs.github.com)
  • Decide: keep the higher caps where ROI is proven; roll back where it’s not.

If you need a deeper review, we’ve helped teams cut CI costs 20–40% while shaving minutes off builds. Start with our GitHub Actions billing playbook, then explore adjacent levers like runner sizing and artifact strategy. For platform‑wide savings, our engineering services team can pair with you on a focused engagement, and our write‑ups on Cloudflare container CPU controls and Vercel Pro pricing tuning show how we think about cost, speed, and risk together.

Bottom line

Raising the GitHub Actions cache limit is a real lever for faster CI if you do it intentionally. Treat it like any other performance tweak: instrument, test, budget, and revisit. Do that, and you’ll get the speed without the surprise bill.

Photo of a 7-step rollout checklist on sticky notes
Written by Viktoria Sulzhyk · BYBOWU
