GitHub Actions billing now follows a first‑of‑month cycle for many enterprise customers, and the product added a new paid option to extend the default cache. If you own a software delivery pipeline, the phrase you care about is predictable GitHub Actions billing. A cleaner billing date helps finance, but the expanded cache can quietly raise costs if you don’t set guardrails. Let’s unpack the changes and turn them into an advantage.
What exactly changed on December 1, 2025?
Starting December 1, 2025, self‑serve Enterprise Cloud accounts that pay by credit card are billed on the first day of each month for usage in the prior month. The billing period is still the calendar month; the charge date is what moved. Practically, this standardization means finance teams get a single predictable charge window for Actions minutes and storage, Packages, Codespaces, Advanced Security seats, Copilot metered overages, and shared storage.
One easily missed detail: the payment you saw in November covered October usage. The December 1 charge includes November usage. It’s not a duplicate—just a new schedule. If your PO, accruals, or spend alerts were tied to mid‑month or end‑month runs, update them now to avoid false positives.
New: pay‑as‑you‑go cache beyond 10 GB per repo
Days before the billing date shift, GitHub lifted the longstanding 10 GB per‑repository cache ceiling. Every repo still gets 10 GB of cache and a seven‑day retention at no extra cost. But admins can now dial the cache size above 10 GB and extend retention, and any usage beyond those included limits is billed. Two new policy levers ship with this:
- Cache size eviction limit (GB) per repository: increase if your dependency sets are huge.
- Cache retention limit (days): extend if your builds run infrequently or you rely on long‑lived cache keys.
These policies cascade: enterprise caps trickle down to orgs, and org caps trickle down to repos. If you hit a budget you set for cache overages, cache switches to read‑only for the repos using the higher limits until the next billing cycle—no sudden hard failures, but slower builds if misses spike.
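Before you raise any limits, it helps to see what each repo already stores. Here's a minimal sketch of a scheduled cache audit, assuming the gh CLI preinstalled on hosted runners; the workflow name, cron, and sort flags mirror the CLI at the time of writing and are easy to adjust:

```yaml
# Hypothetical cache-audit workflow; run it before (and after) raising cache limits
name: cache-audit
on:
  workflow_dispatch: {}
  schedule:
    - cron: "0 6 * * 1"   # Monday mornings; adjust to your cadence

permissions:
  actions: read           # lets the job read the repo's Actions caches

jobs:
  list-caches:
    runs-on: ubuntu-latest
    steps:
      - name: List caches, largest first
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          # gh is preinstalled on GitHub-hosted runners
          gh cache list --repo "${{ github.repository }}" \
            --sort size_in_bytes --order desc --limit 50
```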
Why this matters to engineering leaders and SREs
Billing date alignment is housekeeping; the cache change is where costs and performance move. Expanded caches can slash install times for giant monorepos, mobile builds with heavy toolchains, or big C++/Rust trees. But it also creates a foot‑gun: a mis‑keyed cache or long retention can inflate storage spend while reducing hit rate—a lose‑lose. Treat cache like any other infra resource with quotas, alerts, and periodic audits.
Primary impacts at a glance
Here’s the thing—most teams won’t overspend on minutes; they leak money on storage and mis‑configured workflows:
- Minutes: predictable settlements on the first. Good for forecasting; no direct rate change.
- Cache: defaults remain free (10 GB, seven days). Costs only start if you raise limits.
- Artifacts/logs: separate retention knobs—don’t confuse them with cache policies.
- Budgets and usage: the newer billing platform exposes summarized usage by SKU and supports budgets and alerts. Detailed workflow‑level usage lives in downloadable reports.
GitHub Actions billing: what’s included vs. billed
Quick refresher so your team speaks the same language:
- Included allowances reset on the first of each month: standard runner minutes, artifact storage, and 10 GB of cache per repo (tiers vary by plan).
- Always billed: larger runners, minutes on private repos beyond your included pool, over‑cache beyond 10 GB if you opt in, long retention you set above defaults, and any premium storage or compute SKUs.
If your org previously relied on product‑specific billing endpoints, know that GitHub has been consolidating billing into a single usage view and report. Build your internal dashboards around the consolidated usage endpoint and the downloadable detailed report rather than the older per‑product APIs.
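If you want that consolidated view in your own dashboard, here's a minimal sketch of a monthly export, assuming the enhanced billing platform's org usage endpoint is available on your plan; the org name, token secret, and year/month filters are placeholders:

```yaml
# Hypothetical monthly export of consolidated usage, timed for the new charge date
name: billing-usage-export
on:
  workflow_dispatch: {}
  schedule:
    - cron: "0 6 1 * *"   # first of the month, after the prior month closes

jobs:
  export:
    runs-on: ubuntu-latest
    steps:
      - name: Pull usage summarized by SKU
        env:
          GH_TOKEN: ${{ secrets.BILLING_TOKEN }}   # hypothetical token with billing read access
        run: |
          # Enhanced billing platform endpoint; narrow to one month with query parameters
          gh api "/organizations/YOUR_ORG/settings/billing/usage?year=2025&month=11" \
            > usage-2025-11.json
      - name: Keep the report for finance
        uses: actions/upload-artifact@v4
        with:
          name: usage-2025-11
          path: usage-2025-11.json
          retention-days: 30
```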
The cost–performance tradeoff of bigger caches
Think of cache size as a multiplier on build efficiency—but only if your keys are stable and distinct. Bigger caches help when:
- Dependency graphs are large but change slowly (e.g., Gradle, CocoaPods, pnpm workspaces).
- The same runners hit the same keys daily (high temporal locality).
- You separate hot and cold caches so big assets don’t evict hot dependencies.
Bigger caches hurt when:
- You use broad keys that collect too much junk and cause frequent invalidations.
- Retention is long but builds are frequent, leading to cache thrash and storage creep.
- Monorepo cache paths aren’t scoped per package, so one change nukes cache usefulness across the tree.
A 60‑minute audit to avoid surprise bills
Here’s a pragmatic checklist you can run before lunch.
1) Align finance and engineering on dates
Update your forecast model: charges run on the first for prior‑month usage. Move any finance alerts, budget exports, or accrual scripts to run on the last day and first day of the month.
2) Set cache guardrails
At the enterprise level, cap the maximum cache size per repo and retention. Example: 15 GB and 14 days for most orgs, higher only for mobile or game repos that justify it. At the org and repo level, set stricter defaults where needed.
3) Create a budget for cache overages
Use budgets for the new cache SKU. Start with a small ceiling to surface where you actually benefit from more cache. If a team hits it, the system flips cache to read‑only for those repos—good friction that prompts a review.
4) Fix cache keys and paths
Use hash‑based keys and scope paths to the smallest meaningful unit. For monorepos, store per‑package caches (e.g., apps/web, apps/mobile, libs/*). Avoid catch‑all keys like ubuntu-latest-node that balloon without value.
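Here's a minimal sketch of what scoped keys look like in practice, assuming each app in the monorepo keeps its own lockfile; the paths and key prefixes are placeholders:

```yaml
      # Steps inside your existing build job: one cache per package,
      # so a change in apps/mobile no longer invalidates apps/web
      - name: Cache web dependencies
        uses: actions/cache@v4
        with:
          path: apps/web/node_modules
          key: deps-web-${{ runner.os }}-${{ hashFiles('apps/web/package-lock.json') }}

      - name: Cache mobile dependencies
        uses: actions/cache@v4
        with:
          path: apps/mobile/node_modules
          key: deps-mobile-${{ runner.os }}-${{ hashFiles('apps/mobile/package-lock.json') }}
```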
5) Separate hot and cold caches
Keep frequently used, small dependencies in one cache and big toolchains or SDKs in another. Different keys, different eviction lifecycles.
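A sketch of the split, assuming a large pre-downloaded toolchain plus a lockfile-driven dependency set; the paths, version, and key prefixes are illustrative:

```yaml
      # Cold cache: big, slow-moving toolchain keyed on its version, not on source changes
      - name: Cache SDK/toolchain
        uses: actions/cache@v4
        with:
          path: ~/toolchains/sdk      # placeholder path for a pre-downloaded SDK
          key: toolchain-${{ runner.os }}-sdk-34.0.0

      # Hot cache: small, frequently hit dependencies keyed on the lockfile
      - name: Cache dependencies
        uses: actions/cache@v4
        with:
          path: ~/.gradle/caches/modules-2
          key: deps-${{ runner.os }}-${{ hashFiles('**/gradle.lockfile') }}
          restore-keys: |
            deps-${{ runner.os }}-
```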
6) Right‑size retention
The default seven-day retention is generous for daily builds. Weekly or monthly builds can justify 14–30 days, but monitor hit rate monthly. If your hit rate is flat while storage climbs, you're over‑retaining.
7) Artifacts aren’t cache
Audit artifact retention separately (logs and build outputs). Many teams pay to store artifacts they never download. Reduce retention for ephemeral builds; pin long retention only for regulated projects.
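On the upload side, actions/upload-artifact accepts a retention-days input, so ephemeral outputs can expire quickly even if the repo default is longer; a minimal sketch, with the path as a placeholder:

```yaml
      - name: Upload test report (ephemeral)
        uses: actions/upload-artifact@v4
        with:
          name: test-report
          path: build/reports/     # placeholder path
          retention-days: 3        # effective value is capped by your repo/org retention settings
```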
People also ask
Does this change my free minutes or prices?
No rate change was announced alongside the billing schedule shift. Your included minutes and the 10 GB cache remain. Costs change only when you use non‑included SKUs (e.g., larger runners, or cache and retention you raise).
Will public repos pay for cache now?
Public repos still benefit from included usage. You only pay if you explicitly push cache or retention beyond the defaults.
Can I see which repos are consuming cache overages?
Yes—use the consolidated usage view by SKU and the downloadable usage report for drill‑downs. Pair that with a simple spreadsheet or internal dashboard to rank repos by incremental cost per minute saved.
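Here's a hedged sketch of that drill-down using the org-level Actions cache usage endpoint; the org name and token are placeholders, and the field names follow the REST docs at the time of writing:

```yaml
      # Step inside a reporting workflow
      - name: Rank repos by active cache size
        env:
          GH_TOKEN: ${{ secrets.ORG_READ_TOKEN }}   # hypothetical token with org admin read access
        run: |
          # Paginated list of per-repo cache usage; largest consumers first
          gh api --paginate "/orgs/YOUR_ORG/actions/cache/usage-by-repository" \
            --jq '.repository_cache_usages[] | [.full_name, .active_caches_size_in_bytes] | @tsv' \
            | sort -t$'\t' -k2 -nr | head -25
```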
What about the old workflow usage APIs?
They’ve been phased out in favor of the enhanced billing platform. Plan your data pipeline around the consolidated usage endpoint and monthly reports, not the retired per‑product endpoints.
Let’s get practical: a template policy set
Use this as a starting policy set for most engineering orgs, then tune per portfolio:
- Enterprise max cache size per repo: 15 GB.
- Enterprise max retention: 14 days (90 for private repos with infrequent builds and heavy toolchains).
- Org default cache size: 10 GB (free); raise only via pull request reviewed by a platform owner.
- Org default retention: 7 days; bump to 14 if hit rate is consistently above 80% and builds are weekly.
- Budgets: set a modest monthly cache budget per org; alert at 50% and 80%.
- Keys: require at least one hash of lockfiles or manifest per build system (e.g., hashFiles('**/pnpm-lock.yaml')).
Document exceptions. If iOS builds need 40 GB and 30‑day retention to avoid 20‑minute toolchain installs, approve it with a clear SLA and a monthly review.
Playbook: speed up builds without paying more
These tactics routinely save teams 20–50% on Actions spend while making pipelines noticeably faster:
- Use matrix fan‑out with targeted caches: split work by package or platform so each job hits a small, hot cache (see the sketch after this list).
- Promote stable base images: pre‑bake compilers and SDKs to reduce cache churn.
- Pin dependency managers: consistent versions produce consistent cache keys.
- Trim artifact uploads: store only what humans or downstream systems actually fetch.
- Review monthly: deprecate caches with zero hits in the last 14–30 days.
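To make the first tactic concrete, here's a minimal sketch of a matrix where each package restores only its own small, hot cache; the package names and repo layout are placeholders:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        package: [web, mobile, api]   # placeholder package names
    steps:
      - uses: actions/checkout@v4
      # Each matrix job restores only its own cache entry, keeping entries small and hot
      - uses: actions/cache@v4
        with:
          path: apps/${{ matrix.package }}/node_modules
          key: deps-${{ matrix.package }}-${{ runner.os }}-${{ hashFiles(format('apps/{0}/package-lock.json', matrix.package)) }}
      - run: npm ci
        working-directory: apps/${{ matrix.package }}
      - run: npm test
        working-directory: apps/${{ matrix.package }}
```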
When to consider self‑hosted or larger runners
If you’re routinely bumping cache beyond 10 GB and still hitting long build phases, the problem may be compute, not storage. Larger runners, ARM64 options, or self‑hosted autoscaling can cut wall‑clock time more cost‑effectively. Run a one‑week A/B: keep cache at 10–15 GB, then compare total cost of minutes on larger runners versus storage overages plus standard runners. Your winner is the lower cost per successful minute saved.
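One low-effort way to run that A/B is a temporary matrix over runner labels, assuming you've already provisioned a larger runner; the ubuntu-8-core label below is a placeholder for whatever label you gave it:

```yaml
jobs:
  build:
    strategy:
      matrix:
        runner: [ubuntu-latest, ubuntu-8-core]   # second entry is your larger runner's label (placeholder)
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh   # your slowest phase; compare wall-clock time and billed minutes per variant
```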
We’ve helped teams do similar math while tuning platform spend on hosting and edge services. If you’re re‑evaluating CI and hosting at the same time, our analysis of Vercel Pro price changes and unit costs is a good companion read.
Policy gotchas and edge cases
Two common traps:
- Retention inflation: extending from 7 to 30 days feels harmless, but it silently grows storage. If your build cadence is daily, you’ll rarely benefit from the 23 extra days.
- Monorepo megacache: one cache path at the repo root is easy—but guarantees churn. Use per‑workspace cache paths and keys. If you must share, separate “tooling” and “deps” caches.
Also, beware cache thrashing: if eviction runs frequently and your keys are broad, the system will create and delete caches constantly. Shrink the cache path, tighten keys, and reduce retention until thrashing stops.
Data timeline you can share with finance
Give stakeholders a crisp timeline:
- February–April 2025: GitHub consolidated billing onto the enhanced platform and retired product‑specific billing endpoints. Usage is summarized via the consolidated endpoint; detailed workflow usage is available in monthly reports.
- September 2025: cache eviction checks increased in frequency to reduce bloat, with communications urging teams to right‑size cache usage.
- November 17, 2025: billing date standardized to the first of the month for self‑serve enterprise credit‑card accounts; applies to Actions, Codespaces, Packages, Advanced Security, Copilot overages, and shared storage.
- November 20, 2025: cache can exceed 10 GB per repo with pay‑as‑you‑go pricing; new cache size and retention policies plus budgets introduced.
What to do next (developers)
- Run the 60‑minute audit above.
- Add cache size and retention to your codeowners/platform review checklist.
- Instrument hit rate: log cache hits/misses per job to a dashboard (a sketch follows this list).
- Pilot a larger runner for your slowest workflow and compare cost per minute saved.
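For the hit-rate item above, actions/cache exposes a cache-hit output you can log per job; a minimal sketch, with the metrics destination left to you:

```yaml
      - name: Restore dependency cache
        id: deps-cache
        uses: actions/cache@v4
        with:
          path: node_modules
          key: deps-${{ runner.os }}-${{ hashFiles('package-lock.json') }}

      - name: Record cache hit or miss
        env:
          HIT: ${{ steps.deps-cache.outputs.cache-hit }}
        run: |
          # cache-hit is "true" only on an exact key match; otherwise empty or "false"
          echo "cache_hit=${HIT:-false} job=${{ github.job }} sha=${{ github.sha }}"
          # Swap the echo for a POST to your metrics endpoint if you track this centrally
```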
What to do next (business owners)
- Move accruals and alerts to the first of the month.
- Turn on budgets and notifications for cache overages and larger runners.
- Ask for a monthly pipeline cost review that includes wall‑clock time and developer wait time—not just cloud dollars.
Tools and deeper dives
If you’re rolling out Copilot agents or platform changes alongside CI improvements, our GitHub Agent HQ 90‑day adoption playbook outlines how to stage rollouts without spiking spend. For broader platform cost control, compare your CI savings with edge and container hosting options—our guides on Cloudflare container pricing to cut CPU costs and Cloudflare + Replicate for ML inference workflows show how to rebalance spend across the stack.
And if you want a deeper step‑by‑step for the earlier billing platform changes around Actions, see our prior breakdown: GitHub Actions Billing Changes: Your Playbook.
A short, opinionated take
The billing date change is unambiguously positive. The expanded cache is a power tool: great in expert hands, risky in a rush. Most teams should start by improving cache keys and retention before paying for more storage. If your hit rate is already high and builds are still slow, move up to larger or self‑hosted runners and keep cache modest—compute, not storage, probably gates you.
Need a second pair of eyes?
We’ve tuned CI for teams shipping web, mobile, and ML workloads. If you’d like a focused review of your Actions usage, budget setup, and cache policies, reach out to our team. Or browse what we do and recent wins in our services overview and portfolio.
