Node.js 20 EOL: Your 90‑Day Migration Playbook
Node.js 20 reaches end of life on April 30, 2026. That’s the date community security fixes stop. Cloud platforms are keying off the same timeline, with runtime blocks following in quick succession (for example, AWS Lambda begins blocking creation of new Node 20 functions in early June and blocking updates in early July). If you’re still on 20.x today, you have a short, decisive window to migrate without breaking customer flows or developer velocity.

What Node.js 20 EOL means for your stack
When a Node line reaches end of life, community security updates and bug fixes stop. That alone should push any production team to upgrade. But there’s more: many cloud runtimes and SDKs tie their support and deprecation schedules directly to Node’s official calendar. After EOL, you’ll see tooling and platform cracks—package updates that skip your version, CI images that quietly move on, and serverless runtimes that block deploys.
Two supported landing zones are in play now: Node 22 LTS and Node 24 LTS. Node 22 offers a stable, lower‑friction jump from 20 with excellent ecosystem coverage. Node 24 buys you a longer runway for long‑lived services, with security support stretching years beyond 2026. Which you choose depends on your release tolerance, native module exposure, and infrastructure baselines (containers, build images, and OS libraries).
Should you move to Node 22 or Node 24?
Here’s the quick decision tree I use with engineering leads:
- If you need the fewest surprises and a straightforward lift from 20.x, go Node 22. Your dependency graph will likely have broad coverage, and CI/CD vendors’ build images are already standardized on it.
- If you want the longest runway and you can budget a bit more time for validation, go Node 24. It reduces how often you have to re‑platform in 2026–2028, which matters for regulated environments and slow‑cadence backends.
Either way, build once and test both: it’s common for large repos to pick 22 for some services and 24 for others. Just make that a deliberate choice, not an accident of version drift.
Node.js 20 EOL: dates that matter
Mark these on your wall—and in your release calendar:
- April 30, 2026: Node 20 community EOL. No more security fixes.
- Early June 2026: Common serverless providers begin blocking creation of new Node 20 functions.
- Early July 2026: Update blocks kick in for existing Node 20 functions on major clouds; you’ll still be able to upgrade to a supported runtime, but rolling back to 20 tends to be blocked.
Treat April 30 as a hard security line; the June/July milestones are operational cliffs you don’t want to discover mid‑incident.
The 90‑day migration plan (that actually ships)
This is the sequence we’ve used on large multi‑service estates without overtime and without weekend fire drills. Adapt the cadence to your org, but keep the order.
Days 1–7: Inventory, risk triage, and a freeze
Start with facts, not vibes:
- Inventory every service and job with runtime, base image, and deployment target. Tag anything on 20.x.
- Flag native dependencies (node‑gyp, prebuilds, or custom C/C++). These drive most surprises.
- Lock your deploy surface: no new Node major versions allowed to ship until they pass the new test matrix.
Decide your landing zone per service (22 or 24) and write it down in a simple one‑pager the team can rally around. If you need a security‑first checklist while you’re here, our 2026 software supply chain playbook pairs well with this migration.
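A short script can produce that inventory without spreadsheet archaeology. Here’s a minimal sketch in Node, assuming a directory of checked-out repos (the path argument, filename, and output format are illustrative):

```js
// audit-node-versions.mjs: rough inventory, flag packages pinned to Node 20.x.
// Usage: node audit-node-versions.mjs /path/to/checked-out-repos
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

const root = process.argv[2] ?? ".";

// fs.readdirSync with { recursive: true } needs Node 18.17 or newer.
const manifests = readdirSync(root, { recursive: true }).filter(
  (p) => p.endsWith("package.json") && !p.includes("node_modules")
);

for (const rel of manifests) {
  const pkg = JSON.parse(readFileSync(join(root, rel), "utf8"));
  const engines = pkg.engines?.node ?? "(no engines field)";
  // Rough heuristic: any standalone "20" in the range means "look at this one".
  const flagged = /(^|[^0-9.])20([^0-9]|$)/.test(engines);
  console.log(`${flagged ? "MIGRATE" : "ok     "} ${rel}: ${engines}`);
}
```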
Weeks 2–3: Upgrade toolchain and dependencies
Make your CI/CD tell the truth:
- Update your CI matrix to include Node 22 and 24. Keep 20 for comparison until you cut over.
- In each package.json, set the engines field to the target major and fail the build if it’s mismatched (see the snippets after this list).
- Upgrade package managers to current (npm 10+, Yarn modern, pnpm 9+) and regenerate lockfiles.
- Bump first‑order dependencies (frameworks, DB clients, auth libraries) and run their migration steps. Resist bulk‑upgrading everything in one PR; stage high‑risk areas first.
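For the engines step, here’s a minimal illustration: pin the target major in package.json, then make npm treat a mismatch as a hard failure (npm only warns by default; pnpm and Yarn have their own strictness settings, so check your manager’s docs):

```json
{
  "engines": {
    "node": ">=22 <23"
  }
}
```

```ini
# .npmrc: engine-strict turns the engines warning into a build-stopping error
engine-strict=true
```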
For monorepos, enable a batched build flow and surface cross‑package breakages quickly. If you don’t have one, now is the moment to adopt a straightforward changeset process.
Weeks 3–4: Breakages to test intentionally
Most teams spend time debugging symptoms that are predictable. Test these on day one of your branch:
- Native addons: Ensure modules have Node‑ABI‑compatible prebuilds for your target. If you compile, use consistent compilers across dev and CI (Docker helps). Consider migrating hot spots to N‑API to decouple from V8 churn.
- Crypto and TLS: OpenSSL policy changes surface as handshake failures, certificate chain issues, or broken mutual TLS in staging. Test outbound to banks, payment gateways, and internal services with older intermediates; the preflight sketch after this list probes exactly this.
- HTTP and fetch: If you rely on the global fetch and Web Streams, verify backpressure and timeouts under load. Match production proxy behavior during tests.
- ESM vs. CJS: Reconfirm package.json type, exports, and resolution for test runners and bundlers. Mixed ESM/CJS apps break in subtle ways only under real bundling.
- Test runner differences: If you use the built‑in Node test runner, re‑check coverage thresholds and watch mode; if you’re on Jest/Vitest, align their Node support tables before you blame the runtime.
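To make the crypto/TLS and fetch checks above repeatable, a small preflight script helps. A sketch, assuming placeholder endpoints (swap partner.example.com for one of your real integrations):

```js
// preflight.mjs: quick runtime triage on the target Node major.
import { connect } from "node:tls";

// 1) What exactly are we running? The bundled OpenSSL version explains many TLS surprises.
console.log("node:", process.versions.node, "| openssl:", process.versions.openssl);

// 2) Probe an external endpoint and report the negotiated protocol and chain status.
const socket = connect(
  { host: "partner.example.com", port: 443, servername: "partner.example.com" },
  () => {
    console.log(
      "TLS ok:",
      socket.getProtocol(),
      socket.authorized ? "chain verified" : `chain problem: ${socket.authorizationError}`
    );
    socket.end();
  }
);
socket.on("error", (err) => console.error("TLS failed:", err.message));

// 3) Exercise global fetch with an explicit timeout (AbortSignal.timeout: Node 17.3+).
try {
  const res = await fetch("https://partner.example.com/health", {
    signal: AbortSignal.timeout(5000),
  });
  console.log("fetch:", res.status);
} catch (err) {
  console.error("fetch failed:", err.name, err.message);
}
```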
Weeks 5–6: Platform alignment (containers, serverless, build images)
Upgrade the ground beneath your code:
- Containers: Move to base images matched to your Node target and OS family (for example, Amazon Linux 2023 or current distroless images). Rebuild any native layers.
- Serverless: Switch your function runtimes to Node 22 or 24 in dev/test accounts now. Validate IAM, layers, and extensions. Expect create/update blocks for Node 20 as summer approaches.
- Build images: Your CI runners may silently pin older Node. Make the version explicit (see the workflow sketch after this list) and cache node_modules reliably to keep build times sane.
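If you build on GitHub Actions, for instance, an explicit matrix keeps both candidate majors honest side by side (the job name and commands are assumptions; adapt to your pipeline):

```yaml
# .github/workflows/ci.yml: test explicitly on both target majors.
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [22, 24]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: npm # caches the npm cache directory keyed on your lockfile
      - run: npm ci
      - run: npm test
```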
Document every change in release notes that your on‑call engineers can actually read at 2 a.m. No novel required; bullet points plus links.
Weeks 7–8: Performance, memory, and reliability checks
Runtime upgrades are a fantastic time to pay off perf debt, but keep the bar practical:
- Profile one hot path per service before and after the upgrade. Track latency p50/p95 under realistic traffic.
- Check event loop utilization and GC pauses in production‑like loads; a sampler sketch follows this list. Tweak heap sizes only if you can demonstrate a regression.
- Verify logging and metrics. Some telemetry libraries change default context propagation or leak descriptors on version bumps.
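For the event loop check, Node’s built‑in perf_hooks can report utilization without external agents. A minimal sampler sketch (the 60‑second window is an arbitrary choice):

```js
// elu-sampler.mjs: log event loop utilization and loop delay per window.
import { performance, monitorEventLoopDelay } from "node:perf_hooks";

const delay = monitorEventLoopDelay({ resolution: 20 });
delay.enable();

let last = performance.eventLoopUtilization();
setInterval(() => {
  // Diffing two readings yields utilization for just this window.
  const now = performance.eventLoopUtilization();
  const windowElu = performance.eventLoopUtilization(now, last);
  last = now;
  console.log(
    `ELU ${(windowElu.utilization * 100).toFixed(1)}% |`,
    `p99 loop delay ${(delay.percentile(99) / 1e6).toFixed(1)} ms` // histogram is in nanoseconds
  );
  delay.reset();
}, 60_000);
```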
If you find regressions you can’t fix quickly, keep the service on Node 22 and move on. Perfection is the enemy of hitting your EOL window.
Weeks 9–10: Canary, rollback, and the flip
Cut a release candidate and run it side‑by‑side with the Node 20 build:
- Canary 5–10% of traffic for 24–48 hours. Monitor error budgets, auth flows, and payment conversions—not just CPU graphs.
- Keep feature flags for any library behavior changes you can toggle without a redeploy.
- Plan a one‑way door: after the final cutover, archive Node 20 assets. Rolling back to an EOL runtime is how lingering security risk creeps back into production.
Cloud‑specific gotchas you’ll meet
AWS Lambda
Expect a three‑step deprecation: security updates stop around the Node 20 EOL date, creation of new Node 20 functions is blocked roughly a month later, and updates to existing functions are blocked about a month after that. The runtime remains selectable for a short tail via APIs, but you shouldn’t rely on that in production. If you package native modules, rebuild on Amazon Linux 2023 to avoid libc mismatches. Layers and extensions compiled on older images are frequent culprits; rebuild them while you’re here.
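The cutover itself is a single configuration change per function. A hedged example with the AWS CLI (the function name is a placeholder; nodejs22.x assumes the runtime identifier AWS publishes for Node 22):

```bash
# Point an existing function at the Node 22 runtime; repeat or script per function.
aws lambda update-function-configuration \
  --function-name my-service-handler \
  --runtime nodejs22.x
```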
Azure Functions
Language support timelines map to Node’s schedule. If you’re on durable functions or older extension bundles, check compatibility before bumping your runtime. Some customers tie Node upgrades to platform upgrades by policy—plan your change window with ops and security up front.
Vercel/Netlify and CI builders
Hosted builders regularly deprecate older Node images on predictable schedules. If your pipeline still assumes Node 18 or 20, you might already be on a “legacy” image that disappears the moment you trigger a cold build. Make Node 22 or 24 explicit in your project settings and remove hidden version managers from build scripts.
A simple, reusable checklist
Copy this into your team workspace and check it off service by service:
- Choose a target per service (22 or 24) and commit the engines field.
- Pin the Node version in CI/CD and local dev (.nvmrc/.tool-versions; see the example after this checklist).
- Regenerate lockfiles on the target Node and update top‑level deps.
- Rebuild native modules and layers for the target OS/base image.
- Run crypto/TLS integration tests against external partners.
- Canary release with rollback gates and observability dashboards.
- Cut over, archive Node 20 artifacts, and update runbooks.
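The pinning files most version managers and hosted builders read are one‑liners. For example (exact versions are placeholders; pin what you actually target):

```text
# .nvmrc (read by nvm and many hosted builders)
22

# .tool-versions (read by asdf and mise)
nodejs 22.14.0
```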
People also ask
Is it safe to stay on Node 20 after April 30, 2026?
Not in production. You’ll be running without community security fixes, and your cloud/runtime vendors will quickly start limiting what you can deploy. Use a feature freeze if you must, but treat the upgrade as urgent.
Can we skip straight to Node 24?
Yes—if your dependencies and platform images are ready. Many teams upgrade to 22 first for quick coverage and then hop to 24 later in the year. Both paths are valid; pick the one that lowers risk for your specific stack.
What breaks most often during Node major upgrades?
Native modules compiled against specific Node‑ABI versions, TLS and certificate chain nuances, and ESM/CJS packaging assumptions. These are solvable—test them intentionally rather than discovering them mid‑cutover.
Security and compliance: don’t let small gaps become big incidents
Auditors don’t love unsupported runtimes, and neither do attackers. If you maintain customer data, run payment flows, or report to a security framework, upgrading off an EOL runtime is table stakes. While you’re touching builds, harden your software supply chain: pin registries, verify signatures where available, and rotate secrets embedded in old build images. If you need a deeper checklist, see our Software Supply Chain Security: 2026 Playbook.
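If you’re on npm, two quick hardening moves fit naturally while the builds are open (both are standard npm features; scope them to your registry setup):

```bash
# Pin the registry explicitly so builds can't be redirected by ambient config.
npm config set registry https://registry.npmjs.org/

# Verify registry signatures and provenance attestations for installed packages.
npm audit signatures
```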
Practical tips that save hours
Want fast wins? Try these:
- Run your test suite under Node 22 and 24 locally before touching code. See what actually fails.
- Search for node-gyp, prebuild, bindings, and bindings.node across the repo to map native risk.
- Add a prestart script that logs process.versions and OpenSSL info at runtime for quick triage (example after this list).
- In containers, upgrade OS and Node together; don’t swap Node on a stale base image.
- For Lambda, de‑dupe layers and rebuild them with the same compiler flags as your functions.
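One way to wire that prestart tip, assuming npm scripts (the inline node -p is just a compact option, and server.js stands in for your real entry point):

```json
{
  "scripts": {
    "prestart": "node -p \"'node ' + process.versions.node + ' / openssl ' + process.versions.openssl\"",
    "start": "node server.js"
  }
}
```

npm runs prestart automatically before start, so the versions land in your logs on every boot.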
What to do next
- Pick Node 22 or 24 for each service today and open the first PR to pin engines and CI.
- Schedule a 60‑minute risk review for native modules, TLS endpoints, and cloud runtimes.
- Stage canary rollouts by business impact: start with low‑risk jobs, finish with payment flows.
- Update your incident runbooks to remove Node 20 rollback paths post‑cutover.
- If you need extra hands or a migration partner, review our services and reach out via Contacts.
Why this matters now
EOL dates aren’t theoretical. They reshape vendor support, runtime availability, and your team’s ability to ship. The work you do in the next 90 days keeps you out of a summer of firefighting, unplanned audits, and “why did deploy just fail?” Slack threads. Make the plan visible, keep the changes small but steady, and close the books on Node 20 cleanly.
Want more practical guides like this? Browse the rest of our blog or see what we do for engineering leaders shipping at scale.