It’s official: Ingress NGINX retirement is coming in March 2026. That means no more releases, bug fixes, or security patches. If your edge still runs on the community Ingress NGINX controller, you now have a fixed window to migrate. The right target for most teams is Gateway API, which hit a solid 1.4 release in October 2025 with backend TLS policies, route naming, and clearer conformance signals. Below, I’ll lay out a pragmatic 90‑day plan to de‑risk the move, call out the traps we’ve seen in client work, and show how to time this alongside Kubernetes 1.35 (planned for December 17, 2025).
Key dates, what changed, and why it matters
Let’s anchor on dates so planning doesn’t drift:
• October 6, 2025: Gateway API 1.4.0 GA shipped with three notable Standard-channel features: BackendTLSPolicy (TLS from gateway to backends), supportedFeatures in GatewayClass status (clearer capabilities), and named rules for Routes. It also introduced experimental Mesh, default gateways, and an externalAuth filter.
• November 11, 2025: The Kubernetes community announced the Ingress NGINX retirement. Best‑effort maintenance continues until March 2026; after that, repositories become read‑only.
• December 17, 2025 (planned): Kubernetes 1.35. Expect incremental improvements and a noteworthy enablement: image volumes likely on by default (this matters for packaging config, binaries, or ML models without bloating app images). If you’re sequencing upgrades, pencil this date into your change calendar.
• March 2026: Ingress NGINX maintenance halts. Staying put beyond this point becomes a risk decision, not a technical one.
Primary decision: Gateway API or another controller?
Most organizations should adopt Gateway API now for three reasons. First, vendors are converging on it, which reduces lock‑in relative to bespoke Ingress annotations. Second, features in 1.4.0 close long‑standing gaps in encryption between the gateway and your services. Third, conformance signals in the GatewayClass status make supportable behavior easier to reason about at scale.
Which implementation? If you’re all‑in on a service mesh (e.g., Istio), using that mesh’s Gateway controller keeps operations consistent. If you standardize on a cloud LB, the provider’s Gateway implementation simplifies day‑2. If you prefer an Envoy‑centric approach with batteries included, Envoy Gateway or Traefik (with Gateway support) are strong picks. For regulated workloads, double‑check conformance and supported features before committing.
Ingress NGINX retirement: what can break, what usually doesn’t
Nothing will suddenly stop on retirement day—existing deployments continue to run. The real risks are quiet ones: a CVE you can’t patch, a kernel change that needs a controller update, a dependency bump your pinned image can’t take. We’ve also seen clusters stuck on older Kubernetes because their Ingress controller couldn’t advance cleanly. With a fixed EOL date, technical debt turns into schedule risk. Don’t wait for a security advisory to force your hand.
Gateway API 1.4 features you should actually use
Gateway API is broad. In 1.4, a few features move the needle:
• BackendTLSPolicy (Standard): Encrypt traffic from the Gateway to your Services and validate backends via SANs or hostnames. This removes the “TLS to the edge only” anti‑pattern that leaks plaintext inside the cluster.
• Named rules for Routes (Standard): Human‑readable, auditable routing policies; see the HTTPRoute sketch after this list. Your SREs will thank you when triaging path conflicts.
• supportedFeatures in GatewayClass status (Standard): Visibility into what your controller actually supports—key for multi‑cluster fleets and regulated environments.
• externalAuth (Experimental): Handy for central auth decisions, but treat it as a feature flag until your controller marks it conformant.
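To make named rules concrete, here's a minimal HTTPRoute sketch. The gateway, hostnames, and Service names are placeholders; rule names just need to be unique within the route:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: shop-web
  namespace: shop
spec:
  parentRefs:
    - name: edge-gateway            # placeholder Gateway name
  hostnames:
    - shop.example.com
  rules:
    - name: checkout                # named rule: shows up in status and makes triage readable
      matches:
        - path:
            type: PathPrefix
            value: /checkout
      backendRefs:
        - name: checkout-svc
          port: 8443
    - name: catalog                 # everything else falls through to the catalog Service
      matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: catalog-svc
          port: 8080
```

Rule names are also what conflict and status reporting refer to, which is why the cutover checklist later in this post asks for unique names.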
The 90‑day migration blueprint
Here’s the blueprint we’ve used successfully with platform teams. It’s written for a single cluster; scale it out for fleets by running the phases in parallel.
Days 1–15: Inventory, controller choice, and a dry run
• Discover scope: enumerate Ingress resources, custom annotations, TLS termination points, and any CRDs your current controller added. Grab DNS zones and certificates as part of the inventory. If you’re unsure whether you run Ingress NGINX, query for pods with the canonical label (app.kubernetes.io/name=ingress-nginx).
• Pick your Gateway implementation: pick once, communicate broadly, and document supported features. Capture the decision in your architecture ADRs.
• Convert manifests: run a dry conversion with the community ingress2gateway tool. Expect to hand‑edit anything built on annotation magic.
• Stand up a non‑prod Gateway: deploy Gateway API CRDs and your chosen controller into a staging namespace. Don’t change prod yet.
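A minimal staging Gateway looks roughly like this; the class name assumes Envoy Gateway, and the hostname and certificate Secret are placeholders, so substitute whatever your chosen controller documents:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: staging-gateway
  namespace: gateway-staging
spec:
  gatewayClassName: envoy-gateway          # assumed class name; use your controller's GatewayClass
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: staging.example.com        # placeholder staging hostname
      tls:
        mode: Terminate
        certificateRefs:
          - name: staging-example-com-tls  # placeholder Secret holding the staging certificate
```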
Days 16–35: Map policies, adopt mTLS, build observability
• Map policy behavior one‑for‑one where possible: Ingress path rules become HTTPRoute rules; hostnames and TLS secrets map cleanly. Where annotations previously injected NGINX custom directives, use Gateway filters or drop them if there’s no functional need.
• Encrypt in‑cluster hops: attach a BackendTLSPolicy to services that require confidentiality (a sketch follows this list). Use SANs that match Service DNS or a known list of hostnames; store CAs in ConfigMaps or reference well‑known system CAs.
• Decide how you’ll do auth: if your current approach is external (OIDC, JWT), check whether your controller supports the new externalAuth filter (still experimental). If not, keep auth at the app or sidecar layer for now.
• Instrument the path: export Gateway metrics, Route status, and controller logs to your observability stack. Define SLOs for 4xx/5xx rates, handshake failures, and route attachment errors.
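Here's the BackendTLSPolicy sketch referenced above. It assumes a ConfigMap named internal-ca holding your CA bundle under a ca.crt key, and the apiVersion has moved between releases, so check what your installed CRDs actually serve before copying this:

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha3   # newer installs may serve a later version; check kubectl api-resources
kind: BackendTLSPolicy
metadata:
  name: checkout-backend-tls
  namespace: shop
spec:
  targetRefs:
    - group: ""                                  # core group, i.e. a Service
      kind: Service
      name: checkout-svc
  validation:
    hostname: checkout-svc.shop.svc.cluster.local   # name the gateway verifies on the backend certificate
    caCertificateRefs:
      - group: ""
        kind: ConfigMap
        name: internal-ca                        # assumed ConfigMap carrying the CA bundle (key: ca.crt)
```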
Days 36–60: Dual‑run and controlled cutover
• Spin up the new Gateway alongside the old Ingress: this is the safest pattern. Use a parallel hostname (for example, canary.example.com; see the route sketch after this list) or point a limited set of client traffic at the Gateway’s IP.
• Validate behaviors under load: TLS handshakes, header propagation, HSTS, redirects, gzip/brotli, and timeouts. Don’t skip WebSocket and HTTP/2 tests.
• Plan DNS cutover: lower TTLs a week ahead. For multi‑provider or geo setups, test weighted records to shift gradually. Keep both controllers running until logs show the old Ingress is idle.
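The parallel‑hostname pattern from the first bullet can be as small as one extra route on the new Gateway. Names here are placeholders, and the Gateway's listener has to allow routes from this namespace (allowedRoutes):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: shop-web-canary
  namespace: shop
spec:
  parentRefs:
    - name: edge-gateway-new          # the new Gateway, running alongside the old Ingress
      namespace: gateway-system
  hostnames:
    - canary.example.com              # parallel hostname; prod DNS keeps pointing at the old Ingress
  rules:
    - name: all-traffic
      backendRefs:
        - name: shop-web-svc          # same Service the old Ingress fronts today
          port: 8080
```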
Days 61–90: Decommission and tighten the screws
• Remove the old controller: disable pod autoscaling, then delete Helm releases or manifests. Archive dashboards and alert routes tied to Ingress NGINX.
• Lock policy: promote experimental features only when your vendor marks them conformant. Document Route naming conventions and certificate rotation runbooks.
• Post‑mortem the migration: what broke, what didn’t, and what process you’ll reuse for other clusters.
“People also ask” style questions
Can I keep using Ingress NGINX after March 2026?
Technically yes, but you’ll be running unmaintained software at your perimeter. That’s a tough story for risk review. Treat March as the finish line, not a suggestion.
Do I need to upgrade Kubernetes to use Gateway API 1.4?
You can adopt Gateway API 1.4 on Kubernetes 1.26 and later. If you’re planning a broader platform refresh, line up your migration with a supported minor (1.33/1.34 now, 1.35 after the December 17, 2025 release if it stays on schedule).
Which Gateway controller should I choose?
Pick the one that aligns with your stack: cloud L7 LB controllers if you sit on a single cloud and want managed operations, mesh‑native gateways if you already run a service mesh, or Envoy/Traefik when you want portable, OSS‑first routing with solid Gateway support.
Production checklists you can copy
Cutover readiness (go/no‑go)
• All HTTPRoutes have unique, named rules; conflicts resolved.
• BackendTLSPolicy attached where sensitive data flows; SANs validated.
• Access logs, metrics, and controller health wired into your dashboards.
• Synthetic checks for each hostname and path; WebSockets and HTTP/2 verified.
• DNS TTLs lowered; rollback plan documented and tested.
Security posture (day‑2)
• Certificates and CAs managed via your existing secret pipeline; rotation automated.
• externalAuth disabled unless your controller marks it conformant.
• Only required Gateway listeners opened; strict hostname matches enforced (see the listener sketch after this list).
• Default deny policies defined at the Route level where supported.
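A sketch of the listener posture the last two items describe: one explicit hostname per listener, and route attachment restricted to labelled namespaces. The class, hostname, and label are placeholders:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: edge-gateway
  namespace: gateway-system
spec:
  gatewayClassName: envoy-gateway          # assumed; substitute your controller's class
  listeners:
    - name: https-shop
      protocol: HTTPS
      port: 443
      hostname: shop.example.com           # strict hostname match; avoid wildcard listeners you don't need
      tls:
        mode: Terminate
        certificateRefs:
          - name: shop-example-com-tls
      allowedRoutes:
        namespaces:
          from: Selector                   # only namespaces explicitly labelled for edge exposure may attach routes
          selector:
            matchLabels:
              edge-access: "true"
```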
What Kubernetes 1.35 means for your timeline
As of December 5, 2025, Kubernetes 1.35 is planned for December 17. You don’t need 1.35 to migrate, but if you’re already budgeting downtime for a minor upgrade, consider bundling the networking change so you test once and stabilize once. Two practical notes for planners:
• Gateway API is decoupled from core; you can ship 1.4 today on supported minors. That’s often the fastest path to risk reduction.
• If you distribute data artifacts (binaries, config, or models) to pods during startup, watch the image volumes behavior in 1.35. It can simplify bootstrap logic and reduce custom init container scripts, which is one less moving part during migration week.
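If you want to experiment before 1.35, this is roughly what an image volume looks like today behind the ImageVolume feature gate; the registry references are placeholders and your container runtime has to support the feature:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: model-server
spec:
  containers:
    - name: server
      image: registry.example.com/model-server:1.2.0   # placeholder app image
      volumeMounts:
        - name: models
          mountPath: /models
          readOnly: true                                # image volumes mount read-only
  volumes:
    - name: models
      image:
        reference: registry.example.com/models:2025-12  # placeholder OCI artifact carrying the data files
        pullPolicy: IfNotPresent
```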
For a deeper planning cadence around patch support windows and version skews, see our Kubernetes 1.35 upgrade playbook.
Gotchas we see in the wild
• Annotation debt: A decade of NGINX annotations doesn’t map one‑to‑one to Gateway filters. Treat this as a chance to delete accidental complexity—not to port it forever.
• TLS assumptions: Teams often terminate TLS at the edge and send plaintext to backends. BackendTLSPolicy gives you an easy upgrade path. Use it.
• DNS hygiene: If you don’t own the zones or your provider lacks weighted records, plan more time for migration. Lower TTLs early and confirm with logs, not just dig.
• Observability blind spots: Controller logs and Route status conditions are your early‑warning system. If you’re not scraping them, you’re flying without instruments.
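As a rough illustration, an HTTPRoute's status is where attachment problems surface first. The controller name below assumes Envoy Gateway, and the exact reason values depend on your implementation:

```yaml
# Excerpt of an HTTPRoute status as written back by the controller
status:
  parents:
    - parentRef:
        name: edge-gateway
        namespace: gateway-system
      controllerName: gateway.envoyproxy.io/gatewayclass-controller  # assumed controller name
      conditions:
        - type: Accepted            # the Gateway accepted this route
          status: "True"
          reason: Accepted
        - type: ResolvedRefs        # a backendRef points at a Service that doesn't exist
          status: "False"
          reason: BackendNotFound   # alert on this before users notice
```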
A quick, real‑world migration pattern
Here’s the sequence we’ve used for high‑traffic properties that can’t afford surprises:
1) Deploy Gateway CRDs and your controller in a dedicated namespace. Lock RBAC to platform teams.
2) Convert Ingress to HTTPRoute with a tool, then hand‑review. Name rules explicitly.
3) Attach BackendTLSPolicy to sensitive Services; validate cert chains and SANs.
4) Expose a canary hostname that mirrors prod traffic. Run for a week at low volume.
5) Lower DNS TTLs; announce a 24‑hour change window.
6) Shift 10% traffic via weighted DNS; watch 4xx/5xx, handshake errors, and latency.
7) Move to 50%, then 100%. Keep the old Ingress up for a full TTL after 100% to catch stragglers.
8) Decommission and archive.
What to do next (developers and platform leads)
• Start the inventory this week. If you discover orphaned Ingress objects, delete them before they confuse your runbooks.
• Pick a Gateway implementation and publish a reference manifest developers can copy.
• Add BackendTLSPolicy to your “golden” service template. Make encryption between gateway and services the default, not an exception.
• Put DNS owners on the project plan now; slow DNS changes delay everything.
• Timebox your work. A 90‑day plan is healthy pressure—and still gives you room for careful testing.
Where we can help
If you want a second set of eyes on your migration plan or need additional hands for the cutover, our team has shipped these changes for startups and enterprise platforms alike. Explore how we structure engagements on what we do, browse selected outcomes in our portfolio, and see how we package work on our platform engineering services page. Ready to move? Get in touch.
FAQ for executives
Is this a rewrite or a replatform?
It’s largely a policy translation and a controller swap, not a full replatform. Apps shouldn’t change; manifests and operations do. The work is real, but contained.
What’s the business risk of not moving?
Security exposure and stalled velocity. Post‑March 2026, an urgent CVE would force either an emergency migration or a risky acceptance. Both are expensive in their own way.
How much downtime should we expect?
With dual‑run and weighted DNS, most teams cut over with zero to minimal user impact. The longest lead‑time items are DNS control and certificate logistics.
Parting guidance
Here’s the thing: networking is where small cracks widen. The Ingress NGINX retirement gives you the constraint you needed to clean up policy sprawl, encrypt internal hops, and land on an API that multiple vendors back. Move early, migrate deliberately, and write down what you learn—the next cluster will go twice as fast.
