
AWS Regional NAT Gateway: Simpler Egress, Lower Cost

AWS now lets you run one Regional NAT Gateway per VPC that automatically expands across Availability Zones. That single change can cut your NAT bill and simplify routing—especially for multi-AZ EKS clusters, batch fleets, and private subnets—without giving up high availability. Here’s what changed, who should switch first, how to migrate safely, and the gotchas to watch (like the 60‑minute AZ expansion window). Use the step-by-step cutover checklist to test, roll out, and verify.
Published: Nov 30, 2025 · Category: Web development · Read time: 11 min

AWS Regional NAT Gateway is here, and it meaningfully shrinks both the complexity and cost of outbound egress for private subnets. Instead of deploying one NAT per Availability Zone (and babysitting public subnets, route tables, and Elastic IPs), you can run a single Regional NAT Gateway that automatically expands across AZs based on workload presence. For most teams, that’s fewer moving pieces, fewer ways to misconfigure routing, and a cleaner path to high availability.

Illustration of a VPC using one Regional NAT Gateway across AZs

What actually changed—and when

On November 19, 2025, AWS introduced a new availability mode for NAT Gateway: Regional. In Regional mode, one NAT Gateway ID serves the entire VPC and scales to the AZs where you run workloads. You don’t need a public subnet to host it, and you can keep the same route entry across subnets. It’s available in all commercial regions except GovCloud and China at launch.

Two practical knobs matter on day one. First, capacity and limits: Regional NAT supports up to 32 Elastic IPs per Availability Zone (zonal NAT supports 8), with per‑AZ throughput that scales linearly and can reach very high aggregate bandwidth for large footprints. Second, expansion timing: when you place ENIs in a new AZ, Regional NAT typically adds capacity in 15–20 minutes, but it can take up to 60 minutes. During that window, traffic may be served via another AZ.

Who should switch to AWS Regional NAT Gateway now?

Three profiles benefit immediately:

• EKS or ECS clusters spread across 2–3 AZs. You can point node groups to a single NAT ID, simplify per‑AZ route tables, and reduce per‑AZ NAT headcount.

• Batch/analytics fleets that scale into additional AZs. Regional NAT follows your footprints automatically, so you aren’t racing to provision per‑AZ NATs and patch routes mid‑scale.

• SaaS platforms with allowlisted egress IPs. Managing a single NAT resource with a predictable set of EIPs (including BYOIP) is operationally cleaner than juggling multiple NATs and IP pools.

If you rely on private connectivity (for example, NAT used strictly with Transit Gateway or to on‑prem over private links), keep your zonal, private NAT design for now. Regional NAT launches with public connectivity; private mode remains a zonal pattern.

Will Regional NAT cut my NAT Gateway pricing?

In most multi‑AZ footprints, yes—because you’ll run fewer NAT Gateways per VPC hour. The fundamental pricing model doesn’t change: you still pay an hourly NAT charge and a per‑GB data processing fee. But instead of paying the hourly line item for 2–3 (or more) zonal NATs, you’re paying for one Regional NAT. That’s the first, immediate saving that finance will notice.

There’s more nuance:

• Cross‑AZ data transfer: When workloads and NAT capacity sit in the same AZ, you avoid cross‑AZ data transfer between instances and the NAT. Regional NAT is designed to keep egress local, but if you burst into a new AZ and the gateway hasn’t yet expanded, some flows may temporarily traverse another AZ. That can create cross‑AZ data transfer charges until expansion completes.

• Network Firewall bundle effect: If you run AWS Network Firewall in-line, note the long‑standing waiver where NAT hourly and data processing charges can be credited when the NAT sits next to the firewall in the same path. Regional NAT doesn’t remove that benefit, but your architecture still needs to chain the services correctly for the waiver to apply.

• S3/DynamoDB endpoints: As before, routing VPC endpoints for S3 and DynamoDB around NAT traffic reduces per‑GB processing fees. Regional NAT doesn’t change this best practice.

Numbers that matter for architects

Here are the operational details we’re using in customer plans this week:

• Launch date: November 19, 2025; commercial regions except GovCloud and China.

• IP scale: up to 32 Elastic IPs per AZ for Regional NAT (versus 8 on zonal NAT). Each EIP adds roughly 55,000 concurrent connections to a unique destination tuple.

• Throughput targets: per‑AZ bandwidth scales automatically with footprint, with headroom to support very high aggregate throughput across AZs.

• Expansion window: average 15–20 minutes, maximum 60 minutes to add a new AZ after an ENI shows up.

• Count per VPC: up to five Regional NAT Gateways per VPC. Practically, we use more than one when different tenants or partner networks require distinct allowlisted egress IP sets.

• Routing: AWS creates a dedicated route table for the Regional NAT; you reference the NAT ID from your private subnet route tables.
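Those IP-scale figures translate into a quick capacity calculation. The sketch below multiplies the per-EIP connection figure quoted above (~55,000 concurrent connections to a unique destination tuple) by the EIP ceiling for each mode; treat these as planning-level estimates, not hard guarantees.

```python
# Back-of-envelope connection headroom per AZ, using the figures quoted
# above. These are illustrative planning numbers, not AWS guarantees.
CONNECTIONS_PER_EIP = 55_000  # approx. concurrent connections per EIP

def az_connection_headroom(eips: int, regional: bool = True) -> int:
    """Approximate concurrent connections per AZ to one destination tuple."""
    cap = 32 if regional else 8  # Regional NAT allows 32 EIPs/AZ vs 8 zonal
    if eips > cap:
        raise ValueError(f"max {cap} EIPs per AZ for this mode")
    return eips * CONNECTIONS_PER_EIP

print(az_connection_headroom(32))        # Regional NAT, fully scaled: 1760000
print(az_connection_headroom(8, False))  # zonal NAT ceiling: 440000
```

At full EIP scale that is a 4× jump in per-AZ connection headroom over a zonal NAT, which matters most for fleets hammering a small set of destination IPs.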

Regional NAT vs. per‑AZ NAT: a quick cost model

Assume us‑east‑1 rates for illustrative math and a steady three‑AZ app:

Yesterday’s pattern: three zonal NATs. Hourly line items are 3× the NAT hourly price, plus per‑GB processing for your egress. You carried operational overhead too—public subnets, three NAT route entries, and per‑AZ EIP management.

Regional NAT: one hourly line item, same per‑GB processing, allowlisted EIPs managed in one place. If you move 5 TB/month and keep most traffic AZ‑local, the savings usually come from reducing the NAT hourly count and trimming cross‑AZ transfers that arose from misaligned NAT placement.

Reality check: This is not a free lunch. If your previous design had one NAT serving all AZs (creating cross‑AZ transfer you didn’t notice), you’ll see cost move around: less NAT hourly, but potentially more per‑GB data transfer if expansion windows or routing force cross‑AZ paths. Measure before and after.
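To make the "measure before and after" advice concrete, here is a minimal cost sketch for the three-AZ example. The hourly and per-GB rates are us-east-1 list prices at the time of writing (verify against current AWS pricing), and the traffic split is an assumption for illustration only.

```python
# Rough monthly cost comparison: three zonal NATs vs one Regional NAT.
# Rates are assumed us-east-1 list prices; check current AWS pricing.
NAT_HOURLY = 0.045      # $/NAT-hour
NAT_PER_GB = 0.045      # $/GB of NAT data processing
CROSS_AZ_PER_GB = 0.01  # $/GB cross-AZ transfer, charged each direction
HOURS = 730             # average hours per month

def monthly_nat_cost(nat_count: int, egress_gb: float,
                     cross_az_gb: float = 0.0) -> float:
    hourly = nat_count * NAT_HOURLY * HOURS
    processing = egress_gb * NAT_PER_GB
    cross_az = cross_az_gb * CROSS_AZ_PER_GB * 2  # in + out legs
    return round(hourly + processing + cross_az, 2)

# 5 TB/month of egress, traffic mostly AZ-local
zonal = monthly_nat_cost(3, 5_000)         # three zonal NATs: 323.55
regional = monthly_nat_cost(1, 5_000, 50)  # one Regional NAT, small
                                           # expansion-window spill: 258.85
print(zonal, regional, round(zonal - regional, 2))
```

The saving here is almost entirely the two retired hourly line items; note how even 50 GB of cross-AZ spill barely moves the needle at these rates, while the per-GB processing fee dominates both columns.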

The Regional NAT cutover playbook (safe and fast)

Use this as a checklist you can hand to an engineer. We’ve run it with EKS, ECS, and EC2 fleets.

1) Inventory and baseline

Map all route tables that point to existing NATs. Tag the subnets that should use Regional NAT. Export the last 30 days of NAT Gateway metrics (bytes processed, active connections) and VPC flow logs for cross‑AZ traffic.
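The route-table mapping can be scripted. This sketch works on the dict shape that EC2 DescribeRouteTables returns (e.g. the "RouteTables" list from boto3's `ec2.describe_route_tables()`); the sample IDs are hypothetical.

```python
# Step 1 sketch: list every route that currently targets a NAT gateway,
# so you know exactly which tables to flip later. Input is the
# "RouteTables" list from an EC2 DescribeRouteTables response.
def nat_route_inventory(route_tables: list[dict]) -> list[tuple[str, str, str]]:
    """Return (route_table_id, destination_cidr, nat_gateway_id) triples."""
    hits = []
    for rt in route_tables:
        for route in rt.get("Routes", []):
            nat_id = route.get("NatGatewayId")
            if nat_id:
                hits.append((rt["RouteTableId"],
                             route.get("DestinationCidrBlock", ""),
                             nat_id))
    return hits

sample = [{  # hypothetical IDs for illustration
    "RouteTableId": "rtb-aaa111",
    "Routes": [
        {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
        {"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": "nat-0abc"},
    ],
}]
print(nat_route_inventory(sample))  # [('rtb-aaa111', '0.0.0.0/0', 'nat-0abc')]
```

Dump the result to your tracking sheet alongside the subnet tags so the later cutover steps have an authoritative list to work from.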

2) Decide EIP strategy

Pick AWS‑provided EIPs or BYOIP. If partners or downstream APIs allowlist your egress, pre‑allocate the exact set of EIPs you’ll associate with the Regional NAT (you can scale per AZ up to 32).

3) Create the Regional NAT Gateway

Choose Regional availability. In most cases, enable automatic mode so AWS handles EIP association and AZ expansion. Confirm the dedicated route table AWS creates for the gateway.

4) Stage routes in a sandbox AZ

Pick one non‑critical AZ. Update its private subnet route tables to target the new Regional NAT ID. Roll a small canary (one EKS node group or a small ASG) and verify outbound traffic, DNS resolution, and service health.

5) Observe expansion behavior

Scale a test workload into an AZ that has no current endpoints. Watch CloudWatch and VPC flow logs for the 15–20 minute expansion. Note any temporary cross‑AZ pathing. This is dress rehearsal for your production cutover.

6) Update remaining route tables

Flip the rest of the private subnets to the Regional NAT ID. If you used per‑AZ NAT IDs in multiple route tables, bake a script to reduce toil and to ensure idempotency.
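The idempotency requirement is the important part of that script: re-running it must be a no-op for tables already pointing at the new NAT. A minimal sketch of the core logic, operating on local dicts (a real run would issue `ec2.replace_route(...)` per change via boto3):

```python
# Idempotent flip of a default route to the new Regional NAT ID.
# Mutates the local dict; in production each True result would drive
# an ec2.replace_route(...) call, and re-runs skip already-flipped tables.
def flip_default_route(route_table: dict, new_nat_id: str) -> bool:
    """Point 0.0.0.0/0 at new_nat_id. Returns True if a change was made."""
    for route in route_table.get("Routes", []):
        if route.get("DestinationCidrBlock") == "0.0.0.0/0" \
                and "NatGatewayId" in route:
            if route["NatGatewayId"] == new_nat_id:
                return False  # already flipped: no-op keeps the script safe to re-run
            route["NatGatewayId"] = new_nat_id
            return True
    return False  # no NAT-backed default route in this table

rt = {"RouteTableId": "rtb-aaa111",  # hypothetical IDs
      "Routes": [{"DestinationCidrBlock": "0.0.0.0/0",
                  "NatGatewayId": "nat-old"}]}
print(flip_default_route(rt, "nat-new"))  # True  (route changed)
print(flip_default_route(rt, "nat-new"))  # False (second run is a no-op)
```

Logging the boolean per table also gives you an audit trail of exactly which routes moved during the cutover window.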

7) Trim the old NATs

Drain and delete the zonal NATs one AZ at a time. Confirm EIPs attached to retired NATs are released (unless you want to reuse them elsewhere). Remove now‑unused public subnets created solely for NAT hosting.

8) Re‑measure cost and transfer

After 3–5 days, compare NAT hourly, NAT per‑GB processing, and EC2 cross‑AZ data transfer versus your baseline. Validate that S3/DynamoDB endpoints still bypass NAT.

9) Document the new steady state

Update your runbooks and diagrams. Add an operational note about the 60‑minute worst‑case expansion window so on‑call engineers aren’t surprised during sudden multi‑AZ scale‑outs.

Before vs. after: zonal NATs compared to one Regional NAT

Patterns we like with Regional NAT

• EKS egress the simple way: a single NAT ID for all node group subnets makes cluster growth boring. Use the extra EIP headroom to assign stable allowlists per environment (dev, staging, prod) without multiplying NAT gateways.

• SaaS tenant separation: two Regional NATs in one VPC—one per tenant cohort—give you clean IP identity boundaries for third‑party allowlists while keeping route tables manageable.

• BYOIP for partner compliance: if partners require organization‑owned ranges, bring your own IPs and attach them to the Regional NAT so outbound traffic has a predictable, audit‑friendly signature.

• Firewall chaining with chargeback: chain Regional NAT with AWS Network Firewall via Transit Gateway attachments and use flexible cost allocation to push inspection costs to the right accounts while keeping the NAT waiver benefit.

Questions teams are asking

Does Regional NAT increase cross‑AZ transfer?

Not in steady state: it’s built to keep egress local per AZ. During the expansion window into a new AZ, some flows may traverse another AZ. Plan scale‑outs and watch for temporary cross‑AZ traffic in your metrics.

How many EIPs can I attach?

Up to 32 per AZ with Regional NAT. That’s 4× the headroom of zonal NAT and raises your concurrent connection capacity per destination significantly.

Is private connectivity supported?

Not in Regional mode at launch. Use zonal Private NAT Gateways for private‑only egress to Transit Gateway or on‑prem routes.

Can I run more than one Regional NAT per VPC?

Yes—up to five. We use multiple when different apps require distinct allowlisted egress IP sets or when we want explicit blast‑radius boundaries.

What about S3 and DynamoDB?

Keep VPC gateway endpoints in place. They still bypass NAT and save the per‑GB NAT processing charge.

Risks, limits, and edge cases

• Expansion delay: the 15–20 minute typical (up to 60 minutes) AZ expansion means sudden scale into a fresh AZ can briefly route through another AZ. If you’re hypersensitive to cross‑AZ data transfer, pre‑warm capacity by placing small, always‑on ENIs in each AZ.

• Deterministic failure isolation: zonal NAT makes it obvious which AZ handles which traffic. Regional NAT abstracts that. For strict isolation requirements, keep zonal NAT or deploy multiple Regional NATs dedicated to subsets of subnets.

• Return routes to middleboxes: Regional NAT gets its own route table with a pre‑wired internet gateway route. If you run middleboxes (firewalls, proxies), ensure return paths are set correctly in that NAT route table.

• Quotas: watch EIP and NAT Gateway quotas (organization and account level). Regional NAT’s higher per‑AZ EIP ceiling is great, but quotas still apply.

Zooming out: where this fits in your 2026 plan

This launch joins a broader trend: simplifying network primitives while giving you better cost levers. If you also handle web delivery and bot mitigation, weigh AWS’s new flat‑rate CloudFront and security plans for predictable spend. We covered who should switch and when in our take on CloudFront flat‑rate pricing.

If you’re evaluating broader platform upgrades before year‑end freezes, stack this change alongside runtime bumps and dependency remediation. Our AWS Lambda Node.js 24 upgrade playbook shows how we time engine upgrades with infra changes to reduce test cycles.

Security remains table stakes. If supply‑chain hardening is on your Q1 list, pair Regional NAT cutovers with a review of egress control and private connectivity. Our step‑by‑step npm supply chain attack playbook walks through isolation, lockfiles, and registry policies that benefit from cleaner egress paths.

What to do next (this week)

• Run a 30‑minute discovery: list NATs per VPC, per AZ; tally hourly charges and last‑month GB processed.

• Stage a pilot in a non‑critical VPC using the cutover playbook. Measure expansion timing and cross‑AZ effects.

• Decide your EIP policy (AWS or BYOIP) and codify it in Terraform/CloudFormation modules.

• Add CloudWatch alarms for NAT expansion anomalies and cross‑AZ spikes during scale events.

• Book a 60‑minute architecture review: Regional NAT plus endpoints, Network Firewall chaining, and cross‑AZ budget caps. If you want outside help, our cloud architecture & cost reviews are designed for exactly this kind of change.
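For the alarm item in the checklist above, a starting point is alarming on NAT byte volume during scale events. This sketch builds the keyword arguments you would pass to boto3's `cloudwatch.put_metric_alarm()`; the alarm name and threshold are assumptions to tune for your own traffic baseline.

```python
# Sketch: alarm parameters for a spike in Regional NAT egress bytes.
# Namespace/metric/dimension names follow the AWS/NATGateway metric set;
# the alarm name and threshold below are assumptions for illustration.
def nat_bytes_alarm(nat_gateway_id: str, threshold_bytes: float) -> dict:
    return {
        "AlarmName": f"regional-nat-bytes-{nat_gateway_id}",
        "Namespace": "AWS/NATGateway",
        "MetricName": "BytesOutToDestination",
        "Dimensions": [{"Name": "NatGatewayId", "Value": nat_gateway_id}],
        "Statistic": "Sum",
        "Period": 300,              # 5-minute buckets
        "EvaluationPeriods": 3,     # sustained spike, not a single blip
        "Threshold": threshold_bytes,
        "ComparisonOperator": "GreaterThanThreshold",
    }

# Hypothetical gateway ID; fires above ~50 GB per 5-minute period, sustained
alarm = nat_bytes_alarm("nat-0abc", 50e9)
print(alarm["AlarmName"])
```

Pair it with a second alarm on your EC2 cross-AZ transfer cost metrics so an expansion-window detour during a scale-out shows up the same day, not on the bill.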

Bottom line

Regional NAT turns a fussy, per‑AZ chore into a one‑resource primitive that scales with you. If you run multi‑AZ workloads with internet egress, you likely save money and time by switching—provided you validate the expansion window and keep private NAT in zonal mode where needed. Make the move deliberately, measure it, and pocket the simplicity dividend.

Want a second set of eyes on your plan? See how we design and ship cloud changes in our what we do page, or browse similar breakdowns on the Bybowu blog.

Engineer monitoring VPC NAT and network metrics on screens
Written by Viktoria Sulzhyk · BYBOWU
