AWS Regional NAT Gateway is here, and it changes how we design internet egress in VPCs. Instead of deploying a NAT Gateway per Availability Zone (and wiring a web of public subnets, routes, and IPs), you can create a single Regional NAT Gateway (RNAT) that automatically expands across the zones where your workloads live. If you’ve ever tripped on cross‑AZ routing or paid for three idle NATs just to follow best practices, this is the release you’ve been waiting for.
AWS Regional NAT Gateway: What just changed
On November 19, 2025, AWS added a regional availability mode for NAT Gateway. You create one RNAT at the VPC level, and it automatically enables itself in the AZs where you have resources. You don’t need a public subnet per AZ just to host NAT anymore. For internet egress, the RNAT carries its own AWS‑managed route table that includes a default route to your VPC’s Internet Gateway (IGW). You still need an IGW in the VPC for public internet egress, but you no longer scatter public subnets everywhere to run the gateways.
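Before planning a cutover, it’s worth checking that IGW prerequisite programmatically. Here’s a minimal pre‑flight sketch in Python with boto3; the VPC ID is a placeholder, and the call itself is the standard EC2 API rather than anything RNAT‑specific:

```python
import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0123456789abcdef0"  # placeholder: your target VPC

# RNAT still routes public egress through an Internet Gateway,
# so confirm one is attached before you create the gateway.
igws = ec2.describe_internet_gateways(
    Filters=[{"Name": "attachment.vpc-id", "Values": [VPC_ID]}]
)["InternetGateways"]

print("IGW attached:", bool(igws), [g["InternetGatewayId"] for g in igws])
```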
The engineering details matter for capacity planning and safety:
- Scaling and bandwidth: baseline 5 Gbps per AZ, automatically scaling up to 100 Gbps.
- Port exhaustion protection: RNAT can add Elastic IPs automatically; each EIP supports up to 55,000 concurrent connections to a single destination, and the gateway can scale to as many as 32 IPs per AZ (worked out below).
- IP governance: integrates with VPC IPAM so you can automatically pull addresses from approved pools, including BYOIP.
- Expansion timing: auto‑expansion to a new AZ typically completes in 15–20 minutes and can take up to 60. Until it’s active in that AZ, traffic may be routed to another AZ.
- Limits and scope: up to five RNATs per VPC. Not supported in AWS GovCloud or China Regions at launch.
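Those per‑IP and per‑AZ ceilings multiply out to substantial headroom. A quick sanity check, using only the numbers quoted above:

```python
# Per-AZ connection headroom to a single destination, from the quoted figures.
CONNECTIONS_PER_EIP = 55_000  # concurrent connections per Elastic IP to one destination
MAX_EIPS_PER_AZ = 32          # automatic scaling ceiling per AZ

headroom = CONNECTIONS_PER_EIP * MAX_EIPS_PER_AZ
print(f"{headroom:,} concurrent connections per destination per AZ")  # 1,760,000
```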
Why this matters: real spend and reliability wins
Teams have long paid the “reliability tax” of one NAT per AZ. In common three‑AZ VPCs, that meant triple hourly NAT fees plus the operational burden of keeping subnets and route tables consistent. With a single RNAT, you can often cut those hourly charges by two‑thirds while shrinking the blast radius of misconfigurations. Data processing charges per GB don’t disappear, but eliminating duplicate hourly gateways can still move the needle—especially in idle‑by‑design VPCs (data science, back‑office apps, or event‑driven services where traffic spikes are rare).
There’s also a subtle security benefit. Because RNAT doesn’t require a public subnet in every AZ just to host the gateway, you reduce the chance someone deploys a sensitive workload into a routable subnet by accident. Fewer public subnets in the account means fewer places for drift to do damage.
Does this replace zonal NAT across the board?
No—and that’s by design. Regional NAT Gateway is built for internet egress simplicity. At launch, the private connectivity type (zonal NAT used for private egress through Transit Gateway/VPN without an IGW) isn’t supported on RNAT. If your pattern relies on private‑only NAT for hybrid routing, stay on zonal NAT in those VPCs until AWS adds parity.
How the Regional NAT Gateway works behind the scenes
The RNAT sits at the VPC level, not inside a specific subnet. It has its own AWS‑managed route table with a default route to the IGW. Your private subnets point their 0.0.0.0/0 default route to the RNAT ID (not an ENI). If you run inspection, you route from app subnets to AWS Network Firewall or a Gateway Load Balancer endpoint first; those endpoints forward to the RNAT, which sends traffic on to the IGW. Replies return along the same path. Because RNAT is VPC‑scoped, you can apply the same route table across subnets and avoid per‑AZ drift.
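The route flip itself is a one‑line API call per route table. A sketch with boto3 (IDs are placeholders, and we’re assuming the RNAT is referenced through the same NatGatewayId route target as a zonal NAT, per the ID‑based routing described above):

```python
import boto3

ec2 = boto3.client("ec2")

PRIVATE_RT_ID = "rtb-0123456789abcdef0"  # placeholder: a private subnet's route table
RNAT_ID = "nat-0123456789abcdef0"        # placeholder; assumes RNAT uses the NAT gateway ID format

# Swap the existing default route's target over to the RNAT.
ec2.replace_route(
    RouteTableId=PRIVATE_RT_ID,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=RNAT_ID,
)
```

Because replace_route swaps the target in place, the subnet never sits without a default route during the change.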
On scaling, RNAT monitors per‑destination connection counts and adds IPs proactively; it scales up aggressively as you approach thresholds and scales down conservatively. With IPAM integration, you can keep egress identity stable and compliant when partners require allowlisting. If you want exact control, there’s a manual mode where you specify which AZs and which Elastic IPs are in play.
Cost thinking: where savings are real—and where they’re not
Here’s the thing: RNAT doesn’t change NAT data processing pricing. If most of your NAT bill is per‑GB traffic, you won’t see miracles. The savings show up when hourly gateway charges dominate—or when you’ve been paying cross‑AZ data transfer because a workload accidentally routed to a NAT in the wrong zone.
A practical back‑of‑the‑envelope for a three‑AZ VPC in an inexpensive region: replacing three always‑on zonal NAT Gateways with a single RNAT can cut two‑thirds of your NAT hourly fees immediately. If your NAT data processing is low to moderate, the overall NAT line item will visibly shrink. If it’s high (bulk downloads, package mirrors, telemetry), reach for a different lever: move service traffic to VPC endpoints (S3/DynamoDB) to avoid NAT per‑GB charges altogether, then evaluate RNAT for the remaining internet egress.
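To make that concrete, here’s the arithmetic as a tiny script. The hourly rate is an assumption (typical zonal NAT pricing in inexpensive regions; check your own rate card), and we assume a single RNAT bills one hourly charge; data processing is deliberately excluded since it doesn’t change:

```python
# Hourly-fee comparison for a three-AZ VPC. Rates are assumptions; check your region.
HOURLY_RATE = 0.045     # USD/hour, a typical zonal NAT Gateway rate
HOURS_PER_MONTH = 730

zonal_monthly = 3 * HOURLY_RATE * HOURS_PER_MONTH  # three always-on zonal gateways
rnat_monthly = 1 * HOURLY_RATE * HOURS_PER_MONTH   # one RNAT (assumed same hourly rate)

print(f"zonal: ${zonal_monthly:.2f}/mo  rnat: ${rnat_monthly:.2f}/mo  "
      f"saved: ${zonal_monthly - rnat_monthly:.2f}/mo")
# zonal: $98.55/mo  rnat: $32.85/mo  saved: $65.70/mo
```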
Migration playbook: a safe 60–90 minute change window
Here’s a step‑by‑step we’ve used in production VPCs without user‑visible impact.
- Inventory egress paths. List private subnets that send traffic to the internet, and note any inspection layers (Network Firewall or third‑party via GWLB). Confirm there’s an Internet Gateway attached to the VPC.
- Baseline metrics. Capture current NAT hourly count, NAT bytes processed, and cross‑AZ data transfer for the last 30 days. You’ll use this to verify the business case.
- Create the RNAT. Use automatic mode to start; add a name clearly indicating “regional.” In VPCs with allowlists, consider manual mode so you can pin EIPs per AZ from day one.
- Wire inspection, if used. In the RNAT route table, insert routes that send return traffic through your firewall endpoints. Confirm policies allow expected egress domains.
- Flip one subnet. Change a single private subnet’s default route (0.0.0.0/0) to the RNAT ID. Run canaries: DNS lookups, package updates, outbound web calls, webhook posts (a minimal canary script follows this list). Watch CloudWatch metrics—connection counts by AZ, error spikes, and NAT bytes processed.
- Expand to all target subnets. Roll the route change across the rest of your private subnets. If you use Terraform/CloudFormation, ensure route dependencies are correct; replace the default route in place rather than deleting and recreating it, so no subnet briefly loses its default route mid‑apply.
- Soak and verify. Let traffic run 30–45 minutes. Confirm that RNAT has expanded to each AZ where you have ENIs. If you see cross‑AZ data transfer during expansion, it should normalize once RNAT is active in that zone.
- Decommission old NATs. Remove routes pointing to zonal NATs, then delete the gateways and any public subnets that existed solely to host them. Update runbooks and diagrams.
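For the canary step above, a bare‑bones script is enough to catch a broken egress path fast. This sketch uses only the standard library; the hostnames are illustrative, so swap in endpoints your workloads actually reach:

```python
import socket
import urllib.request

def dns_canary(host="example.com"):
    # Resolves via the VPC resolver; failures here point at DNS, not NAT.
    return socket.gethostbyname(host)

def http_canary(url="https://example.com"):
    # A real outbound connection through the new default route.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.status

if __name__ == "__main__":
    print("DNS:", dns_canary())
    print("HTTP:", http_canary())
```

Run it from an instance in the flipped subnet before and after the route change, and keep it handy for the soak step.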
Bonus: AWS Compute Optimizer now surfaces recommendations for unused NAT Gateways. After migration, run it to catch stragglers you forgot to delete.
People also ask: common questions we’re hearing
Will Regional NAT Gateway reduce our NAT data processing charges?
No. RNAT simplifies architecture and reduces hourly gateway count; per‑GB NAT processing charges are unchanged. To reduce per‑GB costs, push eligible traffic to VPC gateway endpoints (for S3, DynamoDB) and keep NAT for true internet destinations.
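Creating those gateway endpoints is a standard EC2 API call today. A sketch with boto3; the VPC and route table IDs are placeholders, and the service names assume us-east-1:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VPC_ID = "vpc-0123456789abcdef0"            # placeholder
PRIVATE_RT_IDS = ["rtb-0123456789abcdef0"]  # placeholders: private subnets' route tables

# Gateway endpoints route S3/DynamoDB traffic off the NAT path entirely,
# removing per-GB NAT processing for that slice of traffic.
for service in ("s3", "dynamodb"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId=VPC_ID,
        ServiceName=f"com.amazonaws.us-east-1.{service}",
        RouteTableIds=PRIVATE_RT_IDS,
    )
```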
Do we still need an Internet Gateway?
Yes—for public internet egress. RNAT’s AWS‑managed route table includes a default route to the VPC’s IGW. If your pattern is private‑only egress (Transit Gateway/VPN, no IGW), RNAT doesn’t support the private connectivity type at launch; stay with zonal NAT.
Can RNAT break inspection or DLP tooling?
It shouldn’t. You can insert AWS Network Firewall or GWLB in front of RNAT by sending app subnet routes to the inspection endpoints first, then onward to RNAT. Verify sequence and return paths in the RNAT route table so replies re‑enter inspection as intended.
What about cross‑AZ data transfer surprises?
Two cases to watch: the window (up to an hour) after workloads appear in a new AZ, before auto‑expansion completes there; and any AZ you haven’t enabled when running RNAT in manual mode. In those windows, traffic may exit through another AZ and incur cross‑AZ charges. Plan your rollout during low traffic and let auto‑expansion complete.
How many RNATs should we run per VPC?
Most teams will use one. Create additional RNATs if you must expose different egress IP sets to distinct partners, or if compliance dictates strict separation by environment in the same VPC. There’s a quota of five RNATs per VPC.
Architecture patterns worth bookmarking
Simplest egress (no inspection)
All private subnets share a route table with 0.0.0.0/0 to the RNAT ID. The RNAT’s route table uses the default route to the IGW. Use this for internal apps that only need package repos, web APIs, and SaaS calls with no inline inspection.
Inline inspection
App subnets send their default route to AWS Network Firewall or GWLB endpoints; the inspection subnets’ route tables then point to the RNAT, and the RNAT forwards to the IGW. Be explicit about return routes back through inspection (a route sketch follows below). If you’re making a broader edge decision too, see our take on CloudFront Flat‑Rate Pricing—teams often evaluate both egress and CDN together.
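A route sketch for that chain, again with placeholder IDs. The GWLB endpoint target (VpcEndpointId) is a standard route target; the RNAT target assumes the NAT gateway ID format discussed earlier:

```python
import boto3

ec2 = boto3.client("ec2")

APP_RT_ID = "rtb-0aaaaaaaaaaaaaaa0"      # placeholder: app subnets' route table
GWLBE_ID = "vpce-0bbbbbbbbbbbbbbb0"      # placeholder: Gateway Load Balancer endpoint
INSPECT_RT_ID = "rtb-0ccccccccccccccc0"  # placeholder: inspection subnets' route table
RNAT_ID = "nat-0ddddddddddddddd0"        # placeholder; assumes NAT gateway ID format

# App subnets: default route goes to inspection first.
ec2.replace_route(RouteTableId=APP_RT_ID,
                  DestinationCidrBlock="0.0.0.0/0",
                  VpcEndpointId=GWLBE_ID)

# Inspection subnets: onward default route to the RNAT.
ec2.replace_route(RouteTableId=INSPECT_RT_ID,
                  DestinationCidrBlock="0.0.0.0/0",
                  NatGatewayId=RNAT_ID)
```

Return‑path routes through inspection live in the RNAT’s route table, so verify them there rather than in the subnet tables.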
Endpoint‑first design
Use VPC gateway endpoints for S3 and DynamoDB to bypass NAT data processing charges for service‑to‑service traffic. Then keep RNAT for the smaller slice of traffic that truly needs the public internet. If you’re optimizing container compute alongside this, our read on Cloudflare Containers pricing shows how network and CPU savings stack when done together.
Practical guardrails and gotchas
- Document the egress IP story. If partners allowlist specific IPs, pin RNAT to those EIPs via manual mode and IPAM. Communicate the change and schedule DNS/ACL updates.
- Watch for dangling public subnets. After migration, delete public subnets you created solely to host zonal NATs. It removes a misdeployment vector.
- Throttle “quick wins.” Don’t rip out zonal NAT in VPCs that use private connectivity mode for TGW/VPN. RNAT doesn’t cover that yet.
- Verify AZ mapping. AZ letters map to different physical zones in different accounts. Use AZ IDs, not letters, when you reason about expansion and cost (a quick lookup follows this list).
- Measure cross‑AZ data during rollout. Plan a quiet window. Expansion to a new AZ can take up to an hour; keep an eye on costs and retry logic.
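For the AZ mapping check in the list above, one API call prints the name‑to‑ID mapping for your account:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Zone names like "us-east-1a" vary per account; zone IDs like "use1-az1" do not.
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], "->", az["ZoneId"])
```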
The 30‑minute RNAT readiness checklist
Use this at your next ops review:
- We have an IGW attached to the VPC for public egress.
- All private subnets and route tables are inventoried; we know which ones need internet egress.
- Inspection flow is mapped (NFW/GWLB) and return routes are defined in the RNAT route table.
- IP strategy decided: automatic with Amazon‑provided IPs or manual with specific EIPs from IPAM/BYOIP.
- Canaries are in place (HTTP GET to known domains, package update, outbound webhook smoke).
- CloudWatch dashboards created for RNAT connection counts, bytes processed, and per‑AZ health (a starter metric query follows this checklist).
- Rollback is documented: switch default routes back to zonal NAT IDs if needed.
- Post‑cutover cleanup plan includes deleting zonal NATs and any now‑unused public subnets.
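For the dashboard item, here’s a starter query. It assumes RNAT publishes to the same AWS/NATGateway namespace and metric names as zonal NAT (verify against the RNAT documentation before wiring alerts); the gateway ID is a placeholder:

```python
from datetime import datetime, timedelta, timezone

import boto3

cw = boto3.client("cloudwatch")
RNAT_ID = "nat-0123456789abcdef0"  # placeholder

# Assumption: RNAT reuses zonal NAT metrics such as ActiveConnectionCount.
resp = cw.get_metric_statistics(
    Namespace="AWS/NATGateway",
    MetricName="ActiveConnectionCount",
    Dimensions=[{"Name": "NatGatewayId", "Value": RNAT_ID}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Maximum"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Maximum"])
```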
Data you can cite internally
When socializing the change with finance or security, lead with facts: launch date (Nov 19, 2025), one RNAT per VPC instead of one per AZ, auto‑scaling to 100 Gbps, per‑destination connection protection via automatic EIP scaling, IPAM integration, and a max of five RNATs per VPC. Call out that private connectivity type isn’t supported at launch and that expansion to a new AZ typically completes in 15–20 minutes, up to 60.
What to do next
- Pick one non‑critical VPC and run the migration playbook this week. Track hourly NAT savings and any cross‑AZ leakage during expansion.
- Map service traffic to VPC endpoints. Then use RNAT for what truly needs the public internet. If you’re rethinking your broader edge strategy while you’re at it, our CloudFront pricing decision guide can help frame the tradeoffs.
- Turn on Compute Optimizer recommendations for NAT Gateways to flag idle gateways you can delete post‑migration.
- Update your platform blueprints. New VPCs should default to RNAT for internet egress unless you have a private‑only requirement.
- Share results with leadership. A simple before/after graph of NAT hourly count and a one‑pager on reduced configuration complexity will help you unlock more time for roadmap work.
Zooming out, this is one of those infrastructural tweaks that compounds: fewer NATs to patch, fewer subnets to explain, and fewer places for drift. If your team owns the platform layer, it’s the kind of change that buys back time every sprint. When you’re ready to pair this with an edge and build pipeline review, our previous analyses—like the six under‑the‑radar launches we spotlighted in our AWS weekly changes roundup—show how small platform decisions add up to real money and happier on‑call rotations. And if you want help pressure‑testing your network design or modeling the savings, reach out via our contact page—we’ll bring diagrams and a cost diff that makes sense to finance.
Want a second set of eyes?
If you’re weighing RNAT alongside other platform changes—like CDN pricing models, container scheduling, or CI/CD usage caps—our team does this work every week for product companies and high‑growth teams. Start with an hour: we’ll scope the RNAT cutover, endpoint shifts, and a quick audit of cross‑AZ transfer patterns. See how we think on our services page and browse recent client outcomes in the portfolio.