AWS Interconnect moved into preview this week, and it finally gives teams a first-party, private way to connect AWS workloads to other clouds—starting with Google Cloud. If AWS Interconnect works the way early briefings suggest, the days of cobbling together DIY VPN meshes, partner circuits, and brittle route policies to span providers might be numbered.
Here’s the thing: this is not just another peering option. Interconnect is designed to stitch Amazon VPCs to other-cloud networks with dedicated capacity, built-in resiliency, and console- or API-driven setup measured in minutes, not months. Azure is planned to follow next year, and the preview is active in a limited set of AWS Regions right now. For teams on the hook for uptime over the holidays, the implications are immediate.
What exactly shipped—and what didn’t
Interconnect is a managed, private interconnect that links AWS networking services—think Transit Gateway (TGW), Cloud WAN, and VPCs—to networks in other cloud providers. The initial launch partner is Google Cloud, with preview available in five AWS Regions. Azure connectivity is on the roadmap for 2026. The model is API-first; cloud providers can implement against an open spec, and customers turn up cross-cloud links in their consoles.
What it is not: a magic “single VPC across clouds.” Subnets don’t stretch, security groups don’t sync, and identity remains per-cloud. You still design for segmentation, explicit routing, and shared controls—only now the underlay becomes simpler and far more repeatable.
Why this matters right now
Two realities collide here. First, the incident cadence in 2024–2025 reminded everyone that provider-level outages still happen. Second, AI workloads and data gravity pulled many orgs into real multicloud, not just slideware. Interconnect lets you build dual-home service tiers—API ingress on Google Cloud, state on AWS, or vice versa—and fail traffic over without tromboning through the public internet.
In practice, that means lower operational burden (one managed path instead of five brittle ones), predictable latency for east–west traffic, and a cleaner security posture because you can keep production off the public internet entirely. That’s a big deal for regulated workloads and internal platform teams tired of managing tunnels that silently degrade.
How AWS Interconnect fits with existing building blocks
You don’t throw out your current network. You compose Interconnect alongside it:
• Transit Gateway remains your aggregation point for VPCs. Interconnect becomes another attachment target.
• Cloud WAN continues to orchestrate global segments and routing intents; Interconnect is a spoke into a non-AWS segment.
• Direct Connect still matters for on‑prem. Interconnect addresses cloud‑to‑cloud only. Many enterprises will run both.
On the other side, you’ll map to native constructs such as Google Cloud VPCs, Cloud Router, and firewalls. Expect a control-plane handshake (policies, routes, health) and a data-plane path with dedicated bandwidth and HA baked in. You’ll still manage route advertisements, CIDR hygiene, and enforcement policies.
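To make the AWS side concrete, here’s a minimal boto3 sketch of attaching a VPC to the TGW that would also carry the cross-cloud link. The Interconnect attachment itself is omitted because the preview API isn’t public; all IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Attach a VPC to the Transit Gateway that aggregates your VPCs and
# (per the preview) would also hold the Interconnect attachment.
# IDs are placeholders; the cross-cloud attachment is created in the
# console while the preview API remains private.
resp = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
    TagSpecifications=[{
        "ResourceType": "transit-gateway-attachment",
        "Tags": [{"Key": "segment", "Value": "cross-cloud"}],
    }],
)
print(resp["TransitGatewayVpcAttachment"]["TransitGatewayAttachmentId"])
```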
The catch: limits and gotchas to plan for
Preview means constraints. Region coverage is limited. Features may change. And the devil is in the network details:
• Overlapping CIDRs: Years of organic growth mean 10.0.0.0/8 is probably a mess in your estate. Interconnect won’t fix overlaps; plan NAT or renumbering for conflicting prefixes (a detection sketch follows this list).
• MTU and MSS: Don’t assume jumbo frames end‑to‑end. Standardize MTU across both clouds or set conservative MSS clamping at choke points to avoid intermittent gRPC pain.
• Route scale limits: TGW, Cloud Router, and your firewalls each have ceilings. Summarize aggressively and segment traffic to avoid route churn.
• Egress math: Private doesn’t mean free. You’ll pay data movement on both clouds plus any port-hour or processing fees for the interconnect and attachments. Model flows, not just averages—90th/99th percentiles matter.
• Identity and policy drift: Security groups and GCP firewall rules won’t mirror each other. Keep an intent layer in code (policy-as-code) and compile it to each cloud’s primitives.
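The overlap check from the first bullet is easy to automate with nothing but the standard library. A minimal sketch, where the prefix lists stand in for routes exported from your TGW route tables and Google Cloud VPCs:

```python
from ipaddress import ip_network
from itertools import product

# Stand-in prefix lists; in practice, export these from your TGW route
# tables and Google Cloud VPC routes.
aws_prefixes = [ip_network(p) for p in ["10.0.0.0/16", "10.42.0.0/20"]]
gcp_prefixes = [ip_network(p) for p in ["10.0.8.0/21", "10.200.0.0/16"]]

# Flag every AWS/GCP prefix pair that overlaps.
conflicts = [
    (a, g) for a, g in product(aws_prefixes, gcp_prefixes) if a.overlaps(g)
]
for a, g in conflicts:
    print(f"CONFLICT: AWS {a} overlaps GCP {g} -> plan NAT or renumber")
```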
Architecture patterns you can ship this quarter
Here are three patterns we’ve implemented or reviewed with platform teams that map cleanly onto Interconnect.
1) Active/standby API tier with private failover
Run your primary API ingress in AWS behind a regional ALB/NLB and keep a warm standby in Google Cloud. Health checks cut over records in private DNS zones and flip the egress path internally via route policies; no public DNS thrash. State layers (databases, caches) remain in AWS, with change data capture streaming to a read-optimized replica in Google Cloud. Use Interconnect for east–west replication and control plane signals.
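Here is a sketch of the private failover records, assuming a Route 53 private hosted zone and placeholder IDs. Note that Route 53 health checks can’t probe private IPs directly, so pair the PRIMARY record with a metric-based (CloudWatch alarm) health check.

```python
import boto3

route53 = boto3.client("route53")

def upsert_failover(role, set_id, target, health_check_id=None):
    # Primary/standby failover records in a private hosted zone.
    # Zone ID, health check ID, and endpoints are placeholders.
    record = {
        "Name": "api.internal.example.com",
        "Type": "CNAME",
        "TTL": 30,
        "SetIdentifier": set_id,
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "ResourceRecords": [{"Value": target}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    route53.change_resource_record_sets(
        HostedZoneId="Z0PLACEHOLDER",
        ChangeBatch={"Changes": [
            {"Action": "UPSERT", "ResourceRecordSet": record}
        ]},
    )

upsert_failover("PRIMARY", "aws-primary", "alb.internal.aws.example.com",
                health_check_id="placeholder-health-check-id")
upsert_failover("SECONDARY", "gcp-standby", "ilb.internal.gcp.example.com")
```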
2) Split-brain inference with shared feature store
Keep training pipelines and the feature store in Google Cloud; run real-time inference in AWS alongside app servers. Interconnect carries private traffic for feature reads and model rollout control. This avoids dragging petabyte datasets across the public internet while letting app teams deploy in their home cloud.
3) Centralized egress and inspection
Force all cross-cloud egress through a shared inspection segment in AWS (Cloud WAN + firewall appliances), then traverse Interconnect into Google Cloud. This consolidates logging, DLP, and threat prevention without pushing that stack into every project VPC. Use segmentation to isolate tenants and keep blast radius small.
Hands-on: a 10-step rollout plan for AWS Interconnect
Run this in a sandbox first with a trivial two-tier app; then expand.
1) Inventory prefixes and flows: Export current TGW/Cloud WAN routes and Google Cloud VPC routes. Identify overlaps and high-volume flows by service and port. (A route-export sketch follows this list.)
2) Define segments: Choose two or three traffic classes (e.g., platform control, app east–west, data replication). Don’t start with dozens.
3) Set MTU policy: Document end-to-end MTU/MSS and enforce it at choke points. Pick conservative defaults until you can measure.
4) Prepare identity and secrets: Ensure IAM boundaries per segment. Store Interconnect credentials/keys in your standard secrets manager on both clouds.
5) Build a minimal path: Create one Interconnect link between a staging VPC (AWS) and a non-prod VPC (Google Cloud). Advertise only a /24 on each side.
6) Wire to TGW/Cloud WAN: Attach your staging VPC to TGW or Cloud WAN and propagate summarized routes. Verify route tables and blackhole behavior for non-advertised subnets.
7) Enforce guardrails: Apply baseline security groups and GCP firewall rules that match your intent layer. Deny by default; permit only the test app flows.
8) Test failure: Kill one side’s link and observe reroute behavior and health alarms. Measure actual recovery time against your objectives.
9) Expand prefixes and services: Add the next set of routes and one stateful flow (e.g., CDC replication), then re-run failure drills.
10) Document and automate: Capture the console steps as IaC. Produce a design doc and an operational runbook with rollback steps.
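Steps 1 and 6 are scriptable. A sketch using boto3 and the standard ipaddress module, assuming IPv4-only prefixes and placeholder IDs:

```python
import boto3
from ipaddress import ip_network, collapse_addresses

ec2 = boto3.client("ec2", region_name="us-east-1")
rtb = "tgw-rtb-0123456789abcdef0"  # placeholder route table ID

# Export routes from a TGW route table (step 1) and flag blackholes (step 6).
routes = ec2.search_transit_gateway_routes(
    TransitGatewayRouteTableId=rtb,
    Filters=[{"Name": "type", "Values": ["static", "propagated"]}],
)["Routes"]

active = [r["DestinationCidrBlock"] for r in routes if r["State"] == "active"]
blackholed = [r["DestinationCidrBlock"] for r in routes if r["State"] == "blackhole"]
print("blackholed:", blackholed)

# Candidate summaries for advertisement: collapse adjacent prefixes
# so you stay well under route-scale ceilings.
summarized = list(collapse_addresses(ip_network(c) for c in active))
print("advertise:", summarized)
```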
People also ask: key questions, crisp answers
Does AWS Interconnect replace Direct Connect or Google’s Partner Interconnect?
No. Direct Connect remains for on‑prem to AWS; Google’s Partner Interconnect remains for on‑prem to GCP. AWS Interconnect focuses on cloud‑to‑cloud. Many enterprises will run all three.
How is this different from VPC peering, Private Service Connect, or VPC Lattice?
VPC peering is intra‑provider. Private Service Connect and Lattice help expose services privately within a provider. Interconnect gives you a first‑party, managed underlay between providers; you can still layer L7 service exposure on top.
Which regions support the preview?
A small set of AWS Regions support it today while in preview. Check your AWS console for the current list; region availability will expand over time. Plan for locality and don’t assume global coverage yet.
How much does AWS Interconnect cost?
There isn’t a single flat fee to cite. Expect charges for link capacity/ports, control-plane or attachment resources, and cross‑cloud data transfer on both providers. Model realistic traffic, include bursts, and test for a week before committing to capacity.
Security baseline for day one
Security work doesn’t vanish just because the link is managed. Bring your own guardrails:
• Encryption and mutual auth on the data path, even when the provider encrypts links; defense in depth means one failing layer doesn’t expose traffic.
• Explicit route filters and tags by segment to avoid accidental broad advertisements.
• Centralized logging: flow logs on both sides (a sketch follows this list), plus packet captures during incident drills.
• Least-privilege IAM: separate roles for provisioning interconnects vs. altering route policies.
• Data boundaries: classify what may traverse Interconnect. Some regulated data sets shouldn’t move, even privately.
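On the AWS side, the flow-log item reduces to one call. The VPC, bucket, and region below are placeholders; the Google Cloud side is enabled per subnet through its own API.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Enable VPC flow logs to S3 for the VPCs carrying cross-cloud traffic.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],  # placeholder VPC
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::my-flow-logs-bucket/interconnect/",
    MaxAggregationInterval=60,  # 1-minute aggregation for tighter forensics
)
```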
Cost modeling without guesswork
Avoid hand‑waving. Build a 30/60/90-day traffic projection per segment. For each, capture:
• Average and P95 throughput in Mbps
• Packet size distribution (helps with MTU/MSS validation)
• Directionality (AWS→GCP, GCP→AWS)
• Burst patterns tied to batch windows or autoscaling events
Then hand finance not-to-exceed estimates built on conservative egress assumptions for both clouds. Lock budgets in your policy engine. If you’re moving mission-critical flows, run a canary migration for one service and compare bill deltas before a full cutover.
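A back-of-envelope model is enough to anchor those estimates. The per-GB rates and duty cycle below are assumptions; substitute current pricing from both providers.

```python
# Back-of-envelope monthly cost per segment. Rates are placeholders.
HOURS_PER_MONTH = 730
SECONDS_PER_MONTH = HOURS_PER_MONTH * 3600

def monthly_gb(p95_mbps, duty_cycle=0.6):
    # duty_cycle discounts the P95 rate to an effective sustained rate.
    return p95_mbps * duty_cycle * SECONDS_PER_MONTH / 8 / 1000  # Mb -> GB

def monthly_cost(p95_mbps, aws_egress_per_gb, gcp_egress_per_gb,
                 port_hourly=0.0, duty_cycle=0.6):
    gb = monthly_gb(p95_mbps, duty_cycle)
    transfer = gb * (aws_egress_per_gb + gcp_egress_per_gb)
    ports = port_hourly * HOURS_PER_MONTH
    return round(transfer + ports, 2)

# Example: 400 Mbps at P95, assumed $0.02/GB each side, $0.50/hr port fee.
print(monthly_cost(400, 0.02, 0.02, port_hourly=0.50))
```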
Governance: who owns it after launch?
Define one team as the Interconnect owner—the same way you assign ownership for Direct Connect. They maintain the IaC modules, route SLOs, and on-call rotation for cross-cloud incidents. Treat the other cloud provider as a peer network with its own change windows; build a shared calendar so deploys on one side don’t strand the other.
What about Azure?
Azure support is planned for next year. If you’re dual-homing into Microsoft already, design today with an abstraction: Interconnect modules with provider-specific adapters. That way you won’t refactor your route policies when Azure goes live—just add another attachment and segment.
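One way to build that abstraction: declare flow intents once and compile them per provider. The dict shapes below approximate each cloud’s rule format and are illustrative, not exact API payloads.

```python
from dataclasses import dataclass

@dataclass
class FlowIntent:
    name: str
    source_cidr: str
    dest_cidr: str
    port: int
    protocol: str = "tcp"

def to_aws_sg_rule(i: FlowIntent) -> dict:
    # Shape mirrors an EC2 security group IpPermissions entry.
    return {"IpProtocol": i.protocol, "FromPort": i.port, "ToPort": i.port,
            "IpRanges": [{"CidrIp": i.source_cidr, "Description": i.name}]}

def to_gcp_firewall_rule(i: FlowIntent) -> dict:
    # Shape approximates a Compute Engine firewall rule body.
    return {"name": i.name, "direction": "INGRESS",
            "sourceRanges": [i.source_cidr],
            "allowed": [{"IPProtocol": i.protocol, "ports": [str(i.port)]}]}

intent = FlowIntent("cdc-replication", "10.42.0.0/20", "10.200.8.0/21", 5432)
print(to_aws_sg_rule(intent))
print(to_gcp_firewall_rule(intent))
```

Adding Azure later means one more adapter function, not a rewrite of the intent layer.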
How this plays with the AI stack (and why you care)
Teams adopting custom model development on AWS (for example, with Nova Forge) often keep data prep or feature engineering in another cloud. Interconnect can make that topology viable by keeping east–west AI traffic private and predictable. If you’re piloting organization-specific models, tightening the network path between data prep and inference reduces latency variance and eases compliance reviews. Our walkthrough on building your own models is a good companion read: Build-your-own models on Nova Forge.
Let’s get practical: a readiness checklist
Work through this list before turning up production flows:
• CIDR audit complete; conflicts documented and mitigated via NAT or renumbering.
• MTU/MSS policy set and tested across a synthetic workload (e.g., gRPC + files); a probe sketch follows this list.
• Route summarization verified; route scale within limits on TGW/Cloud Router/firewalls.
• Security baseline enforced with deny-by-default and per-segment allow rules.
• Observability wired: flow logs, health checks, packet capture playbook, SLOs.
• Budget guardrails implemented in your policy engine; alerts on anomalies at P95.
• Runbook drafted with rollback steps and owners per action.
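For the MTU/MSS item, a crude probe often catches a mismatched path before your apps do. This sketch assumes Linux iputils ping and a placeholder peer IP.

```python
import subprocess

def largest_unfragmented_payload(host, low=1200, high=8972):
    # Binary-search the largest ICMP payload that crosses the path with
    # "don't fragment" set. Payload excludes the 28-byte IP+ICMP header,
    # so a 1472-byte payload corresponds to a 1500-byte MTU.
    best = None
    while low <= high:
        mid = (low + high) // 2
        ok = subprocess.run(
            ["ping", "-c", "1", "-W", "2", "-M", "do", "-s", str(mid), host],
            capture_output=True,
        ).returncode == 0
        if ok:
            best, low = mid, mid + 1
        else:
            high = mid - 1
    return best

payload = largest_unfragmented_payload("10.200.8.10")  # placeholder peer IP
print(f"path MTU ~= {payload + 28 if payload else 'unknown'}")
```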
Common anti-patterns to avoid
• Stretching a subnet across clouds (not supported; leads to undefined behavior).
• Hairpinning through on‑prem for cloud‑to‑cloud flows—Interconnect exists to eliminate this.
• Advertising 10.0.0.0/8 everywhere. Summarize tightly to retain control and avoid accidental lateral movement.
• Ignoring DNS. Private zones, split-horizon, and deterministic failover policies are as important as BGP.
Where to go deeper
If you want a deployment walkthrough with Google Cloud specifics, we published a playbook focused on TGW, Cloud Router, and route policy patterns: AWS Interconnect with Google Cloud: The Playbook. For a broader overview and decision tree (Direct Connect vs. Interconnect vs. VPN), start with our multicloud deployment playbook. And if your team is also digesting yesterday’s GitHub Copilot policy shifts, pair your network rollout with budget guardrails using our Dec 2 Copilot guide.
What to do next (this week)
• Stand up a sandbox Interconnect link between staging VPCs and push a trivial service across it.
• Run two failure drills: link down and route withdrawal. Record recovery timings.
• Model egress and set not‑to‑exceed budgets. Alert on variance over 20% week‑over‑week (a sketch follows this list).
• Document MTU/MSS, summarize routes, and lock guardrails in IaC.
• Socialize the runbook with SRE, security, and finance. Assign a single owner.
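The variance alert from the list above fits in a few lines; the weekly totals are stand-ins for your billing export or flow-log rollups.

```python
# Flag >20% week-over-week change in cross-cloud transfer (GB).
def wow_variance(prev_gb: float, curr_gb: float) -> float:
    return (curr_gb - prev_gb) / prev_gb

prev, curr = 70_500.0, 91_200.0  # stand-in weekly totals
change = wow_variance(prev, curr)
if abs(change) > 0.20:
    print(f"ALERT: transfer changed {change:+.0%} week-over-week")
```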
Zooming out
Interconnect doesn’t remove the need for thoughtful network architecture; it gives you a cleaner substrate to build on. If you do the unglamorous work—CIDR hygiene, segmentation, observability, budget control—you’ll finally get the resilience and flexibility multicloud promised without the duct tape.
If you want help evaluating your design or need a hands-on pilot in the next sprint, our team has shipped these patterns in production. Reach us via ByBowu contacts and we’ll bring a concrete plan for your environment.
