AWS Interconnect is the first native, jointly engineered way to create private, high‑speed connectivity between your AWS VPCs and another cloud—starting with Google Cloud. Announced on November 30, 2025 and showcased at re:Invent the week of December 1–5, it takes multicloud links from weeks of cabling, LOAs, and ticket wrangling to a few clicks and an activation key. In preview, it launches with paired regions in N. Virginia, N. California, Oregon, London, and Frankfurt. Bandwidth starts at 1 Gbps during preview, with a roadmap to 100 Gbps at GA, plus MACsec on the underlay and built‑in, quad‑redundant paths.
What’s actually new—and why it matters
Until now, “multicloud networking” meant stitching together Direct Connect, Partner Interconnect, virtual routers, HA VPNs, and a lot of careful BGP policy. AWS Interconnect collapses that work into a managed construct. You request a connection from the AWS console (or accept one initiated from Google Cloud), attach it to a Direct Connect gateway, Transit Gateway, or Cloud WAN, and you’re in business. The service provisions capacity on pre‑built infrastructure with coordinated maintenance and proactive monitoring. Translation: fewer moving parts, faster time to value, and clearer ownership when something breaks.
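We won't guess at the request/accept API for the new connection here, but the AWS‑side attachment point is familiar plumbing. Below is a minimal boto3 sketch, assuming you aggregate behind a Transit Gateway fronted by a Direct Connect gateway; the gateway name, ASN, Transit Gateway ID, and CIDR are placeholders for your own values.

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Create (or reuse) a Direct Connect gateway as the AWS-side attachment point.
gw = dx.create_direct_connect_gateway(
    directConnectGatewayName="xcloud-pilot-dxgw",  # placeholder name
    amazonSideAsn=64512,                           # placeholder private ASN
)["directConnectGateway"]

# Associate an existing Transit Gateway and advertise only the pilot prefixes
# toward the remote cloud. The Transit Gateway ID and CIDR are placeholders.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=gw["directConnectGatewayId"],
    gatewayId="tgw-0123456789abcdef0",
    addAllowedPrefixesToDirectConnectGateway=[{"cidr": "10.42.0.0/24"}],
)
```

The Interconnect connection itself is requested or accepted in the console during the preview; everything downstream of the gateway behaves the way it would for Direct Connect today.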
For teams building AI and data platforms across clouds—say, Amazon EKS training pipelines with feature stores in BigQuery, or Amazon Bedrock agents calling Vertex AI endpoints—this reduces complexity and failure domains. If your board has been asking for provider diversity after this year’s outages, this is the cleanest path to active‑active or active‑standby architectures without hairpinning traffic through on‑prem or the public internet.
Key facts about AWS Interconnect (preview)
- Launch date: November 30, 2025; highlighted during re:Invent (Dec 1–5, 2025).
- Initial partner: Google Cloud via Cross‑Cloud Interconnect; Microsoft Azure is slated for later in 2026.
- Preview regions: AWS us‑east‑1 ↔ Google us‑east4, AWS us‑west‑1 ↔ Google us‑west2, AWS us‑west‑2 ↔ Google us‑west1, AWS eu‑west‑2 ↔ Google europe‑west2, AWS eu‑central‑1 ↔ Google europe‑west3.
- Throughput: 1 Gbps during preview; up to 100 Gbps targeted at GA.
- Security and resilience: MACsec on the underlay, quad redundancy across facilities and routers, with coordinated maintenance.
How AWS Interconnect changes your network design
Here’s the thing: this isn’t a new virtual router you manage; it’s a managed transport and control plane between clouds. You don’t terminate tunnels, allocate link‑local IPs, or hand‑craft BGP sessions—route exchange and resilience are abstracted. That has three important implications:
- Operational simplicity: Fewer artifacts to provision and fewer places to drift. Your teams can focus on segmentation, identity, and traffic policy.
- Deterministic performance: Dedicated capacity with consistent latency compared to VPN or internet‑based paths.
- Ownership clarity: When links flap, you have joint support coverage from AWS and Google Cloud rather than a chain of colos and circuit vendors.
Does AWS Interconnect replace Direct Connect or VPN?
No. Think of it as a managed, cloud‑to‑cloud sibling of Direct Connect rather than a replacement. You’ll still use Direct Connect for on‑prem to AWS, and you may keep HA VPN for specific encrypted overlays, zero‑trust flows, or as a safety net. Interconnect is for private, high‑speed cloud‑to‑cloud traffic.
How is this different from DIY with Partner Interconnect?
DIY approaches required ordering circuits, configuring Cloud Routers, BGP, VLANs, and redundancy across two providers and often a third‑party carrier. With AWS Interconnect, the providers pre‑provision capacity and expose a simple artifact you create or accept. You don’t micro‑manage the underlay, and redundancy is baked in.
Primary use cases worth piloting now
Three patterns stand out for early wins:
- Active‑active or active‑standby DR: Maintain warm replicas of critical services across AWS and Google Cloud and swing traffic on failover without the public internet in the middle.
- Cross‑cloud data pipelines: Keep data gravity where it belongs. For example, land streaming data in AWS, process in Dataproc or BigQuery, and publish results back to AWS analytics endpoints privately.
- AI/agentic workflows: Run training or orchestration where the accelerators are available, but invoke downstream inference or vector stores in the other cloud over a private link. If you’re exploring agent frameworks, see our Bedrock AgentCore adoption guide for design patterns you can port to a multicloud setup.
AWS Interconnect costs: what we know today
Because this is a preview, expect pricing to evolve. On the Google side, Cross‑Cloud Interconnect has well‑published hourly charges for connections and attachments, plus discounted data transfer out by region. During public preview of the partner integration, Google’s documentation notes the new “transport” resource isn’t billed while active; data transfer still follows standard rules. On the AWS side, expect a mix of connection capacity and per‑GB data transfer out charges similar in spirit to Direct Connect’s model in your region. Practical takeaway: budget for hourly capacity and per‑GB egress on both sides, and run a small pilot to calibrate real‑world throughput and compression ratios before committing.
Baseline estimate for a pilot: a 1 Gbps connection (the preview ceiling) with dual attachments, plus a few tens of terabytes per month of data transfer in North America (where many published rates cluster around $0.02/GB on the Google side for interconnect egress). Your exact costs will vary by region and whether you burst, so meter early and revisit quotas.
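To turn that into a number, here is the back‑of‑the‑envelope math we use; every rate below is a placeholder to swap for the published pricing in your regions.

```python
# Rough pilot budget. All rates are placeholders, not published prices.
HOURS_PER_MONTH = 730

capacity_rate_per_hour = 0.30   # assumed hourly charge per side for a 1 Gbps connection
egress_rate_per_gb = 0.02       # assumed per-GB cross-cloud egress (North America ballpark)
monthly_transfer_gb = 30_000    # ~30 TB/month leaving one side

capacity = 2 * capacity_rate_per_hour * HOURS_PER_MONTH  # both sides bill for the link
egress = egress_rate_per_gb * monthly_transfer_gb        # billed where the bytes exit

print(f"Capacity ${capacity:,.0f}/mo + egress ${egress:,.0f}/mo = ${capacity + egress:,.0f}/mo")
```

If traffic is chatty in both directions, model egress on each side separately; replication patterns are rarely symmetric.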
The 5R Framework for multicloud link design
Use this lightweight checklist to keep pilots sane and production‑ready:
- Regions: Choose paired regions close to your users and data. Start with one of the five preview pairs if you need hands‑on time now.
- Routes: Document prefix ownership and summarization per domain. Even if BGP is abstracted, your segmentation, route aggregation, and blackhole detection still matter.
- Resiliency: Target failure domains, not just link counts. Validate how your app behaves if one provider induces maintenance, and bake in retry/backoff at the client tier.
- Rates: Model hourly capacity plus per‑GB egress on each side. Build dashboards for cost per workload and set budgets/alerts (a budget sketch follows this list).
- Runbooks: Define failover triggers, who presses the button, rollback steps, and post‑incident data reconciliation. Test it.
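For the Rates item, here is a minimal sketch of a per‑pilot budget alert with boto3. The amount, tag, and email address are placeholders, and the tag filter assumes you have activated a cost allocation tag named `app`.

```python
import boto3

sts = boto3.client("sts")
budgets = boto3.client("budgets")

# Monthly cost budget for the pilot with an email alert at 80% of actual spend.
# Budget amount, tag name/value, and address are placeholders for your own values.
budgets.create_budget(
    AccountId=sts.get_caller_identity()["Account"],
    Budget={
        "BudgetName": "xcloud-pilot-monthly",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "1500", "Unit": "USD"},
        # Scope to the pilot via a cost allocation tag (format: "user:<key>$<value>").
        "CostFilters": {"TagKeyValue": ["user:app$xcloud-pilot"]},
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "netops@example.com"}],
    }],
)
```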
Hands‑on: a safe pilot in one afternoon
Here’s a practical, provider‑neutral flow that’s worked well in our lab and with clients:
- Pick one workload that tolerates a brief maintenance window: a read‑only analytics path, a feature store sync, or a cache warmup job.
- Establish segmentation via separate VPCs/projects and tags. Keep the pilot narrow; don’t plumb your entire mesh.
- Provision the link from your preferred console. On the AWS side, attach to a Direct Connect gateway, Transit Gateway, or Cloud WAN, depending on how you aggregate networks today. If your data path primarily fans out across regions, Cloud WAN often simplifies reachability.
- Keep the route table minimal: announce only the prefixes the pilot needs. Validate that the app works with private endpoints on both sides; no public egress allowed.
- MTU sanity check: test 1500‑byte payloads end‑to‑end before flirting with jumbo frames. MACsec adds overhead; avoid silent fragmentation.
- Throughput and jitter: generate realistic traffic (compressed/uncompressed, small and large messages). Record p50/p95 latency over an hour, then during a controlled failover (a probe sketch follows this list).
- Cost meter: tag resources, export billing data daily, and compute cost per GiB and cost per successful request. Adjust idle capacity if you over‑provisioned.
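Here is a small probe that covers the MTU and latency steps, assuming a Linux host inside the pilot VPC and a private endpoint reachable over the link; the target IP, port, and sample count are placeholders.

```python
import socket
import statistics
import subprocess
import time

TARGET = "10.42.0.15"  # placeholder: private endpoint on the far side of the link
PORT = 443
SAMPLES = 200

# MTU sanity: 1472-byte ICMP payload + 28 bytes of headers = a 1500-byte packet.
# '-M do' sets the don't-fragment bit (Linux ping); errors mean a smaller path MTU.
subprocess.run(["ping", "-M", "do", "-s", "1472", "-c", "3", TARGET], check=False)

# Latency distribution: time TCP handshakes to the private endpoint.
rtts_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    try:
        with socket.create_connection((TARGET, PORT), timeout=2):
            pass
        rtts_ms.append((time.perf_counter() - start) * 1000)
    except OSError:
        pass  # failed attempts record nothing; a failover shows up as a gap in samples
    time.sleep(0.05)

cuts = statistics.quantiles(rtts_ms, n=100)
print(f"samples={len(rtts_ms)}  p50={cuts[49]:.1f} ms  p95={cuts[94]:.1f} ms  max={max(rtts_ms):.1f} ms")
```

Run it at steady state and again during a controlled failover, and keep the raw samples so you compare distributions rather than averages.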
People also ask
Is Azure supported?
Not yet. The public statements point to Microsoft Azure support later in 2026. If Azure is part of your target architecture, use this preview to finalize patterns (routing, segmentation, failover), then replicate when Azure lands.
What workloads benefit most right now?
Anything sensitive to jitter and internet path variability—database replication, analytics backplane, near‑real‑time feature syncs, low‑latency AI inference calls. Bulk migrations can work too, but remember per‑GB egress still applies; don’t burn budget on one‑time copies you could stage with object storage and lifecycle policies.
Will AWS Interconnect break my current network?
It shouldn’t, provided you treat it as another private path with explicit segmentation. Don’t dump your entire RFC 1918 space into it; start with least‑privilege prefixes, validate, then expand. Keep your existing VPNs until you’ve proven stability under failover.
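A pre‑flight overlap check costs nothing. This sketch uses only the standard library, with placeholder ranges standing in for the prefixes each side plans to announce.

```python
import ipaddress

# Prefixes you plan to announce from each side of the link (placeholders).
aws_prefixes = ["10.42.0.0/24", "10.42.1.0/24"]
gcp_prefixes = ["10.42.1.128/25", "172.20.0.0/16"]

# Flag any pair of prefixes that overlap before anything is announced.
for a in map(ipaddress.ip_network, aws_prefixes):
    for g in map(ipaddress.ip_network, gcp_prefixes):
        if a.overlaps(g):
            print(f"OVERLAP: {a} <-> {g}")
```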
Security and compliance notes
MACsec on the underlay is a big win, but application‑layer controls remain non‑negotiable. Keep mutual TLS between services, rotate credentials, and enforce IAM per workload. Coordinate security reviews on both sides: who can create transports or accept activation keys? What guardrails block cross‑project mistakes? If you’re filtering automated traffic or building public‑facing AI endpoints, pair your private transport with a robust bot defense posture—our guide on AI bot protection with Cloudflare outlines patterns that carry over.
Architectural patterns we recommend
Two blueprints map well to the preview’s capabilities:
- Hub‑and‑spoke with Cloud WAN: Cloud WAN as the aggregation layer for AWS accounts/regions, with a dedicated spoke to the Interconnect. On the Google side, mirror with a central project/VPC and hierarchical firewall policies. This gives you coarse and fine‑grained control without hairpinning (an attachment sketch follows this list).
- Service‑to‑service with private endpoints: Keep data stores private and expose only necessary services through internal load balancers on each side. If you’re building agent workflows across clouds, start here; we’ve captured design trade‑offs in our AgentCore guide.
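For the hub‑and‑spoke blueprint, attaching the pilot VPC to an existing Cloud WAN core network is one boto3 call. The sketch below assumes your core network policy maps a segment tag to the right segment; the IDs, ARNs, and tag convention are placeholders.

```python
import boto3

nm = boto3.client("networkmanager", region_name="us-east-1")

# Attach the pilot VPC to an existing Cloud WAN core network. Which segment it
# lands in is decided by your core network policy; we assume a rule that matches
# on the 'segment' tag. All identifiers below are placeholders.
nm.create_vpc_attachment(
    CoreNetworkId="core-network-0123456789abcdef0",
    VpcArn="arn:aws:ec2:us-east-1:111122223333:vpc/vpc-0abc1234",
    SubnetArns=["arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0def5678"],
    Tags=[{"Key": "segment", "Value": "xcloud-pilot"}],
)
```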
AWS Interconnect gotchas (preview)
- Quotas exist: On the Google side, the new transport resource has per‑project, per‑region limits. Plan projects accordingly.
- MTU and fragmentation: Validate end‑to‑end payload sizes. Don’t assume jumbo frames until you test across both providers and load balancers.
- Prefix hygiene: Overlapping CIDRs will cause silent pain. Reserve ranges for cross‑cloud flows and summarize aggressively.
- Cost observability: Egress charges can hide in shared projects. Tag ruthlessly and set budget alerts per application team.
- Change control: Because provisioning is faster, it’s easier to make risky changes fast. Wrap it in deployment pipelines, not ad‑hoc clicks.
30/60/90‑day plan to make it real
Days 1–30: Prove the link
- Pick one app tier with clear SLOs and a fallback path.
- Stand up a 1 Gbps link in a preview region pair; announce only required prefixes.
- Measure latency, jitter, and throughput under load and during controlled failover.
- Publish a one‑page runbook with trigger conditions and rollback.
Need a template? Borrow from our AWS Interconnect 30‑day plan, which includes a ready‑to‑copy checklist.
Days 31–60: Scale the blast radius
- Add a second workload (different pattern: data pipeline if you started with DB replication, or vice versa).
- Introduce Cloud WAN or Transit Gateway for aggregation; define per‑team network policies.
- Layer in cost dashboards: cost per GiB, cost per request, idle capacity alarms (a Cost Explorer sketch follows this list).
- Run a joint game day with both cloud providers’ consoles in view.
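For the cost dashboards item, here is a minimal Cost Explorer query that turns tagged spend into a cost‑per‑GiB figure; the tag key/value and the transfer number are placeholders (wire the latter to your own flow metrics).

```python
from datetime import date, timedelta

import boto3

ce = boto3.client("ce")  # Cost Explorer

end = date.today()
start = end - timedelta(days=7)

# Daily unblended cost for everything carrying the pilot's cost allocation tag.
# Tag key/value are placeholders; activate the tag in billing before relying on it.
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    Filter={"Tags": {"Key": "app", "Values": ["xcloud-pilot"]}},
)

total = sum(float(day["Total"]["UnblendedCost"]["Amount"]) for day in resp["ResultsByTime"])
gib_moved = 4_200.0  # placeholder: GiB transferred this week, from flow logs or app metrics
print(f"7-day spend: ${total:,.2f}  cost/GiB: ${total / gib_moved:.4f}")
```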
Days 61–90: Production readiness
- Codify everything: IaC modules for provisioning and acceptance, plus guardrails.
- Finalize DR runbooks and RTO/RPO targets backed by hard numbers from your pilots.
- Negotiate committed use where possible after GA pricing is published.
- Train your NOC/SRE teams on triage, escalation, and joint support workflows.
What to do next
- Book a four‑hour, two‑provider pilot in one of the preview region pairs. Keep scope tight.
- Instrument cost and performance from day one. Don’t wait for month‑end bills.
- Write the rollback first. If the link fails, what happens and who decides?
- Plan prefix ownership and collisions now; they’re the number one cause of “mystery” blackholes.
- Document your security model across both clouds: IAM, key rotation, and who can accept activation keys.
If you want a pragmatic, battle‑tested rollout, our team has shipped these architectures for startups and enterprises. Explore our cloud networking services, browse representative projects in the portfolio, or just reach out via contact—we’ll help you pilot without surprises.
Zooming out
Multicloud stopped being a philosophy and turned into a product you can actually use. AWS Interconnect won’t eliminate the need for thoughtful segmentation, cost controls, or solid runbooks—but it does eliminate an entire category of toil and failure modes. Start small, measure everything, and expand only when your numbers support it. Teams that do that will get real resiliency and placement flexibility, not just a slide with two logos and a dotted line between them.