AWS Interconnect multicloud is here, and it changes how you wire up AWS and Google Cloud. The headline: managed, on‑demand private connectivity between clouds that provisions in minutes, not weeks. If you’ve ever wrangled colos, LOA‑CFAs, and a patchwork of partner circuits just to connect VPCs and VPC networks, this will feel like cheating. The joint launch with Google’s Cross‑Cloud Interconnect formalizes a cloud‑to‑cloud model that’s standardized, resilient, and API‑driven. (cloud.google.com)
What actually launched—and when
On November 30, 2025, AWS announced a preview of Interconnect – multicloud, with Google Cloud as the first launch partner and Microsoft Azure slated to follow in 2026. The preview is live in five AWS Regions, accessible right from the console. (aws.amazon.com)
Google’s side calls it Partner Cross‑Cloud Interconnect for AWS. The two providers published an open interoperability spec, emphasized “minutes” to provision, and designed quad‑redundant links between physically redundant facilities and routers, with MACsec on the edge interconnects. Salesforce is already using it. (cloud.google.com)
For teams asking, “Is this real or just marketing?”—Reuters covered the joint launch on December 1, 2025, framing it as a direct response to customers needing faster, more reliable cross‑cloud connectivity (and yes, to recent outages reminding everyone how brittle the internet can be). (reuters.com)
How AWS Interconnect multicloud works (and why it’s different)
The old way: procure circuits, pick locations, wait for cross‑connects, configure VLANs, negotiate link‑local addressing, set up BGP, and then duplicate it all for redundancy. The new way: in the AWS Console or CLI, you choose provider, region, and bandwidth, then accept a corresponding transport on Google Cloud. One attachment represents capacity; the underlay is prebuilt, resilient, and encrypted. (aws.amazon.com)
Under the hood, the service provides quad redundancy across facilities and routers, continuous monitoring on both sides, and MACsec encryption between AWS and Google edge routers. That’s a concrete reliability and security story you can take to a change‑control board. (cloud.google.com)
Bandwidth starts at 1 Gbps in preview and scales up to 100 Gbps at GA. That granularity is key: many cross‑cloud use cases don’t need 10/100 Gbps increments; they need right‑sized, controllable pipes that scale with workload spikes. (cloud.google.com)
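To make "right-sized" concrete, here is a small back-of-envelope sketch for translating a daily transfer volume into a sustained-bandwidth requirement. The function name and the 2x burst headroom are illustrative assumptions, not anything from the service itself:

```python
# Rough right-sizing sketch: convert a daily transfer volume into a
# sustained-bandwidth requirement with headroom for bursts. The 2x
# headroom factor is an assumption you should tune to your workload.

def required_gbps(gb_per_day: float, transfer_window_hours: float = 24.0,
                  burst_headroom: float = 2.0) -> float:
    """Sustained Gbps needed to move gb_per_day within the window."""
    bits = gb_per_day * 8e9                      # decimal GB -> bits
    seconds = transfer_window_hours * 3600
    return (bits / seconds) * burst_headroom / 1e9

# Example: 5 TB/day moved over an 8-hour batch window.
print(round(required_gbps(5000, transfer_window_hours=8), 2))
```

Run the numbers for your own flows before picking a tier: a nightly 5 TB sync over eight hours fits comfortably under a few Gbps, which is exactly the granularity the preview starts at.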
What this kills—and what it doesn’t
This move compresses the space for third‑party “multicloud middlemen” who built businesses stitching clouds together. When the clouds themselves expose managed, standardized connectivity with built‑in redundancy, the value shifts from cabling and tickets to architecture and governance. Analysts quickly called out the strategic implications for vendors and for cloud share. (forbes.com)
What it doesn’t kill: thoughtful network design. You still need clean CIDR hygiene, deterministic routing between Transit Gateway, Cloud WAN, and Google’s VPC routing domains, and a plan for DNS, identity, and observability that works across both platforms. You also have to be honest about data gravity. A fast pipe doesn’t erase latency, nor does it nullify egress or inter‑region economics once the preview period ends.
Limits and realities in preview
Three constraints will shape your first pilots:
First, availability: AWS says the preview is in five Regions to start. If your “crown jewels” live elsewhere, you’ll need to stage workloads or wait. (aws.amazon.com)
Second, bandwidth and provisioning behavior: Google indicates 1 Gbps to start, with a runway to 100 Gbps at GA. Great for bursty data flows, but test sustained throughput and failure modes before moving P0 traffic. (cloud.google.com)
Third, billing: Google’s docs note that during public preview you aren’t billed while the transport resource is active—use the time to measure flows and size correctly. Expect pricing to change at GA; model total cost of ownership including cloud‑native egress rules and any cross‑region hops. (docs.cloud.google.com)
Where this shines for developers
Here’s the thing: less network yak shaving means more building. Three patterns stand out:
AI pipelines that straddle vendors. Maybe your feature extraction runs in Vertex AI while you serve agents on Bedrock. A 1–10 Gbps private lane lets you move embeddings, checkpoints, or feature stores without babysitting IPsec overlays.
Cross‑cloud data apps. Run batch prep on Dataproc while training on SageMaker, or flip it: BigQuery for federated analytics and Athena for lakehouse queries against the same S3/Iceberg tables. The pipe is private, the route is simpler, and the provisioning story goes from weeks to minutes. (cloud.google.com)
Operational safety nets. During incidents, you can shift read traffic to an app instance in the other cloud, or drain a queue to another region+cloud for faster recovery. You won’t eliminate outages, but you can buy options. Reuters’ piece even contextualized the launch right after the October 20 AWS outage—no coincidence. (reuters.com)
Architecture basics: a five‑step interconnect play
Here’s a simple framework we’ve used with clients to stand up a safe pilot in under two sprints:
1) Addressing and routes that won’t paint you into a corner
Reserve non‑overlapping /16s per environment, and keep a clean spreadsheet of allocations. On AWS, decide whether to hang this off a per‑domain Transit Gateway or centralize via Cloud WAN. On Google, confirm custom static routes or dynamic Cloud Router policies won’t black‑hole return paths.
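The "clean spreadsheet" can be backed by an automated pre-flight check. This is a minimal sketch using only the standard library; the allocations shown are example values, not a recommendation:

```python
# Pre-flight check for step 1: verify that the /16s reserved per
# environment don't overlap before any route is advertised.
import ipaddress
from itertools import combinations

allocations = {
    "aws-prod":    "10.10.0.0/16",
    "aws-staging": "10.11.0.0/16",
    "gcp-prod":    "10.20.0.0/16",
    "gcp-staging": "10.21.0.0/16",
}

def overlapping_pairs(cidrs: dict) -> list:
    """Return every pair of names whose CIDR blocks overlap."""
    nets = {name: ipaddress.ip_network(c) for name, c in cidrs.items()}
    return [(a, b) for (a, na), (b, nb) in combinations(nets.items(), 2)
            if na.overlaps(nb)]

print(overlapping_pairs(allocations))  # [] means the plan is clean
```

Wire a check like this into CI on the repository that holds your IP plan, so an overlapping allocation fails review before it ever reaches a route table.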
2) A minimum viable “trust lane”
Provision the connection with least privilege: scope endpoints to a single VPC on AWS and a single VPC network on Google while you validate routing tables. Use per‑project transports on Google and per‑account attachments on AWS to reduce blast radius. (docs.cloud.google.com)

3) Deterministic failover
Simulate loss of one or two underlay legs and verify traffic keeps flowing. Don’t just trust quad redundancy—induce failures. Watch RTO and packet loss under load while swapping links. (cloud.google.com)
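The drill data is easy to analyze offline. The sketch below takes timestamped probe results collected while a link was pulled and computes packet loss plus the longest reply gap as a rough proxy for reroute time; the probe data here is made up for illustration:

```python
# Offline analysis for a failover drill: given timestamped probe
# results (True = reply received), compute packet loss and the
# longest gap between replies as a proxy for time-to-reroute.

def drill_stats(samples: list) -> dict:
    """samples: (timestamp_seconds, probe_succeeded) tuples, sorted by time."""
    total = len(samples)
    lost = sum(1 for _, ok in samples if not ok)
    longest_gap = 0.0
    last_ok = None
    for ts, ok in samples:
        if ok:
            if last_ok is not None:
                longest_gap = max(longest_gap, ts - last_ok)
            last_ok = ts
    return {"loss_pct": 100 * lost / total, "worst_gap_s": longest_gap}

# 1-second probes; replies stop for four seconds while the underlay reroutes.
probes = [(t, t not in (3, 4, 5, 6)) for t in range(10)]
print(drill_stats(probes))
```

Set a pass/fail threshold on `worst_gap_s` before the drill, not after, so the readiness decision is honest.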
4) Observability first
Pipe VPC Flow Logs, CloudWatch, and Google’s Network Intelligence Center into a single timeline. Tag flows by app and environment. Alert on asymmetric routes; they’re the stealth killer of “it worked in staging.”
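Detecting asymmetry can be a simple join once flow records from both sides land in one place. This sketch flags conversation pairs whose forward and return paths used different attachments; the record shape and values are invented for illustration, and real VPC Flow Logs would need field mapping first:

```python
# Flag 5-tuple pairs whose forward and return traffic exit via
# different attachments - the "asymmetric route" condition.

def asymmetric_flows(records: list) -> set:
    paths = {}  # (src, dst) -> attachment used for that direction
    for r in records:
        paths[(r["src"], r["dst"])] = r["attachment"]
    flagged = set()
    for (src, dst), attach in paths.items():
        reverse = paths.get((dst, src))
        if reverse is not None and reverse != attach:
            flagged.add(tuple(sorted((src, dst))))
    return flagged

records = [
    {"src": "10.10.1.5", "dst": "10.20.2.9", "attachment": "tgw-a"},
    {"src": "10.20.2.9", "dst": "10.10.1.5", "attachment": "tgw-b"},  # asymmetric
    {"src": "10.11.3.1", "dst": "10.21.4.2", "attachment": "tgw-a"},
    {"src": "10.21.4.2", "dst": "10.11.3.1", "attachment": "tgw-a"},  # symmetric
]
print(asymmetric_flows(records))
```

Running a check like this hourly against aggregated flow logs catches the asymmetry before a stateful firewall in one direction starts dropping the other.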
5) DNS and identity that spans clouds
Pick a unified naming and auth pattern before traffic moves. Cloud Map or Private Hosted Zones on AWS + Cloud DNS on Google can coexist, but decide who’s authoritative where. SSO and secrets should work irrespective of which side originates a call.
People also ask: Is this better than Direct Connect + Partner Interconnect?
Often, yes—because you don’t manage physical provisioning or vendor ticket queues. You also aren’t forced into 10/100 Gbps increments for early pilots. But if you already have dedicated circuits with rock‑solid performance and long‑term contracts, do the math. The new model optimizes for speed, elasticity, and simplified ops; your legacy build may still win for fixed, high‑throughput lanes already depreciated on your books. (cloud.google.com)
People also ask: What about security and compliance?
The providers built MACsec into the inter‑provider edge and designed for facility and router redundancy. That’s a strong default. Your job is enforcing least‑privilege, encrypting data at rest and in transit beyond the edge, and auditable segmentation. Treat cloud‑to‑cloud like a high‑trust lane, not a free‑for‑all. (cloud.google.com)
People also ask: How do I enable it?
From AWS: pick the target cloud, destination region, and bandwidth—it’s a three‑step console or CLI flow. On Google’s side, you accept or initiate a transport and bind it to the right networks and routers. Plan it, test it, then template it in IaC so others can replicate safely. (aws.amazon.com)
Example pilot: an agentic AI workflow
Say your customer support bot runs in Amazon Bedrock, but your knowledge index and speech stack favor Google. You fine‑tune on Vertex AI, embed into a vector store, and serve via Nova agents on AWS. With a 1–5 Gbps private path, you move features and partial results without worrying about public egress policies. When load spikes, scale the transport up; when it’s quiet, scale down and keep costs in check. The important part: network becomes an API, not a project. (cloud.google.com)
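The "scale the transport up, then back down" decision is worth making deterministic so the pipe doesn't flap. A minimal sketch with hysteresis follows; the bandwidth tiers and thresholds are assumed examples, not the service's actual options:

```python
# Illustrative scaling decision for a transport: step bandwidth up
# past a high-water mark, down only below a lower one (hysteresis).
# Tier values and thresholds are placeholder assumptions.

TIERS_GBPS = [1, 2, 5, 10]

def next_tier(current: int, utilization: float,
              high: float = 0.8, low: float = 0.3) -> int:
    """utilization: observed Gbps / current tier, as a fraction."""
    i = TIERS_GBPS.index(current)
    if utilization > high and i < len(TIERS_GBPS) - 1:
        return TIERS_GBPS[i + 1]
    if utilization < low and i > 0:
        return TIERS_GBPS[i - 1]
    return current

print(next_tier(1, 0.9))   # spike: step up
print(next_tier(5, 0.1))   # quiet: step down
print(next_tier(2, 0.5))   # in band: hold
```

The gap between the high and low thresholds is the point: it prevents a noisy workload from toggling the tier every evaluation cycle.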
But there’s a catch: preview means change
Preview is where semantics and quotas can shift. The Google docs indicate that during public preview the transport resource isn’t billed while active; take advantage of that to characterize traffic and build realistic cost curves before GA pricing locks in. Keep an eye on paired locations and any per‑project regional quotas. (docs.cloud.google.com)
Operational guardrails we recommend
Based on what shipped and what we’ve seen go wrong in multicloud builds, adopt these guardrails from day one:
- Separate “pilot” and “prod‑candidate” transports with different AWS accounts and Google projects.
- Use explicit route tables and tags; don’t rely on implicit propagation across Transit Gateway or Cloud WAN.
- Freeze CIDR allocations during the pilot; no surprise VPC expansions.
- Automate provisioning with Terraform modules that encode naming, tags, and logging destinations.
- Run monthly failover drills; log mean time to reroute and packet loss under load.
What changes for network teams—really?
Two big shifts. First, tickets give way to APIs. Your job moves up‑stack: capacity modeling, SLO design, and security posture. Second, the business finally gets a credible “multicloud without duct tape” story. That means you’ll be asked to build cross‑cloud by default for AI and analytics. If procurement is still negotiating colo contracts, redirect them to the open spec and managed service model shipped by AWS and Google. (aws.amazon.com)
A note on resilience and recent outages
No link is magic. But building private, quad‑redundant transport between clouds gives you tools to ride out ISP incidents and regional routing drama. After the October 20, 2025 AWS incident, which disrupted thousands of sites and cost U.S. businesses an estimated $500–$650M, boards are asking for concrete, testable resiliency plans. This puts a practical knob in your hands. (reuters.com)
Step‑by‑step: a two‑week pilot plan
Day 1–2: pick the right use case
Choose something real but safe: a nightly data sync, a non‑critical microservice dependency, or feature extraction for a subset of users. Write down the success metrics: throughput targets, error budgets, and failover behavior.
Day 3–4: design and address space
Validate CIDR allocations and route symmetry. Decide AWS hub (Transit Gateway vs Cloud WAN) and Google routing policy. Document a single “happy path” for packets and a single “degraded path.”
Day 5–7: provision and test
Spin up the transport and attachment, bind to one VPC and one VPC network, and run iPerf under varying MTU settings. Induce link loss; capture RTO and loss rates. Keep a runbook with screenshots and timestamps.
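When interpreting the MTU sweep, it helps to know the theoretical ceiling before blaming the link. This back-of-envelope helper estimates TCP goodput at a given line rate and MTU, counting 40 bytes of IPv4 + TCP headers (no options); the figures are approximations for comparison against measured numbers:

```python
# Theoretical TCP goodput at a given line rate and MTU, counting
# 40 bytes of IPv4 + TCP headers. Compare against measured iperf
# results to spot overlay encapsulation or fragmentation overhead.

def tcp_goodput_gbps(line_rate_gbps: float, mtu: int,
                     header_bytes: int = 40) -> float:
    return line_rate_gbps * (mtu - header_bytes) / mtu

for mtu in (1500, 8500, 9001):
    print(mtu, round(tcp_goodput_gbps(5.0, mtu), 3))
```

If measured throughput lands well under these ceilings at a given MTU, suspect an encapsulation layer shrinking the effective payload rather than the transport itself.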
Day 8–10: observability and security
Wire logs into your SIEM, enforce IAM boundaries, and validate MACsec status in provider dashboards. Add synthetic probes between services. (cloud.google.com)
Day 11–14: readiness decision
Estimate monthly costs at 1–10 Gbps based on observed flows. Decide to scale the pilot, pause, or prep for GA. Create Terraform modules so the next team can replicate safely.
Cost talk without the hand‑waving
During preview, Google states the transport resource isn’t billed while active; use this window to collect flow logs and size bandwidth thresholds sensibly. At GA, expect standard provider data transfer rules to apply per endpoint behavior; model both directions and be explicit about cross‑region traffic. Don’t forget managed NAT, inspection, and observability costs that come along for the ride. (docs.cloud.google.com)
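A cost model does not need to be fancy to be useful; it needs the right shape. The sketch below models per-GB transfer in both directions plus the fixed monthly adjuncts. Every rate here is a placeholder, since GA pricing is not published: replace them with real provider numbers before presenting this to anyone:

```python
# Hedged cost sketch for the readiness decision. All rates are
# placeholder assumptions, NOT real prices - substitute published
# provider pricing before using this for budgeting.

ASSUMED_RATES = {            # $/GB - placeholders
    "aws_to_gcp": 0.02,
    "gcp_to_aws": 0.02,
}
FIXED_MONTHLY = {"nat": 65.0, "inspection": 300.0, "observability": 120.0}

def monthly_cost(gb_aws_to_gcp: float, gb_gcp_to_aws: float) -> float:
    transfer = (gb_aws_to_gcp * ASSUMED_RATES["aws_to_gcp"]
                + gb_gcp_to_aws * ASSUMED_RATES["gcp_to_aws"])
    return transfer + sum(FIXED_MONTHLY.values())

# Example: 30 TB out of AWS, 5 TB back, per month.
print(round(monthly_cost(30_000, 5_000), 2))
```

Feed it the flow volumes you measured during the free preview window, and keep both directions explicit: asymmetric flows are the norm in analytics pipelines, and averaging them hides the bill.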
What to do next
- Run a 14‑day pilot with a single app path and clear SLOs; automate provisioning so you can tear down and rebuild in hours.
- Lock CIDRs, document explicit routes, and enforce least‑privilege. No wildcard trust during pilots.
- Measure before you migrate: throughput under load, failover RTO, and steady‑state costs at 1–10 Gbps.
- Brief leadership on the open spec, quad redundancy, and MACsec defaults so they understand the risk tradeoffs. (cloud.google.com)
- Decide where you’ll place agents, data stores, and model serving as GA expands regions and bandwidth. (cloud.google.com)
Want a deeper multicloud plan tailored to your stack?
We’ve been hands‑on with these patterns for years. If you need a concise playbook for routing, security, and cost controls, start with our AWS Interconnect preview playbook, then read our field guide to a practical AWS–Google multicloud plan and our take on what changes now. When you’re ready to implement, our cloud architecture services can help you land it without surprises.
