Multicloud networking just crossed from DIY art project to first‑class product. As of December 1, 2025, AWS and Google Cloud are offering a jointly engineered way to build private, high‑bandwidth links between your Amazon VPCs and Google Cloud projects in minutes instead of weeks. AWS calls its side Interconnect – multicloud (in preview); Google activates the other half through Cross‑Cloud Interconnect. There’s a published open specification, built‑in resiliency, and early enterprise adopters already validating the path. If you’ve been waiting for a credible, supported, cloud‑native option, this is it.
What actually launched—and what it changes
Here’s the short version. AWS Interconnect – multicloud is now in preview in a handful of AWS Regions and ties directly into your existing Amazon VPC, Transit Gateway, and Cloud WAN constructs. On Google’s side, Cross‑Cloud Interconnect exposes matching endpoints you can provision from the Google Cloud console or API. The two providers collaborated on an open interoperability spec and reference APIs that other clouds can adopt. The promise: you click a few buttons (or script it), select bandwidth, attach to your network hubs, and get private links with dedicated capacity and fast failover.
Why this matters: for years, cross‑cloud private networking meant stitching together physical circuits, partner interconnects, patchwork routers, and a lot of ticketing. Lead times ran weeks to months. Change windows were painful. Documentation lived in runbooks that only three people fully understood. Now you can bring up a managed, redundant link in minutes, align it to your existing policy gateways, and automate the lifecycle with standardized APIs.
Key facts worth planning around:
- Provisioning time: minutes, not weeks—provision from either console or API.
- Bandwidth: starts at 1 Gbps in preview, with targets up to 100 Gbps at general availability.
- Reliability: quad‑redundant design across physically diverse facilities and routers.
- Scope: preview in limited AWS Regions; Google enables corresponding locations via Cross‑Cloud Interconnect.
- Roadmap: the spec is open; AWS has signaled that Microsoft Azure could adopt it later, but Azure support isn’t available today.
One more signal: large SaaS platforms are already cited as early users. That matters because it suggests the operational experience (ordering, monitoring, support) won’t be treated as an afterthought.
Who should care—and why now
Not every team needs private cross‑cloud. But if any of these describe you, this launch is immediate value:
- AI and data pipelines span clouds. Maybe you train on one provider’s accelerators and serve from the other’s edge or database portfolio. Private links keep latency predictable and reduce attack surface versus public egress paths.
- GPU scarcity or portfolio hedging. If you chase capacity across providers, consistent network plumbing is the difference between “ship” and “stuck.”
- Regulatory or business continuity mandates. Some sectors must show survivability across providers. Managed private connectivity plus clear SLOs moves your story from aspiration to audit‑ready.
- M&A or product integrations. Two platforms, two clouds. You can either forklift or bridge. This is the bridge.
Zooming out, the business case is about risk and speed. A predictable, supported network primitive lowers blast radius in outages, shortens integration timelines, and stops the slow bleed of DIY overhead. It also reframes cost: you’re trading bespoke circuits and manual wrangling for a managed service model you can automate.
Multicloud networking: architecture patterns that work today
Let’s get practical. Here are three patterns I’d recommend for most teams, from least to most centralized.
Pattern A: VPC‑to‑VPC “stitch” for a single workload
Use this when you’re connecting a few tightly scoped services—say, an inference tier in AWS pulling embeddings from a vector database in Google Cloud. You provision a link, assign 1–10 Gbps bandwidth (preview), and advertise only the specific subnets. On AWS, attach the Interconnect endpoint to the workload VPC or a small Transit Gateway. On Google, attach to a dedicated VPC with a Cloud Router advertising only what’s required. Keep ACLs narrow and use security groups/firewall rules to constrain flows. Upside: fast, minimal blast radius. Downside: doesn’t scale if many teams copy‑paste it without standards.
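To make the “constrain flows” point concrete, here’s a minimal boto3 sketch that opens exactly one egress path from the inference tier to the Google-side subnet. The security group ID, region, CIDR, and port are placeholders for your own environment, and the same narrowing applies to the firewall rules on the Google side.

```python
# Sketch: constrain the AWS-side flow to a single Google Cloud subnet and port.
# Assumes boto3 credentials and an existing security group whose default
# allow-all egress has already been revoked. All IDs/values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # preview region (placeholder)

INFERENCE_SG_ID = "sg-0123456789abcdef0"  # security group on the inference tier (placeholder)
GCP_VECTOR_DB_SUBNET = "10.20.30.0/24"    # only the subnet advertised from Google Cloud

ec2.authorize_security_group_egress(
    GroupId=INFERENCE_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,                 # vector DB port (placeholder)
        "ToPort": 5432,
        "IpRanges": [{
            "CidrIp": GCP_VECTOR_DB_SUBNET,
            "Description": "Cross-cloud vector DB over private interconnect",
        }],
    }],
)
```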
Pattern B: Hub‑and‑spoke via Cloud WAN and Transit Gateway
This is the right default for most enterprises. Treat the cross‑cloud link as an extension of your network hubs. On AWS, centralize with Cloud WAN or Transit Gateway; on Google, pair with a shared VPC and hierarchical firewall policies. You’ll segment routes into tiers (shared services, data, app) and control propagation. Put inspection in the hub (firewalls, egress controls, and telemetry taps) so every spoke inherits guardrails. This gives you clean multi‑account/multi‑project scale, consistent observability, and change control that won’t wake you up at 2 a.m.
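As a rough illustration of the route segmentation, here’s a boto3 sketch that gives the cross‑cloud attachment its own Transit Gateway route table and propagates only the data segment into it. All IDs are placeholders, and the attachment type for the new interconnect may look different once it reaches general availability.

```python
# Sketch: carve the cross-cloud attachment into its own Transit Gateway route
# table so only the "data" segment propagates routes toward Google Cloud.
# Every ID below is a placeholder; adapt to your hub design.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

TGW_ID = "tgw-0abc1234def567890"                   # existing hub Transit Gateway (placeholder)
CROSS_CLOUD_ATTACHMENT = "tgw-attach-0cc1234567890"  # attachment carrying the multicloud link (placeholder)
DATA_SPOKE_ATTACHMENT = "tgw-attach-0da1234567890"   # the only spoke allowed to reach Google Cloud (placeholder)

# Dedicated route table for cross-cloud traffic.
rt = ec2.create_transit_gateway_route_table(
    TransitGatewayId=TGW_ID,
    TagSpecifications=[{
        "ResourceType": "transit-gateway-route-table",
        "Tags": [{"Key": "segment", "Value": "cross-cloud-data"}],
    }],
)
rt_id = rt["TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]

# The cross-cloud attachment does its route lookups in this table...
ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=rt_id,
    TransitGatewayAttachmentId=CROSS_CLOUD_ATTACHMENT,
)

# ...and only the data spoke propagates its prefixes into it.
ec2.enable_transit_gateway_route_table_propagation(
    TransitGatewayRouteTableId=rt_id,
    TransitGatewayAttachmentId=DATA_SPOKE_ATTACHMENT,
)
```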
Pattern C: Service‑to‑service via Private Service Connect
If you want a producer/consumer model with minimal routing complexity, use Private Service Connect on Google Cloud published over Cross‑Cloud Interconnect to expose specific services to AWS consumers. It feels like consuming an internal managed API rather than opening broad network reachability. This is excellent for platform teams offering data or ML feature stores to multiple apps across clouds without exposing whole subnets.
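For the producer side, here’s a minimal sketch using the google-cloud-compute Python client. It assumes you already have an internal load balancer forwarding rule and a dedicated PSC NAT subnet; every project, region, and resource name is a placeholder, and consumer access from AWS follows the cross‑cloud path described above.

```python
# Sketch: publish an internal service as a Private Service Connect producer.
# Assumes an existing internal load balancer forwarding rule and a PSC NAT
# subnet. All names below are placeholders.
from google.cloud import compute_v1

PROJECT = "my-gcp-project"   # placeholder
REGION = "us-east4"          # placeholder

attachment = compute_v1.ServiceAttachment(
    name="feature-store-psc",
    target_service=(
        f"projects/{PROJECT}/regions/{REGION}/forwardingRules/feature-store-ilb"
    ),
    connection_preference="ACCEPT_MANUAL",  # approve each consumer explicitly
    nat_subnets=[
        f"projects/{PROJECT}/regions/{REGION}/subnetworks/psc-nat-subnet"
    ],
    enable_proxy_protocol=False,
)

client = compute_v1.ServiceAttachmentsClient()
operation = client.insert(
    project=PROJECT,
    region=REGION,
    service_attachment_resource=attachment,
)
operation.result()  # wait for the attachment to be created; approve consumers afterward
```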
Performance and reliability signals you can plan around
What about latency? It’s location‑dependent. You’re traversing provider points of presence and redundant facilities; your round‑trip will roughly track metro or regional distances between chosen interconnect sites. The safer way to plan is by SLO: measure your baseline via synthetic probes per connection and enforce alarms at the 95th percentile per traffic class.
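A minimal version of that probe-and-alarm loop, assuming boto3 credentials and placeholder endpoints and thresholds, might look like this; run it on a schedule (cron, Lambda, whatever you already have) per connection and per traffic class.

```python
# Sketch: a synthetic probe that measures TCP connect time across the private
# path, publishes it as a custom CloudWatch metric, and alarms on p95.
# Endpoint, namespace, and threshold are placeholders.
import socket
import time

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

TARGET_HOST = "10.20.30.15"   # service on the Google Cloud side (placeholder)
TARGET_PORT = 443
NAMESPACE = "CrossCloud/Interconnect"

def measure_connect_ms() -> float:
    """Time a TCP handshake to the far-side endpoint, in milliseconds."""
    start = time.monotonic()
    with socket.create_connection((TARGET_HOST, TARGET_PORT), timeout=5):
        pass
    return (time.monotonic() - start) * 1000.0

cloudwatch.put_metric_data(
    Namespace=NAMESPACE,
    MetricData=[{
        "MetricName": "ConnectLatency",
        "Dimensions": [{"Name": "Link", "Value": "aws-to-gcp-primary"}],
        "Unit": "Milliseconds",
        "Value": measure_connect_ms(),
    }],
)

# Alarm on the 95th percentile over 5-minute windows, per the SLO point above.
cloudwatch.put_metric_alarm(
    AlarmName="crosscloud-primary-p95-latency",
    Namespace=NAMESPACE,
    MetricName="ConnectLatency",
    Dimensions=[{"Name": "Link", "Value": "aws-to-gcp-primary"}],
    ExtendedStatistic="p95",
    Period=300,
    EvaluationPeriods=3,
    Threshold=25.0,               # ms; placeholder, set from your measured baseline
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="breaching",
)
```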
Resiliency is the other pillar. The providers are advertising quad redundancy at the physical layer and continuous monitoring. In practice, you should still assume links or paths will degrade occasionally. Use equal‑cost multipath where offered, segment traffic, and test failovers quarterly. Keep your BGP timers conservative; speed isn’t the only goal—stability is.
Pricing: what we know (and what we don’t)
Both providers have outlined the service model but not final GA price cards. Expect a familiar structure: capacity tiers (port‑hour‑like) plus data transfer rated for private interconnect paths on each side. Preview SKUs may vary by region and bandwidth. Two rules of thumb for finance and platform leads:
- Budget for capacity on both clouds and for data transfer metering in each provider’s nomenclature.
- Treat this as a workload cost, not a shared tax. Allocate by flow or by namespace and enforce quotas so network doesn’t become an unowned line item.
If your current design relies on NAT gateways for cross‑cloud egress, run the numbers—centralizing through managed interconnect may simplify and reduce exposure. For background on avoiding needless egress line items, see our playbook on how to cut NAT Gateway egress complexity in regional architectures.
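For the finance conversation, a back-of-envelope model helps even before the price cards land. Every rate below is a placeholder, not a quoted price; the point is the shape of the math, capacity on both sides plus metered transfer in each direction.

```python
# Back-of-envelope only: neither provider has published GA pricing, so the
# rates below are assumptions to show the structure, not real prices.
CAPACITY_RATE_PER_HOUR = {          # per-side "port-hour-like" capacity charge (assumed)
    "aws_1gbps": 0.50,
    "gcp_1gbps": 0.50,
}
TRANSFER_RATE_PER_GB = {            # per-side private-path data transfer (assumed)
    "aws_out": 0.02,
    "gcp_out": 0.02,
}

HOURS_PER_MONTH = 730
monthly_gb_aws_to_gcp = 40_000      # e.g. feature pulls toward AWS consumers (placeholder)
monthly_gb_gcp_to_aws = 5_000       # e.g. inference results flowing back (placeholder)

# Assumption: each GB is metered once on the side where it originates.
capacity = HOURS_PER_MONTH * sum(CAPACITY_RATE_PER_HOUR.values())
transfer = (monthly_gb_aws_to_gcp * TRANSFER_RATE_PER_GB["aws_out"]
            + monthly_gb_gcp_to_aws * TRANSFER_RATE_PER_GB["gcp_out"])
total = capacity + transfer

print(f"capacity: ${capacity:,.0f}/mo  transfer: ${transfer:,.0f}/mo  total: ${total:,.0f}/mo")
print(f"blended cost per GB: ${total / (monthly_gb_aws_to_gcp + monthly_gb_gcp_to_aws):.4f}")
```

Swap in real SKUs once they publish, and compare the blended cost per GB against your current NAT and public-egress path for the same flow.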
Security and governance guardrails
Security posture doesn’t magically improve just because the links are private. Keep these non‑negotiables:
- IP hygiene first. No overlapping CIDRs between spoke VPCs and projects. If you’re already boxed in, carve new ranges and migrate incrementally; a quick overlap check is sketched after this list.
- Least privilege routing. Advertise only the prefixes a consumer needs. Resist “0.0.0.0/0 because it’s Friday.”
- Segregate trust zones. Place regulated data flows on dedicated attachments with separate inspection and logging policies.
- End‑to‑end encryption. Treat the interconnect as secure transport, but maintain TLS between services. Termination should live where you can audit it.
- Observability baked in. Mirror NetFlow/Flow Logs, wire up health SLOs, and keep black‑hole detection in place to catch route leaks.
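Here’s the overlap check referenced above: a small pre-flight script using Python’s ipaddress module, with illustrative ranges you’d swap for your own IPAM export.

```python
# Sketch: verify that no spoke VPC/project CIDRs overlap before you advertise
# anything across the link. The ranges below are illustrative placeholders.
from ipaddress import ip_network
from itertools import combinations

ADVERTISED_RANGES = {
    "aws-data-spoke": "10.10.0.0/16",
    "aws-shared-services": "10.11.0.0/16",
    "gcp-feature-store": "10.20.30.0/24",
    "gcp-analytics": "10.21.0.0/16",
}

overlaps = [
    (a, b)
    for (a, net_a), (b, net_b) in combinations(ADVERTISED_RANGES.items(), 2)
    if ip_network(net_a).overlaps(ip_network(net_b))
]

if overlaps:
    raise SystemExit(f"Overlapping CIDRs, fix before advertising: {overlaps}")
print("No overlaps; safe to advertise.")
```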
A 72‑hour pilot plan you can run this week
Here’s a concrete pilot that won’t block your roadmap and will produce real numbers for leadership.
Day 0: choose the right slice
Pick one workload with a narrow, high‑value flow: for example, an AWS Lambda or ECS service calling a retrieval API in Google Cloud, or a batch ETL job pulling features from BigQuery. Confirm the preview region pair is supported for both clouds. Define success: p95 latency target, error budget, and a cost ceiling per GB.
Day 1: provision and ring‑fence
1) Allocate non‑overlapping /24s for the test.
2) In AWS, stand up Interconnect – multicloud and attach it to a dedicated Transit Gateway or a small VPC.
3) In Google Cloud, create the Cross‑Cloud Interconnect attachment and bind it to a VPC with a Cloud Router advertising only the test subnets.
4) Write explicit firewall/security group rules for the single flow.
5) Deploy synthetic probes and flow logs (a flow‑log sketch follows these steps).
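For step 5, flow logs on the AWS side are one boto3 call; the VPC ID, log group, and IAM role below are placeholders. The synthetic probe itself is sketched in the performance section above.

```python
# Sketch for step 5: enable VPC Flow Logs on the pilot VPC so every
# cross-cloud flow is captured. All identifiers are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],          # pilot VPC (placeholder)
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/crosscloud/pilot/flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",  # placeholder
    MaxAggregationInterval=60,                      # 1-minute records for a short pilot
)
```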
Day 2: cut traffic and measure
Shift 10–20% of real traffic through the private path using weighted DNS or service mesh routing. Track p95 and p99 latency, TLS handshakes, and retransmits. Watch route health. Note any packet size issues; if your MTU assumptions are wrong, you’ll see it quickly.
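One way to do the weighted shift, assuming Route 53 hosts the internal record; the zone ID, record name, and targets are placeholders, and a service mesh split avoids DNS TTL lag if you already run one.

```python
# Sketch: send ~20% of traffic to the private path with weighted DNS records.
# Hosted zone, record name, and targets are placeholders.
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0123456789EXAMPLE"           # placeholder
RECORD_NAME = "features.internal.example.com."  # placeholder

def weighted_record(identifier: str, target: str, weight: int) -> dict:
    """Build an UPSERT change for one weighted CNAME."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": RECORD_NAME,
            "Type": "CNAME",
            "SetIdentifier": identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": target}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Pilot: 80/20 split between public path and private interconnect",
        "Changes": [
            weighted_record("public-path", "features-public.example.com", 80),
            weighted_record("private-path", "features-private.example.com", 20),
        ],
    },
)
```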
Day 3: failover and finalize
Pull one leg of the interconnect (provider console or API) and observe. Record time to full recovery and any connection spikes. Then, draft your “go/no‑go” with measured latency, availability, and preliminary cost per GB. If the numbers beat your current setup—or even just de‑risk it—socialize a staged rollout plan.
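A simple way to put a number on “time to full recovery” is to leave a poller running while you pull the leg; the host and port below are placeholders for your far-side test endpoint.

```python
# Sketch: run this during the Day 3 failover drill. It polls the far-side
# endpoint once a second and reports how long the path was dark.
import socket
import time

TARGET = ("10.20.30.15", 443)   # far-side service (placeholder)
INTERVAL = 1.0

outage_started = None
print("Polling; Ctrl+C to stop.")
try:
    while True:
        t0 = time.monotonic()
        try:
            with socket.create_connection(TARGET, timeout=2):
                pass
            if outage_started is not None:
                print(f"Recovered after {time.monotonic() - outage_started:.1f}s")
                outage_started = None
        except OSError:
            if outage_started is None:
                outage_started = time.monotonic()
                print("Path down; timing recovery...")
        time.sleep(max(0.0, INTERVAL - (time.monotonic() - t0)))
except KeyboardInterrupt:
    pass
```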
How this stacks against third‑party options
Many teams rely on Cloudflare, Megaport, Equinix, or carrier solutions for cross‑cloud. Those will remain important, especially where you also connect on‑prem colos or need advanced traffic management. The new managed links shine when you want native automation, fewer vendors in change windows, and tight integration with cloud‑native routing and policy. If you’re already deep into a third‑party overlay, don’t rip it out—start by moving a single latency‑sensitive flow onto the managed private path and compare health and cost. If you’re benchmarking container platforms across vendors, our breakdown of Cloudflare container pricing trade‑offs can help you spot hidden network and egress dynamics before you scale.
Developer experience: what will actually change for you
From a developer’s seat, the biggest unlock is predictability. You’ll get:
- Stable private endpoints for cross‑cloud calls without the flakiness of public internet routes.
- Less YAML sprawl. Platform teams can provision attachments as code once; app teams consume a short list of service endpoints.
- Cleaner failure modes. When something breaks, there’s a single pane to check link health before spelunking through dashboards.
For AI builders, this also pairs well with the latest accelerator options. If you’re pinning training to the newest GPU SKUs and serving elsewhere, a private, redundant backbone is the glue. Our analysis of EC2 P6‑B300 availability for builders explains why fast, deterministic links to feature stores and evaluation data matter as you scale.
Risks, limitations, and gotchas
It’s a preview. Expect edges to be a little sharp. A few realities to plan for:
- Region coverage isn’t universal. If your primary regions aren’t in the preview set, you’ll either wait or pilot in adjacent regions and accept the latency hit for testing.
- Azure support is not here yet. It’s hinted for later adoption. If you’re tri‑cloud, you’ll run hybrid patterns for a while.
- Overlap is painful. If you’ve allowed overlapping RFC1918 ranges across accounts and projects, fix that first. Otherwise, you’ll end up NAT‑stacking and losing the benefits.
- Observability takes work. The links are managed, but SLOs are yours. Without per‑flow telemetry, you’ll struggle to prove value.
- Costs can surprise. Private != free. Watch capacity commit and bilateral data transfer. Treat it like any other critical shared resource with budgets and alerts.
People also ask: quick answers
Does this kill partner interconnects and carriers?
No. If you need on‑prem‑to‑cloud‑to‑cloud paths with strict traffic engineering, partners and carriers remain vital. The new managed option reduces toil for cloud‑to‑cloud paths and tightens integration with native policy and routing.
Will this eliminate egress fees?
No. You’re paying for private capacity and data transfer on both sides. The advantages are deterministic bandwidth, fewer moving parts, and better automation—not zero cost.
Can I run disaster recovery solely over this?
You can use it as a core plank, but DR still needs independent DNS, data replication strategies, and application‑level failover tests. Treat the interconnect as a reliable lane, not your entire DR plan.
How fast is it compared to public internet?
Often faster and more consistent, but the exact latency depends on the interconnect locations you choose and your traffic path. Measure your own p95/p99 and alert on deviations, not on assumptions.
Governance checklist for day‑2 operations
Bake these into your platform repo before broad rollout:
- Connection as code. Terraform/CloudFormation/Deployment Manager modules that standardize attachments, route filters, and tags.
- Budget guardrails. Per‑link budgets with automated down‑shift of bandwidth tiers or alerts when a flow exceeds its envelope.
- Security invariants. Policy tests that fail a PR if a spoke advertises disallowed CIDRs or requests 0.0.0.0/0 (see the test sketch after this checklist).
- Health SLOs. Synthetic probes per attachment with dashboard tiles for latency, drop, and BGP state, plus weekly anomaly detection reports.
- Runbooks. One‑page failover guides for NOC/SRE, including who to page on each provider side.
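The security-invariant item is easy to wire into CI. Here’s a pytest-style sketch: the allocations and proposed advertisements are placeholders, and in a real pipeline violations() would run against the config generated by your connection-as-code modules.

```python
# Sketch: a CI policy check that fails the build when a spoke asks to
# advertise a default route or a prefix outside its allocated block.
from ipaddress import ip_network

ALLOCATIONS = {                                   # placeholder IPAM allocations
    "aws-data-spoke": ip_network("10.10.0.0/16"),
    "gcp-feature-store": ip_network("10.20.30.0/24"),
}

def violations(proposed: dict[str, list[str]]) -> list[str]:
    """Return human-readable violations for the proposed advertisements."""
    problems = []
    for spoke, prefixes in proposed.items():
        for prefix in prefixes:
            net = ip_network(prefix)
            if net.prefixlen == 0:
                problems.append(f"{spoke}: default route {prefix} is never allowed")
            elif spoke not in ALLOCATIONS or not net.subnet_of(ALLOCATIONS[spoke]):
                problems.append(f"{spoke}: {prefix} is outside its allocated block")
    return problems

# In CI, run violations() against the PR's proposed config and fail on any hit.
def test_policy_blocks_default_route():
    assert violations({"gcp-feature-store": ["0.0.0.0/0"]})

def test_policy_allows_scoped_prefix():
    assert not violations({"aws-data-spoke": ["10.10.5.0/24"]})
```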
What to do next
Leaders: charter a 30‑day evaluation with a clearly scoped workload and a success yardstick. Engineers: run the 72‑hour pilot above and bring data back. Finance: set provisional envelopes for capacity and data transfer. Security: pre‑approve the CIDR plan and SLO baseline so you’re not the bottleneck in week two. If you want experienced hands to accelerate the pilot and codify day‑2 guardrails, our cloud architecture team can help.
Cross‑cloud isn’t going away. This launch gives us cleaner primitives to do it right—fewer ad‑hoc routers, fewer tickets, and a lot less guesswork. Use it to simplify the plumbing so you can focus on the part that matters: shipping reliable features faster.
Want more ways to keep infrastructure spend predictable as you modernize networks and run AI across clouds? See our breakdown of the real cost of container platforms on global networks and our guidance to simplify NAT and egress costs. If you’re aligning compute choices to network realities, our analysis of EC2 P6‑B300 for builders puts real‑world numbers behind AI‑heavy decisions. When you’re ready to road‑map your next move, talk to our team.
