Multicloud networking just got a real upgrade. On November 30, 2025, AWS announced Interconnect – multicloud in preview, with Google Cloud as the first launch partner. Google is supporting the other side through Cross‑Cloud Interconnect. The upshot: you can establish private, high‑speed links between your Amazon VPCs and Google VPC networks in minutes instead of the weeks of quoting, ordering, and cross‑provider ticket wrangling we’ve all lived through. For teams who actually run production, this is more than press‑release confetti—it changes how we design for resilience, latency, and cost.
What just launched—and why it matters
The new link is a partner integration: AWS Interconnect – multicloud (in preview across five AWS Regions) pairs with Google’s Cross‑Cloud Interconnect. The promise is straightforward: dedicated, private capacity with built‑in resiliency, managed from each cloud’s console/APIs. For AWS customers, it snaps into things you already know: Amazon VPC, Transit Gateway, and Cloud WAN. On the Google side, you choose 10‑Gbps or 100‑Gbps circuits, attach VLANs, and exchange routes over BGP.
Why it matters now: the window between a cloud hiccup and a headline is short. On October 20, 2025, a widely publicized outage reminded every executive why single‑provider risk needs a board‑level answer. A first‑party, supported path between major clouds gives you another lever: route traffic where it’s healthy without hair‑pinning through the public internet or maintaining fragile DIY cross‑connects.
How the architecture works (and where the sharp edges are)
Think of it as managed, cross‑cloud L3 under your control. You provision circuits (10 or 100 Gbps), map VLAN attachments, and advertise prefixes with BGP on each side. Typical topologies look like:
• Hub‑and‑spoke: Transit Gateway or Cloud WAN in AWS connects to a Google VPC hub (Cloud Router) that redistributes routes to spokes.
• Active‑active: Dual circuits in each cloud with ECMP and health‑based traffic engineering.
• Segmented: Separate VLAN attachments per trust zone (prod, non‑prod, regulated), each with constrained route maps.
Here’s the thing: it’s easy to push a bad route and burn down your east‑west traffic. Put guardrails in place: prefix‑lists, max‑prefix limits, MED/LocalPref policies, and explicit blackhole routes for “never” paths. Also verify MTU end‑to‑end; 9K support exists, but a single 1500‑byte hop in the path will silently fragment or drop jumbo packets and crush your throughput.
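To make that concrete, here is a minimal pre‑deployment check of the kind you could wire into CI. It is a sketch, not a vendor API: the segment names, allowed aggregates, and limits are hypothetical placeholders for the policy you keep in Git.

```python
# Pre-deployment sanity check for BGP advertisements (illustrative sketch).
# The segment names, allowed aggregates, and limits are placeholders;
# load your real policy from the Git-backed source of truth instead.
import ipaddress

POLICY = {
    "prod": {
        "allowed_aggregates": ["10.40.0.0/14"],  # hypothetical prod supernet
        "max_prefix_len": 24,                    # never advertise longer than /24
        "max_prefixes": 50,                      # stay well under the session's max-prefix
    },
    "non-prod": {
        "allowed_aggregates": ["10.60.0.0/14"],
        "max_prefix_len": 24,
        "max_prefixes": 100,
    },
}

def check_advertisements(segment: str, prefixes: list[str]) -> list[str]:
    """Return a list of policy violations; an empty list means the change is safe to apply."""
    rules = POLICY[segment]
    aggregates = [ipaddress.ip_network(a) for a in rules["allowed_aggregates"]]
    violations = []
    if len(prefixes) > rules["max_prefixes"]:
        violations.append(f"{len(prefixes)} prefixes exceeds limit {rules['max_prefixes']}")
    for p in prefixes:
        net = ipaddress.ip_network(p)
        if net.prefixlen > rules["max_prefix_len"]:
            violations.append(f"{p}: longer than /{rules['max_prefix_len']}")
        if not any(net.subnet_of(agg) for agg in aggregates):
            violations.append(f"{p}: outside allowed aggregates for '{segment}'")
    return violations

if __name__ == "__main__":
    proposed = ["10.40.12.0/24", "10.99.0.0/16"]  # the second prefix should be rejected
    for issue in check_advertisements("prod", proposed):
        print("BLOCK:", issue)
```

Run something like this against every proposed change before it reaches Transit Gateway or Cloud Router; a rejected prefix in CI is a lot cheaper than a leaked one in production.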
Multicloud networking vs. DIY: what’s different this time?
Until now, most teams have used one of three approaches: (1) VPNs over the internet, (2) colocation routers with physical cross‑connects, or (3) partner fabrics. Each has trade‑offs. The new first‑party link streamlines three pain points:
• Provisioning time: minutes via console/API instead of weeks of tickets and LOAs.
• Operability: native metrics, alarms, and topology awareness in both clouds, not a third NOC.
• Blast radius: built‑in redundancy and clear provider ownership reduce the “gray failures” you can’t pin on anyone.
But there’s a catch: preview is still preview. Treat this as production‑adjacent until your specific region pair, bandwidth, and failover behavior meet your SLOs in real tests.
Multicloud networking pricing: what you’ll actually pay
Let’s get practical about the line items you can model today on the Google side, where published prices are clear as of December 2025 (a quick back‑of‑the‑envelope model follows the list):
• Cross‑Cloud Interconnect circuits: roughly $5.60/hour for 10‑Gbps and $30/hour for 100‑Gbps.
• VLAN attachments: about $0.10/hour for 1–10‑Gbps, scaling modestly for higher tiers.
• Data transfer (U.S. to U.S.): about $0.02/GB leaving Google Cloud through the interconnect.
• Inbound to Google typically isn’t charged by Google, but resources that process traffic can bill you.
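Before the pilot, it is worth a back‑of‑the‑envelope pass using those figures. The sketch below plugs in the approximate rates listed above (verify them against the current price lists) together with hypothetical workload inputs you would swap for your own.

```python
# Rough monthly cost model for the Google side of a dual-circuit setup,
# using the approximate published rates quoted above; verify against the
# current price list before budgeting.
HOURS_PER_MONTH = 730

def gcp_side_monthly_cost(
    circuits_10g: int = 2,               # dual 10-Gbps circuits for redundancy
    vlan_attachments: int = 2,           # e.g. prod and non-prod
    egress_gb_from_gcp: float = 50_000,  # hypothetical monthly GB leaving Google
    circuit_rate_10g: float = 5.60,      # $/hour per 10-Gbps circuit
    attachment_rate: float = 0.10,       # $/hour per 1-10 Gbps VLAN attachment
    egress_rate: float = 0.02,           # $/GB, US-to-US over the interconnect
) -> float:
    circuits = circuits_10g * circuit_rate_10g * HOURS_PER_MONTH
    attachments = vlan_attachments * attachment_rate * HOURS_PER_MONTH
    egress = egress_gb_from_gcp * egress_rate
    return circuits + attachments + egress

if __name__ == "__main__":
    print(f"Estimated Google-side monthly cost: ${gcp_side_monthly_cost():,.2f}")
```

At these rates the fixed circuit hours dominate; egress only catches up once you are moving hundreds of terabytes a month, which is exactly the crossover to understand before paying for 100‑Gbps circuits.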
On the AWS side, Interconnect – multicloud is in preview; expect pricing to land near other dedicated connectivity models with region‑specific nuances. Your job this week is to build a small, instrumented pilot and measure actual throughput, jitter, and cross‑cloud egress against your patterns—CDNs, databases, streaming, or RPC calls—rather than trusting theoretical math.
Will this end DIY VPNs and colo cross‑connects?
Not entirely. There are real reasons to keep IPsec tunnels or MACsec‑protected waves in play—especially for regulated workloads or where you need very specific encryption or carrier diversity. The new link reduces the need for DIY in many mainstream cases, but you’ll still layer controls: per‑segment VLANs, BGP communities, and policy‑based routing. And if you already have private interconnects in an Equinix or Digital Realty cage, keep them until you can prove equal or better SLOs with simpler ops.
What about Azure?
AWS has said Azure support is slated after the Google integration, with public indications pointing to 2026. If Azure is a must for you, design your network so the AWS↔Google path is modular—drop‑in replaceable with an AWS↔Azure path later. Keep your route policy and segmentation identical across providers to avoid a second refactor when Azure comes online.
Design decisions you should make before a pilot
• Segmentation strategy: Decide the trust boundaries now. At minimum, separate production from non‑production and regulated from unregulated. Use unique BGP communities per segment.
• Source of truth: Put prefixes, ASNs, and communities in Git or your network source of truth—no console‑only changes.
• MTU policy: Standardize on one MTU end to end, either 1500 or the largest jumbo size both sides support; don’t mix. Validate path MTU with automated probes (see the probe sketch after this list).
• Health and failover: Define the exact signal that triggers failover (BFD loss, path latency, packet loss over N seconds), and test it under load.
• Observability: Export flow logs and interconnect metrics to both clouds’ logging stacks; align retention with incident review needs.
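For the MTU item, here is one way to automate the probe from a Linux host on either side: binary‑search the largest ping that survives with the don’t‑fragment bit set. It assumes iputils ping and a reachable far‑side address (10.200.0.10 below is a placeholder); the tracepath utility can report path MTU as well.

```python
# Binary-search the largest ICMP payload that makes a round trip with the
# don't-fragment bit set (Linux iputils ping: -M do). The target address is
# a placeholder for an instance on the far side of the interconnect.
import subprocess

def ping_df(target: str, payload: int) -> bool:
    """True if a single don't-fragment ping of `payload` bytes gets a reply."""
    result = subprocess.run(
        ["ping", "-M", "do", "-c", "1", "-W", "1", "-s", str(payload), target],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def probe_path_mtu(target: str, low: int = 1200, high: int = 8973) -> int:
    """Largest working ICMP payload plus 28 bytes of IP+ICMP headers, or 0 if nothing works."""
    best = 0
    while low <= high:
        size = (low + high) // 2
        if ping_df(target, size):
            best, low = size, size + 1
        else:
            high = size - 1
    return best + 28 if best else 0

if __name__ == "__main__":
    print("Path MTU toward far side:", probe_path_mtu("10.200.0.10"))  # hypothetical peer
```

Run it in both directions after any attachment or MTU change; asymmetric MTU is a classic silent throughput killer.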
The resilience angle developers actually feel
Developers don’t care about carrier contracts—they care that cart checkout, auth, and analytics stay fast. The new path helps kill classic gotchas: long‑haul internet paths between clouds, noisy tunnels under peak load, and asymmetric routing that breaks mTLS. If you’re modernizing, pair this with service mesh policies that prefer the private path and fall back gracefully when it degrades.
Performance and SLOs: what’s realistic?
With 10‑Gbps circuits, teams typically target 6–8 Gbps sustained for mixed app traffic once you account for TCP overhead, encryption, and app‑level pauses. With 100‑Gbps circuits and jumbo frames, you can push much higher line‑rate numbers for batch data. Latency will track metro/regional fiber, not internet weather. Translate that into app outcomes: how many RPCs can you do per second and still hit p95 under 60 ms? How quickly can you drain traffic from a degraded region without cascading timeouts?
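To turn that question into a number, the arithmetic is simple enough to keep in a script next to your SLOs. Everything below is a hypothetical budget; the RTT and service times are placeholders for your measured p95s.

```python
# How many sequential cross-cloud calls fit inside a latency budget?
# All inputs are illustrative; substitute measured p95 values from your pilot.
def max_sequential_calls(budget_ms: float, cross_cloud_rtt_ms: float,
                         per_call_service_ms: float, local_work_ms: float) -> int:
    """Sequential (non-parallelized) cross-cloud round trips that still fit the budget."""
    per_call = cross_cloud_rtt_ms + per_call_service_ms
    return max(0, int((budget_ms - local_work_ms) // per_call))

if __name__ == "__main__":
    # e.g. a 60 ms p95 budget, ~5 ms metro RTT, 8 ms remote service time, 15 ms local work
    print(max_sequential_calls(budget_ms=60, cross_cloud_rtt_ms=5,
                               per_call_service_ms=8, local_work_ms=15))  # -> 3
```

Three or four sequential cross‑cloud hops eat a 60 ms budget quickly, which is why chatty call chains usually need batching, caching, or co‑location even over a fast private path.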
Security posture: defense‑in‑depth, even over private links
Private doesn’t mean trusted. Keep controls layered: security groups/ACLs, L7 policies, mTLS between services, and narrow route advertisements. Use a separate VLAN attachment per segment, each with its own BGP session and keys plus explicit deny‑lists, to limit lateral movement. On the operational side, set max‑prefix limits and alert well before they trip. Rotate any shared secrets used by automation that touches both sides.
Risk and edge cases to plan for
• Region coverage: Preview availability may not match your current production regions. Don’t stretch; stage pilots where the feature exists.
• Asymmetric costs: Egress pricing differs by direction. Model steady‑state and failover scenarios; a bad day shouldn’t surprise your CFO.
• Route leaks: A single fat‑fingered aggregate can expose private subnets cross‑cloud. Automate policy checks pre‑deployment.
• Vendor updates: Preview features evolve. Pin tested versions of CLIs/SDKs and watch release notes before patch windows.
A pragmatic 30‑day pilot plan (no heroics)
Week 1: Scope and guardrails
• Pick one low‑risk, high‑signal use case: analytics ingest into GCP, or a read‑heavy service on AWS calling a GCP API.
• Define SLOs: throughput target, p95 latency, loss, and RTO for failover.
• Lock routing policy in Git; add CI checks for prefix length, max‑prefix, and communities.
Week 2: Provision and baseline
• Stand up dual 10‑Gbps circuits with two VLANs (prod/non‑prod).
• Validate MTU and BFD; record idle and load baselines.
• Start shadow traffic: duplicate a slice of production calls or batch transfers (a mirroring sketch follows this list).
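One lightweight way to run that shadow slice without touching user responses: sample a fraction of read‑only calls, mirror them to the far‑side endpoint, and log both latencies. The endpoints and sample rate below are placeholders, and a service mesh mirror policy achieves the same thing with configuration instead of code.

```python
# Mirror a fraction of read-only requests to the cross-cloud endpoint and
# compare latencies. Endpoints and sampling rate are placeholders.
import random
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

PRIMARY = "https://svc.internal.example/api/items"      # in-cloud path (placeholder)
SHADOW = "https://svc.gcp.internal.example/api/items"   # cross-cloud path (placeholder)
SAMPLE_RATE = 0.05                                      # mirror 5% of calls
_pool = ThreadPoolExecutor(max_workers=8)

def _timed_get(url: str) -> float:
    """Fetch the URL and return elapsed milliseconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=5) as resp:
        resp.read()
    return (time.monotonic() - start) * 1000.0

def _shadow(primary_ms: float) -> None:
    try:
        shadow_ms = _timed_get(SHADOW)
        print(f"shadow_ms={shadow_ms:.1f} primary_ms={primary_ms:.1f}")
    except OSError as exc:
        print(f"shadow_failed={exc} primary_ms={primary_ms:.1f}")

def handle_request() -> float:
    """Serve from the primary path; opportunistically mirror to the far side."""
    primary_ms = _timed_get(PRIMARY)
    if random.random() < SAMPLE_RATE:
        _pool.submit(_shadow, primary_ms)  # fire-and-forget; never blocks the caller
    return primary_ms
```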
Week 3: Failure drills under load
• Induce path loss (disable a circuit), watch ECMP and app behavior.
• Flip primary/secondary preferences with policy; measure re‑convergence and customer impact (a simple probe loop follows this list).
• Capture cost telemetry: egress by direction, circuit hours, and attachment hours.
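For the drills, a dumb probe loop is often the clearest way to put a number on re‑convergence: hit a far‑side health endpoint at a fixed interval and report how long it stayed dark. The endpoint below is a placeholder.

```python
# Measure re-convergence during a failover drill: probe a far-side endpoint at a
# fixed interval and report how long requests failed. The endpoint is a placeholder.
import time
import urllib.request

TARGET = "http://10.200.0.10:8080/healthz"  # hypothetical far-side health endpoint
INTERVAL_S = 0.2

def measure_outage(duration_s: float = 300) -> None:
    outage_start = None
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(TARGET, timeout=1) as resp:
                resp.read()
            if outage_start is not None:
                print(f"re-converged after {time.monotonic() - outage_start:.1f}s")
                outage_start = None
        except OSError:
            if outage_start is None:
                outage_start = time.monotonic()
        time.sleep(INTERVAL_S)

if __name__ == "__main__":
    measure_outage()  # run while you disable a circuit or flip BGP preferences
```

Run it from both clouds while you pull a circuit; the gap it reports is the RTO your app teams will actually feel.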
Week 4: Bake‑off and go/no‑go
• Compare app KPIs vs. your current method (VPN or colo).
• Run a steady‑state test at 1.5x normal volume during off‑peak.
• Decide: keep piloting, expand to more segments, or park it until your regions are covered.
Checklist: Multicloud Interconnect readiness
• IPAM hygiene: no overlapping CIDRs, documented ownership for every prefix (an overlap‑check sketch follows this checklist).
• BGP: ASN assignments, max‑prefix limits, route‑map conventions, and MED/LocalPref rules checked into Git.
• MTU: end‑to‑end verified; jumbo if you need throughput, standard if you need simplicity.
• Observability: per‑segment metrics, flow logs, packet loss, and re‑convergence timers in a shared dashboard.
• Security: least‑privilege routing, ACLs, and mTLS between services.
• Cost: trackers for circuit hours, attachment hours, and egress by direction; alerts on spikes.
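The IPAM item at the top of that checklist is also the easiest to automate once allocations live in a machine‑readable source of truth; the table below is a made‑up example.

```python
# Detect overlapping CIDR allocations across both clouds before you connect them.
# The allocation table is a stand-in for your IPAM / Git source of truth.
import ipaddress
from itertools import combinations

ALLOCATIONS = {
    "aws-prod-vpc": "10.40.0.0/16",
    "aws-analytics-vpc": "10.41.0.0/16",
    "gcp-prod-vpc": "10.60.0.0/16",
    "gcp-data-vpc": "10.40.128.0/17",  # deliberately overlaps aws-prod-vpc
}

def find_overlaps(allocations: dict[str, str]) -> list[tuple[str, str]]:
    """Return every pair of named allocations whose CIDR ranges overlap."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in allocations.items()}
    return [
        (a, b)
        for (a, net_a), (b, net_b) in combinations(nets.items(), 2)
        if net_a.overlaps(net_b)
    ]

if __name__ == "__main__":
    for a, b in find_overlaps(ALLOCATIONS):
        print(f"OVERLAP: {a} ({ALLOCATIONS[a]}) <-> {b} ({ALLOCATIONS[b]})")
```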
People Also Ask
How fast can we set up the link?
If your accounts and permissions are ready, you’re looking at minutes—not the weeks of vendor back‑and‑forth typical of physical cross‑connects. Budget more time for routing policy reviews and security checks than for the provisioning itself.
Is the private path encrypted?
The transport is private, and you can layer encryption. Many teams keep mTLS between services regardless of underlay. For hybrid topologies, IPsec over the interconnect remains an option when you need explicit cryptographic controls.
How does this compare to internet egress?
Internet egress is cheaper in some cases but less predictable under stress. The interconnect’s value is consistent latency, throughput headroom, and fault isolation. Run your numbers: if you move steady east‑west traffic or need fast failover, the private path can be net‑cheaper when you include downtime and toil.
Cost control tactics that actually work
• Start with 10‑Gbps circuits; scale to 100‑Gbps only after proving saturation.
• Pin data gravity: keep chatty databases local; use the interconnect for deliberate API/RPC calls and batch transfers.
• Schedule the heavy lifts: batch windows can exploit lower contention and predictable circuits.
• Watch directional bias: if most bytes leave Google, your bill will cluster there; tilt workloads accordingly.
Zooming out: what this changes for platform teams
Platform teams get a simpler default for east‑west traffic between clouds. You can centralize routing intents in one place, align security controls, and stop treating “cloud‑to‑cloud” like a special project. The move also aligns with the broader industry push to make switching and interop less punitive—good for competition, great for customers.
Where this connects with your AWS networking cleanup
If you’re simplifying NAT, egress, and IP plans already, this launch is the perfect forcing function. For example, AWS’s new Regional NAT Gateway reduces egress sprawl and route‑table churn; combine that with a clean interconnect strategy and you’ll cut both ops noise and surprise bills. We wrote about that shift in detail here: Regional NAT Gateway explained.
What to do next
• Pick one real workload and run the 30‑day pilot above.
• Write down your SLOs before you touch a console.
• Keep DIY paths in place until you’ve proved equal or better performance.
• Socialize results with finance and security early; directional egress and segmentation decisions are joint calls.
• If you want a second set of eyes, our team can help assess your network plan and run a safe pilot. See our cloud networking services, browse the portfolio, and catch related posts on the engineering blog. Or just reach out via contacts.
Final take
We’ve all been promised multicloud that isn’t a science project. This is the closest we’ve come. A first‑party, private link between AWS and Google Cloud won’t magically erase architectural debt, but it lets you design with fewer unknowns and fewer vendors in the loop. Treat the preview like sharp tools: start small, instrument deeply, and make it earn its place in your production path. If it does, you’ll have faster failovers, tighter SLOs, and fewer 2 a.m. pages when the internet has a mood.