
AWS Interconnect + Google: What Changes Now

AWS and Google just turned multicloud from a months-long network build into something you can spin up in minutes. With AWS Interconnect linking directly to Google’s Cross‑Cloud Interconnect, private bandwidth and managed resiliency move out of your rack and into the providers’ backbones. If you run AI pipelines across vendors, hedge outage risk, or simply want cleaner data gravity, this matters. Here’s what shipped, what’s still preview‑only, the real costs and gotchas, and a concrete 30‑day pilot plan.
Published Dec 07, 2025 · Cloud Infrastructure · 11 min read

On November 30, 2025, Amazon Web Services introduced AWS Interconnect (multicloud) in preview, with Google Cloud as the first launch partner. The headline: you can stand up managed, private links between AWS VPCs and Google VPCs in minutes—dedicated bandwidth, provider‑operated resiliency, and none of the traditional Direct Connect plus third‑party backbone gymnastics. Azure connectivity is slated to follow in 2026. (aws.amazon.com)

Google’s side of the bridge uses Cross‑Cloud Interconnect with an “on‑demand” workflow. During preview, bandwidth starts at 1 Gbps, with the roadmap pointing to 100 Gbps at general availability. In other words, the era of waiting on cross‑cloud circuits for weeks is ending. (cloud.google.com)

Illustration of AWS and Google clouds connected by private fiber links

What exactly shipped—and what didn’t

AWS Interconnect is a managed, private, high‑speed link between your Amazon VPCs and another cloud, starting with Google Cloud. It rides provider backbones, exposes a simple attachment object in AWS, and plugs cleanly into Transit Gateway, Cloud WAN, and VPC routing. You provision through the console or API, pick a destination provider/region, choose bandwidth, and get a single attachment that represents the capacity pool. (aws.amazon.com)
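As a sketch, the provisioning inputs just described (destination provider, regions, bandwidth) reduce to a small config object. The field names below are hypothetical, not the actual AWS API shape; only the 1 Gbps preview constraint comes from the launch materials:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class InterconnectAttachment:
    """Hypothetical model of one attachment request -- not an AWS schema."""
    destination_provider: str   # e.g. "google"
    aws_region: str             # e.g. "us-east-1"
    destination_region: str     # e.g. "us-east4"
    bandwidth_gbps: int

    def validate(self) -> None:
        # Preview constraint from the article: 1 Gbps per attachment.
        if self.bandwidth_gbps != 1:
            raise ValueError("preview supports 1 Gbps per attachment")


att = InterconnectAttachment("google", "us-east-1", "us-east4", 1)
att.validate()
```

Keeping attachments as declarative objects like this in IaC makes the later guardrail and cost-tagging steps easier to enforce.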

In preview, AWS lists support across five regions, with published examples including US East (N. Virginia) to Google’s N. Virginia and US West (N. California) to Google’s Los Angeles—useful for low‑latency U.S. east/west topologies while the footprint expands. Provisioning claims “minutes,” not days. (docs.aws.amazon.com)

Here’s the thing: this is not just old‑school IPsec or you rolling your own fabric. AWS and Google are operating the physical interconnect, handling capacity ahead of demand, and encrypting at the physical layer between provider routers. That shifts both operational burden and blast radius. (aws.amazon.com)

Why this matters now

After a high‑profile AWS outage in October 2025, a lot of boards started asking pointed questions about single‑vendor exposure. Analysts estimated U.S. losses between $500 million and $650 million for affected businesses. A managed cross‑cloud link that can be spun up quickly—and run as a first‑class service—hits that concern squarely. (reuters.com)

Beyond resilience, the most common driver is AI workload placement. Model training in one cloud, vector search or feature stores in another, and a data lake that refuses to move because your egress bill taps you on the shoulder every month. The new link makes that architecture less brittle by turning “copy data over the public internet” into “private traffic on provider backbones,” with consistent throughput and predictable latency envelopes. Google characterizes setup as minutes, not days; that operational speed compounds when you’re iterating new pipelines. (cloud.google.com)

People also ask: Is AWS Interconnect the same as Direct Connect?

No. Direct Connect is a hybrid link between your premises and AWS; you still stitch clouds together yourself, often through colocation and partner networks. Interconnect is cloud‑to‑cloud, provider‑managed, and abstracts away the physical build. If your current design depends on DC gateways plus third‑party backbone to reach Google, expect a simplification: fewer devices, fewer BGP adjacencies, fewer things to page you at 3 a.m. (docs.cloud.google.com)

Architecture shifts you should plan for

1) Centralized routing without central fragility

Attaching Interconnect to Transit Gateway or Cloud WAN gives you a programmable hub for multi‑region, multi‑account networks. That unlocks shared services across clouds (identity, logging, observability) without tromboning traffic through on‑prem. But treat the new link as a tier‑one dependency: design for failure with multiple attachments, region diversity, and deterministic route priorities. (aws.amazon.com)

2) Traffic engineering for AI and data gravity

Preview bandwidth starts at 1 Gbps and should scale to 100 Gbps at GA. For AI pipelines that burst or stream embeddings, don’t rely on best‑effort. Tag traffic classes, model your peak concurrency, and use multiple attachments if you need higher aggregate throughput or isolation by workload. (cloud.google.com)
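A quick way to size that: divide modeled peak throughput by per‑attachment capacity with a utilization headroom. The 70% headroom below is an assumption for illustration, not a provider guideline:

```python
import math


def attachments_needed(peak_gbps: float,
                       per_attachment_gbps: float = 1.0,
                       headroom: float = 0.7) -> int:
    """How many preview attachments to stripe a peak across, keeping
    each below a utilization headroom (0.7 = 70%, an assumption)."""
    usable = per_attachment_gbps * headroom
    return max(1, math.ceil(peak_gbps / usable))


# A 2.5 Gbps embedding stream needs four 1 Gbps attachments at 70% headroom.
attachments_needed(2.5)
```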

3) Security posture: private by default, still verify

Physical‑layer encryption between provider routers plus private addressing and no public internet hop is a real win. Still enforce your own controls: segment with separate attachments for prod/non‑prod, validate that MTU and MSS don’t degrade TLS, and keep zero‑trust principles at the service layer. (aws.amazon.com)
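The MTU/MSS check is simple arithmetic: TCP MSS is path MTU minus IP and TCP headers, and a shrinking MSS inflates the segment count per TLS record. A minimal sketch:

```python
import math


def tcp_mss(path_mtu: int, ipv6: bool = False) -> int:
    """Max TCP segment size for a path MTU: subtract the IP header
    (20 bytes IPv4 / 40 bytes IPv6) and the 20-byte TCP header."""
    ip_header = 40 if ipv6 else 20
    return path_mtu - ip_header - 20


def segments_per_tls_record(path_mtu: int, record_bytes: int = 16384) -> int:
    """Segments needed to carry one full 16 KiB TLS record. Stacked
    tunnel overhead shrinks the MSS and pushes this count up."""
    return math.ceil(record_bytes / tcp_mss(path_mtu))
```

At a standard 1500-byte MTU a full TLS record spans a dozen segments; jumbo frames cut that dramatically, which is why the end-to-end MTU validation in Week 2 matters for AI payloads.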

A pragmatic 30‑day pilot plan

You don’t need a transformation program to prove value. Here’s a tight pilot we’ve run with similar fabrics that fits into a month of honest effort.

Week 1: Design the slice

Pick one cross‑cloud path that blends low risk with clear value—e.g., sending inference requests from AWS to a Vertex AI endpoint, or syncing a feature store in Google with an event bus in AWS. Define a single traffic class, latency SLO, and a budget ceiling. Document the current path (today) and the target path (with Interconnect).

Decide your attachment topology: one attachment per environment (dev, stage, prod) to start; later, consider per‑domain (AI, analytics, payments) for isolation. Reserve IP, VRF, and route table entries in both clouds.
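Before reserving ranges, it is worth mechanically checking that the prefixes you plan to route from both clouds do not collide. Python's standard ipaddress module does this; the prefixes below are hypothetical examples:

```python
import ipaddress
from itertools import combinations


def find_overlaps(prefixes):
    """Return pairs of CIDRs that overlap. Run this against the full
    set of prefixes you plan to route over the link from both clouds."""
    nets = [ipaddress.ip_network(p) for p in prefixes]
    return [(str(a), str(b)) for a, b in combinations(nets, 2)
            if a.overlaps(b)]


# Hypothetical plan: AWS VPC, Google VPC, and a shared-services range.
plan = ["10.10.0.0/16", "10.20.0.0/16", "10.10.128.0/17"]
# find_overlaps(plan) flags the shared-services range as nested inside
# the AWS VPC prefix -- fix that before provisioning, not after.
```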

Week 2: Stand it up

Provision AWS Interconnect to Google in the closest supported region pair to your workloads; keep it simple—no hairpin through a distant region just to test. Plumb routing through Transit Gateway or Cloud WAN. On Google, configure VPC peering or Private Service Connect to the target service. Validate MTU end‑to‑end (jumbo frames often matter for AI payloads). (aws.amazon.com)

Week 3: Validate

Run iperf3 and real workload traffic. Lock in baseline latency and throughput, then soak test during your busiest hour to catch queueing behavior. Fail attachments intentionally to prove resiliency—if failover surprises you in test, it will terrify you during an incident.
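Reducing a soak‑test run to p50/p95 is a one‑liner with the standard library; feed it real iperf3 or ping samples rather than the synthetic range used here:

```python
import statistics


def latency_baseline(samples_ms):
    """Reduce a soak-test run to the numbers you'll hold the link to.
    quantiles(n=20) yields 5% steps: index 9 is p50, index 18 is p95."""
    qs = statistics.quantiles(samples_ms, n=20)
    return {"p50": qs[9], "p95": qs[18]}


# Synthetic example run; substitute measured round-trip times.
baseline = latency_baseline(list(range(1, 101)))
```

Record the baseline before the soak test, then compare the busiest-hour numbers against it; a p95 that drifts while p50 holds is the classic signature of queueing.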

Week 4: Cost and controls

Instrument cost accounting. Tag the attachment and all cross‑cloud egress routes with cost allocation tags. Apply per‑prefix limits and explicit route priorities so traffic doesn’t quietly drift onto the link. Add guardrails to CI/CD: a check that fails deploys if a service adds a route into the Interconnect without an owner.
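That CI/CD guardrail can be a few lines. The route dictionary shape below is hypothetical (not an AWS API schema); the idea is simply that a deploy fails when an interconnect‑bound route has no owner tag:

```python
def check_interconnect_routes(routes):
    """CI gate sketch: every route targeting the interconnect attachment
    must carry an 'owner' tag. Returns offending destinations; fail the
    deploy if the list is non-empty."""
    return [
        r["destination"]
        for r in routes
        if r.get("target", "").startswith("interconnect-")
        and not r.get("tags", {}).get("owner")
    ]


# Hypothetical route table export: one owned route, one orphan.
routes = [
    {"destination": "10.20.0.0/16", "target": "interconnect-gcp-1",
     "tags": {"owner": "ml-platform"}},
    {"destination": "10.30.0.0/16", "target": "interconnect-gcp-1",
     "tags": {}},
]
```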

What about disaster recovery?

This is one of the cleanest uses. The new AWS‑Google link shortens your path to warm standby: replicate state privately, run health checks across clouds, and fail traffic via DNS or global load balancing when an outage hits. Analysts and reporters emphasized DR as a core benefit in the launch coverage. Treat RTO/RPO as hard goals, not vibes; measure them with game days. (theverge.com)
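One detail worth encoding explicitly in that health-check loop: fail over only after several consecutive failed probes, so a flapping link doesn't bounce traffic back and forth. A minimal sketch, with the threshold as an assumption you should tune against your RTO:

```python
def should_fail_over(probe_results, threshold=3):
    """Trip failover only after `threshold` consecutive failed health
    probes against the primary cloud. probe_results is a list of
    booleans, newest last (True = probe succeeded)."""
    tail = probe_results[-threshold:]
    return len(tail) == threshold and not any(tail)
```

The same logic gates fail-back in the other direction; require a longer run of healthy probes before returning, since restoring traffic is the riskier move.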

Costs: where you’ll win—and where you might not

Expect to save on colocation, cross‑connects, third‑party backbones, and the human time of managing BGP sessions and device upgrades. Managed capacity pools and minutes‑level deployment compress project timelines. On the other hand, dedicated bandwidth isn’t free, and cross‑cloud data transfer still exists. Price cards will evolve through preview; model scenarios before you lock commitments. If your workload is chatty across clouds or moves terabytes daily, consider placing data gravity and compute on the same side so you aren’t paying recurring transfer fees for your own architecture. (aws.amazon.com)
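A back‑of‑envelope model helps here. The rates below are placeholders, not published prices; swap in the current AWS and Google price cards before committing:

```python
def cross_cloud_monthly(gb_per_day: float,
                        egress_per_gb: float,
                        link_fee_monthly: float) -> float:
    """Rough monthly cost: a flat managed-link fee plus metered
    cross-cloud transfer. All rates are placeholder assumptions."""
    return link_fee_monthly + gb_per_day * 30 * egress_per_gb


# Placeholder scenario: 500 GB/day at $0.02/GB plus a $200/mo link fee.
estimate = cross_cloud_monthly(500, 0.02, 200)
```

Run the same formula for the colocated alternative (near-zero transfer, plus migration cost amortized monthly) and compare; the crossover point tells you when to move data instead of traffic.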

Limits and gotchas (read this before you ship)

  • Preview scope: today’s footprint is limited to select region pairs; if you’re heavy in EMEA/APAC outside those pairs, you’ll wait or design interims.
  • Bandwidth headroom: 1 Gbps per attachment in preview is great for control planes and moderate data flows; high‑throughput AI/ETL needs thoughtful striping across multiple attachments until GA scales up.
  • Operational maturity: treat this like any tier‑one provider service—monitor, alert, and keep runbooks.
  • Compliance: document the physical encryption and the private path for your auditors. (cloud.google.com)
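When striping across attachments, pin each flow deterministically to one attachment so its packets stay on a single path and in order. A stable hash (not Python's salted built-in hash()) is one simple way to sketch the assignment:

```python
import hashlib


def attachment_for_flow(flow_id: str, n_attachments: int) -> int:
    """Deterministically map a flow to one of N attachments. Using a
    stable digest keeps the mapping identical across processes and
    restarts, unlike Python's per-run salted hash()."""
    digest = hashlib.md5(flow_id.encode()).hexdigest()
    return int(digest, 16) % n_attachments
```

The trade-off is the usual one for hash-based ECMP-style spreading: a single elephant flow still lands on one attachment, so isolate known heavy flows onto dedicated attachments instead.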

Network operations dashboards showing latency and throughput

How this changes your network strategy in 2026

Two strategic moves accelerate immediately. First, you can standardize on Transit Gateway or Cloud WAN as your AWS edge, treating Interconnect attachments like any other spoke. That tidies routing and lets you codify policy in one place. Second, you can adopt a cloud‑native DR posture without co‑lo sprawl—warm across clouds with proven failover. As Azure joins in 2026, a tri‑cloud network fabric stops being a slideware dream and becomes a repeatable design pattern. (aws.amazon.com)

Hands‑on checklist: the Interconnect readiness scan

Use this to decide if you’re ready to pilot in under a week.

  • Inventory region placement for the two workloads you’ll connect; prefer supported region pairs to avoid extra latency.
  • Confirm route table capacity in Transit Gateway/Cloud WAN and in Google VPCs; prune legacy prefixes you no longer need.
  • Decide attachment segmentation: by environment or by domain. Write it down and stick to it for the pilot.
  • Set SLOs: p50/p95 latency and minimum throughput; define failure testing criteria.
  • Tag everything for cost allocation on day one. Don’t rely on detective work later.
  • Build a backout plan: remove the default route toward Interconnect first, then detach.

FAQ: quick answers for your execs

Does this lock us into AWS or Google?

No more than any managed network product. The open specification is designed for adoption by other providers and partners, and Azure is on deck for 2026. If anything, it lowers switching friction by reducing bespoke plumbing. (aws.amazon.com)

Can we hit 100 Gbps today?

Not during preview. Target 1 Gbps per attachment and plan to scale when GA lands with higher tiers. Parallelize attachments if you must, and keep payloads chunk‑friendly. (cloud.google.com)

Is this secure enough for regulated workloads?

It’s private connectivity with encryption on the physical link between provider routers. You’re still responsible for segmentation, IAM, key management, and data layer controls. Map those to your compliance framework and document the path. (aws.amazon.com)

Let’s get practical: a reference slice

Here’s a pattern that shows immediate value. Place an Amazon API endpoint in front of an application that needs to call a Vertex AI model. Keep auth and rate limiting on AWS, send only the minimal request payload over Interconnect to Google, and return compact results. Cache aggressively on the AWS side, and track p50/p95 across the link. If performance holds and costs stay within your ceiling, promote the slice from pilot to production, then iterate to heavier flows like feature sync or analytics jobs.
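On the AWS side of that slice, “cache aggressively” can start as a small TTL cache in front of the cross‑cloud call. This sketch assumes a generic fetch callable standing in for the Vertex AI request, not a specific SDK:

```python
import time


class TtlCache:
    """Minimal TTL cache for the AWS side of the slice: serve repeat
    reads locally instead of paying the link's latency and transfer
    on every call."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key, fetch):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and now - hit[1] < self.ttl:
            return hit[0]          # fresh: skip the cross-cloud call
        value = fetch(key)         # stale or missing: go over the link
        self._store[key] = (value, now)
        return value
```

Track the cache hit rate alongside the link's p50/p95; a rising hit rate directly offsets both the latency SLO and the transfer bill.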

What to do next

  • Spin a 30‑day pilot with one clear business KPI (latency, cost, RTO/RPO). Don’t boil the ocean.
  • Codify routing and attachment creation in IaC; add guardrails in CI/CD to keep routes intentional.
  • Refresh your DR plan to include active health checks across clouds and practice failovers quarterly.
  • Model data transfer scenarios honestly; colocate data and compute if your flow is too chatty.
  • Track the roadmap to Azure in 2026 and budget for a second attachment family if tri‑cloud is in scope. (aws.amazon.com)

Where we can help

If you want a tight pilot and a straight answer on cost and routing, our team has seen this movie before—without the colocation drama. Start with our Interconnect preview playbook for a deeper architecture primer, browse our portfolio of shipped cloud projects, and see how our cloud networking and platform services turn pilots into production faster. When you’re ready to talk specifics, get in touch.

Data points and dates you can bring to leadership

— AWS announced Interconnect (multicloud) in preview on November 30, 2025; Azure support is planned for 2026. (aws.amazon.com)

— Google’s Cross‑Cloud Interconnect pairing supports on‑demand connections in minutes, with preview bandwidth from 1 Gbps and a target of up to 100 Gbps at GA. (cloud.google.com)

— Early coverage emphasized resilience and DR benefits; the October 2025 AWS outage was estimated to cost U.S. businesses $500M–$650M. (theverge.com)

Angle check: who should adopt first?

If you’re AI‑heavy, latency‑sensitive, or DR‑driven, you’re the early adopter audience. If your estate is mostly single‑cloud web apps with modest cross‑cloud traffic, monitor the region rollout and pricing first; the operational simplicity will still be attractive later. For data platforms with gnarly egress today, test a narrow replication path before you move your world.

Zooming out, the most important shift here isn’t a single feature—it’s two providers aligning on an open spec and putting their weight behind managed multicloud plumbing. That reduces the friction tax teams have quietly paid for years. Your job now is to turn that into faster delivery and cleaner reliability numbers, without over‑rotating into complexity.

Reference architecture diagram: AWS VPC to Google VPC over Interconnect
Written by Viktoria Sulzhyk · BYBOWU
