On November 30, 2025, AWS launched the preview of AWS Interconnect multicloud with Google Cloud as the first partner. If you’ve spent the last few years wrangling DIY circuits, hand-built BGP, and fragile automation to keep two clouds talking, this is the news you’ve been waiting for. The headline: private, high-speed connectivity you can stand up in minutes instead of weeks—managed from the console or API, with dedicated bandwidth and built‑in resiliency. As of December 2, 2025, it’s live in preview across a limited set of regions, with Azure integration targeted for 2026.
Here’s the thing: this isn’t just “another interconnect.” It’s a decision moment for anyone running cross‑cloud data pipelines, AI workloads that span vendors, or failover architectures that actually have to work when the internet hiccups. Below is a field playbook from years of shipping production systems, tuned to what’s new this week and how teams should respond—fast.
What’s actually new—and why it matters
You’ve always been able to connect AWS and Google Cloud. The difference now is the operating model. Instead of buying or provisioning physical ports, coordinating carriers, and scripting your own router configs, you request a cross‑cloud attachment and get:
• A managed, private path between AWS and Google Cloud built from AWS Interconnect multicloud and Google Cross‑Cloud Interconnect.
• Encryption on the wire (MACsec) between edge routers.
• A software‑defined experience: pick provider, region, and bandwidth; the platform pre‑scales capacity and handles failover.
It’s the same promise cloud gave to servers 15 years ago: abstract the hardware, expose clean primitives, and let you scale by API instead of tickets.
Key facts teams will ask about
• Timeline: Preview opened November 30, 2025; expanded coverage is expected during re:Invent week (December 1–5). Azure support is slated for 2026.
• Regions: Preview is available in a subset of AWS Regions (AWS calls out five to start). Expect Google to mirror availability region by region as the two providers validate demand and capacity.
• Bandwidth: Starts at 1 Gbps during preview and is expected to scale to 100 Gbps at GA.
• Security: MACsec on provider edges; private, high‑speed links; isolation from the public internet path.
• Early adopters: Large SaaS players are already piloting; that’s a signal the economics and reliability story are credible enough for revenue workloads.
Why now? Follow the outage math
October 20, 2025 was a wake‑up call: a major incident took down swaths of the internet, disrupting consumer apps and enterprise platforms. Insurers and SRE teams don’t care about cloud rivalry; they care about business continuity. If you can stand up a private cross‑cloud path in minutes and keep data flowing when an edge melts down, you reduce your mean time to recovery and your blast radius. Executive takeaway: this is a resilience feature wrapped in a networking feature.
How this changes your architecture choices
Before this week, multicloud often meant one of three paths: (1) expensive and slow telco builds, (2) tightly coupled colocation routers with a thicket of hand‑managed tunnels, or (3) “good enough” public internet paths with TLS and crossed fingers. With the new managed link, you can treat cross‑cloud connectivity as a first‑class primitive. That unlocks new patterns:
• Split‑plane AI
Keep GPUs where they’re available and cost‑effective (say, training in Google Cloud near TPU pools) while serving or fine‑tuning in AWS close to your existing data lakes, IAM, and observability stack. Private links reduce egress unpredictability and improve p95 latency for real‑time inference pipelines.
• Cross‑cloud failover that’s actually testable
If your DR runbooks depend on standing up temporary capacity in the other cloud, the difference between “minutes” and “weeks” is the difference between a headline and a shrug. With managed attachments, you can automate monthly failover drills without a dedicated network engineering battalion on call.
• Data gravity without data jail
Enterprises can keep authoritative datasets in the cloud that best fits governance while running specialized analytics or LLM agents in the other. It’s not “free” by any stretch (we’ll talk costs in a second), but it’s operable—and that’s new.
Is this the end of DIY multicloud networking?
For many teams: yes, thankfully. You’ll still have valid reasons to run your own path (custom routing policy, exotic bandwidth needs, specialized compliance with audited facilities, or a sunk‑cost colo you’re amortizing). But for the 80% case, managing private links through cloud consoles beats managing patch panels and tickets.
How fast can you get started? A 90‑minute dry run
Assuming you’ve got admin in both clouds and non‑production VPCs staged on each side, you can validate the path in a single working session. Here’s a pragmatic sequence we used internally to vet similar services, adapted for this release:
1) Prep the networks (15 minutes)
• In AWS: create or select a non‑prod VPC, plus a small test subnet with a security group allowing ICMP and TCP/443 from your Google Cloud test CIDR (sketched in the code after this step).
• In Google Cloud: create or select a VPC with a matching test subnet; allow ingress from the AWS test CIDR.
• Launch minimal test instances (t3.micro / e2‑micro equivalents) with basic monitoring.
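Here’s what that prep looks like in boto3, as a minimal sketch. The region, VPC ID, and Google‑side CIDR are placeholders for your non‑prod environment, not values handed out by the service.

```python
"""Prep step: open ICMP and TCP/443 from the Google-side test CIDR."""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # pick one of the preview regions

VPC_ID = "vpc-0123456789abcdef0"        # your non-prod test VPC (placeholder)
GOOGLE_TEST_CIDR = "10.200.0.0/24"      # the Google Cloud test subnet CIDR (placeholder)

# A dedicated security group keeps the test rules easy to find and easy to remove.
sg = ec2.create_security_group(
    GroupName="xcloud-preview-test",
    Description="Temporary rules for the cross-cloud connectivity dry run",
    VpcId=VPC_ID,
)

# Allow ICMP (ping) and HTTPS from the Google test CIDR only.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "icmp", "FromPort": -1, "ToPort": -1,
         "IpRanges": [{"CidrIp": GOOGLE_TEST_CIDR, "Description": "ICMP from GCP test subnet"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": GOOGLE_TEST_CIDR, "Description": "HTTPS from GCP test subnet"}]},
    ],
)
print("Test security group:", sg["GroupId"])
```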
2) Provision the cross‑cloud attachment (20 minutes)
• In AWS, open the Interconnect multicloud console; choose Google as target, pick the destination region, and select 1 Gbps.
• In Google Cloud, accept/associate via Cross‑Cloud Interconnect.
• Tag resources clearly so Finance and SecOps can trace costs and policy.
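A quick tagging sketch so the test resources are traceable from day one. The resource IDs and tag values are placeholders for your own conventions; mirror the same keys as labels on the Google Cloud side.

```python
"""Tag the dry-run resources so Finance and SecOps can trace them."""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

TEST_RESOURCES = [
    "vpc-0123456789abcdef0",   # non-prod test VPC (placeholder)
    "i-0abc1234def567890",     # t3.micro test instance (placeholder)
]

ec2.create_tags(
    Resources=TEST_RESOURCES,
    Tags=[
        {"Key": "project", "Value": "xcloud-preview"},
        {"Key": "cost-center", "Value": "network-eng"},
        {"Key": "environment", "Value": "nonprod"},
        {"Key": "owner", "Value": "platform-team"},
        {"Key": "expires", "Value": "2026-01-31"},  # forces a cleanup conversation later
    ],
)
```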
3) Wire it into your routing domains (25 minutes)
• Attach to AWS Transit Gateway or AWS Cloud WAN if you use a hub‑and‑spoke topology; otherwise route directly at the VPC level.
• In Google Cloud, update routes on the attachment’s Cloud Router, or export custom routes over VPC peering to spoke networks, depending on your chosen pattern.
• Keep route propagation explicit at first—avoid surprises while you learn the service’s failure modes.
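One way to keep propagation explicit, as a boto3 sketch: attach the test VPC to an existing Transit Gateway and add a single static route for the Google test CIDR instead of enabling broad propagation. All IDs below are placeholders.

```python
"""Keep routes explicit while you learn the service's failure modes."""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

TGW_ID = "tgw-0123456789abcdef0"
VPC_ID = "vpc-0123456789abcdef0"
SUBNET_IDS = ["subnet-0abc1234def567890"]
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"   # the test subnet's route table
GOOGLE_TEST_CIDR = "10.200.0.0/24"

# Attach the test VPC to the hub (skip if you route directly at the VPC level).
attachment = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=TGW_ID,
    VpcId=VPC_ID,
    SubnetIds=SUBNET_IDS,
    TagSpecifications=[{
        "ResourceType": "transit-gateway-attachment",
        "Tags": [{"Key": "project", "Value": "xcloud-preview"}],
    }],
)
print("TGW attachment:",
      attachment["TransitGatewayVpcAttachment"]["TransitGatewayAttachmentId"])

# One explicit static route to the Google test CIDR; nothing else leaks in.
ec2.create_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock=GOOGLE_TEST_CIDR,
    TransitGatewayId=TGW_ID,
)
```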
4) Validate and baseline (30 minutes)
• Run iperf for throughput; capture p50/p95 latency over 15 minutes.
• Simulate node loss and link flap; confirm failover policy and alarms fire.
• Log raw metrics for a week to build a basic SLO: throughput, jitter, packet loss, reconnect time.
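A small harness for the baseline step. It assumes iperf3 is already running in server mode (iperf3 -s) on the Google‑side instance and that ICMP is open; the peer IP is a placeholder.

```python
"""Baseline the link: throughput via iperf3, p50/p95 latency via ping."""
import json
import statistics
import subprocess

PEER = "10.200.0.10"  # Google-side test instance (placeholder)

# Throughput: 60-second iperf3 run, JSON output for easy parsing.
iperf = json.loads(
    subprocess.run(
        ["iperf3", "-c", PEER, "-t", "60", "--json"],
        capture_output=True, text=True, check=True,
    ).stdout
)
gbps = iperf["end"]["sum_received"]["bits_per_second"] / 1e9

# Latency: 900 pings at one per second (roughly 15 minutes), then percentiles.
ping = subprocess.run(
    ["ping", "-c", "900", "-i", "1", PEER],
    capture_output=True, text=True,
).stdout
rtts = [float(line.split("time=")[1].split()[0])
        for line in ping.splitlines() if "time=" in line]

print(f"throughput: {gbps:.2f} Gbps")
print(f"latency p50: {statistics.median(rtts):.2f} ms")
print(f"latency p95: {statistics.quantiles(rtts, n=20)[18]:.2f} ms")
```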
Document the runbook. Treat it like you would a new database engine: small blast radius, repeatable tests, and clear success criteria.
What will it cost?
Pricing isn’t final in preview, but the spend will roughly fall into four buckets you can model today:
1) Port/attachment hours: a per‑hour rate for maintaining capacity, likely tiered by bandwidth.
2) Data transfer across the link: per‑GB charges that vary by direction, region, and tier.
3) Edge services: if you anchor into Transit Gateway or Cloud WAN, include their attachment and data processing costs.
4) Residual egress: certain flows will still hit standard egress pricing when you leave a provider’s boundary; watch region‑to‑region combinations carefully.
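Here’s a back‑of‑envelope model of those buckets in plain Python. The rates are assumptions for illustration only, since preview pricing isn’t final; swap in numbers from your own bill or pricing pages before drawing conclusions.

```python
"""Rough model of the four cost buckets; all rates are assumed placeholders."""

def monthly_cost(
    gb_transferred: float,
    hours: float = 730.0,                  # one month of attachment time
    attachment_per_hour: float = 0.30,     # bucket 1: port/attachment hours (assumed)
    transfer_per_gb: float = 0.02,         # bucket 2: cross-link data transfer (assumed)
    edge_per_gb: float = 0.02,             # bucket 3: TGW/Cloud WAN data processing (assumed)
    residual_egress_per_gb: float = 0.01,  # bucket 4: flows still billed as egress (assumed)
) -> dict:
    buckets = {
        "attachment_hours": hours * attachment_per_hour,
        "link_transfer": gb_transferred * transfer_per_gb,
        "edge_services": gb_transferred * edge_per_gb,
        "residual_egress": gb_transferred * residual_egress_per_gb,
    }
    buckets["total"] = sum(buckets.values())
    buckets["cost_per_gb"] = buckets["total"] / max(gb_transferred, 1)
    return buckets

# Compare a light replication flow against a heavier analytics flow.
for label, gb in [("replication (5 TB/mo)", 5_000), ("analytics (50 TB/mo)", 50_000)]:
    print(label, {k: round(v, 2) for k, v in monthly_cost(gb).items()})
```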
Here’s the practical move: tag every test attachment and enforce a per‑hour budget ceiling with alerts, then compare p95 cost/GB over a week against your current colo or carrier path. If your org delivers content at scale, revisit your distribution spend as well—our CloudFront flat‑rate pricing playbook explains how egress choices and distribution tiers interact in weird ways under new network paths.
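One way to wire up that ceiling is an AWS Budgets alert scoped to the link’s cost‑allocation tag. Budgets works at daily or monthly granularity, so model your per‑hour ceiling as a monthly cap; the account ID, e‑mail, and amount below are placeholders, and the tag must be activated as a cost‑allocation tag first.

```python
"""Budget ceiling for the test link, alerting at 80% of actual spend."""
import boto3

budgets = boto3.client("budgets", region_name="us-east-1")

budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "xcloud-preview-link",
        "BudgetLimit": {"Amount": "500", "Unit": "USD"},   # monthly ceiling (placeholder)
        "CostFilters": {"TagKeyValue": ["user:project$xcloud-preview"]},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "finops@example.com"}],
    }],
)
```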
Security and compliance gotchas
• MACsec handles L2 link encryption between provider edges, but you still need TLS/mTLS at the application layer. Treat the private link as a safer road, not a vault.
• Identity boundaries don’t magically unify. IAM in AWS and IAM in Google Cloud remain separate concerns; keep your key material and permissions scoped per cloud.
• Audit trails: turn on flow logs in both clouds and store them in write‑once buckets with lifecycle policies (a sketch follows this list).
• Change management: promote network attachments like you promote app builds. Use separate projects/accounts, peer review, and a rollout plan.
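For the flow‑log bullet above, here’s a boto3 sketch that ships VPC flow logs to S3. The VPC ID and bucket ARN are placeholders, and the bucket is assumed to already exist with Object Lock and a lifecycle policy; mirror it with VPC Flow Logs on the Google side.

```python
"""Turn on VPC flow logs and ship them to a locked-down S3 bucket."""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],          # test VPC (placeholder)
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::xcloud-audit-flowlogs/preview-link/",  # placeholder bucket
    MaxAggregationInterval=60,  # 1-minute aggregation for sharper incident timelines
    TagSpecifications=[{
        "ResourceType": "vpc-flow-log",
        "Tags": [{"Key": "project", "Value": "xcloud-preview"}],
    }],
)
```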
People also ask
Does this reduce outage risk enough to justify multicloud?
It reduces certain risks—namely carrier and edge failures on public internet routes—and shrinks the time to activate capacity elsewhere. It won’t save you from a widespread control‑plane event or a bad deploy. It does make failover and split‑plane architectures realistic for mid‑market teams that couldn’t justify months of telco work.
Is 1 Gbps useful in preview?
Yes for control traffic, replication trickles, and app cutovers; not enough for heavy analytics. Use preview to build the muscle memory—routing, SLOs, and cost telemetry—so you can scale when 10–100 Gbps lands at GA.
Can I replace my colo cross‑connects?
Probably not overnight. Run them in parallel, migrate specific flows (e.g., metadata, feature stores, or model artifacts), and measure. If your network team lives in an IX, the economics might favor staying until contracts roll off. But the direction of travel is clear.
A practical decision framework for 2026 planning
Use this four‑box to decide how aggressive to be:
• Latency‑sensitive + cross‑cloud data: Pilot now. Establish 1 Gbps preview links for integration, then scale at GA. Align SLOs with business KPIs.
• Latency‑sensitive + single‑cloud data: Wait for GA bandwidth tiers, but build the runbook. You’ll need it the next time capacity is constrained.
• Latency‑tolerant + cross‑cloud data: Migrate batch flows first (ETL, backups). Validate cost/GB vs. your current path.
• Latency‑tolerant + single‑cloud data: Monitor and prep. You may not need it, but the day you do, you’ll want muscle memory and IAM guardrails ready.
How this plays with your AWS network hub
If you’ve standardized on Transit Gateway or Cloud WAN, the managed link becomes just another attachment type. That’s the right mental model: attachments in, policies across, inspection and egress at the edges. Keep route tables explicit and segment by environment (prod/stage/dev) so a mis‑scoped association doesn’t leak routes between clouds.
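Here’s what “explicit and segmented” can look like on a Transit Gateway, as a boto3 sketch: one route table per environment, explicit associations, and no default propagation. It assumes the TGW was created with default association and propagation disabled; the attachment IDs are placeholders.

```python
"""Segment by environment: one TGW route table each, explicit associations only."""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

TGW_ID = "tgw-0123456789abcdef0"
ATTACHMENTS = {
    "prod":  "tgw-attach-0prod000000000000",
    "stage": "tgw-attach-0stage00000000000",
}

for env, attachment_id in ATTACHMENTS.items():
    # One route table per environment keeps a mis-scoped association contained.
    rt = ec2.create_transit_gateway_route_table(
        TransitGatewayId=TGW_ID,
        TagSpecifications=[{
            "ResourceType": "transit-gateway-route-table",
            "Tags": [{"Key": "environment", "Value": env}],
        }],
    )["TransitGatewayRouteTable"]

    # Explicit association only; leave propagation off until you need it.
    ec2.associate_transit_gateway_route_table(
        TransitGatewayRouteTableId=rt["TransitGatewayRouteTableId"],
        TransitGatewayAttachmentId=attachment_id,
    )
```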
For teams modernizing legacy stacks, note that this networking capability pairs well with the modernization toolchain AWS has been showcasing at re:Invent. If you’re in the middle of a database or app migration, our breakdown in AWS Database Insights: Your 2026 Migration Plan will help you think through sequencing, data move windows, and rollback.
What about Azure?
AWS has publicly said Azure support is on the roadmap for 2026. If you’re tri‑cloud, plan your network as a hub with policy and inspection regardless of provider. Don’t hardwire the Google link into app logic; keep it as infrastructure you can swap or expand.
Limitations to keep in mind
• Preview constraints: limited regions, 1 Gbps cap, evolving APIs. Treat it like beta software: valuable, but not a single point of failure.
• Pricing clarity: expect shifts before GA. Budget by range, not single‑point estimates.
• Operational maturity: alarms, dashboards, and chaos drills are on you. Managed doesn’t mean “hands‑off.”
• Data residency: if you have strict residency rules, ensure your routing domain never hairpins traffic through an unexpected region.
Let’s get practical: a 12‑point readiness checklist
1) Inventory cross‑cloud flows (source, dest, protocol, p95 latency, p99 bandwidth).
2) Map flows to environments (prod/stage/dev), then tag them in both clouds.
3) Define SLOs for each flow (availability, latency, recovery time).
4) Establish IAM roles and projects/accounts for network changes only.
5) Set budget alerts for attachment hours and data transfer per link.
6) Stand up a 1 Gbps preview link in non‑prod; document runbook.
7) Attach to Cloud WAN or Transit Gateway and validate route isolation.
8) Baseline throughput, jitter, and loss; store metrics for a week.
9) Run a failover game day; record time to recovery and operator steps (a timing helper is sketched after this checklist).
10) Review packet capture and flow logs with SecOps; tune inspection points.
11) Write a rollback plan (disable routes, detach link, revert to current path).
12) Present a “go/no‑go” memo with costs, SLO impact, and GA scaling plan.
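For item 9, a plain‑Python timing helper with no cloud APIs involved: it probes a TCP port on the far side once a second and reports how long the path stayed dark during the drill. The host and port are placeholders for your Google‑side test instance.

```python
"""Game-day helper: measure recovery time across the link during a drill."""
import socket
import time

PEER = ("10.200.0.10", 443)   # Google-side test instance (placeholder)
PROBE_INTERVAL = 1.0

def reachable(addr, timeout=1.0) -> bool:
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

outage_started = None
print("Probing", PEER, "- trigger the failover now, Ctrl-C to stop.")
while True:
    up = reachable(PEER)
    now = time.monotonic()
    if not up and outage_started is None:
        outage_started = now
        print("path down")
    elif up and outage_started is not None:
        print(f"path recovered after {now - outage_started:.1f}s")
        outage_started = None
    time.sleep(PROBE_INTERVAL)
```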
A note on cost traps (and how to avoid them)
Two traps bite first‑time multicloud teams: shadow egress and silent hairpinning.
• Shadow egress: moving data across a private path doesn’t mean “free.” Cross‑provider movement still triggers transfer pricing that varies by source region and service. Keep distinct meter tags for app traffic vs. replication traffic so Finance can forecast accurately.
• Silent hairpinning: a route leak or overly broad propagation can pull traffic through the link unintentionally. Keep a deny‑all default and selectively allow. For content delivery and perimeter work, revisit your CDN mix; our Cloudflare containers pricing analysis shows how quickly small topology shifts ripple into your bill.
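A lightweight audit for the hairpinning trap, as a boto3 sketch: it flags any route pointed at the Transit Gateway whose destination isn’t on your allow list. The allow list and the tag filter are placeholders for your own conventions.

```python
"""Spot silent hairpinning: flag TGW routes that aren't on the allow list."""
import ipaddress
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Only these destinations are allowed to ride the cross-cloud path.
ALLOWED = [ipaddress.ip_network("10.200.0.0/24")]

paginator = ec2.get_paginator("describe_route_tables")
for page in paginator.paginate(
    Filters=[{"Name": "tag:project", "Values": ["xcloud-preview"]}]
):
    for table in page["RouteTables"]:
        for route in table["Routes"]:
            # Only inspect IPv4 routes that target a Transit Gateway.
            if "TransitGatewayId" not in route or "DestinationCidrBlock" not in route:
                continue
            dest = ipaddress.ip_network(route["DestinationCidrBlock"])
            if not any(dest.subnet_of(allowed) for allowed in ALLOWED):
                print(f"LEAK? {table['RouteTableId']} sends {dest} "
                      f"to {route['TransitGatewayId']}")
```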
Executive view: what this changes for 2026 planning
• Resilience: Treat private cross‑cloud links as a required control for revenue‑critical systems. Budget for it like insurance you’ll actually use.
• AI strategy: Expect to split training/serving and shuffle features and artifacts across clouds. This link is your baseline for predictable performance and auditable movement.
• Vendor leverage: Healthy competition got us here, and you should use it. Ask your reps for committed‑use discounts tied to cross‑cloud volume once GA lands.
How we can help
We’ve been building multicloud architectures since before it was fashionable, and we’ve got the scars to prove it. If your team needs a fast, low‑risk pilot or a production‑grade rollout, start with our services overview, browse relevant customer work, and reach out on the contact page. For deeper context on cross‑cloud patterns, these pieces pair well with today’s news: AWS Interconnect with Google: Multicloud, Minus the Pain and AWS–Google Multicloud Networking: What to Do Now.
What to do next (today, this week, this quarter)
Today: Pick two candidate flows (one latency‑sensitive, one batch). Stand up the preview link in non‑prod, baseline latency and jitter, and enable alarms.
This week: Run a 90‑minute game day to simulate failover. Produce a two‑page memo with SLO impact and projected cost/GB bands at 1, 10, and 40 Gbps.
This quarter: Move one production‑adjacent workload (feature store sync, metadata propagation, or model artifact distribution) to the managed link. Lock budgets, tags, and dashboards; plan a GA scale‑up path to 10–100 Gbps.
If you’re already feeling the pressure to make a call, that’s because this announcement finally removed the biggest excuse—lead time. The rest is on us as builders: clean design, clear SLOs, and the discipline to test the thing when the sun is shining.
