AWS Interconnect multicloud is finally real, and it's not just marketing. In a joint move, AWS and Google Cloud launched a managed, private link that lets you provision dedicated bandwidth between the two clouds in minutes, not months. The preview pairs AWS Interconnect multicloud with Google's Cross-Cloud Interconnect and targets high-speed, resilient cross-cloud networking for real workloads. (aws.amazon.com)
What just changed—and why it matters
Here’s the thing: for years, “multicloud” meant stitching Direct Connect to a colo, cross-connecting to Google or using partner facilities, and then layering BGP, VRFs, firewalls, and policy routes on top. It worked, but the lead time and operational tax were brutal. Now, AWS offers Interconnect multicloud (preview) with Google as the first launch partner, and says Azure connectivity is planned in 2026. Google frames the collaboration as managed, private, and on-demand Cross-Cloud Interconnect. Provisioning is measured in minutes. (aws.amazon.com)
Timing isn’t accidental. A widely reported AWS outage on October 20, 2025 rattled teams, and enterprise leaders have been pushing for simpler, more resilient cross‑cloud paths ever since. Early adopters reportedly include Salesforce. (reuters.com)
How AWS Interconnect multicloud actually works
Operationally, AWS presents a single object—an “interconnect”—that represents capacity to another cloud. You choose the partner (today: Google), pick the destination region, and request bandwidth. Under the hood, AWS handles physical capacity and scaling ahead of demand. Critically, AWS published an open specification and APIs so other providers can implement the same pattern. (aws.amazon.com)
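AWS has published an open specification, but we won't pretend to know the preview API's exact names here, so treat the shape below as pure illustration: every field in this sketch is a hypothetical stand-in, not the real surface. Conceptually, though, a provisioning request reduces to a handful of choices, which is the whole point:

```python
# Hypothetical request shape for provisioning an interconnect.
# The preview API's real names and fields aren't reproduced here;
# everything below is illustrative only.
from dataclasses import dataclass

@dataclass
class InterconnectRequest:
    partner: str          # today: "google"
    aws_region: str       # one of the five preview Regions
    remote_region: str    # Google Cloud region on the far side
    bandwidth_mbps: int   # preview caps this at 1000 (1 Gbps)
    attachment: str       # ARN of your VPC / TGW / Cloud WAN attachment point

request = InterconnectRequest(
    partner="google",
    aws_region="us-east-1",                  # assumption, check the console
    remote_region="us-east4",
    bandwidth_mbps=1000,
    attachment="arn:aws:networkmanager:...:core-network/example",  # placeholder
)
```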
In preview, AWS limits each customer to a single free 1 Gbps connection and explicitly warns against routing production traffic over the preview link. Pricing will be announced before GA. The preview is available in five AWS Regions. (docs.aws.amazon.com)
On the AWS side, you can integrate Interconnect with building blocks you already know—Amazon VPC, AWS Transit Gateway, and AWS Cloud WAN—so your existing segmentation, route policies, and observability can extend across the link. That’s the practical win: you don’t have to reinvent your network to go multicloud. (aws.amazon.com)
Latency, bandwidth, and routing expectations
Don’t expect magic. Think of this as a provider‑managed private path with dedicated bandwidth and BGP route exchange between your AWS edge construct (VPC/Transit Gateway/Cloud WAN) and Google’s Cross‑Cloud edge. You still need to plan route propagation, summarize where sensible, and segment traffic domains to avoid sprawl. Treat the link as a scarce, high‑value resource and measure it the way you measure a Direct Connect: with clear SLOs, packet‑loss budgets, and jitter thresholds appropriate for your workloads. (cloud.google.com)
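If you want a starting point for those measurements, here's a minimal TCP connect-time probe, stdlib Python only. The endpoint, sample count, and SLO thresholds are placeholders; calibrate them against your own workloads:

```python
# Minimal TCP connect-time probe for baseline latency/jitter across the link.
# Point it at a private endpoint on the far side; all values are examples.
import socket
import statistics
import time

def probe(host: str, port: int, samples: int = 20) -> dict:
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            rtts.append((time.perf_counter() - start) * 1000)  # milliseconds
        time.sleep(0.5)
    return {
        "p50_ms": statistics.median(rtts),
        "p95_ms": sorted(rtts)[int(len(rtts) * 0.95) - 1],  # rough p95
        "jitter_ms": statistics.stdev(rtts),
    }

# Example SLO gate for the pilot (thresholds are placeholders, not guidance).
stats = probe("10.20.0.15", 443)  # hypothetical GCP-side endpoint
assert stats["p95_ms"] < 15 and stats["jitter_ms"] < 3, stats
```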
People also ask: key questions, straight answers
Is AWS Interconnect multicloud free?
During public preview, you can create one 1 Gbps connection at no cost, but AWS says it’s for testing only—don’t send production traffic. Pricing will be announced before GA, and preview connections will be removed when the service becomes generally available. (docs.aws.amazon.com)
How fast can I turn it up?
AWS and Google both position the service as on‑demand and provisioned in minutes. That’s a massive change from past DIY builds that took weeks or months in colos. (reuters.com)
How is this different from Direct Connect or IPsec VPN?
Direct Connect is your private pipe from on-prem to AWS; Google’s Cross-Cloud Interconnect is the peer on their side. Historically, you’d stitch them together yourself via partners. The new collaboration gives you a cloud-to-cloud private link run by the providers, reducing bespoke plumbing, variability, and lead time. You’ll still keep Direct Connect and site-to-site VPN for on-prem, but the inter-cloud path becomes a first-class citizen. (aws.amazon.com)
A pragmatic rollout plan: 30 / 60 / 90 days
Let’s get practical. Here’s the playbook we’re running with teams who want results without chaos.
Day 0: Readiness checklist
Before a single route is advertised, assemble this:
- Workload map. Identify cross‑cloud candidates: model training on Google GPUs reading from S3, BI on BigQuery joining with Redshift snapshots, DR for transactional apps, or SaaS running split‑brain across both clouds.
- Network policy inventory. Current CIDRs, route aggregation plan, overlapping ranges (see the overlap check after this list), and segmentation model (projects/accounts, org folders, Cloud WAN segments, TGW route tables).
- Security posture. TLS everywhere, mTLS for service‑to‑service, and an identity story that spans IAM roles, service accounts, and workload identity federation.
- Observability. Flow logs, VPC/Firewall logs, metrics, and distributed tracing that can follow requests across clouds.
- Exit criteria. What proves the pilot worked? Define throughput, error rate, and failover SLOs up front.
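For the network policy inventory, a stdlib Python check catches overlapping ranges before they become a day-1 surprise. The CIDRs below are examples:

```python
# Quick overlap check for the network policy inventory (stdlib only).
# Feed it the CIDRs you plan to route across the link; any overlap between
# the AWS and Google sides means NAT or re-addressing before you advertise.
import ipaddress
from itertools import product

aws_cidrs = ["10.16.0.0/16", "10.17.0.0/16"]     # example ranges
gcp_cidrs = ["10.17.128.0/20", "10.32.0.0/16"]   # example ranges

for a, g in product(aws_cidrs, gcp_cidrs):
    if ipaddress.ip_network(a).overlaps(ipaddress.ip_network(g)):
        print(f"OVERLAP: {a} (AWS) <-> {g} (GCP)")
```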
Days 1–30: Pilot without drama
Use the preview 1 Gbps link as a forcing function to right-size scope. Start with one path: for example, a data pump from Google Cloud Storage to Amazon S3 using AWS DataSync’s cross-cloud transfers, or the reverse using Google’s Storage Transfer Service. Keep your blast radius small, prove the path, and instrument everything. (aws.amazon.com)
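Here's a sketch of that data pump using boto3 and DataSync's object storage location type pointed at GCS. All ARNs, bucket names, and credentials are placeholders, and in practice you'd pull secrets from a vault rather than inline them:

```python
# Sketch: one-way data pump from Google Cloud Storage to Amazon S3 with
# AWS DataSync. Assumes GCS HMAC keys and an S3 access role already exist.
import boto3

ds = boto3.client("datasync", region_name="us-east-1")

src = ds.create_location_object_storage(
    ServerHostname="storage.googleapis.com",
    BucketName="example-gcs-bucket",
    ServerProtocol="HTTPS",
    AccessKey="GOOG1EXAMPLE",        # GCS HMAC access key (placeholder)
    SecretKey="example-secret",      # use Secrets Manager in practice
    AgentArns=["arn:aws:datasync:us-east-1:111122223333:agent/agent-example"],
)
dst = ds.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-landing-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3"},
)
task = ds.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="gcs-to-s3-pilot",
)
ds.start_task_execution(TaskArn=task["TaskArn"])
```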
On AWS, attach Interconnect to Cloud WAN or Transit Gateway, not directly to a sprawl of VPCs. On Google, land it in a well‑scoped VPC/Shared VPC with clear firewall rules. Summarize routes; don’t advertise /32s just because you can.
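A quick guardrail in that spirit: scan the TGW route table for overly specific prefixes before they get advertised across the link. The route-table ID and the /24 cutoff are examples, a policy choice rather than AWS guidance:

```python
# Guardrail: flag overly specific routes in a Transit Gateway route table
# before they leak across the interconnect.
import boto3
import ipaddress

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.search_transit_gateway_routes(
    TransitGatewayRouteTableId="tgw-rtb-0example",  # placeholder
    Filters=[{"Name": "state", "Values": ["active"]}],
)
for route in resp["Routes"]:
    net = ipaddress.ip_network(route["DestinationCidrBlock"])
    if net.prefixlen > 24:  # example policy: nothing longer than /24 crosses clouds
        print(f"Too specific for cross-cloud advertisement: {net}")
```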
Days 31–60: Scale the pattern
Add a second traffic class—say, inference traffic from Vertex AI agents hitting AWS microservices, or analytics queries from BigQuery to a curated dataset in S3. Validate your route segmentation and confirm there’s no east‑west bleed across segments. Prove failover: intentionally drain the link and watch your fallback (VPN or public egress with TLS) take over cleanly. If it doesn’t, fix DNS and connection pooling; those are the first culprits.
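A small watcher makes the drain drill measurable: it logs how long the path stays dark and when the fallback picks up. Stdlib Python; the hostname is a placeholder, and you'd run it from both sides during the test:

```python
# Drain-drill helper: measures how long an endpoint stays unreachable
# during a link drain and when the fallback path takes over.
import socket
import time

def watch_failover(host: str, port: int, interval: float = 1.0) -> None:
    down_since = None
    while True:
        try:
            with socket.create_connection((host, port), timeout=2):
                if down_since is not None:
                    print(f"Recovered after {time.time() - down_since:.1f}s")
                    down_since = None
        except OSError:
            if down_since is None:
                down_since = time.time()
                print("Path down, waiting for fallback...")
        time.sleep(interval)

watch_failover("svc.internal.example", 443)  # hypothetical cross-cloud endpoint
```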
Days 61–90: Pre‑GA hardening
Write runbooks for link saturation, route leaks, and brownouts. Bake in synthetic probes that traverse the inter-cloud path continuously. Set budgets for cross-cloud egress and track them; even if Interconnect pricing is pending, the underlying services will still bill data transfer. Get legal to review shared responsibility language for preview features so no one is surprised. (docs.aws.amazon.com)
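For the egress tracking, a Cost Explorer query grouped by usage type surfaces data-transfer line items daily. A sketch: the date range is a placeholder, and the "DataTransfer" substring match is a rough filter, not an exhaustive one:

```python
# Sketch: surface daily data-transfer spend with Cost Explorer so egress
# line items show up before GA pricing lands.
import boto3

ce = boto3.client("ce", region_name="us-east-1")
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-11-01", "End": "2025-11-15"},  # placeholder range
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)
for day in resp["ResultsByTime"]:
    for group in day["Groups"]:
        usage_type = group["Keys"][0]
        if "DataTransfer" in usage_type:  # rough filter for egress items
            cost = group["Metrics"]["UnblendedCost"]["Amount"]
            print(day["TimePeriod"]["Start"], usage_type, cost)
```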
Security and compliance: the gotchas that bite
Private isn’t the same as trusted. Treat the inter‑cloud link as an extension of your zero‑trust perimeter. Encrypt application traffic, enforce identity at every hop, and segment based on business risk, not convenience. If you collapse all routes into one global VRF, you’ll regret it during an incident.
Route design matters. Summarize to fixed blocks and advertise the minimum necessary. On Google, use hierarchical firewall policies with explicit logging; on AWS, lean on VPC flow logs and CloudWatch/Cloud WAN route analysis. Keep guardrails in place with policy automation so a single rushed change request can’t widen access for the entire org.
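One way to wire that guardrail into CI, as a stdlib Python policy check: reject any proposed advertisement that falls outside an approved summary block. All CIDRs below are examples:

```python
# Policy check: every route we plan to advertise must fall inside an
# approved summary block. Run it in CI so a rushed change request
# can't widen access for the entire org.
import ipaddress

approved_summaries = [ipaddress.ip_network(c) for c in ("10.16.0.0/14", "10.32.0.0/16")]
proposed = [ipaddress.ip_network(c) for c in ("10.17.4.0/24", "172.31.0.0/16")]

for route in proposed:
    if not any(route.subnet_of(s) for s in approved_summaries):
        raise SystemExit(f"Route {route} is outside approved summaries; rejecting change")
```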
Cost model: what we know—and how to budget anyway
Budgeting is the uncomfortable part of any preview. Today: one 1 Gbps connection free in preview, with pricing to be disclosed before GA and preview links removed at GA. Plan scenarios now: assume per‑hour link capacity plus data transfer charges on each side, track egress early, and cap traffic classes until you have clear unit economics. (docs.aws.amazon.com)
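A back-of-envelope model helps even without real numbers. Every rate below is an assumption we made up for illustration, not announced pricing; swap in real figures at GA:

```python
# Back-of-envelope GA budget model. All rates are placeholder assumptions,
# NOT announced pricing; replace them when AWS publishes real numbers.
HOURS_PER_MONTH = 730

def monthly_cost(gbps: float, tb_out: float,
                 port_per_hour: float, egress_per_gb: float) -> float:
    # per-hour link capacity charge plus per-GB data transfer on egress
    return gbps * port_per_hour * HOURS_PER_MONTH + tb_out * 1024 * egress_per_gb

scenarios = {
    "dev/test":     monthly_cost(1,  1,  port_per_hour=0.30, egress_per_gb=0.02),
    "steady state": monthly_cost(5,  20, port_per_hour=0.30, egress_per_gb=0.02),
    "surge":        monthly_cost(10, 60, port_per_hour=0.30, egress_per_gb=0.02),
}
for name, cost in scenarios.items():
    print(f"{name}: ${cost:,.0f}/month")
```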
If your container compute strategy rides on cross-cloud traffic, revisit runtime costs, too. If your managed containers or serverless functions now spend more time waiting on bursty cross-cloud calls, pair this rollout with a reassessment of compute pricing models and autoscaling policies. For those leveraging Cloudflare’s container/worker stacks at the edge as part of a multicloud pattern, note Cloudflare’s recent CPU-time pricing changes for Containers and Sandboxes; it’s a reminder that execution and network costs move together. (developers.cloudflare.com)
Reference architectures to steal
AI retrieval + inference split
Park your retrieval store (say, a BigQuery or AlloyDB‑adjacent vector service) on Google for proximity to Vertex AI agents, while keeping inference‑adjacent microservices on AWS. Route only the embeddings and top‑K payloads over Interconnect; everything else stays local. This reduces egress, isolates failure domains, and keeps hot paths fast.
Data lake with cross‑cloud ETL
Land operational data in Amazon S3 and mirror curated datasets to Google Cloud Storage on a schedule. Run cross‑cloud ETL via DataSync and Google’s data transfer services, then query from both ends with federated engines. Build throttles so ETL bursts don’t starve transactional traffic. (aws.amazon.com)
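DataSync supports exactly that kind of throttle via its task options. A sketch, with a placeholder task ARN and an example 500 Mbps cap:

```python
# Throttle the ETL task so bulk mirroring can't starve transactional traffic
# on the shared link. The cap and task ARN are examples.
import boto3

ds = boto3.client("datasync", region_name="us-east-1")
ds.update_task(
    TaskArn="arn:aws:datasync:us-east-1:111122223333:task/task-example",
    Options={"BytesPerSecond": 500 * 1000 * 1000 // 8},  # ~500 Mbps in bytes/sec
)
```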
Disaster recovery you’ll actually test
Use Interconnect for continuous replication of state and periodic cutovers for real DR tests. Keep DNS automation ready, pre‑warm caches, and monitor state divergence. If your last DR drill was a wiki page and hope, this link gives you permission to practice for real.
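For the DNS automation piece, the cutover itself can be as small as one Route 53 change batch. The zone ID, record name, and standby target below are placeholders; keep TTLs short ahead of a drill so the flip actually propagates:

```python
# DR cutover sketch: repoint a record at the standby region during a drill.
import boto3

r53 = boto3.client("route53")
r53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",  # placeholder
    ChangeBatch={
        "Comment": "DR drill cutover",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "TTL": 60,  # short TTL so the flip takes effect quickly
                "ResourceRecords": [{"Value": "standby.gcp.example.com"}],
            },
        }],
    },
)
```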
Risks, limits, and what could go wrong
Preview is not production. AWS’s doc is blunt: don’t route prod traffic yet, and expect preview connections to be removed at GA. Build with that constraint; if you can’t tolerate a surprise teardown, you’re testing the wrong system. (docs.aws.amazon.com)
Also, shared fate isn’t eliminated. Reports suggest proactive monitoring and coordinated maintenance windows between providers, which is promising, but your SLO still depends on two clouds, your routing, and your app behavior under stress. Plan for partial failure states and brownouts, not just total loss. (theverge.com)
Finally, DNS and identity cause more outages than routing does. Set DNS TTLs appropriately, favor idempotent retries, and use workload identity federation so your tokens don’t become the single point of pain during a failover.
FAQ: fast facts for execs and architects
Which regions? AWS says the preview is in five regions. Validate current coverage in the console before you promise timelines to stakeholders. (aws.amazon.com)
Azure too? AWS indicates Azure connectivity is planned in 2026. Treat that as roadmap, not a contract. (aws.amazon.com)
Who’s using it? Reuters cites Salesforce as an early adopter. That’s a useful signal for scale and enterprise readiness once GA hits. (reuters.com)
Why now? Demand for resilient, AI‑driven, cross‑cloud workloads is spiking; the October 20, 2025 outage concentrated board‑level attention on multicloud failover paths. (reuters.com)
Hands‑on configuration cues (keep it simple)
On AWS, start with Cloud WAN as the control plane if you’re already multi‑region; otherwise, a Transit Gateway core with per‑domain route tables is fine. Avoid full‑mesh VPC attachments. Use tags to drive automation and policy checks. On Google, route Interconnect into a Shared VPC project with least‑privilege firewall policies and logging at the hierarchy level.
For routing, summarize by environment (/16 per env is common), and advertise only what the other side needs. Enforce BGP communities or equivalent tagging so you can quickly dampen routes during incidents. Keep MTU consistent; jumbo frames mismatched across provider edges will ruin your week.
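For the /16-per-environment plan, stdlib Python can generate the carve-up deterministically. The parent supernet and environment names are examples; adapt them to your IPAM:

```python
# Carving a /16 per environment from a parent block, stdlib only.
import ipaddress

parent = ipaddress.ip_network("10.16.0.0/13")   # example: room for 8 x /16
envs = ["prod", "staging", "dev", "shared-services"]

plan = dict(zip(envs, parent.subnets(new_prefix=16)))
for env, block in plan.items():
    print(f"{env}: advertise only {block} across the interconnect")
```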
What to do next
- Spin up a preview link and run a 14‑day pilot with one traffic class. Prove baseline throughput and loss under controlled load. (docs.aws.amazon.com)
- Map your top three cross‑cloud use cases (AI retrieval/inference split, ETL to shared lake, DR). Prioritize by egress sensitivity and SLOs.
- Attach via Cloud WAN or TGW with segmented route tables. No flat networks.
- Instrument end‑to‑end: synthetic probes, app‑level latency/loss metrics, and cost alerts tied to egress (see the saturation-alarm sketch after this list).
- Draft GA budgets with three tiers: dev/test, steady state, and surge. Expect pricing before GA; bake contingency into Q1 planning. (docs.aws.amazon.com)
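For the instrumentation item above, here's a CloudWatch alarm sketch that pages when sustained TGW egress approaches the 1 Gbps preview cap. The gateway ID, SNS topic, and 80% threshold are our assumptions, not AWS guidance:

```python
# Saturation alarm sketch: page when sustained Transit Gateway egress
# approaches the 1 Gbps preview cap.
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
cw.put_metric_alarm(
    AlarmName="interconnect-egress-saturation",
    Namespace="AWS/TransitGateway",
    MetricName="BytesOut",
    Dimensions=[{"Name": "TransitGateway", "Value": "tgw-0example"}],  # placeholder
    Statistic="Sum",
    Period=300,                                  # 5-minute windows
    EvaluationPeriods=3,
    Threshold=0.8 * (1_000_000_000 / 8) * 300,   # 80% of 1 Gbps over 5 min, in bytes
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:netops-page"],  # placeholder
)
```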
Zooming out
This collaboration marks a real turn in multicloud networking. The promise isn’t that “multicloud becomes easy.” It’s that the ugliest yak‑shaving—lead times, physical cross‑connects, bespoke APIs—moves inside the providers’ responsibility. Your job shifts to architecture, guardrails, and proving SLOs. That’s where it should be.
If you’re planning a strategic multicloud program, we’ve been shipping similar patterns for years. Start with our field notes on what modern multicloud looks like with AWS Interconnect, our hands‑on guidance for running AWS–Google without the pain, and a tactical checklist in what to do right now for AWS–Google networking. If your cost model shifts as you move traffic and compute around, pair it with our notes on container pricing changes that can surprise you. Then, when you’re ready to plan or pressure‑test an implementation, get in touch.
