As of November 30, 2025, AWS Interconnect is in preview and—paired with Google Cloud’s Cross‑Cloud Interconnect—lets you provision private, dedicated‑bandwidth connectivity between AWS VPCs and Google Cloud in minutes. Azure support is slated for 2026, with the preview available in five AWS Regions to start. That’s a big shift from the DIY circuits, colo contracts, and fragile tunnels many of us have tolerated for years. (aws.amazon.com)
Why does this matter? Because cross‑cloud apps are no longer unicorns. Data pipelines for AI training, hybrid analytics stacks, and vendor‑diverse resilience plans depend on predictable throughput and clear failure domains. AWS Interconnect promises a managed, cloud‑native way to connect the dots—without juggling third‑party exchanges or months of network engineering. Google says on‑demand bandwidth begins at 1 Gbps in preview and scales up to 100 Gbps at GA, with setup time dropping from days to minutes. (cloud.google.com)
What is AWS Interconnect—really?
AWS Interconnect is a managed, private, high‑speed link between your AWS network and other clouds. Think of it as cloud‑to‑cloud Direct Connect, except you don’t rack gear or sign colo orders. It integrates with AWS networking primitives you already use—Amazon VPC, Transit Gateway, and Cloud WAN—and presents a single attachment that represents your capacity to the other provider. Provision it in the console or via API, pick the target cloud and region, choose bandwidth, and go. The underlying spec is open so other providers can implement it. (aws.amazon.com)
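If you want a feel for what "pick the target cloud, choose bandwidth, and go" might look like in code, here's a hypothetical boto3 sketch. To be clear: AWS hasn't published the Interconnect API surface yet, so the service name, operation, and every parameter below are illustrative guesses based on the console flow described above, not real calls.

```python
import boto3

# HYPOTHETICAL: AWS has not published the Interconnect API yet. The client
# name, operation, and parameter names below are illustrative guesses only.
client = boto3.client("interconnect", region_name="us-east-1")  # assumed service name

resp = client.create_interconnect_attachment(   # assumed operation
    TargetCsp="GOOGLE_CLOUD",                   # assumed enum for the peer cloud
    TargetRegion="us-central1",                 # destination region on the GCP side
    BandwidthGbps=1,                            # preview reportedly starts at 1 Gbps
    TransitGatewayId="tgw-0123456789abcdef0",   # attach to an existing hub
    Tags=[{"Key": "owner", "Value": "platform-infra"}],
)
print(resp["AttachmentId"])                     # assumed response shape
```

Whatever the final shape turns out to be, the point stands: the unit you manage is one attachment with a bandwidth knob, not a stack of circuits.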
Under the hood, Interconnect sits alongside (not inside) your existing routing domains. You still control segmentation, route propagation, and security policies. The promise is simpler provisioning, built‑in resiliency, and bandwidth guarantees that site‑to‑site VPNs can’t deliver.
Why now? And what changed at re:Invent 2025
AWS re:Invent runs December 1–5, 2025, and this preview landed right as the show opened—framing a week where “agentic AI” and cross‑cloud data movement dominate the agenda. Interconnect isn’t the flashiest keynote moment, but it’s the most practical for teams that have to ship. (reinvent.awsevents.com)
There’s also the reliability angle. After a major AWS outage on October 20, 2025, resilience budgets got real again. Reuters reports the new joint service aims to deliver private links in minutes instead of weeks—a direct response to customers demanding faster failover paths and predictable performance across providers. (finance.yahoo.com)
Where AWS Interconnect fits—and where it doesn’t
Here’s the thing: not every workload deserves a private, cross‑cloud backbone. Use AWS Interconnect when the business case is obvious:
- High‑volume, latency‑sensitive data flows (vector stores, feature pipelines, streaming ETL) between AWS and Google Cloud.
- Training on one side, serving on the other—e.g., Bedrock fine‑tunes feeding Vertex AI agents or vice versa.
- Resilience strategies that keep hot standbys in another cloud to escape correlated failures or regional caps on accelerators.
Skip it (for now) when you have bursty, low‑duty‑cycle workloads, opportunistic data transfers, or you can tolerate Internet‑based transfer with modern QUIC/TLS optimizations. The complexity tax of another network domain is real. Keep it lean.
People also ask: Will AWS Interconnect replace Direct Connect?
No. Direct Connect is still your best path for on‑prem to AWS with deterministic egress economics and mature SLAs. Interconnect answers a different question: cloud‑to‑cloud without the colo middleman. If you’re already running Direct Connect to a carrier‑neutral facility with cloud exchanges, Interconnect removes layers and vendor dependencies—but you’ll weigh that against your egress and architecture constraints before you rip anything out.
How fast is it, and what about latency?
Bandwidth starts at 1 Gbps in preview and is expected to scale up to 100 Gbps at GA on the Google side of the handshake. Latency remains a function of region‑to‑region distance and how AWS and Google place Interconnect points of presence. You don’t beat physics, but you do cut jitter and remove the unpredictability of Internet routes. Provisioning time is minutes, which is the headline change for operator workflows. (cloud.google.com)
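Baseline this yourself rather than trusting dashboards. Here's a minimal latency-and-jitter probe using only the Python standard library; the target address is a placeholder for a TCP listener you expose on the far side of the link.

```python
import socket
import statistics
import time

TARGET = ("10.20.0.15", 443)  # placeholder: a listener reachable over the new path
SAMPLES = 100

rtts = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection(TARGET, timeout=2):
        pass  # measure TCP handshake time as a cheap RTT proxy
    rtts.append((time.perf_counter() - start) * 1000)
    time.sleep(0.5)

rtts.sort()
p50 = rtts[len(rtts) // 2]
p95 = rtts[int(len(rtts) * 0.95) - 1]
print(f"p50={p50:.1f} ms  p95={p95:.1f} ms  jitter(stdev)={statistics.stdev(rtts):.1f} ms")
```

Run it over the Internet path today, again over Interconnect after cutover, and you have a before/after jitter story instead of a vibe.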
A practical architecture playbook
Let’s get practical. Below are three patterns you can pilot right away.
1) Split‑stack AI: Train here, retrieve there
Use Amazon S3 and SageMaker fine‑tuning on AWS while serving agents on Google Cloud for Gemini‑native workflows. Keep warm embeddings and feature stores synchronized across clouds on an hourly schedule; send only deltas. Gate big transfers behind a controlled window to keep costs predictable. Pair this with robust tracing (see our CloudWatch GenAI observability playbook) so data movements are auditable.
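Here's what the hourly delta pass might look like, as a minimal sketch: the bucket names, prefix, and one-hour window are placeholders, and a production version would stream objects rather than buffer them in memory.

```python
import io
from datetime import datetime, timedelta, timezone

import boto3
from google.cloud import storage  # pip install google-cloud-storage

SRC_BUCKET, SRC_PREFIX = "feature-store-aws", "embeddings/"  # placeholders
DST_BUCKET = "feature-store-gcp"                             # placeholder
cutoff = datetime.now(timezone.utc) - timedelta(hours=1)

s3 = boto3.client("s3")
gcs = storage.Client().bucket(DST_BUCKET)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SRC_BUCKET, Prefix=SRC_PREFIX):
    for obj in page.get("Contents", []):
        if obj["LastModified"] < cutoff:
            continue  # only ship objects changed since the last window
        body = s3.get_object(Bucket=SRC_BUCKET, Key=obj["Key"])["Body"].read()
        gcs.blob(obj["Key"]).upload_from_file(io.BytesIO(body))
        print(f"synced {obj['Key']} ({obj['Size']} bytes)")
```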
2) Analytics bifurcation: Snowflake or BigQuery without the tax
If your exec team wants BigQuery for marketing analytics and Redshift for ops, Interconnect gives you a predictable pipe for fact tables and feature exports. Use Transit Gateway route tables to isolate analytics subnets, and enforce service‑to‑service auth with short‑lived workload identities rather than long‑lived keys.
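A sketch of that segmentation step with boto3. The IDs and CIDR are placeholders, and we're assuming, per the announcement, that the Interconnect link surfaces as one more Transit Gateway attachment.

```python
import boto3

ec2 = boto3.client("ec2")

TGW_ID = "tgw-0123456789abcdef0"              # placeholder hub
ANALYTICS_ATTACHMENT = "tgw-attach-0a1b2c"    # placeholder: analytics VPC attachment
CROSS_CLOUD_ATTACHMENT = "tgw-attach-0d4e5f"  # assumption: Interconnect as a TGW attachment
GCP_ANALYTICS_CIDR = "10.128.0.0/20"          # placeholder far-side range

# Dedicated route table so cross-cloud analytics traffic can't leak elsewhere
rt = ec2.create_transit_gateway_route_table(TransitGatewayId=TGW_ID)
rt_id = rt["TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]

ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=rt_id,
    TransitGatewayAttachmentId=ANALYTICS_ATTACHMENT,
)

# Route only the specific far-side prefix toward the cross-cloud link;
# deliberately no 0.0.0.0/0 (see the guardrails section below)
ec2.create_transit_gateway_route(
    DestinationCidrBlock=GCP_ANALYTICS_CIDR,
    TransitGatewayRouteTableId=rt_id,
    TransitGatewayAttachmentId=CROSS_CLOUD_ATTACHMENT,
)
```

The design choice: scope the route table to exactly one far-side prefix, so a fat-fingered announcement can't turn the analytics spoke into a transit path.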
3) Cross‑cloud hot standby
Hold the blast radius to a minimum: one region in AWS mirrored to a paired region in Google Cloud. Keep only state‑bearing services cross‑replicated (databases, object storage, queues). Everything else redeploys on failover. Run monthly game days to rehearse route flips and secret rotation. If you’re doing large model inference, pin your plan to current accelerator availability (our take on EC2 P6‑B300 capacity planning still applies).
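The DNS side of a route flip can be as boring as a pair of Route 53 failover records. A sketch, with placeholder zone, endpoints, and health check (which you'd create separately):

```python
import boto3

r53 = boto3.client("route53")
ZONE_ID = "Z0123456789ABCDEF"          # placeholder hosted zone
HEALTH_CHECK_ID = "hc-placeholder-id"  # placeholder: health check on the AWS primary

def upsert(identifier: str, failover: str, value: str) -> None:
    r53.change_resource_record_sets(
        HostedZoneId=ZONE_ID,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com",
                "Type": "CNAME",
                "SetIdentifier": identifier,
                "Failover": failover,   # "PRIMARY" or "SECONDARY"
                "TTL": 30,              # short TTL so flips take effect fast
                "ResourceRecords": [{"Value": value}],
                # Only the primary carries a health check; Route 53 fails over
                # to the secondary when it goes unhealthy.
                **({"HealthCheckId": HEALTH_CHECK_ID} if failover == "PRIMARY" else {}),
            },
        }]},
    )

upsert("aws-primary", "PRIMARY", "api-use1.aws.example.com")    # placeholder endpoint
upsert("gcp-standby", "SECONDARY", "api-usc1.gcp.example.com")  # placeholder endpoint
```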
The 7‑step pilot: 30 days to confidence
You don’t need a program charter to learn. Here’s a quick pilot that earns real signal fast.
- Pick one data flow. Example: 2 TB/day Parquet export from Redshift to BigQuery for marketing attribution. Define success metrics: throughput, variance, error rate, and total $/TB.
- Tag it as a product. Assign a product owner, SRE, and data owner. Write a one‑pager with SLOs and rollback conditions.
- Provision AWS Interconnect. In the AWS console, select Google Cloud as the CSP, choose the destination region, and pick bandwidth aligned to your burst profile. Expect a single attachment representing capacity once provisioned. (aws.amazon.com)
- Lock down segmentation. Use dedicated route tables and security groups for cross‑cloud subnets. Require mTLS between services and short‑lived workload identity tokens for auth.
- Instrument end‑to‑end. Emit span context across both clouds. Track queue depth, transfer lag, and cross‑cloud retry behavior. Push metrics into a shared dashboard with alerting.
- Run a 72‑hour soak. Drive synthetic load to stress peak bandwidth. Record P50/P95 throughput and jitter across fixed intervals. Capture costs daily. A minimal harness sketch follows this list.
- Decide. Promote to limited production if you hit SLOs and cost targets. If not, scale bandwidth, adjust batching windows, or revert to Internet‑based transfer while you iterate.
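For the soak step, here's a minimal harness sketch: `run_transfer_batch` is a stub you'd wire to your own load generator, and the CloudWatch namespace is made up for this example.

```python
import time

import boto3

cw = boto3.client("cloudwatch")
NAMESPACE = "CrossCloud/Pilot"  # made-up namespace for this example


def run_transfer_batch() -> int:
    # Placeholder: drive one batch of synthetic transfers over the link
    # and return the bytes moved. Wire this to your own load generator.
    return 0


samples = []
for _ in range(12):  # one hour of 5-minute intervals; extend to 72h for the real soak
    start = time.time()
    moved = run_transfer_batch()
    elapsed = time.time() - start
    samples.append((moved * 8) / max(elapsed, 1e-9) / 1e9)  # Gbps

samples.sort()
p50 = samples[len(samples) // 2]
p95 = samples[int(len(samples) * 0.95) - 1]  # coarse percentile with few samples

cw.put_metric_data(
    Namespace=NAMESPACE,
    MetricData=[
        {"MetricName": "ThroughputP50Gbps", "Value": p50, "Unit": "Gigabits/Second"},
        {"MetricName": "ThroughputP95Gbps", "Value": p95, "Unit": "Gigabits/Second"},
    ],
)
```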
Security and compliance: what changes, what doesn’t
Private connectivity reduces exposure to Internet path risks, but it isn’t a free pass. Keep encryption on, enforce data residency, and treat the cross‑cloud link as a high‑value asset. Create separate IAM boundaries for producers and consumers; avoid any pattern that lets cloud A’s identity mint privileges in cloud B. Audit the link with traffic mirroring at low sampling rates and centralize logs with write‑once retention.
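The mTLS half of that is less exotic than it sounds. A sketch with the requests library; the URL and cert paths are placeholders, and the certs themselves should be short‑lived artifacts minted by your workload identity issuer, not long‑lived files on disk.

```python
import requests

# Placeholders: swap in paths where your identity issuer drops short-lived certs.
CLIENT_CERT = ("/var/run/identity/client.crt", "/var/run/identity/client.key")
PEER_CA = "/var/run/identity/cross_cloud_ca.pem"

resp = requests.get(
    "https://feature-export.internal.gcp.example.com/v1/health",  # placeholder URL
    cert=CLIENT_CERT,  # present our identity to the far side
    verify=PEER_CA,    # and verify theirs against the shared CA
    timeout=5,
)
resp.raise_for_status()
print(resp.json())
```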
For regulated data, map your control sets to the new path: data classification labels, DLP policies, key management, and cross‑border transfer agreements. The cleanest way to pass audits is to move only what you must, keep it ephemeral, and document the hell out of it.
Cost model reality check
Interconnect streamlines provisioning, but you still pay for data transfer out on the source side and for the managed connectivity itself. Pricing for the preview isn’t the headline—operational simplicity is. Your play is to reduce retransmits, eliminate over‑the‑Internet jitter that bloats retries, and right‑size bandwidth so batch windows finish inside off‑peak cost periods. If your team hasn’t modeled egress in a while, revisit our real‑world containers cost playbook and compare the trade‑offs with your current path.
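A back‑of‑envelope model keeps that conversation honest. Every rate below is a placeholder, since preview pricing isn't public and egress varies by region and tier:

```python
# Back-of-envelope $/TB model. All rates are placeholders: preview pricing
# for AWS Interconnect isn't public, and egress rates vary by region/tier.
EGRESS_PER_GB = 0.02     # placeholder source-side data-transfer-out rate
LINK_HOURLY = 0.50       # placeholder managed-connectivity charge per hour
TB_PER_DAY = 2.0
RETRY_OVERHEAD = 0.03    # fraction of bytes resent (should drop on a clean path)

daily_gb = TB_PER_DAY * 1024 * (1 + RETRY_OVERHEAD)
daily_cost = daily_gb * EGRESS_PER_GB + LINK_HOURLY * 24
print(f"${daily_cost:.2f}/day -> ${daily_cost / TB_PER_DAY:.2f}/TB")
```

Rerun it with your real rates; the interesting output is how much of the bill is egress versus the link itself, because only one of those shrinks when you batch smarter.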
People also ask: Does AWS Interconnect help during outages?
It can shorten recovery time if your architecture is built for it. You still need cross‑cloud deployment pipelines, immutable artifacts, and warm paths for state. The point is to remove the network as the bottleneck. After October’s outage, many boards asked “What’s our cross‑cloud failover plan?” Interconnect makes the answer simpler to implement if you’ve done the rest of the work. (finance.yahoo.com)
Operational guardrails that save you later
Before you scale beyond a pilot, put these rails in place:
- Change control on bandwidth. Treat it like capacity reservations. Changes go through a ticket and require a rollback plan.
- Route intent checks. Use policy‑as‑code to block route announcements that violate segmentation. No default routes, ever. A minimal check is sketched after this list.
- Per‑flow budgets. Tag cross‑cloud traffic and alert when cost per TB deviates from the baseline by more than 15%.
- Runbooks with timed drills. Rotate secrets on a schedule; rehearse cutovers monthly; record mean time to confidence (MTTC) after a change.
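The route intent check is a one‑pager. A minimal policy‑as‑code sketch, assuming your pipeline can dump proposed routes as a JSON list of {"cidr": ...} objects; that input format is our assumption, not a standard.

```python
import ipaddress
import json
import sys

# Placeholder: the approved far-side ranges for cross-cloud traffic
ALLOWED = [ipaddress.ip_network(c) for c in ("10.128.0.0/20",)]


def violations(routes: list[dict]) -> list[str]:
    """Flag default routes and any CIDR outside the approved cross-cloud set."""
    bad = []
    for r in routes:
        cidr = ipaddress.ip_network(r["cidr"])
        if cidr.prefixlen == 0:
            bad.append(f"{r['cidr']}: default routes are never allowed")
        elif not any(
            cidr.version == allowed.version and cidr.subnet_of(allowed)
            for allowed in ALLOWED
        ):
            bad.append(f"{r['cidr']}: outside approved cross-cloud ranges")
    return bad


problems = violations(json.load(open(sys.argv[1])))  # assumed input: [{"cidr": "..."}]
if problems:
    print("\n".join(problems))
    sys.exit(1)  # fail the pipeline; routes only ship when the check is green
```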
How it compares to what you’re doing today
- Site‑to‑site VPN: Fast to start, unpredictable at scale. Great for low‑duty control planes, not for bulk data or latency‑sensitive traffic.
- Colo + exchange fabric: Powerful but heavy. Contracts, cross‑connects, and multiple vendors to herd. Interconnect replaces this for many teams without losing determinism.
- Public Internet + hardened protocols: Cheap and surprisingly capable with modern TLS/QUIC, but you trade consistency. Good for async backups and opportunistic syncs.
Roadmap signals you can plan around
The joint messaging is clear: this is a managed, on‑demand, private path with an open spec. AWS’s what’s‑new post highlights preview availability in five Regions and Azure on deck for 2026. Google’s networking team calls out 1–100 Gbps scaling and minutes‑level setup. Translation: design for modularity now—multi‑provider links are becoming a first‑class primitive rather than an exception path. (aws.amazon.com)
FAQ: Do I need Cloud WAN or Transit Gateway with Interconnect?
You don’t need them, but many enterprises will still use them. Cloud WAN and Transit Gateway give you clean segmentation and centralized control. Interconnect becomes one more attachment point with deterministic bandwidth. If you’re already organized around a hub‑and‑spoke, keep it—just add the spoke that points at Google Cloud. If you’re greenfield, you can start smaller and grow into those services as your topology expands.
Governance and people: the part most teams skip
Interconnect changes org charts as much as it changes diagrams. You’ll want a single owner for cross‑cloud networking (think: Platform Infra) and a partnership with Security and Finance. Write clear RACI for bandwidth changes, key management, and incident response. And yes—budget for training. A builder’s session at re:Invent costs less than one hour of an incident call with six directors on the bridge.
What to do next (this week)
Time to move.
- Identify one data flow you can pilot in 30 days and write its SLOs.
- Book an hour to provision AWS Interconnect and stand up a test link to your Google Cloud project.
- Instrument the pipeline end‑to‑end and track cost per TB.
- Run a 72‑hour soak test and decide on promotion criteria.
- Plan a tabletop: simulate AWS region impairment and rehearse failover to Google Cloud.
- If you need an extra set of hands, our what we do page outlines how we help teams ship secure, reliable platforms fast.
Zooming out: multicloud stops being a special case
Multicloud has been a cultural debate for a decade. With AWS Interconnect, it’s just another API call. That doesn’t absolve you from designing for failure, inventorying data flows, or building guardrails—but it does remove the excuse that the network is too hard. Use it to simplify, not to sprawl.
If you want a migration blueprint mindset while you roll this out, our field guide to low‑drama platform upgrades—like Next.js 16 no‑drama migrations or keeping CI budgets under control—offers the same spirit: ship value, keep risk contained, and make tomorrow easier than today.
Last note: the preview is new, and details can evolve quickly. Track the AWS and Google networking blogs for region expansions, bandwidth tiers, and pricing updates. Your goal is not “multicloud for its own sake,” but resilient, observable, cost‑aware systems that serve the business. This release helps you get there faster. (aboutamazon.com)