AWS Interconnect multicloud just shipped in preview with Google as the first launch partner, and it’s the most pragmatic step we’ve seen toward simple, private connectivity across major clouds. Minutes instead of weeks. Managed instead of DIY. If you run critical workloads on AWS and Google Cloud—or you’ve wanted a cleaner path to do so—this is the week to move from whiteboard to working links.
What AWS Interconnect multicloud actually is
At its core, Interconnect is a managed, private, high-speed connection between your Amazon VPCs and your VPCs on another cloud, starting with Google Cloud’s Cross-Cloud Interconnect. It abstracts the physical circuits, VLANs, and router configs and presents you with a single logical object you create on one side and accept on the other using an activation key. Both providers handle the heavy lifting and ongoing operations.
The preview launched November 30, 2025 with five initial region pairs: N. Virginia (us-east-1 ↔ us-east4), N. California (us-west-1 ↔ us-west2), Oregon (us-west-2 ↔ us-west1), London (eu-west-2 ↔ europe-west2), and Frankfurt (eu-central-1 ↔ europe-west3). During public preview you get one 1 Gbps interconnect per customer at no cost, and it’s clearly marked: don’t route production traffic yet. General availability will bring elastic bandwidth scaling, up to 100 Gbps according to Google’s roadmap.
Security and resilience aren’t an afterthought. Traffic is MACsec-encrypted between the AWS and Google edge routers, and the physical path is quad-redundant across facilities and devices. On the AWS side, the attach point is a Direct Connect gateway that your Transit Gateway or Cloud WAN can connect to; on the Google side, it’s a Cloud Router. MTU is set to 8500 on the AWS side for headroom, and you can advertise up to 1,000 IPv4 plus 1,000 IPv6 prefixes from Google. CloudWatch includes a built-in network synthetic monitor for latency and loss, so you can alarm on the metrics that matter.
Why this matters now
Here’s the thing: cross-cloud is no longer the exotic exception you justify once a year. Between AI accelerators living where capacity is available and risk committees asking for real failover options after recent outages, multicloud is a practical insurance policy and an enabler. Historically you paid for that insurance with months of procurement, cross-connects, and brittle routing. Interconnect cuts that to minutes and moves the work into cloud-native constructs your teams already use.
There’s also a people factor. Network teams can shift from racking gear and wrangling LOAs to setting policy and guardrails. Platform and data teams can test active-active patterns without begging for long lead times. And leadership gets a credible story for resilience that isn’t “we’ll failover over the public internet and hope.”
Architecture in practice: three patterns that shine
1) Active–active data and AI
Keep state synchronized across clouds and let traffic hit either side based on proximity, cost, or accelerator availability. Example: S3 in AWS and BigQuery in Google with near-real-time replication, plus a shared feature store for models served on whichever platform has GPUs free. With private, encrypted transport and predictable latency, your app feels colocated even when it isn’t.
2) Active–standby disaster recovery that’s actually testable
Stand up a warm Google environment for a core AWS workload (or vice versa), replicate databases continuously, and run monthly failover drills over the private link. Because the path is managed, you’re not re-plumbing when you test; you’re exercising the same route you’ll use in anger.
3) Burst compute without data gravity tax
Keep your data lake in AWS, but push specific analytics or ML training bursts into Google when it has the right accelerators or discounts. Or flip it: orchestrate pipelines in Google Cloud that privately read from S3 or RDS without hairpinning through on‑prem hubs.
Quick-start: a safe 60‑minute dry run
Before you touch production, prove the workflow end-to-end. You can do the following in a lunch break.
Step 1 — Prep the attach points. In AWS, confirm you have a Direct Connect gateway with either a Transit Gateway or virtual private gateway attached. In Google, ensure a project with a Cloud Router in the paired region.
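If you prefer to script the preflight, a couple of read-only boto3 calls confirm the AWS-side pieces are in place. This is a minimal sketch that assumes default credentials and us-east-1 as the paired region; on the Google side, gcloud compute routers list is the equivalent check.

# Preflight: confirm the AWS-side attach points exist before creating the interconnect.
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

gateways = dx.describe_direct_connect_gateways()["directConnectGateways"]
print("Direct Connect gateways:", [g["directConnectGatewayName"] for g in gateways])

tgws = ec2.describe_transit_gateways(
    Filters=[{"Name": "state", "Values": ["available"]}]
)["TransitGateways"]
print("Available Transit Gateways:", [t["TransitGatewayId"] for t in tgws])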
Step 2 — Create the interconnect. In the AWS console, create a Multicloud Interconnect targeting the corresponding Google region and set bandwidth to the 1 Gbps preview tier. Capture the activation key the console gives you.
Step 3 — Accept on Google Cloud. In Google’s console or CLI, create the partner Cross‑Cloud Interconnect “transport,” supplying the activation key. Google provisions and establishes the encrypted link to AWS.
Step 4 — Advertise routes. On the Google side, configure BGP sessions and export a tiny, non-overlapping test prefix (for example, 10.255.0.0/24). On AWS, confirm the routes appear at the Direct Connect gateway and propagate to your Transit Gateway route tables.
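You can confirm propagation without clicking through the console by querying the Transit Gateway route table directly; the route table ID below is a placeholder.

# Verify the test prefix advertised from Google shows up in the TGW route table.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
routes = ec2.search_transit_gateway_routes(
    TransitGatewayRouteTableId="tgw-rtb-0123456789abcdef0",  # placeholder route table ID
    Filters=[{"Name": "route-search.exact-match", "Values": ["10.255.0.0/24"]}],
)["Routes"]

for route in routes:
    print(route["DestinationCidrBlock"], route["Type"], route["State"])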
Step 5 — Verify and measure. From a test instance in AWS, ping and curl a target in Google; from Google, hit an AWS private service. Watch CloudWatch’s synthetic monitor for round-trip latency and loss. Capture a baseline for your ops runbook.
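For a repeatable baseline, script the probe instead of eyeballing ping output. The sketch below measures TCP connect time to a placeholder test endpoint across the link, which is a rough stand-in for round-trip latency rather than a replacement for the CloudWatch monitor.

# Capture a quick cross-cloud latency baseline for the ops runbook.
import socket, statistics, time

TARGET = ("10.255.0.10", 443)  # placeholder: test instance reachable over the interconnect
samples = []
for _ in range(20):
    start = time.monotonic()
    with socket.create_connection(TARGET, timeout=2):
        samples.append((time.monotonic() - start) * 1000)
    time.sleep(0.5)

print(f"median {statistics.median(samples):.2f} ms, max {max(samples):.2f} ms over {len(samples)} probes")

Record the numbers next to the CloudWatch synthetic monitor readings so drift is obvious later.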
Step 6 — Tear down or tag appropriately. Name it “preview‑poc,” attach cost center tags, and ensure no prod CIDRs are accepted. This is about confidence, not cutover.
Design decisions you can’t skip
Addressing strategy. Avoid overlapping RFC1918 ranges between clouds. If you already overlap, plan a NAT boundary on the side that changes least often and document translation in your incident runbook.
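A few lines of the standard ipaddress module will surface overlaps before they turn into an incident; the CIDRs below are placeholders for your own AWS and Google address plans.

# Flag overlapping CIDRs between the two clouds before anything is advertised.
import ipaddress

aws_cidrs = ["10.16.0.0/14", "10.20.0.0/16"]       # placeholder: AWS VPC ranges
gcp_cidrs = ["10.20.128.0/17", "172.16.0.0/20"]    # placeholder: Google VPC ranges

for a in map(ipaddress.ip_network, aws_cidrs):
    for g in map(ipaddress.ip_network, gcp_cidrs):
        if a.overlaps(g):
            print(f"OVERLAP: {a} (AWS) vs {g} (Google) -- NAT boundary or renumber")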
Route scope and filtering. Keep the BGP blast radius small. Start with narrow prefixes and add service by service. On AWS, build explicit TGW route table associations per account or OU so a rogue prefix can’t black-hole half your estate.
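A lightweight guardrail is to check every candidate prefix against an explicit allowlist before it is advertised or accepted; the aggregates here are placeholders standing in for your route policy.

# Reject any prefix that is not contained in an approved aggregate.
import ipaddress

ALLOWED = [ipaddress.ip_network(p) for p in ("10.255.0.0/20", "10.64.0.0/18")]  # placeholder policy

def is_permitted(prefix: str) -> bool:
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(allowed) for allowed in ALLOWED)

for candidate in ("10.255.0.0/24", "10.0.0.0/8", "10.64.12.0/24"):
    print(candidate, "OK" if is_permitted(candidate) else "REJECT: outside route policy")

Run it in CI against the prefix list your automation is about to push, so a fat-fingered /8 never reaches BGP.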
Traffic engineering. Treat this like any backbone: define which flows are allowed cross-cloud and why. Prefer app‑level allowlists (e.g., specific data syncs, service meshes) over “full mesh because we can.”
Throughput and MTU. With 8500‑byte MTU on AWS and jumbo support on Google, make sure hosts, load balancers, and service meshes won’t fragment. If you can’t go jumbo end‑to‑end, clamp MSS at the edges.
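A quick way to confirm jumbo frames survive the whole path is a don't-fragment ping sweep from a test host; this assumes Linux iputils ping and a placeholder target on the far side of the link.

# Probe whether standard and jumbo-sized packets cross the path without fragmenting.
import subprocess

def df_ping_ok(host: str, payload: int) -> bool:
    # -M do sets the don't-fragment bit, -s sets payload size, -c 1 sends one probe.
    result = subprocess.run(
        ["ping", "-M", "do", "-c", "1", "-s", str(payload), host],
        capture_output=True,
    )
    return result.returncode == 0

HOST = "10.255.0.10"          # placeholder: test instance across the interconnect
for payload in (1472, 8472):  # 1500- and 8500-byte packets minus 28 bytes of IP + ICMP headers
    status = "passes" if df_ping_ok(HOST, payload) else "fragments or drops"
    print(f"{payload + 28}-byte packets: {status}")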
Security stance. The link is private and MACsec-encrypted, but identity and data controls still rule. Use VPC Service Controls on Google and SCPs/GuardDuty/Network Firewall on AWS. Log flows on both sides and forward to your SIEM with consistent retention.
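On the AWS side, turning on flow logs for the VPCs behind the attachment is a single API call; the VPC ID and bucket below are placeholders, and you would mirror this with VPC Flow Logs on the Google side.

# Ship flow logs for the attachment VPC to S3 so the SIEM sees cross-cloud flows.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],   # placeholder: VPC behind the interconnect
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::org-network-flowlogs",  # placeholder bucket ARN
)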
Cost observability. Preview is free at 1 Gbps, but network egress, inter‑region hops, and data service reads still have prices. Tag the interconnect, set budget alerts, and track utilization with CloudWatch percent‑use metrics so you’re not surprised when GA pricing lands.
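Until GA pricing and official metric names settle, a utilization alarm is a cheap hedge. The sketch below assumes the preview link surfaces Direct Connect-style metrics (AWS/DX, ConnectionBpsEgress), which may not match what the console actually reports, so treat the namespace, metric, and connection ID as placeholders.

# Alarm when sustained egress crosses roughly 80% of the 1 Gbps preview tier.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
cloudwatch.put_metric_alarm(
    AlarmName="interconnect-preview-egress-80pct",
    Namespace="AWS/DX",                      # assumption: Direct Connect-style metrics
    MetricName="ConnectionBpsEgress",        # assumption: swap for the metric the console shows
    Dimensions=[{"Name": "ConnectionId", "Value": "dxcon-EXAMPLE"}],  # placeholder ID
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=0.8 * 1_000_000_000,           # 80% of 1 Gbps, in bits per second
    ComparisonOperator="GreaterThanThreshold",
    AlarmDescription="Preview interconnect egress sustained above 80% of 1 Gbps",
)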
People Also Ask
Is AWS Interconnect multicloud just Direct Connect in disguise?
No. It uses Direct Connect gateways as attach points, but the cross‑cloud fabric and operations are managed jointly by AWS and Google. You’re not procuring circuits, juggling VLANs, or maintaining routers. You create on one side, accept on the other, and both providers provision capacity and monitor health.
Does this replace VPNs or SD‑WAN?
In many cases, yes. For steady, critical cross‑cloud flows, managed private links beat IPsec tunnels and DIY overlays on reliability and operational load. You may still keep VPNs for edge cases, quick experiments, or as a tertiary path during preview.
What does it cost?
During public preview you can create one 1 Gbps interconnect per customer at no charge for the service itself. Normal service egress, reads, and inter‑region costs apply. AWS has not published GA pricing yet; plan budgets with headroom and monitor utilization so you can right‑size when pricing goes live. Google indicates bandwidth will scale up to 100 Gbps at GA.
Which regions are in preview?
Paired regions at launch: us‑east‑1 ↔ us‑east4, us‑west‑1 ↔ us‑west2, us‑west‑2 ↔ us‑west1, eu‑west‑2 ↔ europe‑west2, and eu‑central‑1 ↔ europe‑west3. More will follow, and Azure is slated to join in 2026.
A pragmatic rollout plan: 30 / 60 / 90 days
Day 0–30: Prove value and set guardrails
Spin up your preview 1 Gbps link in a sandbox account and a non‑prod Google project. Validate three use cases: private S3↔BigQuery reads, service‑to‑service calls behind internal load balancers, and a database replica stream. Write a two‑page guardrail doc that covers CIDR policy, allowed ports, tagging, CloudWatch alarms, and incident paths. Socialize it with security and networking. If you want a ready‑made template, adapt the hands‑on deployment playbook we published for Interconnect with Google.
Day 31–60: Integrate with your backbone
Attach the interconnect to your Transit Gateway or Cloud WAN and carve per‑domain route tables (prod vs. non‑prod, or by business unit). Build a single‑button deployment in your platform repo that requests the interconnect, sets BGP, and applies route policy. Exercise DR: fail a stateless service to Google for an hour using the private path. Capture latency and throughput baselines. Compare to your VPN overlay and document the operational delta. For broader multicloud context, cross‑reference the steps in our original Interconnect deployment playbook.
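Once the attachment exists, the route-table carving is two EC2 calls; the IDs below are placeholders for a non-prod TGW route table and the interconnect's Transit Gateway attachment.

# Pin the interconnect attachment to a dedicated non-prod route table and propagate only there.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

NONPROD_RTB = "tgw-rtb-0aaaabbbbccccdddd"                  # placeholder route table ID
INTERCONNECT_ATTACHMENT = "tgw-attach-0eeeeffff00001111"   # placeholder attachment ID

ec2.associate_transit_gateway_route_table(
    TransitGatewayRouteTableId=NONPROD_RTB,
    TransitGatewayAttachmentId=INTERCONNECT_ATTACHMENT,
)
ec2.enable_transit_gateway_route_table_propagation(
    TransitGatewayRouteTableId=NONPROD_RTB,
    TransitGatewayAttachmentId=INTERCONNECT_ATTACHMENT,
)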
Day 61–90: Make it real and sustainable
Pick one tier‑1 workload to pilot an active–standby pattern. Implement traffic steering at the DNS or API gateway layer with health‑based failover. Add runbooks for common incidents: prefix leak, asymmetric routing, loss spikes, and activation‑key errors. Wire budget alerts for data egress and service reads on both sides. Draft the transition memo that upgrades the link to production the week GA pricing is published. If you’re planning AI customization across clouds, coordinate with your model platform roadmap—our take on custom models with Nova Forge is here: build‑your‑own models now.
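For the DNS steering piece, Route 53 failover records plus a health check are usually enough for an active-standby pilot; this is a minimal sketch with placeholder zone, record, and health check IDs, not a production-grade configuration.

# Health-based failover: primary answer points at AWS, secondary at the Google standby.
import boto3

route53 = boto3.client("route53")

def failover_record(set_id, role, ip, health_check_id=None):
    record = {
        "Name": "app.internal.example.com.",   # placeholder record name
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": role,                      # "PRIMARY" or "SECONDARY"
        "TTL": 30,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",         # placeholder hosted zone
    ChangeBatch={"Changes": [
        failover_record("aws-primary", "PRIMARY", "10.20.1.10", "11111111-2222-3333-4444-555555555555"),
        failover_record("gcp-standby", "SECONDARY", "10.64.1.10"),
    ]},
)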
Risks and gotchas (read this twice)
Preview is preview. The providers explicitly say not to route production traffic yet. Treat everything you do now as a rehearsal you’ll repeat at GA. Keep changes narrowly scoped and reversible.
Overlapping CIDRs. If your enterprise used 10.0.0.0/8 everywhere, you’ll hit walls fast. A translation layer (NAT or proxy) buys time, but plan an addressing refactor so you’re not painting yourself into a corner.
MTU mismatches. Jumbo in one place and standard elsewhere equals silent throughput pain. Verify end‑to‑end and clamp where needed.
Prefix limits. The Google side can advertise up to 1,000 IPv4 and 1,000 IPv6 prefixes. If your topology is noisy, aggregate aggressively or you’ll spend your limit on route churn.
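The standard library will do the aggregation math for you; this small example collapses placeholder prefixes into the fewest covering routes.

# Collapse adjacent prefixes before advertising so route churn doesn't eat the 1,000-prefix budget.
import ipaddress

noisy = ["10.64.0.0/24", "10.64.1.0/24", "10.64.2.0/23", "10.64.4.0/22"]  # placeholder prefixes
aggregated = ipaddress.collapse_addresses(ipaddress.ip_network(p) for p in noisy)
print([str(net) for net in aggregated])   # -> ['10.64.0.0/21']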
Operational ownership. The fabric is managed, but you still own policy, routing intent, and incident triage. Decide who pages whom for loss spikes and how you’ll prove innocence when the problem sits outside your app.
Cost fog. The link itself may be free in preview, but reads, writes, and egress aren’t. Tag early, alarm early, and keep a weekly scorecard so finance isn’t surprised later.
How this shifts multicloud strategy
Interconnect erases much of the drudgery that made multicloud brittle and expensive. That doesn’t mean “put everything everywhere.” It means you can finally be intentional: keep systems of record where your teams are strongest, but use the other cloud for the one or two things it does unmistakably better—be that analytics, a specific AI stack, or regional reach. The open specification behind this collaboration also lowers the risk of one‑off lock‑in; Azure support is on the roadmap, and other providers can adopt the API.
Zooming out, the immediate win is resilience. A private, monitored path with coordinated maintenance beats improvised IPsec tunnels when you’re under pressure. The longer‑term win is agility: when GPUs, TPU v‑whatevers, or data residency rules force your hand, you won’t be stuck waiting on dark fiber or vendor paperwork.
What to do next
• Stand up the preview 1 Gbps link in a sandbox and run the 60‑minute dry run above.
• Write and circulate the two‑page guardrail doc (CIDRs, ports, alarms, tagging, incident flow).
• Baseline latency, loss, and utilization; wire budget alerts for network and data services.
• Pick one workload for a controlled failover drill and document the result.
• If you want expert help, explore our cloud networking services or ping us—our deployment playbook and Google‑specific guide can accelerate your rollout.
Final word
AWS Interconnect multicloud moves multicloud from “we should” to “we can.” It’s managed, private, and fast to stand up. Use the preview window to build muscle memory and prove value without risking production. When GA hits, you’ll be ready to scale bandwidth, codify policy, and give leadership a resilience story that stands up in the boardroom—and during the next 3 a.m. incident.
