AWS Interconnect is no longer a slideware promise; it’s shipping in preview with Google as the first partner, which means you can spin up private, high‑speed cross‑cloud links in minutes. Pair it with Google Cloud’s Cross‑Cloud Interconnect and you’ve got an opinionated, supported path to multicloud networking without months of colo contracts or DIY patchworks. If you’ve been waiting for a credible way to reduce blast radius from single‑cloud outages while keeping latency and throughput predictable, this is your moment.
What just changed—and why it matters
On November 30, 2025, AWS announced the preview of AWS Interconnect – multicloud, with Google as the launch partner and Azure to follow in 2026. The service is available in five AWS Regions to start and is designed to create private, resilient, high-bandwidth links between your Amazon VPCs and other cloud environments. The next day, major outlets highlighted the promise: stand up private links in minutes, not weeks, using provider consoles and APIs rather than chasing cross-connect paperwork. For teams burned by October's outage headlines, reliable cross-cloud failover is no longer a wish-list item; it's a board question with dates.
Here’s the thing: this isn’t just “Direct Connect meets Partner Interconnect.” AWS Interconnect plugs cleanly into native services you already run—Amazon VPC, Transit Gateway, and Cloud WAN—while Google’s Cross‑Cloud Interconnect (CCI) exposes ports in 10/100 Gbps tiers with standard BGP control via Cloud Router. You keep the constructs your security and platform teams know, but extend them across providers with far less friction.
How AWS Interconnect and Cross‑Cloud Interconnect actually work
At a high level, you provision an Interconnect attachment on the AWS side and a CCI port on the Google side in compatible metros. Routing is exchanged over BGP sessions (Cloud Router on Google; Direct Connect gateway and your chosen AWS routing edge on the Amazon side). The path is private—no traversal of the public internet—and designed for deterministic throughput and latency. On Google, CCI aligns with the same operational model as Dedicated/Partner Interconnect, so your NOC isn’t learning a brand‑new discipline. On AWS, you attach Interconnect to Transit Gateway or Cloud WAN for hub‑and‑spoke scale, or directly into a VPC if you’re keeping things simple.
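Because the control plane is plain eBGP, most early mistakes are plan-level rather than console-level. Here is a minimal pre-flight sketch in Python; the private-ASN ranges are standard (RFC 6996 for 16-bit, the IANA-reserved block for 32-bit), while the function itself is an illustrative convention, not a provider API:

```python
# Hypothetical pre-flight check for a cross-cloud eBGP plan. Only the ASN
# ranges are standard; the validation rules are illustrative conventions.

PRIVATE_16BIT = range(64512, 65535)            # 64512-65534 (RFC 6996)
PRIVATE_32BIT = range(4200000000, 4294967295)  # 4200000000-4294967294

def is_private_asn(asn: int) -> bool:
    """True if the ASN falls in a private-use range."""
    return asn in PRIVATE_16BIT or asn in PRIVATE_32BIT

def validate_bgp_plan(aws_asn: int, gcp_asn: int) -> list[str]:
    """Return a list of problems with the planned eBGP pairing."""
    problems = []
    if not is_private_asn(aws_asn):
        problems.append(f"AWS-side ASN {aws_asn} is not a private ASN")
    if not is_private_asn(gcp_asn):
        problems.append(f"Cloud Router ASN {gcp_asn} is not a private ASN")
    if aws_asn == gcp_asn:
        problems.append("both sides share one ASN; eBGP needs distinct ASNs")
    return problems
```

Running this in CI against your network design doc catches the "same ASN on both sides" mistake before anyone opens a console.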
Security and resilience aren’t bolt‑ons. MACsec line‑rate encryption is available on Interconnect offerings, and Google’s Interconnects support BFD and 9K MTU (where enabled) to improve failover and efficiency. Private Service Connect over CCI is GA, which lets you publish and consume services privately across that link—think “managed service endpoints without poking holes in every VPC.” The result is a pragmatic middle ground between classic MPLS backbones and ad‑hoc VPN mesh.
AWS Interconnect vs. Direct Connect vs. VPN: what’s the difference?
Good question. Direct Connect is for hybrid connectivity (on-prem to AWS); Cross-Cloud Interconnect is for cloud-to-cloud. VPN remains your quick, flexible fallback, but it is limited by internet path variability, crypto overhead, and the operational drift that sets in once you scale to dozens of tunnels. AWS Interconnect – multicloud is purpose-built for provider-to-provider private links and handles the undifferentiated heavy lifting you used to solve with colos, partner fabrics, or SD-WAN overlays. You'll still use Direct Connect to reach your data centers, and you'll still keep VPN for edge cases and as a control-plane parachute, but Interconnect is the new default for high-confidence, cross-cloud production paths.
Design patterns you’ll actually deploy in Q1 2026
1) Hot‑standby, same‑region DR across clouds
Run primary compute in us‑east‑1 with RDS and ElastiCache, and hot‑standby services on Google with GKE Autopilot and Cloud SQL read replicas. Use Interconnect/CCI for private data sync and health‑based failover via Route 53 + Cloud DNS policies. Keep your API surface behind Private Service Connect and API Gateway so authz is consistent regardless of which side is active.
2) Split‑stack for AI workloads
When your training stack and your inference strategy live best on different providers, a split stack is clean: e.g., training and BigQuery feature engineering on Google, with inference or retrieval services on AWS using Bedrock or S3 Vectors. The private link keeps embedding and token traffic off the public internet and its transfer costs predictable. If RAG is your focus, our write-up on S3 Vectors at billion-vector scale pairs well with this pattern.
3) Global WAN with regional exits
Use AWS Cloud WAN as the global policy layer and attach regional Transit Gateways. Anchor Cross‑Cloud Interconnects in the same metros to give your application teams predictable east‑west paths. This setup lets you apply central policies (isolation, inspection, QoS) while still enabling product squads to move fast in their own VPCs and projects.
Planning AWS Interconnect
When you plan AWS Interconnect, treat it like a new backbone, not a fancy tunnel. Decide up front which services will ride the private link, what gets proxied via Private Service Connect, and where you’ll terminate TLS. Define ownership: platform networking owns the Interconnect, product teams own service endpoints and SLOs. If you skip these calls, you’ll end up recreating the same “shadow” networks Interconnect was meant to replace.
The 90‑minute starter plan (lab to first packets)
Set aside a sandbox AWS account and a Google Cloud project. You’re not building production on day one—you’re proving the workflow so your runbooks are fast when it counts.
- Choose a metro pair and Regions. Start where both providers have strong presence (e.g., Northern Virginia/US East and a corresponding Google metro). Confirm availability in the preview Regions.
- Provision on Google: request a Cross‑Cloud Interconnect port in your chosen location and attach it to a Cloud Router with a reserved ASN. Enable BFD if available.
- Provision on AWS: in the console, create an AWS Interconnect – multicloud attachment. Attach to a Transit Gateway (recommended) or the target VPC. Set the Amazon‑side ASN to match your design.
- Create a Direct Connect gateway on AWS and link it to the Transit Gateway. The gateway is your anchor for BGP sessions back to the Cloud Router on Google.
- Bring up BGP. Exchange a minimal set of prefixes first (a /28 test subnet each side). Validate convergence and failover by disabling one side and timing route withdrawal.
- Test traffic. Launch a tiny web service on each side and curl across the private link. Check performance with iperf3 at quiet times. Confirm MTU settings and path MSS clamping.
- Security guardrails: add NACLs and security groups to enforce least privilege; on Google, use firewall policies and tags. No 0.0.0.0/0 shortcuts. Verify MACsec or IPSec posture per your compliance needs.
- Observability: wire up CloudWatch metrics for Interconnect health and BGP status; on Google, export Interconnect and Cloud Router metrics to Cloud Monitoring. Set alerts that page humans if BFD or session status flaps.
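The MTU and MSS-clamping step in the list above is just arithmetic worth automating. A small sketch, assuming IPv4 with standard 20-byte IP and TCP headers and no options (IPv6 or TCP options change the constants):

```python
# MSS arithmetic for the lab's MTU step. Header sizes are standard for
# IPv4/TCP without options; 1500 and 9000 are the common Ethernet and
# jumbo-frame MTUs discussed in the article.

IPV4_HEADER = 20
TCP_HEADER = 20

def tcp_mss_for(mtu: int) -> int:
    """Largest TCP payload per segment for a given path MTU."""
    return mtu - IPV4_HEADER - TCP_HEADER

def safe_clamp(path_mtus: list[int]) -> int:
    """Clamp MSS to the smallest MTU anywhere on the path, so no hop fragments."""
    return tcp_mss_for(min(path_mtus))
```

If any device on the path is stuck at 1500, `safe_clamp([9000, 1500, 9000])` tells you to advertise an MSS of 1460 even though both endpoints support jumbo frames.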
By the end of this exercise, you have a working, private path, runbooks for provisioning, and baseline metrics for latency, throughput, and failover. Now you can talk production.
Performance, throughput, and security facts to anchor expectations
Cross‑cloud ports come in well‑known tiers: 10 Gbps and 100 Gbps are standard for Google’s Interconnect offerings; AWS Interconnect integrates with Direct Connect gateways and your AWS network edges. MACsec is supported to encrypt L2 links; HA VPN over Interconnect is GA if you prefer IPSec between gateways; and BFD is available to speed up failure detection. Jumbo frames (9K MTU) are supported in relevant Interconnect paths—test end‑to‑end before enabling to avoid silent fragmentation. Most importantly, the provisioning model is API‑first: you shouldn’t be waiting on ticket queues to get capacity when a product team needs a new spoke.
Don’t expect magic: latency is bounded by physics and path design. You’ll usually see materially better jitter and fewer tail latency spikes than internet VPNs, but cross‑region traffic still pays distance penalties. Treat the link like a precious resource and keep noisy, bursty jobs (bulk ETL, artifact sync) off your daytime SLO windows.
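That distance penalty has a hard floor you can estimate before buying anything. A quick sketch, using the common approximation that light in fiber covers roughly 200 km per millisecond (about two-thirds of c); real paths add serialization, queuing, and non-great-circle routing on top:

```python
# Theoretical minimum round-trip time over a fiber route. This is a lower
# bound only: actual RTT includes queuing, serialization, and detours.

LIGHT_IN_FIBER_KM_PER_MS = 200.0  # ~2/3 of the speed of light in vacuum

def fiber_rtt_floor_ms(route_km: float) -> float:
    """Minimum possible RTT in milliseconds for a fiber route of route_km."""
    return 2 * route_km / LIGHT_IN_FIBER_KM_PER_MS
```

A 1,000 km metro pair therefore cannot beat ~10 ms RTT no matter what you provision; use this floor to sanity-check vendor latency claims and your own SLOs.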
Cost model and the biggest traps
There are three buckets to model: port/attachment charges on both clouds, egress/ingress data transfer, and routing/attachment scale costs (Transit Gateway, Cloud WAN, Cloud Router). Add inspection costs if you hairpin through firewalls. Two traps I see repeatedly:
- Double‑egress surprises. Moving data between clouds usually triggers egress on the source side; if you bounce through a third Region, you can pay twice. Keep the path metro‑local where possible.
- Unbounded service discovery. Letting every service talk to every service across clouds multiplies east‑west traffic. Use Private Service Connect to publish a small set of producer services instead of opening your whole VPC.
Procurement tip: lock in port commitments that match your non‑peak steady state, then use burstable compute/storage strategies on each side to absorb spikes without permanently over‑provisioning the interconnect.
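The three buckets above are easy to turn into a model the team can argue about in review. Every rate in this sketch is a placeholder to replace with current provider pricing; the structure, not the numbers, is the point:

```python
# Illustrative monthly cost model for a cross-cloud link. All dollar rates
# below are placeholders, not published prices; swap in current pricing
# for your Regions and port tiers before using this for budgeting.

def monthly_link_cost(
    gb_transferred: float,
    egress_per_gb: float = 0.02,      # placeholder $/GB cross-cloud egress
    aws_port_hourly: float = 0.30,    # placeholder attachment/port $/hour
    gcp_port_hourly: float = 0.30,    # placeholder CCI port $/hour
    tgw_attach_hourly: float = 0.05,  # placeholder Transit Gateway attachment
    hours: int = 730,                 # hours in an average month
) -> float:
    """Sum port/attachment charges plus data transfer for one month."""
    ports = (aws_port_hourly + gcp_port_hourly + tgw_attach_hourly) * hours
    egress = gb_transferred * egress_per_gb
    return round(ports + egress, 2)
```

Running it for your top three flows makes the double-egress trap visible: if a flow bounces through a third Region, model it as two `gb_transferred` entries, not one.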
What about Azure?
AWS has said Azure connectivity is on deck in 2026. If you’re a tri‑cloud shop, start with clean abstractions now: a per‑cloud hub VPC/project with identical route policies, a shared naming standard for Cloud Routers and Transit Gateways, and one golden runbook for bringing up new links. When Azure lands, you’ll add another hub without rewriting everything.
People also ask
Can we fail over automatically between AWS and Google?
Yes, but “automatic” still needs guardrails. Use health checks on both sides, BFD to speed path decisions, and DNS or Anycast to steer traffic. Run game days. The hardest part is data consistency—stateless front ends are easy; stateful systems require planned replication and clear RTO/RPO.
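One way to make "automatic with guardrails" concrete is hysteresis: fail over only after several consecutive failed checks, and fail back only after a longer run of successes, so a single flap never ping-pongs traffic. A sketch with illustrative thresholds; in production this logic lives in Route 53 health checks and Cloud DNS policies, not application code:

```python
# Hysteresis-based failover sketch. The side names and thresholds are
# illustrative; real steering belongs in DNS/health-check configuration.

class FailoverController:
    def __init__(self, fail_threshold: int = 3, recover_threshold: int = 5):
        self.fail_threshold = fail_threshold        # consecutive fails to leave primary
        self.recover_threshold = recover_threshold  # consecutive OKs to return
        self.active = "aws"                         # current traffic target
        self._fails = 0
        self._oks = 0

    def observe(self, primary_healthy: bool) -> str:
        """Feed one health-check result for the primary; return the active side."""
        if primary_healthy:
            self._fails = 0
            self._oks += 1
            if self.active == "gcp" and self._oks >= self.recover_threshold:
                self.active = "aws"
        else:
            self._oks = 0
            self._fails += 1
            if self.active == "aws" and self._fails >= self.fail_threshold:
                self.active = "gcp"
        return self.active
```

Note the asymmetry: failing back requires more consecutive successes than failing over required failures, which is what keeps a flapping link from dragging traffic back and forth.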
Is AWS Interconnect redundant if we already have Direct Connect?
No. Direct Connect gets you from on‑prem to AWS. Interconnect handles cloud‑to‑cloud. You’ll often run both: DC for hybrid, Interconnect for multicloud. Keep them separate in your inventory and billing so troubleshooting is sane.
Do we still need colocation providers?
Less often. Interconnect reduces your need for custom cross‑connects, but you may still use colos for legacy kit, compliance zones, or specialized network functions. The difference is you choose them—you’re no longer forced into them.
A pragmatic framework for multicloud networking decisions
When a product team asks for multicloud, run this five‑call framework before you touch the console:
- Why here, why now? Is this resilience, vendor leverage, or a specific service fit? If it’s only discount hunting, say no.
- What crosses clouds? List the two or three services that must traverse the link. Everything else uses a public edge.
- Data gravity? Pick a primary analytics home (BigQuery, Redshift, Snowflake) and keep bulk data local. Use compact interfaces (events, features, embeddings) across clouds.
- Identity and authz? Decide on a single source of truth (OIDC/SAML). Multi‑cloud fails fast without consistent identity.
- Runbooks and SLOs? Write failure modes first. If you can’t describe how you’ll detect and resolve a flap at 3 a.m., you’re not ready.
Zooming out: what this unlocks in 2026
With a supported path for private cross‑cloud links, platform teams can finally standardize multicloud without per‑project detours. Expect faster disaster recovery drills, saner split‑stack AI architectures, and fewer late‑night scrambles to procure cross‑connects. This change also pushes vendors to improve service parity and private service publishing: if you can move traffic privately in minutes, you’ll demand first‑class support for it.
If you want a quick executive‑level brief on the market implications, read our earlier note, AWS Interconnect + Google: What Changes Now. For a deeper design playbook, we also covered the preview mechanics in Your Multicloud Playbook. These pair well with the security perspective we shared after recent front‑end incidents in React2Shell: What to Patch Now and Why It Broke, because resilient backbones are only useful if your edge is sane.
What to do next (developers and platform leads)
- Spin a lab today. One AWS sandbox, one Google project, a tiny service each side. Prove BGP up/down and measure latency, MTU, and throughput.
- Decide your publishing model. Default to Private Service Connect for cross‑cloud producers; block east‑west by default.
- Pick two patterns. Choose one DR and one split‑stack pattern to standardize, document them, and say “no” to one‑off designs.
- Budget the backbone. Model port/attachment plus egress for your top three data flows. Add a 20% buffer for spikes.
- Run a game day. Kill a link, watch failover, and record time to steady state. Tune BFD timers and DNS TTLs accordingly.
- Socialize a two‑page memo. One page on what changed (Interconnect + CCI), one on your standard patterns. Share it with security and SRE.
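For the game-day item above, it helps to budget failover time explicitly before you tune anything. A minimal sketch of the arithmetic: BFD detects a failure after the negotiated transmit interval times the detect multiplier, and DNS-steered traffic also waits out the record TTL. The 300 ms / 3 defaults used in the test are common starting points, not provider mandates:

```python
# Failover time budget arithmetic. BFD detection time is standard
# (interval x multiplier, per RFC 5880); the budget composition is a
# simplification that ignores client-side caching beyond the DNS TTL.

def bfd_detection_ms(tx_interval_ms: int, multiplier: int) -> int:
    """Worst-case BFD failure detection time in milliseconds."""
    return tx_interval_ms * multiplier

def failover_budget_ms(bfd_ms: int, dns_ttl_s: int, bgp_withdraw_ms: int = 0) -> int:
    """Rough end-to-end budget: detection + route withdrawal + DNS TTL expiry."""
    return bfd_ms + bgp_withdraw_ms + dns_ttl_s * 1000
```

With 300 ms intervals, a multiplier of 3, and a 30-second DNS TTL, your floor is about 31 seconds; if the game day measures minutes, the gap is in withdrawal propagation or client caching, not BFD.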
If you’d like a sanity check on your architecture or want us to run the first build with your team, start here: Bybowu services and contact us. We’ve helped teams move from whiteboards to working links in a single sprint.
Implementation checklist you can copy
Use this as your PRD appendix or change ticket template:
- Metro and Region pair selected; capacity forecast attached.
- Interconnect/CCI ports requested; ASNs assigned; BFD and MTU plan documented.
- Transit Gateway/Cloud WAN policy updated; Cloud Router configured.
- Prefix lists defined; route limits annotated; blackhole policy tested.
- Private Service Connect endpoints/services declared with owners.
- MACsec/IPSec posture approved by security; key rotation documented.
- Monitoring: session status, throughput, error counters, and jitter SLO alerts.
- Game day schedule; rollback plan; pager rotation sign‑off.
Risks, constraints, and edge cases
Preview means preview: feature availability and Regions may be limited at first, and quotas can be tighter than GA services. Route scale matters more than you think—unbounded prefixes will tank convergence and blow up control‑plane CPU. Jumbo frames are fantastic until a middle device doesn’t support them; validate end‑to‑end or lock to 1500 MTU. Finally, remember that compliance boundaries don’t vanish because the link is private; document data classifications per flow and apply tokenization or envelope encryption where required.
On the product side, avoid treating multicloud as an excuse to skip hard choices. Consolidate data gravity, publish a small surface area of services, and keep your edge observability sharp. Multicloud resilience only works if failover is routine and boring.
One last perspective from the trenches
We’ve spent years building multicloud backbones the hard way: colos, LOAs, cross‑connects, port turn‑ups, and tickets that take longer than product sprints. AWS Interconnect plus Cross‑Cloud Interconnect turns that slog into an API call and a weekend lab. It won’t solve sloppy designs or replace discipline, but it removes the biggest operational barrier. Use the time you get back to raise the bar on runbooks, game days, and service boundaries.
Want a deeper dive into adjacent AWS updates that play into this? Our take on Graviton5 migrations and S3 Vectors for RAG shows how compute and data layers are evolving alongside the network. Together, they set up a solid 2026 platform story.