The Cisco AsyncOS zero-day—CVE‑2025‑20393—is being exploited right now against Secure Email Gateway and Secure Email and Web Manager appliances. It’s a CVSS 10.0, pre-auth flaw that yields root on vulnerable systems. There’s no patch as of December 22, 2025, and U.S. federal agencies have a December 24 remediation deadline under the Known Exploited Vulnerabilities (KEV) catalog. If you run these appliances, you’ve got a short runway to triage, contain, and, where necessary, rebuild. (cisco.com)
Cisco AsyncOS zero‑day: what we know
Cisco publicly confirmed on December 17 that attackers have been exploiting a previously unknown flaw in AsyncOS to run system-level commands on affected appliances. The campaign was first noticed on December 10 and likely started in late November. Cisco assigned CVE‑2025‑20393, rated it 10.0, and said the only reliable way to remove persistence after compromise is a full rebuild. (cisco.com)
Cisco Talos attributed the activity with moderate confidence to a Chinese‑nexus APT tracked as UAT‑9686. Observed tooling includes a Python backdoor dubbed AquaShell, along with AquaTunnel (reverse SSH), Chisel (tunneling), and AquaPurge (log tampering). These implants are designed to survive cursory cleanup and create durable footholds. (blog.talosintelligence.com)
Exposure isn’t universal. Cisco says impacted systems share two conditions: the Spam Quarantine feature is enabled, and it is reachable from the public internet. That setting isn’t on by default—but in the real world, “temporary” exposure has a way of becoming permanent. All AsyncOS versions are affected; Cisco Secure Email Cloud is not. (cisco.com)
Is there a patch? What’s the CISA deadline?
There is no fixed software update at the time of writing (December 22, 2025). Cisco’s current guidance: restrict exposure, harden access, assess for indicators, and, if compromise is confirmed, rebuild the appliance. Meanwhile, CISA added CVE‑2025‑20393 to the KEV catalog and set December 24, 2025 as the due date for U.S. FCEB agencies to mitigate or discontinue use. Expect private-sector security teams and insurers to mirror that urgency. (cisco.com)
Why product and engineering leaders should care
Secure Email appliances sit in the mail path and, in many networks, enjoy outbound trust. A rooted gateway can:
• Observe or alter password resets, MFA emails, and sign‑in links.
• Inject or allow malicious mail flows that look legitimate to recipients and downstream controls.
• Serve as a pivot point into your internal network via encrypted tunnels (AquaTunnel/Chisel). (blog.talosintelligence.com)
That’s not just an IT problem; it’s a fraud and brand-risk problem. If your app’s authentication depends on email, compromise of the gateway can undermine user trust and, in turn, erode KPIs from activation to retention.
The 48‑hour plan (use this checklist)
Hour 0–2: confirm exposure and cut risk
• Inventory: list every Secure Email Gateway and Secure Email and Web Manager appliance, physical or virtual, and note network placement.
• Check Spam Quarantine exposure: in the web UI, confirm whether Spam Quarantine is enabled and whether its port is reachable from the internet (a quick reachability sketch follows this checklist). If yes, you’re in scope for this campaign. (cisco.com)
• Contain immediately:
– Remove public exposure of Spam Quarantine and web management. Place appliances behind a firewall; allow only trusted management hosts. Disable HTTP; require TLS. (cisco.com)
– If you suspect active compromise, isolate the appliance network segment. Don’t power off; you may need volatile artifacts for forensics.
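For the exposure check above, a minimal reachability test run from a host outside your perimeter is often the fastest confirmation. This is a sketch, not a scanner: the appliance IP is a placeholder, and the port list is an assumption; confirm the actual quarantine and admin ports in your appliance’s settings before relying on the results.

```python
# Minimal reachability check, run from a vantage point OUTSIDE your network.
# Assumptions: the appliance IP and the ports below are placeholders; confirm
# the real quarantine/admin ports in your Spam Quarantine and admin settings
# (the end-user quarantine commonly listens on 82/HTTP and 83/HTTPS).
import socket

APPLIANCE_IP = "203.0.113.10"          # placeholder: your appliance's public IP
PORTS_TO_CHECK = [82, 83, 443, 8443]   # assumed quarantine + common admin ports

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in PORTS_TO_CHECK:
    status = "OPEN (exposed)" if is_reachable(APPLIANCE_IP, port) else "closed/filtered"
    print(f"{APPLIANCE_IP}:{port} -> {status}")
```

Any port that shows as open from the outside is in scope for containment, regardless of what your firewall rules say on paper.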
Hour 2–6: threat hunt for persistence
Based on Talos reporting, look for these families and behaviors:
• AquaShell: a Python backdoor, often embedded within an existing Python web server file, that passively listens for crafted POST requests. Talos observed it written to /data/web/euq_webui/htdocs/index.py on compromised systems. (blog.talosintelligence.com)
• AquaTunnel: a reverse SSH tunnel (based on ReverseSSH) that beacons outbound to attacker infrastructure.
• Chisel: an open-source tunneling tool that runs over a single HTTP port, used here for lateral movement.
• AquaPurge: a log-tampering utility—expect gaps or lines purged around certain keywords. (blog.talosintelligence.com)
Practical checks you can run now (a short hunting sketch follows this list):
• File integrity: diff the web UI directories for unexpected modifications or appended Python blocks.
• Process scan: enumerate processes and sockets for chisel/reverse ssh patterns; review recent binaries in tmp and data directories.
• Egress audit: new persistent outbound connections to unfamiliar IPs/domains—especially long‑lived SSH‑like sessions.
• Log anomalies: sudden drops in log volume, or lines removed around specific keywords in a way that looks like crude regex-based scrubbing, consistent with AquaPurge behavior. (blog.talosintelligence.com)
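If you can pull a copy of the web UI tree off the appliance (for example, from a support bundle or a trusted backup) to an analysis workstation, a short offline script makes the file-integrity check repeatable. This is a minimal sketch under assumptions: the local copy location, the 45-day window, and what counts as “recent” are illustrative; the only appliance-specific detail is the index.py path Talos published.

```python
# Offline hunting sketch: point it at a copy of the appliance's web UI tree
# (for example, pulled from a support bundle or a recent backup) on an
# analysis workstation. Paths and thresholds are illustrative assumptions.
import hashlib
import time
from pathlib import Path

WEBUI_COPY = Path("./euq_webui")          # local copy of /data/web/euq_webui (placeholder)
KNOWN_IOC_PATH = Path("htdocs/index.py")  # path Talos observed AquaShell using
RECENT_DAYS = 45                          # roughly covers late-November activity onward

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

# 1) Flag the file Talos called out, if present, so you can compare its hash
#    against a known-good export or the published IOCs.
ioc_file = WEBUI_COPY / KNOWN_IOC_PATH
if ioc_file.exists():
    print(f"REVIEW: {ioc_file} sha256={sha256(ioc_file)}")

# 2) List Python files modified within the campaign window; appended backdoor
#    code usually shows up as an unexpectedly recent mtime or a size jump.
cutoff = time.time() - RECENT_DAYS * 86400
for py in sorted(WEBUI_COPY.rglob("*.py")):
    st = py.stat()
    if st.st_mtime > cutoff:
        print(f"recently modified: {py} ({st.st_size} bytes, "
              f"mtime={time.strftime('%Y-%m-%d', time.localtime(st.st_mtime))})")
```

Anything it flags is a starting point for comparison against a known-good configuration or the published IOCs, not a verdict on its own.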
Hour 6–18: decide to rebuild
If your hunt turns up any indicators, treat the device as compromised. Cisco’s stance is unambiguous: rebuilding is currently the only viable way to eradicate persistence. Pull configuration backups you trust, prepare clean media for your specific model, and plan a maintenance window. (cisco.com)
If you find no indicators, you’re not out of the woods. Keep the device behind filtering, restrict management surfaces, rotate credentials that touched the appliance, and continue monitoring for at least two weeks while Cisco’s investigation evolves. (cisco.com)
Hour 18–48: rebuild, verify, and communicate
• Rebuild procedure:
1) Back up only what you must (clean configuration exports, not binaries).
2) Factory reset or reimage the appliance from trusted Cisco software sources.
3) Before reconnecting, apply your network hardening baseline: separate mail and management interfaces; allowlisted management hosts; TLS for admin; disable unneeded services like HTTP/FTP. (cisco.com)
• Verification:
– Re-run your integrity and egress checks.
– Confirm Spam Quarantine is not publicly reachable.
– Capture a golden snapshot for future diffing (a snapshot sketch follows this checklist).
• Communication:
– Brief executives and affected teams (security, SRE, support).
– Preemptively notify customers if you process sensitive email flows (password resets, invoices) and you suspect compromise. Be specific about dates and mitigations.
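For the golden snapshot mentioned under verification, a plain hash manifest is enough. The sketch below assumes you have copied the files you care about (configuration exports, the web UI tree) to a workstation; the paths, file names, and command-line shape are placeholders you would adapt.

```python
# Golden-snapshot sketch: record file hashes for a directory you trust
# (for example, a post-rebuild copy of config exports or the web UI tree),
# then diff future copies against it. Paths and file names are placeholders.
import hashlib
import json
import sys
from pathlib import Path

def snapshot(root: Path) -> dict[str, str]:
    """Map each file's relative path to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def main() -> None:
    # usage: python snapshot.py save  <dir> <baseline.json>
    #        python snapshot.py check <dir> <baseline.json>
    mode, root, baseline = sys.argv[1], Path(sys.argv[2]), Path(sys.argv[3])
    current = snapshot(root)
    if mode == "save":
        baseline.write_text(json.dumps(current, indent=2))
        print(f"saved {len(current)} file hashes to {baseline}")
    else:
        old = json.loads(baseline.read_text())
        for path in sorted(set(old) | set(current)):
            if old.get(path) != current.get(path):
                print(f"CHANGED: {path}")

if __name__ == "__main__":
    main()
```

Store the baseline somewhere the appliance cannot touch; a snapshot an attacker can rewrite is not a baseline.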
People also ask
Should we take the gateway offline until a patch ships?
If you can maintain email continuity using a cloud service or failover path, temporarily taking the exposed device offline is reasonable—especially if you can’t immediately eliminate public exposure. Cisco emphasizes restricting access and following a multi‑step process to restore appliances to a secure state; confirmed compromises should be rebuilt. (cisco.com)
Does this affect Cisco Secure Email Cloud?
No. Cisco says Secure Email Cloud is not impacted by this campaign. The focus is on on‑prem and virtual appliances with Spam Quarantine exposed. (cisco.com)
Is the Cisco AsyncOS zero‑day being exploited broadly?
Multiple security advisories and press coverage confirm active exploitation, with CISA placing the CVE on the KEV list and setting a December 24 deadline for federal agencies. Talos ties the activity to UAT‑9686 and documents implants and IOCs. (esentire.com)
What this means for your app and business
Zooming out, an email security edge device with root‑level persistence is a perfect staging point for business email compromise, invoice redirection, and MFA interception. If your product relies on email for authentication, think like an attacker: could they delay or divert the next password reset? Could they learn enough about your vendor relationships to craft convincing phishing lures? That’s why the fix isn’t just “apply a patch later.” It’s network‑level segmentation, strict access control, and sometimes a blunt rebuild.
A practical hardening baseline (keep this after the crisis)
Here’s the baseline we ask teams to adopt after the immediate 48‑hour push. It’s not fancy; it’s the stuff that keeps surprises from becoming headlines:
• No public management: Management and quarantine functions sit behind a firewall or VPN. No direct internet access, ever. (cisco.com)
• Split roles: Separate mail and management interfaces; no shared IPs. Enforce TLS for admin and SAML/LDAP for strong auth. (cisco.com)
• Externalize logs: Forward logs to an immutable store with retention. That’s your insurance when attackers run log‑cleaning utilities. (blog.talosintelligence.com)
• Egress control: Block unexpected outbound protocols from the appliance; allowlist only what’s needed for updates and reputation feeds.
• Config drift checks: Take signed config snapshots after changes; auto‑diff weekly (a diff sketch follows this list). Treat undocumented changes as incidents until proven benign.
• Recovery drills: Quarterly rebuild exercises with timed RTO/RPO goals. If you’ve never rebuilt one of these appliances, the middle of an incident is a rough first rehearsal.
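For the config drift checks above, a line diff of two configuration exports is usually enough to surface undocumented changes. A minimal sketch, assuming your exports are plain-text files and that the file names below are placeholders:

```python
# Weekly config-drift sketch: compare this week's configuration export with
# last week's and surface every changed line. File names are placeholders;
# the only assumption is that your exports are plain text.
import difflib
from pathlib import Path

PREVIOUS = Path("config-2025-12-15.xml")   # placeholder: last trusted export
CURRENT = Path("config-2025-12-22.xml")    # placeholder: this week's export

diff = difflib.unified_diff(
    PREVIOUS.read_text().splitlines(),
    CURRENT.read_text().splitlines(),
    fromfile=PREVIOUS.name,
    tofile=CURRENT.name,
    lineterm="",
)
changes = list(diff)
if changes:
    print("\n".join(changes))   # review and map every hunk to a change ticket
else:
    print("no drift detected")
```

Wire the output into whatever alerting you already have; the value is in someone reading the diff every week, not in the script itself.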
Data points and dates you can brief upstairs
• First observed activity: at least late November 2025; Cisco aware December 10; advisory published December 17. (cisco.com)
• Severity: CVSS 10.0, pre‑auth code execution leading to root on affected appliances. (cisco.com)
• Actor/tools: UAT‑9686; AquaShell (Python backdoor), AquaTunnel (reverse SSH), Chisel (tunnel), AquaPurge (log tampering). (blog.talosintelligence.com)
• Regulatory clock: CISA KEV deadline December 24, 2025 for federal agencies; strong signal for everyone else. (esentire.com)
How this compares to other recent “drop‑everything” patches
This incident sits in the same urgency tier as the WebKit and Chrome zero‑days affecting user endpoints—but it’s trickier operationally because it hits a piece of email infrastructure many teams rarely touch. If you’re building a holiday patch plan, we’ve been here before with fast‑moving web stack risks. Our breakdowns on shipping security updates under time pressure may help you triage in parallel with your app workstreams: see our guide on WebKit and Chrome ANGLE zero‑day response, our React2Shell holiday patch plan, and our note on what to patch when vendor releases slip.
The rebuild, step by step (bookmark this)
1) Prep a clean jump host in a separate management network. Document current network ACLs so you don’t strand yourself mid‑reimage.
2) Export a minimal, trusted configuration. Do not carry forward any binaries, custom scripts, or post‑install packages.
3) Acquire the correct software image from Cisco and verify it. Use checksums from the portal; don’t reuse local caches (a verification sketch follows these steps). (cisco.com)
4) Reimage the appliance or perform a factory reset followed by a clean install.
5) Before reconnecting it to production networks, enforce the hardening baseline: no public exposure, split admin/mail interfaces, TLS‑only admin, disable unused services, rotate all credentials associated with the device (local admin, LDAP binds, mail relays). (cisco.com)
6) Restore configuration; then validate: review egress for unexpected tunnels, scan for modified web UI files, and confirm logging to an external store is running.
7) Document the incident with timestamps: when exposure was present, when containment occurred, and whether you found persistence. This helps legal, compliance, and—if needed—customer comms.
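For the image verification in step 3, it can be as simple as hashing the download and comparing it against the value shown on the Cisco download portal. A minimal sketch; the file name and pasted digest are placeholders, and you should match whichever hash algorithm the portal displays for your image:

```python
# Image-verification sketch for step 3: compare the local download against the
# checksum shown on Cisco's download portal. The file name and digest below are
# placeholders; use the hash algorithm the portal actually displays.
import hashlib
from pathlib import Path

IMAGE = Path("asyncos-upgrade-image.img")      # placeholder file name
PORTAL_SHA512 = "paste-the-portal-value-here"  # copy from the download page

digest = hashlib.sha512()
with IMAGE.open("rb") as f:
    # Read in 1 MiB chunks so large images don't have to fit in memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

local = digest.hexdigest()
if local == PORTAL_SHA512.lower().strip():
    print("checksum matches the portal value")
else:
    print("MISMATCH: do not install this image")
    print(f"local:  {local}")
    print(f"portal: {PORTAL_SHA512}")
```

A mismatch means stop and re-download from the portal, not “probably fine.”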
What to do next
• Today: confirm whether Spam Quarantine or admin surfaces are publicly reachable; if yes, close them and start hunting.
• Within 24 hours: decide whether to rebuild based on indicators; if rebuilding, schedule the window and line up clean images.
• Within 48 hours: complete remediation, rotate related secrets, and brief leadership with clear dates, findings, and residual risk.
If you’d like a second set of hands during the holiday crunch, our team can help with rapid triage and rebuilds. See our services, browse relevant posts on the engineering blog, and reach out via contact.
FAQ for developers and IT
Are Pages Router web apps or Next.js sites affected by this?
No—this isn’t a web framework bug. But your application’s reliance on email for auth and customer communication means the blast radius crosses into product experience. If you had to delay a web deployment to give the network team a rebuild window, that’s the right trade this week.
What if we can’t rebuild before December 24?
Put the appliance behind strict access controls now, shift sensitive flows (password resets, invoices) to a cloud mail relay service you trust, and continue hunting for persistence. Document your mitigations; leadership will need that paper trail even if you’re not a federal agency.
How do we prove we’re safe after rebuild?
Keep a checklist: exposure removed, config split, TLS‑only admin, logs externalized, egress allowlist enforced, IOCs re‑checked. Treat this like a change request that requires evidence, not just a new image. For more on proving safety for leadership, we’ve written about patch order and verification for web stacks—those patterns apply here too. See our note on patch order and proof.
Final word
Here’s the thing: incidents like this don’t just test your patching—they test your architecture discipline. If you discovered a publicly reachable quarantine or admin port today, fix the exposure and keep it fixed after the headlines fade. Rebuild decisively if you see indicators. And don’t wait for a patch to enforce a baseline that prevents a repeat. That’s the real win: turning a frantic 48 hours into a permanent improvement in how you build and run the edge.