There’s a live fire in the JavaScript ecosystem: the expr-eval vulnerability, tracked as CVE-2025-12735. The issue allows a crafted variables object passed to evaluate() to trigger code execution in your application context. Disclosure hit early November 2025, with updates continuing through late November. As of November 23, 2025, the original expr-eval package has not shipped a patched npm release; a community-maintained fork is active, with changes that restrict which functions can execute. If you parse math expressions anywhere in your stack—Node, browser, serverless—assume you’re exposed until you prove otherwise.
What is the expr-eval vulnerability (CVE-2025-12735)?
At its core, expr-eval lets you evaluate a user-provided math expression against a variables map. The flaw is that function objects can slip into that map. Once they do, the evaluator may run them during expression evaluation. If an attacker can influence the variables object, they can ride that path to execute arbitrary code with your process’s privileges. In browser code, that means access to your app’s JS environment; on the server, it can mean reading secrets, touching the filesystem, calling internal services—whatever the runtime allows.
Two details matter for risk: first, whether untrusted input can affect either the expression string or the variables map; second, whether your runtime has sensitive capabilities (environment variables, network adjacency, filesystem, child_process). Many teams carefully sanitize expressions but forget the variables object because it feels “data-like.” That’s the gap this bug walks through.
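To make that concrete, here is a minimal sketch of the risky pattern, with invented names: the evaluator will call any function it finds under a name the expression references, so one function smuggled into the variables map is enough.
// risky-pattern.js — illustration only; the object shape and unitPrice entry are made up
import { Parser } from 'expr-eval';
// Imagine vars is merged from several sources and a function sneaks in
// (an over-permissive merge, a deserializer, a plugin hook).
const vars = {
  qty: 3,
  unitPrice: () => {
    // Anything here runs with the host process's privileges.
    console.log('executed inside the evaluator; env keys visible:', Object.keys(process.env).length);
    return 0;
  },
};
// The expression looks like harmless math, but unitPrice() executes.
console.log(Parser.evaluate('qty * unitPrice()', vars)); // prints 0 after running the function
The triage below is about finding every place where that map could be influenced from outside.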
Fast triage: 90 minutes to know your blast radius
Here’s a practical, time-boxed checklist you can run this afternoon. Assign one engineer to drive, one to observe, and one to record evidence.
- Find the dependency. Run one or more of:
npm ls expr-eval || true
npx pnpm why expr-eval || true
yarn why expr-eval || true
If your app isn’t a direct user, scan transitive deps:
npx osv-scanner --lockfile package-lock.json
npx osv-scanner --lockfile pnpm-lock.yaml
npx osv-scanner --lockfile yarn.lock
- Locate usage in code. Search for parser and evaluation calls:
rg -n "(new\s+Parser\(|Parser\.evaluate|\.evaluate\()" src apps packages
Capture call sites showing where the variables map originates.
- Classify data origin. For each call, label the variables as A) trusted (constants), B) server-derived (database/queue), or C) user-influenced (request body, URL params, form fields, webhooks). Anything in category C is red.
- Decide interim control. If you find any C, freeze deployments and apply the mitigation wrapper below. Push a hotfix branch and deploy behind a feature flag.
- Create evidence. Commit a short SECURITY-NOTE.md documenting where you checked, what you changed, and how you tested. Screenshots of the search output help.
Interim mitigation you can ship today
Until you upgrade, enforce two rules at the call site: no function values in the variables object, and a strict allowlist of evaluator functions.
// mitigation.js
import { Parser } from 'expr-eval';
// 1) Deep-scan object graphs and reject functions / accessors
function assertNoFunctionsDeep(obj, path = 'vars') {
  const visited = new Set();
  const stack = [{ value: obj, path }];
  while (stack.length) {
    const { value, path } = stack.pop();
    // Reject a function at the root of the graph as well as nested values
    if (typeof value === 'function') throw new Error(`function not allowed at ${path}`);
    if (value && typeof value === 'object') {
      if (visited.has(value)) continue; // cycle-safe
      visited.add(value);
      for (const [k, v] of Object.entries(value)) {
        const p = `${path}.${k}`;
        const desc = Object.getOwnPropertyDescriptor(value, k);
        if (desc && (desc.get || desc.set)) throw new Error(`accessor not allowed at ${p}`);
        if (typeof v === 'function') throw new Error(`function not allowed at ${p}`);
        stack.push({ value: v, path: p });
      }
    }
  }
}
// 2) Build a parser with a minimal function allowlist
export function safeEvaluate(expression, vars) {
  assertNoFunctionsDeep(vars);
  const parser = new Parser();
  parser.functions = Object.freeze({
    // basic math only; extend intentionally
    abs: Math.abs, ceil: Math.ceil, floor: Math.floor, round: Math.round,
    min: Math.min, max: Math.max, pow: Math.pow, sqrt: Math.sqrt,
    log: Math.log, log10: Math.log10 ?? ((x) => Math.log(x) / Math.LN10),
    exp: Math.exp, sin: Math.sin, cos: Math.cos, tan: Math.tan
  });
  return parser.parse(expression).evaluate(vars);
}
This isn’t a substitute for a true upstream fix, but it cuts off the exploit path in typical app code. If you previously relied on custom functions injected via parser.functions, reintroduce them individually and document the use case.
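If you want the evidence trail mentioned later in this post, pair the wrapper with a test that proves the rejection path. Here is a minimal sketch using Node's built-in node:test runner; the file names and error-message regex assume the wrapper above, so adapt them if your suite runs on Jest or Vitest.
// mitigation.test.js — sketch; run with `node --test`
import test from 'node:test';
import assert from 'node:assert/strict';
import { safeEvaluate } from './mitigation.js';

test('rejects a function hidden in the variables map', () => {
  const hostile = { qty: 3, discount: () => 42 }; // must never reach the evaluator
  assert.throws(() => safeEvaluate('qty * discount()', hostile), /function not allowed/);
});

test('still evaluates plain math against pure data', () => {
  assert.equal(safeEvaluate('min(qty, 10) * 2', { qty: 3 }), 6);
});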
Upgrade strategy: what to install, and when
Here’s the thing: the original expr-eval repository received a proposed security patch in early November, but a published npm version hadn’t landed as of November 23, 2025. Meanwhile, the maintained fork has implemented allowlisting and mandatory function registration. Depending on your risk tolerance, you’ve got three viable paths:
- Path A: Move to the fork. Replace expr-eval with a maintained fork that enforces a function allowlist and does not execute arbitrary functions via the variables map. Treat this as a minor breaking change: run your test suite and fuzz a handful of expressions you rely on (trig, power, conditionals). Constrain it to a major range with a caret (e.g., ^3) and set a Renovate/Dependabot limit to avoid surprise jumps this month.
- Path B: Vendor the patch. If your change window is tight, copy the patched code into a private package (scoped to your org) and depend on that package until the upstream path stabilizes. Keep a ticket to retire the vendored code within 30 days.
- Path C: Remove runtime evaluation. If you only evaluate a small set of formulas, precompile or replace them with explicit code (see the sketch after this list). This often improves performance and deletes a class of security problems permanently.
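For Path C, the replacement can be as small as a named-formula map. A sketch with hypothetical formula names and fields; swap in the handful of expressions your product actually uses.
// formulas.js — explicit code instead of runtime expression evaluation
const FORMULAS = {
  lineTotal: ({ qty, unitPrice }) => qty * unitPrice,
  discountedTotal: ({ qty, unitPrice, discountPct }) =>
    qty * unitPrice * (1 - discountPct / 100),
};

export function runFormula(name, data) {
  const formula = FORMULAS[name];
  if (!formula) throw new Error(`unknown formula: ${name}`);
  return formula(data);
}
Callers pick a formula by name and pass plain data; there is no parser left to attack.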
People also ask: are browser-only apps affected?
Yes, but the impact profile is different. In a browser, code execution stays inside the page’s JS sandbox, which can still leak user data, session tokens, or app state. If you’re evaluating expressions in a worker or a shared iframe, isolate that logic and scrub the variables object the same way.
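One way to get that isolation in the browser is to push evaluation into a Worker: postMessage's structured clone refuses to serialize functions, so a poisoned variables object fails loudly at the boundary. A sketch, assuming a module worker at ./expr-worker.js (hypothetical) that does the actual parse-and-evaluate.
// evaluate-in-worker.js — browser-side sketch, one in-flight request at a time
const worker = new Worker(new URL('./expr-worker.js', import.meta.url), { type: 'module' });

export function evaluateInWorker(expression, vars) {
  return new Promise((resolve, reject) => {
    worker.onmessage = (event) => resolve(event.data);
    worker.onerror = (event) => reject(new Error(event.message));
    worker.postMessage({ expression, vars }); // throws DataCloneError if vars holds a function
  });
}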
People also ask: what about serverless and edge runtimes?
Function-per-request models make exploitation attempts cheap for attackers. If your variables include callbacks, accessors, or configuration objects that expose secrets, you’re at risk. Deploy the mitigation wrapper and log at the edge: flag every rejected evaluation with a security tag and route it to your SIEM. If you run Node 20/22/25 in serverless, layer on runtime policies (for example, blocking child processes) to reduce the blast radius.
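A thin wrapper over safeEvaluate() produces that tagged log line. The event fields and the "security" tag below are assumptions; map them onto whatever schema your SIEM expects.
// guarded-evaluate.js — sketch: log rejected evaluations with a security tag, then rethrow
import { safeEvaluate } from './mitigation.js';

export function guardedEvaluate(expression, vars, logger = console) {
  try {
    return safeEvaluate(expression, vars);
  } catch (err) {
    logger.error(JSON.stringify({
      tag: 'security',
      rule: 'expr-eval-rejected',
      reason: err.message,
      expressionLength: expression?.length ?? 0,
    }));
    throw err;
  }
}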
How to prove to leadership you’re safe (with receipts)
Security leaders want dates, versions, and tests. Provide this bundle:
- Inventory proof: the output of npm ls (or why) showing either no expr-eval, or an upgraded fork version.
- Mitigation proof: a link to the commit introducing safeEvaluate(), plus a unit test that injects a function and expects an error.
- Dependency policy: Dependabot/Renovate config that auto-opens PRs for security advisories touching expr-eval and expression-parsing packages in general.
- Runtime guard: a log screenshot of a blocked attempt (red team or synthetic test) with the security tag you added.
Red flags in real code you should refactor now
When I review incidents like this, I keep seeing the same patterns:
- Variables map built from raw request JSON. If you stream req.body into evaluate(), you’ve effectively given attackers an API to your runtime.
- Dynamic function injection for “extensibility.” Teams add JavaScript functions into the variables map to power custom operators. That makes life easy, and it opens the door (a safer registration pattern follows this list).
- “Safe because internal.” Internal doesn’t mean trusted. Mobile apps, partner integrations, and misconfigured proxies will route attacker input your way sooner or later.
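If you genuinely need custom operators, register them on the parser rather than smuggling them through the variables map; that keeps the function catalog under your control. A sketch: the clamp() helper is hypothetical, but the hook is the parser.functions property expr-eval already exposes.
// custom-functions.js — extend the evaluator on the parser, not in the data
import { Parser } from 'expr-eval';

const parser = new Parser();
parser.functions.clamp = (x, lo, hi) => Math.min(Math.max(x, lo), hi);

// Variables stay pure data; the caller cannot add executable entries.
const value = parser.parse('clamp(score, 0, 100)').evaluate({ score: 137 });
console.log(value); // 100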
Hardening patterns that age well
The fix for this bug aligns with better long-term architecture:
- Make the variables map a data structure, not an API surface. Forbid functions and accessors. Serialize/deserialize through JSON to force pure data when feasible.
- Limit function catalogs explicitly. Whether you use a fork or your own wrapper, treat parser.functions as an allowlist you maintain.
- Add expression length and complexity limits. Cap tokens and recursion depth to prevent pathological inputs (two of these guards are sketched after this list).
- Sandbox. If you truly need dynamic evaluation with user input, run it in an isolated process with tight syscalls and resource caps. Yes, even in serverless.
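Two of those guards are cheap enough to add today. A sketch with illustrative limits: the JSON round-trip forces pure data (function-valued properties are dropped, getters are flattened into plain values), and the length cap rejects oversized expressions before they reach the parser.
// hardening.js — pure-data coercion plus a simple expression size cap
const MAX_EXPRESSION_LENGTH = 512; // illustrative; tune to your longest legitimate formula

export function toPureData(vars) {
  // JSON.stringify drops function-valued properties and evaluates getters into data.
  return JSON.parse(JSON.stringify(vars ?? {}));
}

export function checkExpression(expression) {
  if (typeof expression !== 'string' || expression.length > MAX_EXPRESSION_LENGTH) {
    throw new Error('expression rejected: not a string or too long');
  }
  return expression;
}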
Operational guardrails you can automate this week
Let’s get practical. Bake these controls into your pipeline so the next library-level RCE becomes a routine patch, not a fire drill:
- Lockfile scanning on every PR. Add OSV-Scanner or npm audit as a required check. If you use GitHub, wire this into your agent workflows—see our take on GitHub Agent HQ for a sane model.
- Token hygiene. Library swaps trigger CI/CD changes. Rotate tokens, least-privilege scopes, and watch for accidental leakage during emergency patches. If you missed the recent registry shifts, read our guide: npm token changes.
- Runtime smoke tests. After upgrade, run a synthetic exploit attempt in staging. Assert that your wrapper throws and logs, and that your WAF or edge logger captures the event.
- Platform upgrade cadence. When you’re already touching dependencies, take advantage of newer runtime defaults. Our Node.js 25 upgrade playbook outlines safer defaults worth adopting while you’re here.
FAQ-style quick hits
Is my app affected if I only evaluate constant formulas?
If the variables map is entirely constants you control and never influenced by user input or external sources, risk drops dramatically. Still implement the no-functions rule to prevent future regressions when a new feature pipes in user data.
Do I need to rotate secrets?
If you found any path where untrusted variables reached evaluate() on a server, treat it as a potential compromise and rotate credentials. Think API keys, database passwords, and CI tokens. Use your incident playbook even if you have no signs of exploitation.
Does this affect TypeScript projects differently?
No. Types reduce certain mistakes at compile time, but they are erased at runtime: nothing stops a function from landing in the variables object through deserialization, object merging, or injection. Your guardrails must live in executable code.
The security case in numbers and dates
For your internal report: the CVE identifier is CVE-2025-12735. Public disclosure began in the first week of November 2025, with updates landing into the third week of November. The original package had not published a patched npm version by November 23, 2025. A maintained fork implemented an allowlist model and mandatory registration for custom functions. Various advisories classify this as high-to-critical severity because exploit complexity is low once untrusted data reaches the variables object.
What to do next (developers)
- Run the 90-minute triage. If any user-influenced data reaches evaluate(), ship the mitigation wrapper today.
- Choose a path: move to the maintained fork, vendor the patch, or remove dynamic evaluation for now.
- Write a test that injects a function and expects an error; it should fail before your fix and pass after.
- Automate dependency scanning and open a recurring ticket to revisit this in 30 days.
What to do next (engineering leaders)
- Ask for the evidence bundle: inventory, mitigation commit, tests, and a runtime guard screenshot.
- Fund a half-day to remove dynamic evaluation where it isn’t essential. It pays down long-term risk.
- Review your pipeline secrets and rotation policy; emergency fixes often widen credential exposure.
- If you need help assessing the broader supply chain blast radius, our team does this routinely—see what we do and get in touch via Bybowu contacts.
Zooming out: why this one matters
Supply chain incidents are rarely about a single library; they expose brittle spots in how we build. The expr-eval vulnerability hits a common pattern—treating a rich execution engine as a harmless calculator. The fix is as much cultural as technical: treat anything that can execute as an attack surface, declare what it’s allowed to do, and enforce it in code. Do that, and the next CVE becomes a minor inconvenience instead of your Sunday night.
If you want a second set of eyes on your mitigation or want us to turn this into a repeatable guardrail for your org, reach out. We’ve helped teams ship safer, faster through similar package-level incidents—and we’re happy to do the same here.