30 MCP CVEs in 60 Days. 117 Days to the EU AI Act Deadline.
Published April 8, 2026
Two numbers from this quarter that should be on every CISO's whiteboard.
Thirty. Between January and February 2026, security researchers filed more than 30 CVEs targeting Model Context Protocol servers, clients, and surrounding infrastructure. They range from trivial path traversals to a CVSS 9.6 remote code execution flaw in a package that has been downloaded close to half a million times. CVE-2025-6514 turned trusted OAuth proxies into RCE primitives across 500,000+ developer environments. CVE-2026-32211 (CVSS 9.1) took out Microsoft's own Azure MCP Server with an authentication bypass that leaks sensitive data. Adversa AI's April roundup catalogs eight more, including a working cross-tool hijacking exploit with public proof-of-concept code.
One hundred seventeen. That's how many days until August 2, 2026, when the majority of the EU AI Act's high-risk system rules enter application. Penalties activate the same day: up to €35 million or 7% of global revenue, whichever is larger. Finland already turned on its enforcement powers in December. The other member states are following through Q2.
If you ship an agent into the EU and it touches MCP tools, those two clocks just collided.
Why MCP became a CVE factory
MCP solved a real problem. Before it, every agent framework had its own bespoke way to expose tools to LLMs, which meant every integration was a one-off and every credential was glued in by hand. MCP standardized the wire format, the discovery protocol, and the tool-call shape. Adoption exploded. As of February, more than 8,000 MCP servers are exposed on the public internet — most of them hobbyist projects that someone wired into a production agent six weeks later because it "just worked."
The problem is that the standard didn't bring a security model with it. MCP servers are trusted by default once registered. Tool descriptions are passed verbatim into the LLM context. There is no canonical way to scope a token, no canonical way to scan an incoming tool definition for prompt injection, and no canonical way to log what the agent actually sent and received. The 30 CVEs are not 30 different bugs. They're the same five or six categories — tool poisoning, schema manipulation, rug-pull updates, over-broad permissions, missing auth, prompt injection — repeated across servers built without a threat model.
If you are running agents against external MCP servers right now, the honest assumption is that at least one of them is compromised, will be compromised, or is silently exfiltrating arguments to a logging endpoint you didn't authorize.
Why the AI Act deadline makes this urgent, not theoretical
Pre-August, "we have an MCP problem" was a security ticket. Post-August, for any agent classified high-risk under Annex III — and Annex III is broader than most teams realize, covering hiring tools, credit scoring, biometrics, critical infrastructure, education, law enforcement, migration, and any system that gates access to essential services — it becomes a regulatory finding. Articles 9 through 17 require, in plain language:
A documented, ongoing risk management system. A complete record of training data provenance. Technical documentation deep enough for a regulator to reconstruct how the system makes decisions. Automatic logging of system events under Article 12. Transparency to the people the system affects. Human oversight that is real, not theatrical. Accuracy, robustness, and cybersecurity controls. A quality management system that ties it all together.
Article 12 is the one that should keep you up at night if your agents call MCP tools. "Automatic logging of system events" is not satisfied by stdout. It means an immutable record of every input, every tool call, every model output, with enough integrity that a regulator or an auditor can verify nothing has been altered after the fact. If your tool-call audit trail lives in CloudWatch alongside an IAM role that any of your engineers can edit, you do not have an audit log under Article 12. You have a nice idea about an audit log.
What "compliant by August 2" actually requires
Strip away the language and the operational ask is concrete:
You need to know what tools your agents can call. You need to scan those tool definitions before they enter the LLM context, not after. You need to filter or tokenize personal data on its way out so a compromised MCP server cannot read it. You need an immutable, externally verifiable record of every tool call, prompt, and decision. You need a kill switch that works in seconds, not in a CAB meeting. And you need to be able to hand a regulator a compliance attestation mapped to the specific articles of the Act, on demand.
This is the gap VeriSwarm built for, and it's the gap we've been narrowing every release this year.
Guard sits between your agents and the MCP tools they call. The MCP scanner runs the same six checks researchers used to file those 30 CVEs — tool poisoning, typosquatting, schema manipulation, rug-pull patterns, prompt injection, excessive permissions — against every tool definition before it reaches the model. Guard Proxy intercepts the live tool calls, runs PII through Microsoft Presidio for tokenization, applies tenant-scoped transformation rules, and can enforce a policy block in milliseconds. The kill switch is one API call. It does not need a CAB.
Vault is the Article 12 answer. Every event — agent input, tool call, model output, policy decision, override — is hash-chained into an append-only ledger. Chain verification is a public endpoint. Exports are signed. When an auditor asks "show me everything this agent did between July 14 and August 2," the answer is a single API call and a tamper-evident bundle, not a six-week archaeology project across five log shippers.
Compliance endpoints (/v1/compliance/eu-ai-act, nist-ai-rmf, iso-42001) generate the framework-mapped attestations directly from your tenant's live posture. Policy changes flow through the same Vault ledger, so the attestation is grounded in what your system actually did, not what a spreadsheet claimed it did.
Cedar policies let you encode the Annex III boundaries as declarative rules instead of as English in a Confluence page. A high-risk hiring agent that is not allowed to call an external CV-parsing MCP unless it's on the approved list? That's three lines of Cedar, evaluated on every decision, with the result recorded in Vault.
The honest read for the next 117 days
A non-trivial number of teams are going to discover, sometime in late July, that their plan was "we'll figure it out." The MCP CVE wave already proved that agent infrastructure is the soft underbelly of the agentic stack. The Act is going to convert that security debt into financial liability denominated in eight figures.
The work to close the gap is real but bounded. Inventory the MCP servers your agents reach. Scan them. Put a proxy in front of the calls. Get the audit trail into something hash-chained. Map your agents against Annex III. Generate the attestation. Test the kill switch.
VeriSwarm exists to make that a one-week project instead of a one-quarter project. Gate is free; Guard, Passport, Vault, and the compliance endpoints are the paid tier. If you want to see your current posture against the Act, the /v1/compliance/eu-ai-act endpoint will tell you in JSON, today.
The clocks are not negotiable. The infrastructure is.
VeriSwarm is the trust, security, and audit layer for AI-native agents. Start free at veriswarm.ai.