The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024. Its most consequential provisions — the obligations for high-risk AI systems — take effect August 2, 2026. That is five months away. A proposed Digital Omnibus package could push this to December 2027, but it has not passed and may be amended. August 2026 remains the statutory default.
The MCP ecosystem — 2,822 servers and growing — has largely ignored this. That needs to change.
The Classification Problem
The Act distinguishes between AI systems (the deployed product) and general-purpose AI models (the foundation model). An MCP server is neither. It is a tool, a component, a service. But the Act doesn't regulate in isolation — it regulates the composite system.
An AI agent that combines Claude (a GPAI model) with MCP servers (tools) to operate with autonomy, infer how to generate outputs from its inputs, and influence physical or virtual environments meets the Article 3 definition of an AI system. The MCP server becomes part of that system. And the system's risk classification depends on its intended purpose, not its architecture.
This means the same MCP server — say, a database connector — carries different regulatory weight depending on who uses it:
| Use Case | Risk Level | Why |
|---|---|---|
| Recipe database for a cooking app | Minimal | Not in Annex III |
| Medical records for diagnostic assistance | High | Annex III, area 5: healthcare |
| HR database for hiring decisions | High | Annex III, area 4: employment |
| Legal case search for court filings | High | Annex III, area 8: justice |
The MCP server doesn't change. The context does. This is the fundamental tension: risk is a property of deployment, not of components.
What the Act Requires
For high-risk AI systems, the obligations are substantial. Four articles matter most for MCP:
Article 12 — Logging. High-risk systems must automatically record events throughout their lifecycle. For MCP, this means tool calls, parameters, responses, and outcomes become part of the required audit trail. The current MCP specification has no standardized logging format, no retention requirements, and no tamper-resistance. Servers like eu-audit-mcp are early attempts to fill this gap with HMAC hash-chained logs.
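A hash-chained audit log of the kind eu-audit-mcp aims at can be sketched in a few lines. The key handling and event schema below are illustrative, not that server's actual implementation; in practice the signing key would come from a KMS, not a constant:

```python
import hashlib
import hmac
import json

SECRET = b"server-signing-key"  # hypothetical; load from a KMS in practice


def append_entry(log, event):
    """Append a tool-call event, chaining its HMAC to the previous entry."""
    prev_mac = log[-1]["mac"] if log else "genesis"
    payload = json.dumps(event, sort_keys=True)
    mac = hmac.new(SECRET, (prev_mac + payload).encode(), hashlib.sha256).hexdigest()
    log.append({"event": event, "mac": mac})
    return log


def verify_chain(log):
    """Recompute every MAC; a tampered, deleted, or reordered entry breaks the chain."""
    prev_mac = "genesis"
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hmac.new(SECRET, (prev_mac + payload).encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["mac"]):
            return False
        prev_mac = entry["mac"]
    return True
```

Because each MAC covers the previous entry's MAC, editing any past record invalidates every subsequent one — the tamper-resistance the current MCP specification lacks.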
Article 13 — Transparency. Deployers must receive documentation of capabilities, limitations, potential risks, human oversight measures, and output interpretation mechanisms. MCP tool descriptions — a sentence or two of plain text — are insufficient. The Act expects structured, comprehensive documentation that helps deployers understand what the system does and what can go wrong.
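What structured documentation beyond a one-line tool description might look like, as a sketch. The field names and the manifest itself are hypothetical, not part of any MCP schema or Commission template:

```python
# Hypothetical transparency manifest for an MCP server; field names are illustrative.
MANIFEST = {
    "name": "medical-records-connector",
    "capabilities": ["read_patient_record", "search_diagnoses"],
    "limitations": ["no write access", "English-language records only"],
    "data_access": {"categories": ["health data"], "retention": "none"},
    "failure_modes": ["stale cache may return outdated records"],
    "human_oversight": {"confirmation_required": ["search_diagnoses"]},
}

# Article 13-style topics a deployer-facing manifest should cover.
REQUIRED = {"capabilities", "limitations", "data_access", "failure_modes", "human_oversight"}


def missing_fields(manifest):
    """Return the required documentation topics a manifest omits."""
    return sorted(REQUIRED - manifest.keys())
```

A check like this could run in CI, flagging servers whose documentation falls short of what deployers will need.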
Article 14 — Human Oversight. Humans must be able to monitor, interpret, override, and maintain awareness of automation bias. For autonomous agents making tool calls, this means human-in-the-loop checkpoints for sensitive operations. An MCP server that enables irreversible actions in a high-risk domain needs built-in confirmation flows, not just tool execution.
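A confirmation flow can be as simple as a gate between the agent and the tool. The `SENSITIVE_TOOLS` set and the approver callback here are illustrative, assuming a server that knows which of its operations are irreversible:

```python
# Hypothetical set of operations requiring human sign-off before execution.
SENSITIVE_TOOLS = {"delete_record", "submit_filing"}


def execute_tool(name, params, run, approve):
    """Run a tool call, routing sensitive operations through a human approver first.

    `run` executes the tool; `approve` asks a human and returns True/False.
    """
    if name in SENSITIVE_TOOLS and not approve(name, params):
        return {"status": "rejected", "tool": name}
    return {"status": "ok", "tool": name, "result": run(name, params)}
```

The point is architectural: the checkpoint lives in the server, so oversight does not depend on every agent framework implementing it correctly.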
Article 15 — Cybersecurity. The Act explicitly contemplates prompt injection, data poisoning, adversarial examples, and confidentiality attacks. MCP servers handling sensitive data need hardened security posture — authentication, input validation, rate limiting. This is not aspirational; it is a legal requirement for high-risk deployments.
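Rate limiting, one of the controls named above, bounds the blast radius of a compromised or runaway agent. A minimal token-bucket sketch, with illustrative parameters:

```python
import time


class TokenBucket:
    """Per-client rate limiter: `rate` calls replenished per second, burst of `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        """Spend one token if available; refuse the call otherwise."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In a real deployment the limiter would key on an authenticated client identity, so one misbehaving agent cannot exhaust another's quota.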
The Supply Chain Provision
Article 25 is the provision that reaches MCP server developers directly, even if they don't build AI systems themselves.
If an MCP server gets integrated into a high-risk AI system, the system provider must obtain from the MCP server developer, by written agreement:
- Technical documentation about the server's behavior, limitations, and failure modes
- Access for conformity assessment
- Ongoing cooperation for compliance
This is supply chain due diligence, codified into law. If you build a commercial MCP server and an enterprise integrates it into a high-risk system, expect contractual demands for documentation, security attestations, and compliance cooperation.
The open-source exemption: Servers released under free and open-source licenses are exempt from Article 25 cooperation requirements. This is a significant carve-out. The majority of the MCP registry is open-source — and this exemption is one reason to stay that way. However, the exemption does not apply to GPAI models with systemic risk, and it does not exempt open-source servers from the prohibited practices in Article 5.
Who Is Liable?
The Act defines clear roles:
| Role | Who in MCP | Primary Obligations |
|---|---|---|
| GPAI model provider | Anthropic, OpenAI, Google | Technical docs, training data summaries, copyright compliance |
| AI system provider | Whoever assembles agent + model + MCP tools into a product | Full high-risk obligations: conformity assessment, risk management, logging, human oversight |
| MCP server developer | Tool builders | Article 25 supply chain cooperation (commercial only) |
| Deployer | End-user organization | Use per instructions, assign human overseers, monitor, report incidents |
The critical trap: a deployer becomes a provider — inheriting all provider obligations — if they put their own brand on the system, make substantial modifications, or change the intended purpose to make it high-risk. An organization that takes an agent framework, adds MCP servers, and deploys it under their brand for medical decision support has likely become the provider of a high-risk AI system.
The Compliance Ecosystem Is Emerging
Several MCP servers already address EU AI Act compliance:
- SonnyLabs EU_AI_ACT_MCP — 17 compliance tools covering risk classification, role determination, prohibited practices checking, transparency disclosures, and prompt injection detection.
- Ansvar EU_compliance_MCP — 49 EU regulations (GDPR, DORA, NIS2, AI Act, MiCA, eIDAS 2.0) with 2,528 searchable articles.
- ArkForge mcp-eu-ai-act — Open-source compliance scanner for CI/CD pipelines.
- eu-audit-mcp — HMAC hash-chain audit trails for Article 12 logging.
- compliance-trestle-mcp — NIST OSCAL toolchain integration for FedRAMP and government security frameworks.
This is early-stage tooling. None of it constitutes compliance on its own — compliance is a property of the deployed system, not individual components. But these servers provide building blocks that system providers will need.
What This Means for Trust Scoring
MCP Scorecard currently measures provenance, maintenance, popularity, and permissions. Regulatory readiness is not yet a scoring dimension. It should become one.
Observable, scorable signals include:
- Logging capability — Does the server produce structured, queryable logs of tool interactions?
- Transparency documentation — Are capabilities, limitations, data access patterns, and failure modes documented beyond basic tool descriptions?
- Human oversight hooks — Does the server support confirmation or approval flows for sensitive operations?
- Security posture — Authentication, input validation, rate limiting, vulnerability handling.
- Domain risk awareness — Does the server identify which high-risk domains it might be used in?
- Open-source status — Affects Article 25 supply chain obligations.
These are observable facts, consistent with our principle of scoring what we can see. A server with high regulatory readiness is not "compliant" — that depends on deployment context. But it gives deployers the building blocks they need to achieve compliance, and that is a trust signal worth measuring.
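One way such signals could roll up into a single readiness number. The weights are purely illustrative, not MCP Scorecard's actual formula:

```python
# Hypothetical weights for the readiness signals above; illustrative only.
WEIGHTS = {
    "logging": 25,
    "transparency_docs": 20,
    "oversight_hooks": 20,
    "security_posture": 20,
    "domain_risk_awareness": 10,
    "open_source": 5,
}


def readiness_score(signals):
    """Score 0-100 from observed signals: a dict of signal name -> bool."""
    return sum(weight for name, weight in WEIGHTS.items() if signals.get(name))
```

Keeping each input a binary, observable fact preserves the scoring principle: no judgment about compliance, only evidence of capability.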
The Timeline
| Date | What Happens |
|---|---|
| Already active | Prohibited practices (Article 5), AI literacy (Article 4), GPAI obligations (Articles 51–55), penalty regime (up to 7% global turnover) |
| August 2, 2026 | High-risk system obligations (Annex III), transparency rules (Article 50), GPAI enforcement powers, regulatory sandboxes |
| August 2, 2027 | Full roll-out: product-embedded high-risk systems (Annex I) |
| December 2027 | Backstop date if the Digital Omnibus passes (uncertain) |
Finland became the first EU member state with full enforcement powers in December 2025. Fines can reach EUR 35 million or 7% of global annual turnover — whichever is higher.
The MCP ecosystem has five months to get serious about regulatory readiness. Not because every server needs to be "compliant" — most won't operate in high-risk domains. But because the servers that do will need supply chain partners that can meet Article 25 demands. The ones that can demonstrate readiness will have a competitive advantage. The ones that can't will be replaced by ones that can.
This analysis is based on the EU AI Act text, Commission guidelines, and published legal commentary as of March 2026. It is not legal advice. Organizations operating in regulated domains should consult qualified legal counsel.
Sources: EU AI Act full text · Commission Guidelines on AI System Definition · CMS: Agentic AI and the EU AI Act · Orrick: 6 Steps Before August 2026 · Linux Foundation: Open Source and the AI Act