The MCP registry added 549 servers between March 6 and March 14, 2026. Of those, 305 — 55.6% — are developer tools. Not tangentially related to development. Not developer-adjacent. Tools built by developers, for developers, to solve development problems. The next largest category, AI/ML, accounts for 83 entries (15.1%). Every other category is in the single digits.
This is not a surprise. MCP was born as a developer protocol — a way for AI coding assistants to call external tools. Its first users were developers. Its first servers were developer tools. What is notable is the trend direction: the developer share is not declining as MCP expands into other domains. It is strengthening. In the February batches, devtools hovered around 50% of new entries. This batch pushed past 55%. The gravity well is deepening.
The Devtool Breakdown
Not all 305 developer tools are alike. They cluster into recognizable subcategories, each representing a different theory about how AI agents should interact with the development workflow.
| Subcategory | Estimated Count | Examples |
|---|---|---|
| API bridges and integrations | ~85 | Google Maps, SaaS connectors, platform SDKs |
| Database and data tools | ~55 | ArcadeDB, Superset, various DB connectors |
| Code analysis and quality | ~45 | Skylos (dead code), linters, test frameworks |
| IDE and editor tools | ~35 | VS Code extensions, editor integrations |
| DevOps and infrastructure | ~30 | Container tools, deployment, CI helpers |
| Browser automation and testing | ~25 | Skyvern, Playwright wrappers, scraping |
| Documentation and knowledge | ~20 | Qiskit docs, API reference servers |
| Diagram and visual output | ~10 | Excalidraw Architect, chart generators |
The largest subcategory — API bridges — reflects MCP's most straightforward value proposition: wrapping an existing API so an AI agent can call it without the developer writing boilerplate. An agent that can query Google Maps, check breach databases, or pull SEC filings without custom integration code is immediately useful. These bridges are easy to build, easy to understand, and provide instant utility. They are also the most commoditized. The registry already has dozens of overlapping Google, GitHub, and Slack bridges. Differentiation will come from quality, not novelty.
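The bridge pattern itself is simple, which is why it commoditizes so quickly. Below is a minimal sketch of the idea in pure Python: register a few functions as tools and dispatch JSON tool calls to them. Real MCP servers use the official SDKs and the full protocol; the decorator, request shape, and `geocode` stub here are illustrative only.

```python
# Simplified sketch of the "API bridge" pattern: register tool functions
# and dispatch JSON tool calls to them by name. This stands in for what an
# MCP SDK does; the names and request shape are illustrative, not the
# actual protocol.
import json

TOOLS = {}

def tool(fn):
    """Register a function as a callable tool, keyed by its name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def geocode(address: str) -> dict:
    # A real bridge would call the upstream API (e.g. Google Maps) here.
    return {"address": address, "lat": 0.0, "lng": 0.0}

def handle_call(request_json: str) -> str:
    """Dispatch a call of the form {"tool": ..., "arguments": {...}}."""
    req = json.loads(request_json)
    fn = TOOLS[req["tool"]]
    result = fn(**req["arguments"])
    return json.dumps({"result": result})

print(handle_call('{"tool": "geocode", "arguments": {"address": "1600 Amphitheatre Pkwy"}}'))
```

The entire value of a bridge lives in the upstream API call inside each tool function, which is why dozens of near-identical wrappers around the same API differ mainly in error handling and documentation quality.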
Database tools are the second-largest cluster and arguably the most consequential. When an AI agent can directly query, inspect, and modify a database, the development workflow changes fundamentally. ArcadeDB (score 77, 737 stars) is a multi-model database — document, graph, key-value, time-series, and vector — that shipped an MCP server as a first-class integration. Apache Superset MCP (score 58, 17 stars) bridges AI agents to Superset dashboards, letting them query and visualize data through an existing BI platform. These are not toy projects. They are database companies and data platforms treating MCP as a strategic interface.
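What a database tool exposes to an agent is usually a single query call with guardrails. The sketch below, assuming an in-memory SQLite table with made-up names, shows the common read-only pattern: the agent can inspect data but a naive mutation attempt is rejected.

```python
# Sketch of a database tool exposed to an agent: one "query" entry point
# against SQLite, restricted to SELECT statements so the agent can inspect
# data without mutating it. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 20.0), (2, 5.5)])

def run_query(sql: str) -> list[tuple]:
    """Execute a read-only query on behalf of the agent."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT statements are allowed")
    return conn.execute(sql).fetchall()

print(run_query("SELECT COUNT(*), SUM(total) FROM orders"))  # → [(2, 25.5)]
```

Production servers layer more onto this — schema introspection, row limits, per-table permissions — but the shape is the same: the database becomes a tool call rather than a thing the developer queries by hand.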
The "Established Project Adds MCP" Pattern
The most important signal in this batch — and across the past several batches — is not the volume of new devtools. It is the type of project choosing to add MCP support. A pattern has solidified:
- Scrapling — 19.4k stars, entered at score 92
- Kubeshark — 11.8k stars, entered at score 84
- edgartools — 1.8k stars, entered at score 84
- Skyvern — 20.8k stars, entered at score 79
These are not MCP-native projects. They are proven developer tools with established user bases, active maintainers, and years of commit history. They added MCP as a distribution channel — a new way for their existing capabilities to reach users through AI agents. When Skyvern, a visual browser automation platform with over 20,000 stars and a funded company behind it, decides to publish an MCP server, it validates the protocol in a way that a hundred new single-developer wrappers cannot. The message to other maintainers is clear: MCP is worth the integration effort.
The scoring model captures this pattern precisely. These established projects enter the registry with high scores because they already have the signals the model rewards — stars, forks, contributors, release cadence, active commit weeks, permissive licenses, codes of conduct. A new MCP-native project with ten stars and three months of history will score 40-55 regardless of code quality. An established project with thousands of stars and years of maintenance enters at 75-92. The trust score is, in this sense, a proxy for project maturity — and maturity is exactly what determines whether a developer tool will still exist in a year.
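The dynamic can be illustrated with a toy weighted score over the observable signals listed above. The weights, caps, and baseline below are entirely hypothetical — they are not the registry's actual model — but they show why a ten-star newcomer and a 20k-star incumbent land in different tiers no matter how good the newcomer's code is.

```python
# Toy illustration of maturity-driven scoring. All weights, caps, and the
# baseline are made up for this sketch; this is not the registry's model.
import math

def toy_trust_score(stars, contributors, active_commit_weeks, releases):
    base = 25                                               # baseline for any listed project
    s = 25 * min(math.log10(stars + 1) / 4.5, 1.0)          # community adoption
    c = 15 * min(contributors / 50, 1.0)                    # governance breadth
    w = 20 * min(active_commit_weeks / 52, 1.0)             # maintenance consistency
    r = 15 * min(releases / 20, 1.0)                        # release cadence
    return round(base + s + c + w + r)

# A three-month-old MCP-native project vs. an established, Skyvern-scale one:
print(toy_trust_score(stars=10, contributors=2, active_commit_weeks=10, releases=3))
print(toy_trust_score(stars=20800, contributors=60, active_commit_weeks=52, releases=40))
```

Every term is a signal that only time and adoption can accumulate, which is exactly why the score behaves as a maturity proxy.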
Skyvern: The Batch's Standout
Skyvern (io.github.Skyvern-AI/skyvern) deserves a closer look. It is a visual browser automation platform that uses computer vision and LLMs to interact with websites — not through DOM selectors or XPath, but by looking at what's on screen. Traditional browser automation is brittle: change a CSS class or rearrange a layout and the selectors break. Skyvern's approach is more resilient because it operates on the visual representation, the same way a human would. 20,800 stars. AGPL-3.0 licensed. Active development. Score: 79.
This is a developer tool, but it points toward a broader shift. Browser automation has historically required developers to write and maintain scripts. A visual automation layer exposed through MCP means an AI agent can navigate websites, fill forms, extract data, and complete workflows without pre-written selectors. The implications for QA testing, data collection, and process automation are substantial. Skyvern is not the first browser automation MCP server in the registry — there are several Playwright wrappers — but it is by far the most mature and the most technically differentiated.
The Visual Output Frontier
A quiet trend is emerging among devtools: servers that let AI agents produce visual output, not just text. Excalidraw Architect (score 62, 73 stars) generates diagrams from natural language prompts. Superset MCP bridges to dashboards and charts. In the previous batch, Pyxel (17,000 stars) let agents create pixel-art games. Gearsystem (score 72, 331 stars) is a Sega Master System / Game Gear emulator — an unusual entry, but one that makes sense when you realize it lets an AI agent interact with retro gaming environments visually.
The pattern: AI agents are moving beyond text input and text output. They are increasingly able to generate diagrams, charts, dashboards, game assets, and visual artifacts. MCP servers that enable this shift are arriving steadily — not in a single wave, but as a persistent trickle across batches. The tools exist. The question is whether AI model capabilities will keep pace with what the tools offer.
Quality Stratification
Of the 305 devtools in this batch, the trust score distribution follows the familiar pattern:
| Tier | Devtool Count (est.) | Share |
|---|---|---|
| High Trust (80-100) | ~5 | 1.6% |
| Moderate Trust (60-79) | ~45 | 14.8% |
| Low Trust (40-59) | ~175 | 57.4% |
| Very Low Trust (20-39) | ~80 | 26.2% |
The skew is structural, not a judgment of quality. A developer tool launched last week with 15 stars, one contributor, and three releases will score in the 40s regardless of its code quality, documentation, or usefulness. The scoring model rewards observable signals that accumulate over time — community adoption (stars, forks), maintenance consistency (commit weeks, release cadence), and governance maturity (contributor count, code of conduct, license). New projects simply have not had time to accumulate these signals. This is by design: a trust score is not a quality score. It measures the evidence available to verify a project's trustworthiness, and new projects have little evidence. Scores will rise for projects that survive and grow. Most will not.
Security Tools Arrive
Fourteen new servers in this batch target security use cases — 2.6% of the total, small in absolute numbers but significant as a category that barely existed two months ago. HIBP (score 57, 5 stars) bridges the Have I Been Pwned API, letting AI agents check email addresses and domains against breach databases. Previous batches brought Shodan and VirusTotal integrations. The pattern is DevSecOps through the MCP interface: security scanning, vulnerability checking, breach monitoring, and compliance verification as tool calls that AI agents can make during development workflows.
This is a natural evolution. If an AI coding assistant can write code, it should be able to check whether the dependencies it chose have known vulnerabilities, whether the credentials it was given have been exposed in breaches, and whether the infrastructure it is configuring meets security baselines. Security as an always-on background check, not a separate phase in the pipeline. The tooling is arriving to make that possible.
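The credential-exposure check is a good example of what "security as a tool call" looks like in practice. The Pwned Passwords range API uses a k-anonymity scheme: only the first five characters of a password's SHA-1 hash are sent, and the client scans the returned suffixes locally. The sketch below implements the client side of that scheme with the HTTP call stubbed out; a real server would GET the range endpoint for the prefix.

```python
# Sketch of a breach check as a tool call, using the k-anonymity scheme of
# the Pwned Passwords range API: only the first five hex characters of the
# SHA-1 hash would leave the machine. The HTTP request is stubbed out here;
# a real server would fetch the range endpoint for the prefix.
import hashlib

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

def check_breached(password: str, range_response: str) -> int:
    """Return the breach count for the password, given the API response
    body for its 5-character prefix (lines of 'SUFFIX:COUNT')."""
    _, suffix = sha1_prefix_suffix(password)
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

# Stubbed response body, as the range endpoint would return it.
mock_body = (
    "1E4C9B93F3F0682250B6CF8331B7EE68FD8:10437277\n"
    "0123456789ABCDEF0123456789ABCDEF012:3"
)
print(check_breached("password", mock_body))  # → 10437277
```

Exposed through MCP, a check like this becomes something an agent runs automatically whenever it handles a credential, rather than a step a developer remembers to perform.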
Specialty Devtools: SAP, Quantum, SEC
Three entries in this batch illustrate how far the devtool category stretches beyond its web-development core:
SAP CAP (score 73, 87 stars) is the SAP Cloud Application Programming model's official MCP server. SAP development is a world unto itself — its own languages (ABAP, CDS), its own frameworks, its own deployment models. An MCP server for SAP CAP means AI coding assistants can now help with SAP-specific development tasks, a domain where developer tooling has historically been proprietary and expensive.
Qiskit Docs MCP (score 74) brings IBM's quantum computing documentation into the MCP ecosystem. Qiskit is the most widely used quantum computing SDK, and its documentation is extensive and complex. An AI agent that can search and retrieve Qiskit documentation on demand is immediately useful to the roughly 500,000 developers who have used the framework. This is the first quantum computing entry in the MCP registry.
edgartools (score 84, 1.8k stars) wraps SEC EDGAR filings — financial data for every public company in the United States. It scored High Trust on entry, joining the top tier immediately. This is a financial devtool: developers building fintech applications, analysts building research pipelines, and AI agents that need to answer questions about corporate filings, insider transactions, and regulatory disclosures.
The Long Tail Problem
305 new developer tools in a single eight-day batch. The registry now contains well over 2,000 devtools total. This abundance creates a discovery problem that is becoming the ecosystem's primary bottleneck. A developer looking for, say, an MCP server to detect dead Python code has to find Skylos (score 73, 332 stars) among hundreds of entries. A developer wanting to generate architecture diagrams needs to know that Excalidraw Architect exists. A developer building on SAP needs to discover that SAP CAP has an official MCP server.
The registry itself provides basic search and filtering, but it was not designed for this scale of discovery. Trust scoring and curation — the kind this index provides — become increasingly important as the catalog grows. Without signals to separate mature, well-maintained tools from weekend experiments, developers face the same paralysis they experience with npm packages: too many options, too little information to choose between them. The 305 devtools in this batch include a few that will become essential infrastructure and many that will be abandoned within months. The scoring model cannot predict which is which, but it can tell you which ones have the strongest observable foundations today.
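The curation layer the registry lacks is, at its core, filtering plus ranking over trust signals. A minimal sketch, using a toy catalog built from entries mentioned in this article (the field names and tags are illustrative):

```python
# Sketch of score-aware discovery over a registry catalog: keyword match
# over names and tags, best-scored entries first. The catalog entries and
# their fields are illustrative, drawn from examples in this article.
entries = [
    {"name": "Skylos", "score": 73, "stars": 332, "tags": ["python", "dead-code"]},
    {"name": "Excalidraw Architect", "score": 62, "stars": 73, "tags": ["diagrams"]},
    {"name": "edgartools", "score": 84, "stars": 1800, "tags": ["sec", "finance"]},
]

def discover(query: str, catalog: list[dict], min_score: int = 40) -> list[dict]:
    """Return entries matching the query, highest trust score first."""
    q = query.lower()
    hits = [e for e in catalog
            if e["score"] >= min_score
            and (q in e["name"].lower() or any(q in t for t in e["tags"]))]
    return sorted(hits, key=lambda e: e["score"], reverse=True)

print([e["name"] for e in discover("dead-code", entries)])  # → ['Skylos']
```

Even this trivial version demonstrates the point: with a score threshold, weekend experiments drop out of the results before the developer ever sees them.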
What's Not Here Yet
The gaps in the devtool catalog are as informative as the entries. Categories that remain conspicuously underrepresented:
- Profiling and performance — No MCP servers for CPU/memory profiling, flame graph generation, or performance regression detection. Given how often developers ask AI assistants "why is this slow?", the absence is surprising.
- CI/CD pipeline management — A few GitHub Actions wrappers exist, but no deep integrations with Jenkins, GitLab CI, CircleCI, or cloud-native build systems. AI agents cannot yet manage deployment pipelines as tool calls.
- Package management — No MCP servers for npm, pip, cargo, or Maven that go beyond basic search. Dependency resolution, version conflict analysis, and license auditing through MCP are absent.
- Code review automation — Pull request analysis, diff summarization, and review comment generation as MCP tools are largely missing. The closest entries are general code analysis tools.
- Monitoring and observability — Beyond Kubeshark's network traffic analysis, there are few MCP bridges to Datadog, Grafana, PagerDuty, or other observability platforms.
These gaps will fill. The question is whether they fill with purpose-built, high-quality integrations from established projects (the Scrapling/Kubeshark pattern) or with dozens of thin wrappers that duplicate effort without adding value (the npm wrapper pattern). The registry's trajectory so far suggests both will happen simultaneously.
The Outlook
Developer tools will continue to dominate MCP registrations for the foreseeable future. The protocol's developer-centric origins, the natural overlap between AI coding assistants and external tool calls, and the sheer size of the developer population all reinforce this gravity. The more interesting question is not whether devtools will maintain their 55%+ share — they will — but whether the quality distribution will shift. Right now, fewer than 2% of new devtools enter at High Trust. If the "established project adds MCP" pattern accelerates — if more projects like Skyvern, Kubeshark, and edgartools decide MCP is worth supporting — the top of the trust distribution will fill out. That would signal MCP's transition from a developer playground to developer infrastructure.