Anthropic Skills Directory alternative
An Anthropic Skills Directory alternative for continuous security review
Anthropic's official Skills Directory is a curated allowlist — submit a Claude skill, Anthropic reviews it (security included), it gets listed or it doesn't. That's a meaningful trust signal: the badge says a human at Anthropic looked at this skill and approved it. SkillAudit is a different architecture for the same problem — a continuous, reproducible, transparent public scoreboard that grades any Claude skill or MCP server on demand, regardless of whether it ships through Anthropic. Closed-loop curation versus open-loop scoring; both belong in a serious install decision.
TL;DR
The Skills Directory is a gated allowlist with a private one-time review process: you ship, Anthropic eyeballs, you get in. The review bar is not published, the verdict is binary (listed / not listed), and the audit doesn't re-run when the repo changes. SkillAudit is a continuous, transparent scoreboard with a published rubric and a per-axis A–F grade — it scans on every commit, the methodology is public, and the report card is a stable URL. We've already audited 101 of the most-installed MCP servers with the same six-axis rubric: 50% had SSRF, 38% had credential-handling findings, only 19% earned an A. Anthropic's nine official MCP language SDKs all live on the same board with the same grades — including the ones that scored an F. Honest, reproducible, and additive on top of an Anthropic listing rather than a replacement for it.
Why teams look for an Anthropic Skills Directory alternative
Anthropic's directory is doing real work — every listed skill cleared a private security review, the marketplace effect of a curated allowlist is genuine, and "we ship through Anthropic's directory" is a credible signal a buyer can rely on. The team running that review is doing a good job: the bar is real and the program has teeth.
But ICP buyers and authors keep landing on three structural gaps that an editorial allowlist can't close:
- Coverage. The directory lists a small, curated subset of Claude skills. The MCP ecosystem is several thousand servers across awesome-mcp lists, npm, GitHub, indie registries, and vendor-official releases that ship from the vendor's own GitHub org and never go through Anthropic. A team adopting the Stripe MCP server, the Cloudflare MCP server, the GitHub MCP server, or any of the hundreds of community releases is making an install decision in territory the directory doesn't cover at all. Most installs happen off-directory.
- Re-audit cadence. A directory listing is granted at submission time. If maintenance lapses, a new vulnerable tool handler lands in a follow-up commit, or an unrelated PR introduces an unsanitised fetch(req.url), the listing doesn't downgrade automatically. Continuous adoption review needs continuous scoring, and a one-time editorial review is by construction not that.
- Transparency of the rubric. Anthropic's review bar is private. That's reasonable from their side — publishing it would let bad actors optimize against it, and the human reviewer's judgment is part of the value. But for an author preparing a submission, or a team writing an internal "what does it take to clear our security bar" policy, an opaque rubric is hard to act on. SkillAudit's rubric is published in full in the methodology post and runs the same way against every repo.
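The unsanitised fetch(req.url) idiom is the canonical MCP SSRF shape: a tool handler that fetches whatever URL the model (or an injected prompt) hands it, including cloud metadata endpoints and internal addresses. A minimal sketch of the kind of guard that closes it off — the function name and allowlist here are illustrative assumptions, not part of SkillAudit's rubric:

```typescript
// Hypothetical guard for a URL-fetching tool handler.
// Rejects anything that isn't http(s) to an explicitly allowed host,
// which blocks file://, cloud metadata endpoints, and internal hosts
// reachable from the server's network position.
function isAllowedUrl(raw: string, allowedHosts: string[]): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // not a parseable absolute URL
  }
  if (url.protocol !== "http:" && url.protocol !== "https:") {
    return false; // blocks file://, gopher://, etc.
  }
  // Exact hostname match only -- no suffix tricks like evil-api.example.com
  return allowedHosts.includes(url.hostname);
}

// Vulnerable shape: fetch(req.url) with no check.
// Guarded shape: if (!isAllowedUrl(req.url, ALLOWED)) throw new Error("blocked");
const ALLOWED = ["api.example.com"];

console.log(isAllowedUrl("https://api.example.com/v1/items", ALLOWED));      // true
console.log(isAllowedUrl("http://169.254.169.254/latest/meta-data/", ALLOWED)); // false
console.log(isAllowedUrl("file:///etc/passwd", ALLOWED));                     // false
```

A production guard would also resolve DNS and check the resulting IP against private ranges; the point is that the fix is a handful of lines, which is why a one-time review that never re-runs can miss its removal in a later commit.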
Three reasons buyers and authors look for an Anthropic Skills Directory alternative when the question is engineering trust:
- You're installing an off-directory MCP server. The directory tells you nothing about a server it doesn't list. SkillAudit grades any public GitHub repo, npm package, or uploaded ZIP in 60 seconds.
- You're an author and you want a public engineering signal while you're in the directory queue. Submission review takes time. SkillAudit gives you an A-grade badge to embed on your README the same day you ship.
- You want a re-running gate. A buyer who installs an A-grade SkillAudit repo today wants to know if the next commit changes that grade. The public report card updates on every scan; an editorial allowlist doesn't.
How SkillAudit is different
SkillAudit is a six-axis static + LLM-assisted scanner built specifically for Claude skills and MCP servers. The six axes — security, permissions hygiene, credential exposure, maintenance, client compatibility, documentation — were chosen by reading the actual source of vendor-official MCP releases that shipped vulnerabilities. The output is a single A–F grade plus a public report card at a stable URL, with file paths and finding counts. The scanner runs on a paste-a-URL basis, the rubric is published, and the grade is reproducible: the same repo at the same SHA produces the same grade, every run.
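To make "pattern-based check" concrete, here is a toy version of a command-exec scan over source lines. The patterns and output shape are illustrative assumptions for this post, not SkillAudit's actual rules — a real scanner also does taint tracking (does model-controlled input reach the call?), which this sketch skips:

```typescript
// Toy static pattern scan: flag lines that reach for a shell.
interface Finding {
  line: number;    // 1-indexed line number in the scanned file
  pattern: string; // which rule fired
  snippet: string; // the offending line, trimmed
}

const COMMAND_EXEC_PATTERNS: RegExp[] = [
  /\bexecSync\s*\(/,   // Node child_process
  /\bos\.system\s*\(/, // Python
  /shell\s*=\s*True/,  // Python subprocess
];

function scanSource(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    for (const pattern of COMMAND_EXEC_PATTERNS) {
      if (pattern.test(text)) {
        findings.push({ line: i + 1, pattern: String(pattern), snippet: text.trim() });
      }
    }
  });
  return findings;
}

const sample = [
  'import { execSync } from "child_process";',
  "const out = execSync(`git log ${userArg}`); // shell + interpolation",
  'const ok = fs.readFileSync("CHANGELOG.md"); // not flagged',
].join("\n");

console.log(scanSource(sample)); // one finding, on line 2
```

Because the scan is a pure function of the source, the same repo at the same SHA produces the same findings every run — which is what makes the grade reproducible.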
Where the Skills Directory is editorial, gated, binary, and one-time, SkillAudit is reproducible, open, graded, and continuous. They sit on different sides of the trust-signal architecture, and a buyer can — and should — use both.
Side by side
| | Anthropic Skills Directory | SkillAudit |
|---|---|---|
| Trust-signal type | Editorial allowlist (listed / not listed) | Reproducible per-repo A–F grade with file-path findings |
| Coverage | Skills submitted to and accepted by Anthropic | Any public GitHub repo, npm package, or uploaded ZIP |
| Off-directory MCP servers | Not covered (awesome-mcp, vendor-official releases, npm) | Covered — paste any URL, get a grade in 60 seconds |
| Cadence | One-time review at submission | Continuous — re-runs on commit, badge reflects latest scan |
| Rubric transparency | Private review bar (not published) | Public — six-axis methodology and weighting in the research post |
| Static SSRF check | Internal review (output not exposed) | Yes — pattern-based check tuned to MCP idioms; first-class axis |
| Static command-exec check | Internal review (output not exposed) | Yes — flags shell=True, execSync, os.system |
| LLM-assisted prompt-injection probe | Internal review (output not exposed) | Yes — extracts handlers, red-teams via Claude Haiku 4.5 |
| Credential-echo detection | Internal review (output not exposed) | First-class axis; flags process.env.X in handler return paths |
| Maintenance signal | Counted in initial review | Continuous — last commit, open issues, advisory feed re-checked per scan |
| Client compatibility | Listing implies compatibility with Anthropic's reference client | Yes — flags non-standard transports + protocol-version mismatches across Claude Code, Cursor, Windsurf, Codex |
| Public per-repo report card | The listing itself; no findings detail | Yes (e.g. /audits/owner-repo/) — file paths and finding counts |
| Embeddable badge for authors | "Listed in Anthropic Skills Directory" linkable badge | SkillAudit grade badge — A through F, updates on each scan |
| Cost | Free for authors who get listed; not all submissions accepted | Free for 3 audits/month on public repos; $19/mo Pro; $99/mo Team |
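The credential-echo axis in the table flags handlers whose return value can carry a raw secret back to the model — the classic shape is a debug or error payload that includes process.env.SOME_KEY verbatim. A minimal sketch of a return-path scrubber, with hypothetical names; the real fix is to never put env values in the payload at all, and the scrub is only a last line of defence:

```typescript
// Hypothetical last-line defence: before a handler's result is serialised
// back to the model, replace any string value that equals a known secret.
function redactSecrets<T>(payload: T, secrets: string[]): T {
  const json = JSON.stringify(payload, (_key, value) =>
    typeof value === "string" && secrets.includes(value) ? "[redacted]" : value
  );
  return JSON.parse(json);
}

// e.g. a debug handler that naively echoed its config object:
const secrets = ["sk_live_abc123"];
const result = {
  ok: false,
  error: "auth failed",
  config: { apiKey: "sk_live_abc123" }, // would echo the secret verbatim
};

console.log(redactSecrets(result, secrets).config.apiKey); // "[redacted]"
```

A static check for this axis looks for the vulnerable shape (an env read flowing into a handler's return path), not for the presence of a scrubber — which is why the finding comes with a file path rather than just a grade.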
What the data says
We ran SkillAudit against 101 of the most-installed Claude skills and MCP servers — the full live board is public and growing. The corpus includes vendor-official releases (Stripe, PayPal, MongoDB, Redis, Cloudflare, AWS, Azure, GCP, Heroku, Notion, Snowflake, Pinecone, Couchbase, Auth0, Resend, Brave, Vectara, Meilisearch, plus the nine official Anthropic MCP language SDKs), popular indie frameworks (FastMCP, mcp-use, mcp-agent), and community releases.
Results: 50% (50/101) shipped SSRF findings, 38% (38/101) had credential-handling findings, 10% (10/101) had command-exec findings, and only 19% (19/101) earned an A grade. Full grade distribution: 19 A · 30 C · 10 D · 42 F. Methodology and per-repo grades are in our research post: The state of MCP server security, 2026.
The relevant point for the Skills Directory comparison: SkillAudit grades vendor-official, indie, and Anthropic's own reference repos on exactly the same rubric. Anthropic's modelcontextprotocol/typescript-sdk earned an F, and so did modelcontextprotocol/inspector — we report it the same way we'd report any community repo. That's the point of an open scoreboard: the rubric doesn't get bent for any vendor, including the one running the closed-loop directory.
When the Anthropic Skills Directory is still the right signal
The directory is doing work no static scanner can fully substitute for. Specifically:
- You want an editorial trust signal that includes human judgment. A directory listing means a human reviewer at the platform vendor decided the skill was worth listing — that bundles in product-quality, security, and "does this actually solve the problem the README claims" signal in a way an automated grade cannot. If your buying criteria explicitly include "vetted by Anthropic," the listing is what you want.
- You're publishing a Claude skill specifically for the Anthropic ecosystem. The listing is meaningfully visible inside Anthropic's own surfaces (Claude Code, the docs, etc.). For Claude-skill authors, getting in is a distribution win on top of a trust signal — and SkillAudit doesn't replace that.
- You want an explicit "Anthropic blessed" check in your procurement workflow. Some org policies are written that way. The directory is the artefact those policies reference.
- You don't have time to read methodology. Editorial allowlists trade transparency for simplicity: you trust the curator, you adopt what they list. If that trade-off matches your team's workflow, the directory is the right answer and SkillAudit is overkill.
The most useful framing: the Skills Directory says "Anthropic vouched for this skill at the time of listing"; SkillAudit says "this is the current per-axis security and quality grade as of the last commit, and here are the file paths the rubric flagged." Both signals matter; they answer different questions and decay on different timelines. An editorial signal needs no maintenance once the listing is granted; an engineering signal updates on every commit.
Workflow
SkillAudit isn't a replacement for the Skills Directory — it's the engineering-side trust signal that runs continuously alongside it. Two adoption paths, depending on which side you're on:
- Authors: Run SkillAudit before submitting to the Skills Directory. Fix the SSRF, the prompt-injection, the credential echo. Embed the SkillAudit grade badge on the README so reviewers and prospective installers see it. When the listing comes through, you have both — a green SkillAudit grade and a directory listing, with the engineering grade staying live in case the listing decision lags or expires.
- Buyers: Use the Skills Directory as your editorial allowlist. For everything off-list — vendor-official MCPs, community repos, indie skills — paste the GitHub URL into SkillAudit and gate adoption on a minimum grade. Drop the report-card link into your team's adoption documentation so the next reviewer can see why the install was greenlit.
Indie developers shipping MCP servers can use both at zero cost: the Skills Directory is free for accepted listings; SkillAudit's free tier covers your skill grade up to three audits a month, and the public report card is shareable on your README the same day you ship.
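The buyer-side "gate adoption on a minimum grade" step can live in a short policy check. This sketch covers only the gating logic — how you obtain the grade (an API, the public report card, a manual paste) is an assumption left out of scope, since SkillAudit's programmatic interface isn't described here:

```typescript
// Hypothetical adoption gate: letter grades ordered A (best) to F (worst).
const GRADE_ORDER = ["A", "B", "C", "D", "F"] as const;
type Grade = (typeof GRADE_ORDER)[number];

// True when `grade` is at least as good as `minimum`.
function meetsBar(grade: Grade, minimum: Grade): boolean {
  return GRADE_ORDER.indexOf(grade) <= GRADE_ORDER.indexOf(minimum);
}

// Team policy: nothing below B gets installed without a manual security review.
console.log(meetsBar("A", "B")); // true  -- install
console.log(meetsBar("C", "B")); // false -- escalate to review
```

Because the public report card updates on every scan, the same check re-run in CI catches the case the post describes: an A-grade repo whose next commit drops the grade.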
Try SkillAudit on your repo — free
Paste any GitHub URL on the home page, get a graded report card in 60 seconds. Your repo joins the public board only if you opt in; private repos audit through a single-repo OAuth scope, never org-wide.