Anthropic Skills Directory alternative

An Anthropic Skills Directory alternative for continuous security review

Anthropic's official Skills Directory is a curated allowlist — submit a Claude skill, Anthropic reviews it (security included), it gets listed or it doesn't. That's a meaningful trust signal: the badge says a human at Anthropic looked at this skill and approved it. SkillAudit is a different architecture for the same problem — a continuous, reproducible, transparent public scoreboard that grades any Claude skill or MCP server on demand, regardless of whether it ships through Anthropic. Closed-loop curation versus open-loop scoring; both belong in a serious install decision.

TL;DR

The Skills Directory is a gated allowlist with a private one-time review process: you ship, Anthropic eyeballs, you get in. The review bar is not published, the verdict is binary (listed / not listed), and the audit doesn't re-run when the repo changes. SkillAudit is a continuous, transparent scoreboard with a published rubric and a per-axis A–F grade — it scans on every commit, the methodology is public, and the report card is a stable URL. We've already audited 101 of the most-installed MCP servers with the same six-axis rubric: 50% had SSRF, 38% had credential-handling findings, only 19% earned an A. Anthropic's nine official MCP language SDKs all live on the same board with the same grades — including the ones that scored an F. Honest, reproducible, and additive on top of an Anthropic listing rather than a replacement for it.

Why teams look for an Anthropic Skills Directory alternative

Anthropic's directory is doing real work — every listed skill cleared a private security review, the marketplace effect of a curated allowlist is genuine, and "we ship through Anthropic's directory" is a credible signal a buyer can rely on. The team running that review is doing a good job: the bar is real and the program has teeth.

But buyers and authors keep landing on three structural gaps that an editorial allowlist can't close when the question is engineering trust:

  1. The review bar is private. A listed skill cleared a security review, but the rubric isn't published, so a buyer can't see what was checked or how strictly.
  2. The verdict is binary and one-time. A skill is listed or it isn't, and the review doesn't re-run when the repo changes, so the signal decays with every commit after approval.
  3. Coverage stops at submission. Skills and MCP servers that ship outside the directory, including vendor-official releases on npm and GitHub, never get a signal at all.

How SkillAudit is different

SkillAudit is a six-axis static + LLM-assisted scanner built specifically for Claude skills and MCP servers. The six axes — security, permissions hygiene, credential exposure, maintenance, client compatibility, documentation — were chosen by reading the actual source of vendor-official MCP releases that shipped vulnerabilities. The output is a single A–F grade plus a public report card at a stable URL, with file paths and finding counts. The scanner runs on a paste-a-URL basis, the rubric is published, and the grade is reproducible: the same repo at the same SHA produces the same grade, every run.
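The roll-up from six axes to a single letter can be sketched as a weighted mean over per-axis scores. The axis names below come from this article; the weights and letter cutoffs are illustrative assumptions, not SkillAudit's published rubric:

```python
# Toy sketch of a six-axis rubric rolled into one A-F grade.
# Axis names are from the article; weights and cutoffs are
# illustrative assumptions, not SkillAudit's published ones.

AXES = {
    "security": 0.30,
    "permissions_hygiene": 0.15,
    "credential_exposure": 0.20,
    "maintenance": 0.15,
    "client_compatibility": 0.10,
    "documentation": 0.10,
}

def overall_grade(axis_scores: dict[str, float]) -> str:
    """Weighted mean of per-axis scores (0-100) mapped to a letter."""
    total = sum(AXES[axis] * axis_scores[axis] for axis in AXES)
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if total >= cutoff:
            return letter
    return "F"
```

Whatever the real weights are, a pure function like this is what makes the grade reproducible: the same per-axis scores always yield the same letter, run after run.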

Where the Skills Directory is editorial, gated, binary, and one-time, SkillAudit is reproducible, open, graded, and continuous. They sit on different sides of the trust-signal architecture, and a buyer can — and should — use both.

Side by side

| | Anthropic Skills Directory | SkillAudit |
|---|---|---|
| Trust-signal type | Editorial allowlist (listed / not listed) | Reproducible per-repo A–F grade with file-path findings |
| Coverage | Skills submitted to and accepted by Anthropic | Any public GitHub repo, npm package, or uploaded ZIP |
| Off-directory MCP servers | Not covered (awesome-mcp, vendor-official releases, npm) | Covered — paste any URL, get a grade in 60 seconds |
| Cadence | One-time review at submission | Continuous — re-runs on commit, badge reflects latest scan |
| Rubric transparency | Private review bar (not published) | Public — six-axis methodology and weighting in the research post |
| Static SSRF check | Internal review (output not exposed) | Yes — pattern-based check tuned to MCP idioms; first-class axis |
| Static command-exec check | Internal review (output not exposed) | Yes — flags shell=True, execSync, os.system |
| LLM-assisted prompt-injection probe | Internal review (output not exposed) | Yes — extracts handlers, red-teams via Claude Haiku 4.5 |
| Credential-echo detection | Internal review (output not exposed) | First-class axis; flags process.env.X in handler return paths |
| Maintenance signal | Counted in initial review | Continuous — last commit, open issues, advisory feed re-checked per scan |
| Client compatibility | Listing implies compatibility with Anthropic's reference client | Yes — flags non-standard transports and protocol-version mismatches across Claude Code, Cursor, Windsurf, Codex |
| Public per-repo report card | The listing itself; no findings detail | Yes (e.g. /audits/owner-repo/) — file paths and finding counts |
| Embeddable badge for authors | "Listed in Anthropic Skills Directory" linkable badge | SkillAudit grade badge — A through F, updates on each scan |
| Cost | Free for authors who get listed; not all submissions accepted | Free for 3 audits/month on public repos; $19/mo Pro; $99/mo Team |
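The command-exec and credential-echo rows above name concrete patterns (shell=True, execSync, os.system, process.env.X in a return path). A toy version of that kind of check can be sketched with regexes; real scanners work over ASTs, and these patterns are illustrative rather than SkillAudit's actual rules:

```python
import re

# Toy versions of the command-exec and credential-echo checks named in
# the comparison table. Illustrative patterns only; a production scanner
# would parse the source rather than grep it.
CHECKS = {
    "command-exec": re.compile(
        r"\bshell\s*=\s*True\b|\bexecSync\s*\(|\bos\.system\s*\("
    ),
    "credential-echo": re.compile(r"\breturn\b[^\n]*process\.env\.\w+"),
}

def scan(source: str) -> list[tuple[str, int]]:
    """Return (check-name, line-number) findings for one source file."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in CHECKS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings
```

For example, `scan("subprocess.run(cmd, shell=True)")` yields a command-exec finding with its line number — the file-path-and-line shape that ends up on a report card.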

What the data says

We ran SkillAudit against 101 of the most-installed Claude skills and MCP servers — the full live board is public and growing. The corpus includes vendor-official releases (Stripe, PayPal, MongoDB, Redis, Cloudflare, AWS, Azure, GCP, Heroku, Notion, Snowflake, Pinecone, Couchbase, Auth0, Resend, Brave, Vectara, Meilisearch, plus the nine official Anthropic MCP language SDKs), popular indie frameworks (FastMCP, mcp-use, mcp-agent), and community releases.

Results: 50% (50/101) shipped SSRF findings, 38% (38/101) had credential-handling findings, 10% (10/101) had command-exec findings, and only 19% (19/101) earned an A grade. Full grade distribution: 19 A · 30 C · 10 D · 42 F. Methodology and per-repo grades are in our research post: The state of MCP server security, 2026.

The relevant point for the Skills Directory comparison: SkillAudit grades vendor-official, indie, and Anthropic's own reference repos on exactly the same rubric. Anthropic's modelcontextprotocol/typescript-sdk earned an F, and so did modelcontextprotocol/inspector — we report it the same way we'd report any community repo. That's the point of an open scoreboard: the rubric doesn't get bent for any vendor, including the one running the closed-loop directory.

When the Anthropic Skills Directory is still the right signal

The directory is doing work no static scanner can fully substitute for. Specifically:

  1. A human reviewer at Anthropic read the skill and approved it — judgment a pattern-based scanner can't replicate.
  2. The marketplace effect of a curated allowlist is genuine: a listing is distribution as well as a trust signal.
  3. The program has teeth: a skill that fails Anthropic's bar simply doesn't ship through the directory.

The most useful framing: the Skills Directory says "Anthropic vouched for this skill at the time of listing"; SkillAudit says "this is the current per-axis security and quality grade as of the last commit, and here are the file paths the rubric flagged." Both signals matter; they answer different questions and decay on different timelines. An editorial signal needs no maintenance once the listing is granted; an engineering signal updates on every commit.

Workflow

SkillAudit isn't a replacement for the Skills Directory — it's the engineering-side trust signal that runs continuously alongside it. Two adoption paths, depending on which side you're on:

  1. Authors: Run SkillAudit before submitting to the Skills Directory. Fix the SSRF, the prompt-injection, the credential echo. Embed the SkillAudit grade badge on the README so reviewers and prospective installers see it. When the listing comes through, you have both — a green SkillAudit grade and a directory listing, with the engineering grade staying live in case the listing decision lags or expires.
  2. Buyers: Use the Skills Directory as your editorial allowlist. For everything off-list — vendor-official MCPs, community repos, indie skills — paste the GitHub URL into SkillAudit and gate adoption on a minimum grade. Drop the report-card link into your team's adoption documentation so the next reviewer can see why the install was greenlit.
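The buyer-side gate in step 2 can be expressed as a small helper that fails a list of audited repos against a minimum grade. Everything here — the function names and the example repo grades — is our own illustration, not a SkillAudit API:

```python
# Hypothetical adoption gate: flag audited dependencies whose grade
# falls below the team's minimum. Letters follow the A-F scale used on
# the report cards; the helper names are our own, not a SkillAudit API.

GRADE_ORDER = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def meets_bar(grade: str, minimum: str = "B") -> bool:
    """True when `grade` is at or above `minimum` on the A-F scale."""
    return GRADE_ORDER[grade] >= GRADE_ORDER[minimum]

def gate(audits: dict[str, str], minimum: str = "B") -> list[str]:
    """Return the repos that fail the bar, e.g. to print and exit 1 in CI."""
    return [
        repo for repo, grade in audits.items()
        if not meets_bar(grade, minimum)
    ]
```

A CI step could run this over the grades your team recorded and block the build whenever `gate(...)` returns a non-empty list, keeping the adoption decision and its rationale in one reviewable place.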

Indie developers shipping MCP servers can use both at zero cost: the Skills Directory is free for accepted listings; SkillAudit's free tier covers your skill grade up to three audits a month, and the public report card is shareable on your README the same day you ship.

Try SkillAudit on your repo — free

Paste any GitHub URL on the home page, get a graded report card in 60 seconds. Your repo joins the public board only if you opt in; private repos audit through a single-repo OAuth scope, never org-wide.

Audit my repo