OSV-Scanner alternative

An OSV-Scanner alternative for Claude skills and MCP servers

OSV-Scanner is Google's free CLI for matching your lockfile against the OSV.dev advisory feed. It's the right tool for "are any of my dependencies vulnerable to a known CVE?" It is not the right tool for "is this MCP server's tool-handler code safe to install?" — that surface didn't exist when CVE-driven scanners were designed.

TL;DR

OSV-Scanner is fast, free, open-source, and excellent at the job it was designed for: running osv-scanner -r . in CI and getting a list of dependencies that match a published OSV advisory. That job and ours barely overlap. SkillAudit grades the tool-handler source code of an MCP server — SSRF in fetch(url), credential echo from environment variables, prompt-injection susceptibility, permission scope — none of which is a CVE. We scanned 101 of the most-installed MCP servers: 50% had SSRF, 38% had credential-handling findings, 19% earned an A. A clean OSV-Scanner result tells you nothing about any of that.

Why teams look for an OSV-Scanner alternative when adopting MCP

OSV-Scanner is built around a beautifully simple model: there is a public, curated database of advisories at osv.dev; your lockfile pins specific package versions; the scanner does the join and tells you which advisories match. It's polyglot (npm, PyPI, Go, Maven, RubyGems, Cargo, and more), runs offline-first, and is free for anyone — including indie authors who can't justify a Snyk subscription. It deserves the install.

What it isn't, and doesn't claim to be, is a SAST that reads your source code for novel bugs. There is no OSV-Scanner check that fires on a tool handler shaped like:

```javascript
server.tool('fetch_url', async ({url}) => {
  const res = await fetch(url);  // SSRF: url is attacker-controlled
  return await res.text();
});
```

…because nothing in that snippet matches a CVE. The library is the MCP SDK; the SDK is current; the bug is the developer's. CVE-driven scanners systematically miss this. So do most SAST scanners, because the "vulnerable sink" pattern (untrusted URL flowing into fetch) is widespread in conventional code and would generate too many false positives if fired naively. SkillAudit's checks are tuned to MCP idioms specifically — template-string fetches, dynamic baseURL patterns, env-var leakage in handler return paths — so the precision is workable.
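What the fix looks like is worth seeing too. A minimal hardening sketch for a handler like the one above: parse the URL, require HTTPS, reject private-range hosts, and check a host allowlist before fetching. The `ALLOWED_HOSTS` set and the `isPrivateHost` heuristic are illustrative assumptions, not SkillAudit's actual rule set — a production check should also resolve DNS to catch rebinding tricks.

```typescript
// Illustrative allowlist: the backends this server is meant to talk to.
const ALLOWED_HOSTS = new Set(["api.example.com"]);

function isPrivateHost(hostname: string): boolean {
  // Block obvious internal targets (loopback, RFC 1918, link-local,
  // cloud metadata). A real check also resolves DNS before fetching.
  return (
    hostname === "localhost" ||
    hostname === "[::1]" ||
    hostname.startsWith("127.") ||
    hostname.startsWith("10.") ||
    hostname.startsWith("192.168.") ||
    hostname.startsWith("169.254.")
  );
}

function validateUrl(raw: string): URL {
  const url = new URL(raw); // throws on malformed input
  if (url.protocol !== "https:") throw new Error("https only");
  if (isPrivateHost(url.hostname)) throw new Error("private host blocked");
  if (!ALLOWED_HOSTS.has(url.hostname)) throw new Error("host not allowlisted");
  return url;
}
```

The handler then calls `fetch(validateUrl(url).href)` instead of `fetch(url)`, turning the attacker-controlled sink into a constrained one.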

Three reasons MCP authors and team buyers look for an OSV-Scanner alternative:

  1. The attack surface is first-party code. SSRF, credential echo, and prompt injection live in the tool handlers the author wrote, not in a dependency with a CVE number, so a CVE lookup cannot see them.
  2. Generic SAST is too noisy here. The sink patterns that matter for MCP — untrusted URLs flowing into fetch, env vars flowing into tool responses — are widespread in conventional code, so conventional scanners either skip them or drown reviewers in false positives.
  3. Buyers want a verdict, not a finding list. Someone deciding whether to install an MCP server needs a grade they can read in five seconds, not a list of advisory IDs.

How SkillAudit is different

SkillAudit is a six-axis static + LLM-assisted scanner built specifically for Claude skills and MCP servers. The six axes — security, permissions hygiene, credential exposure, maintenance, client compatibility, documentation — were chosen by reading the actual source of vendor-official MCP releases that shipped vulnerabilities, not by deriving them from a CVE taxonomy. The output is a single A–F grade plus a public report card at a stable URL the author can embed as a badge on their README.

Where OSV-Scanner is a CVE lookup over your lockfile, SkillAudit is a source-code static analyzer plus an LLM red-team plus a maintenance/permissions/docs review — over any public GitHub repo, npm package, or uploaded ZIP. Different layer of the stack; complementary checks.

Side by side

| | OSV-Scanner | SkillAudit |
| --- | --- | --- |
| Threat model focus | Known CVEs in declared dependencies (lockfile join with OSV.dev feed) | MCP tool-handler SSRF, prompt injection, credential echo, permission scope |
| What it reads | Lockfiles (package-lock.json, go.sum, poetry.lock, etc.) | Tool-handler source code, env-var usage, fetch/exec call sites, README + manifest |
| Catches a CVE in a transitive dependency? | Yes; primary use case | Only if the CVE happens to match a SkillAudit static pattern; not a primary axis |
| Catches SSRF in fetch(url) tool handlers? | No; not in scope (no CVE for first-party code) | Yes; pattern-based static check tuned to MCP idioms |
| LLM-assisted prompt-injection probe | No | Yes; extracts tool handlers, red-teams them via Claude Haiku 4.5 |
| Credential-echo detection (env var → tool response) | No (different layer) | First-class axis; flags process.env.X in handler return paths |
| Single A–F buyer grade | No; finding list keyed by advisory ID | One letter grade + per-axis pass/warn/fail |
| Public per-repo report card URL | No (CLI tool; results stay in your CI) | Yes (e.g. /audits/owner-repo/) |
| Public embed badge for authors | No | Yes; skill-grade badge written for marketplace listings |
| Cost | Free, open source (Apache-2.0) | Free for 3 audits/month on public repos; $19/mo Pro; $99/mo Team |
| Where it runs | CLI, GitHub Action, anywhere a Go binary runs | SaaS web UI + GitHub Action + API |
| Polyglot reach | Wide: npm, PyPI, Go, Maven, RubyGems, Cargo, and more | Any GitHub repo + npm + ZIP; checks tuned to MCP/skill conventions |
| CI integration | Mature: official GitHub Action, container image, single binary | GitHub Action with min-grade gate (Pro) |
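The credential-echo row is the easiest to picture in code. A handler that interpolates environment variables into its return value hands the secret to the model — and from there into conversation transcripts and logs. A sketch of the flagged shape next to the fix; the handler names and env-var names are illustrative, not from any audited repo:

```typescript
// Flagged shape: the API key travels back through the tool response,
// landing in model context and conversation logs.
async function debugInfoUnsafe(): Promise<string> {
  return `endpoint=${process.env.API_URL} key=${process.env.API_KEY}`;
}

// Fix: report presence, never the value.
async function debugInfoSafe(): Promise<string> {
  const hasKey = Boolean(process.env.API_KEY);
  return `endpoint=${process.env.API_URL ?? "(unset)"} key=${hasKey ? "configured" : "missing"}`;
}
```

The static signal is the same in both: `process.env.X` inside a handler. What separates a finding from a pass is whether the value itself reaches the return path.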

What the data says

We ran SkillAudit against 101 of the most-installed Claude skills and MCP servers — the full live board is public and growing. The corpus includes vendor-official releases (Stripe, PayPal, MongoDB, Redis, Cloudflare, AWS, Azure, GCP, Heroku, Notion, Snowflake, Pinecone, Couchbase, Auth0, Resend, Brave, Vectara, Meilisearch, plus the nine official Anthropic MCP language SDKs), popular indie frameworks (FastMCP, mcp-use, mcp-agent), and community releases.

Results: 50% (50/101) shipped SSRF findings, 38% (38/101) had credential-handling findings, 10% (10/101) had command-exec findings, and only 19% (19/101) earned an A grade. Full grade distribution: 19 A · 30 C · 10 D · 42 F. Methodology and per-repo grades are in our research post: The state of MCP server security, 2026.

The relevant point for the OSV-Scanner comparison: an MCP repo with a clean OSV scan can still be an F-grade install. The gap is shaped like the difference between "are my dependencies known-vulnerable" (CVE lookup) and "is the code I wrote safe under LLM-driven inputs" (source-code analysis + prompt-injection probing). Different questions, both worth answering.

When OSV-Scanner is still the right choice

OSV-Scanner is the right scanner for plenty of work, including for MCP authors. Specifically:

  1. CVE gating in CI. The lockfile-to-OSV.dev join is the fastest, cheapest way to learn when a dependency you pinned becomes known-vulnerable.
  2. Polyglot dependency coverage. npm, PyPI, Go, Maven, RubyGems, Cargo, and more, all from a single binary that runs offline-first.
  3. Zero budget. It's free and open source, which matters for indie authors who can't justify a paid scanner subscription.

The most useful framing: OSV-Scanner answers "are any of my dependencies vulnerable"; SkillAudit answers "is the code I wrote on top of those dependencies safe to expose to an LLM as tools." Both questions matter; the answers are independent.

Switching cost

SkillAudit isn't a replacement for OSV-Scanner — it's the second scanner you add when you ship MCP. Both can coexist in CI without conflict; their findings are disjoint. A typical adoption path:

  1. Keep OSV-Scanner in CI as a fast CVE gate. The Action takes seconds and costs nothing.
  2. Add the SkillAudit GitHub Action with a minimum-grade gate (e.g. fail PR if grade falls below B).
  3. Drop the SkillAudit badge into your MCP server's README so reviewers and team buyers can read your grade at a glance before installing.
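Steps 1 and 2 together are a small workflow file. A config sketch under explicit assumptions: the OSV-Scanner step uses Google's published Action, whose exact path and pinned version you should take from its own docs, and the SkillAudit action name and `min-grade` input shown here are hypothetical placeholders for whatever the Pro docs specify.

```yaml
# Two-scanner CI gate: CVE lookup + MCP source grade on every PR.
name: scan
on: [pull_request]
jobs:
  cve-gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: OSV-Scanner (known CVEs in the lockfile)
        # Assumption: action path/version; confirm against Google's docs.
        uses: google/osv-scanner-action@v2
        with:
          scan-args: -r .
  grade-gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: SkillAudit (minimum-grade gate)
        # Hypothetical action name and input, per the adoption step above.
        uses: skillaudit/action@v1
        with:
          min-grade: B
```

The two jobs run in parallel and fail independently, which matches the disjoint-findings point: a CVE regression and a grade regression are different events and should block a PR separately.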

Indie developers publishing skills to a public marketplace can use both at zero cost: OSV-Scanner is free indefinitely; SkillAudit's free tier covers your skill grade up to three audits a month.

Try SkillAudit on your repo — free

Paste any GitHub URL on the home page and get a graded report card in 60 seconds. Your repo joins the public board only if you opt in; private repos audit through a single-repo OAuth scope, never org-wide.

Audit my repo