GitHub Code Scanning alternative

A GitHub Code Scanning alternative for MCP servers and Claude skills

GitHub Code Scanning is great at the job it was built for: run CodeQL's standard query pack against your repo on every push, surface findings in the Security tab and inline on PR diffs, and gate the merge on configured severity. For a traditional web service it's nearly free, well-integrated, and credible. For a Claude skill or MCP server, the standard pack misses the things that actually matter — because the taint sources and threat models that pack was tuned for don't match the MCP tool-handler surface. SkillAudit fills that gap with rules built specifically for server.tool(...) handlers, plus an LLM-assisted prompt-injection probe that no SAST tool runs.

TL;DR

GitHub Code Scanning runs CodeQL on your repo. CodeQL's standard pack models the OWASP-shaped web-app world: HTTP request parameters as taint sources, SQL drivers / shells / file IO as taint sinks. That model fires on classic controllers and request handlers; it doesn't fire on the MCP idiom — server.tool('fetch_url', async ({url}) => fetch(url)) — because the standard pack doesn't recognise tool arguments as a taint source in the same way it recognises req.query.x. Worse, prompt-injection susceptibility isn't a CodeQL category at all; the standard pack has no concept of "untrusted content flows into a tool response that the LLM will treat as instructions." SkillAudit reads every server.tool(...) / @app.tool handler with MCP-shaped rules — SSRF, command-exec, credential echo, permission scope — and runs an LLM-assisted prompt-injection red-team. Output: a single A–F grade and a public report card. We've graded 101 of the most-installed MCP servers this way. 50% shipped SSRF, 38% had credential-handling findings, 19% earned an A. Keep GitHub Code Scanning on your repo for the OWASP coverage; add SkillAudit for the MCP surface.

Why teams look for a GitHub Code Scanning alternative for Claude skills and MCP servers

GitHub Code Scanning isn't bad. It's free for public repos, the integration with the Security tab and PR review is well-designed, and CodeQL's standard pack is one of the most mature SAST query libraries in the world. Teams running Code Scanning on a backend service get genuine coverage — SQL injection, path traversal, deserialization, hardcoded secrets, dangerous regex, the well-trodden OWASP categories.

But buyers and authors keep landing on three structural reasons CodeQL's standard pack underfires on MCP code:

  1. Tool arguments aren't modelled as taint sources. The standard pack tags req.query.x, req.params, and friends; it has no rule that marks the arguments of a server.tool(...) or @app.tool handler as attacker-controlled, so SSRF and command-exec flows through those arguments never fire.
  2. Prompt injection isn't a CodeQL category at all. The standard pack has no concept of untrusted content flowing into a tool response that the LLM will treat as instructions, so the entire finding class can't exist in a stock run.
  3. A custom MCP query pack is possible in theory, but nobody has shipped and maintained one, so "write your own CodeQL queries" isn't a practical answer for most teams.

How SkillAudit is different

SkillAudit is a static + LLM-assisted scanner built specifically for the MCP tool-handler surface. The rules know about the MCP idiom: every server.tool registration, every @app.tool decorator across the nine MCP language SDKs (typescript, python, ruby, kotlin, java, csharp, swift, rust, go), every handler body, every fetch() / http.request() / urllib.urlopen() call inside a handler. Then it runs the seven checks that compose into a single A–F grade: SSRF, command execution, prompt-injection susceptibility, credential echo, permission scope, maintenance signals, and client compatibility.

Output is a single A–F grade plus a public report card at a stable URL with file paths and finding counts. Run time: roughly 60 seconds. Adoption is paste-a-URL or a GitHub Action that gates the merge on minimum grade — the same merge-gate model GHCS already uses, just running our rule pack instead of the stock CodeQL pack.
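To make the "MCP-shaped rule" idea concrete, here is a minimal sketch of what such a rule looks like. This is illustrative only: SkillAudit's actual rule pack is not reproduced here, and a production scanner would be AST-based rather than regex-based. The shape is the point — tool-handler arguments are the taint source, which is exactly what the stock CodeQL pack doesn't model:

```typescript
// Illustrative sketch only (not SkillAudit's real implementation).
// The key move: treat every server.tool handler argument as a taint
// source, then flag fetch() calls fed by those arguments, plus any
// process.env reference inside the handler body.

type Finding = { rule: string; detail: string };

function scanMcpSource(source: string): Finding[] {
  const findings: Finding[] = [];
  // One chunk per server.tool(...) registration (a naive text split,
  // good enough to illustrate the idea).
  for (const chunk of source.split(/server\.tool\s*\(/).slice(1)) {
    // Destructured handler arguments, e.g. `async ({ url }) => ...`
    const argsMatch = chunk.match(/async\s*\(\s*\{\s*([^}]+)\}\s*\)\s*=>/);
    if (!argsMatch) continue;
    for (const arg of argsMatch[1].split(",").map((a) => a.trim())) {
      // Source: tool argument. Sink: fetch(). A stock CodeQL run never
      // connects these two because the source isn't modelled.
      if (new RegExp("fetch\\s*\\(\\s*" + arg + "\\b").test(chunk)) {
        findings.push({
          rule: "mcp/ssrf",
          detail: `fetch(${arg}) takes an LLM-controlled argument`,
        });
      }
    }
    // Credential echo: an env var anywhere on the handler's return path.
    if (/process\.env\.\w+/.test(chunk)) {
      findings.push({
        rule: "mcp/credential-echo",
        detail: "process.env value reaches the tool response",
      });
    }
  }
  return findings;
}
```

Run against the canonical handler in the next section, a rule of this shape reports both the SSRF and the credential echo; run against ordinary non-handler code, it reports nothing, which is why it can be strict without drowning a repo in noise.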

Side by side

| | GitHub Code Scanning (CodeQL standard pack) | SkillAudit |
| --- | --- | --- |
| Scanner type | SAST — CodeQL taint analysis with the standard query pack | SAST + LLM-assisted; pattern-based for MCP idioms |
| Threat model | OWASP Top 10 in traditional web/server code | LLM-mediated attacker via tool handlers — MCP-specific |
| What's modelled as a taint source | req.query, req.params, HttpServletRequest, @RequestBody, etc. | Tool arguments — every server.tool / @app.tool handler argument |
| SSRF in fetch(args.url) tool handlers | Stock pack misses — args aren't tagged as a source | Yes — first-class MCP rule |
| Command-exec in tool handlers | Catches some when spawn(...) is the shape it knows; misses MCP-shaped variants | Yes — handler-aware rule fires on execSync, os.system, shell=True |
| Prompt-injection susceptibility | Not a category — CodeQL doesn't model LLM-as-instruction-channel | Yes — Claude Haiku 4.5 red-teams every extracted handler |
| Credential echo from process.env | Catches some hardcoded-secret patterns; misses env-var-on-handler-return-path | First-class axis; flags env vars in handler return paths |
| Permission-scope review | Not in scope | Yes — flags org-wide OAuth scopes when single-repo would do |
| Maintenance / repo signals | Not in scope | Last commit, open issues, advisory feed; per-axis |
| Client compatibility | Not in scope | Yes — non-standard transports + protocol-version mismatches |
| Findings location | Repo Security tab + PR diff (GitHub-native) | Public per-repo report card at a stable URL |
| Buyer-side public grade | No — findings are private to the repo owner | Yes — public board; embeddable badge for authors |
| Custom MCP query pack | Possible in theory; nobody has shipped a maintained one | Built-in — the rule pack is the product |
| Pricing | Free for public repos; included in GitHub Advanced Security for private repos | Free: 3 audits/mo public · Pro $19/mo · Team $99/mo |

The shape of the gap, in 30 seconds

Here is the canonical SSRF in an MCP tool handler. CodeQL's standard pack does not fire on it — the taint source isn't modelled. SkillAudit fires on it because the rule knows what an MCP tool registration looks like:

```typescript
// Clean under GitHub Code Scanning (standard CodeQL pack — args
// not modelled as a taint source on server.tool registrations).
// F-grade SSRF under SkillAudit.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "fetch-helper", version: "1.0" });

server.tool(
  "fetch_url",
  "Fetch the contents of any URL",
  { url: z.string() },
  async ({ url }) => {
    const res = await fetch(url);                  // SSRF
    const body = await res.text();
    // and a credential echo on the same return path
    return { content: [{ type: "text",
      text: `${body}\n[token: ${process.env.API_TOKEN}]`
    }] };
  }
);
server.connect(new StdioServerTransport());
```

Two findings in eight lines: SSRF on fetch(url) with an LLM-controlled argument, and credential echo (process.env.API_TOKEN) reaching a tool response the LLM will quote back in its own output. Neither fires on a stock CodeQL run, because the standard pack's source/sink configuration was tuned for HTTP request handlers, not MCP tool registrations. Both are routine catches in our rule pack — the same rules that produce the public grades in our corpus. None of this is a knock on CodeQL's engine — it's the right engine, on the wrong query pack, for this code.
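The fix for the SSRF half is a host allowlist checked before the fetch, with the env-var interpolation removed entirely. The sketch below is our own illustration, not SkillAudit's prescribed remediation; the helper name and the allowlisted hosts are hypothetical:

```typescript
// Illustrative remediation sketch (helper name and policy are ours,
// not a prescribed fix): validate the LLM-controlled URL against an
// explicit host allowlist before fetching anything.

const ALLOWED_HOSTS = new Set(["api.example.com", "docs.example.com"]); // hypothetical

function isAllowedUrl(raw: string, allowed: Set<string> = ALLOWED_HOSTS): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // unparseable input is rejected, never fetched
  }
  // https only, and only hosts we explicitly trust — this blocks
  // http://169.254.169.254/ style cloud-metadata SSRF outright.
  return url.protocol === "https:" && allowed.has(url.hostname);
}
```

Inside the handler, the guard runs first — `if (!isAllowedUrl(url)) return { content: [{ type: "text", text: "URL not allowed" }] };` — and the tool response carries only the fetched body, never `process.env.API_TOKEN`.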

What the data says

We ran SkillAudit against 101 of the most-installed Claude skills and MCP servers — the full live board is public and growing. The corpus includes vendor-official releases (Stripe, PayPal, MongoDB, Redis, Cloudflare, AWS, Azure, GCP, Heroku, Notion, Snowflake, Pinecone, Couchbase, Auth0, Resend, Brave, Vectara, Meilisearch, plus Anthropic's nine official MCP language SDKs), popular indie frameworks (FastMCP, mcp-use, mcp-agent), and community releases.

Results: 50% (50/101) shipped SSRF findings, 38% (38/101) had credential-handling findings, 10% (10/101) had command-exec findings, only 19% (19/101) earned an A. Full grade distribution: 19 A · 30 C · 10 D · 42 F. Methodology and per-repo grades are in our research post: The state of MCP server security, 2026.

The relevant point for the GHCS comparison: most of the F-grade vendor-official repos in the corpus — Cloudflare's mcp-server-cloudflare, Heroku's heroku-mcp-server, Stripe's agent-toolkit, MongoDB's mongodb-mcp-server, the GitHub github-mcp-server itself — are GitHub-hosted, often with Code Scanning enabled, and the standard CodeQL pack is firing zero on the SSRF / credential-echo / prompt-injection findings SkillAudit reports. That's the model gap: CodeQL is doing what it does, it's just not tuned for the MCP idiom. SkillAudit is.

When GitHub Code Scanning is still right

GHCS earns its keep, even on the same repo. Specifically:

  1. The OWASP surface is real. SQL injection, path traversal, deserialization, hardcoded secrets, dangerous regex: the standard pack catches these, and MCP servers contain plenty of non-handler code where they apply.
  2. It's free for public repos and GitHub-native. Findings land in the Security tab and inline on PR diffs with no extra tooling.
  3. The merge gate on configured severity is a proven workflow your team already runs on every push.

The most useful framing: GHCS is the SAST that runs on every push to catch the OWASP-shaped patterns CodeQL's standard pack knows about; SkillAudit is the SAST that runs on every push to catch the MCP-shaped patterns CodeQL's standard pack doesn't. Same engine class, different rule pack, different threat model. They don't conflict.

Workflow

Three steps to add SkillAudit on top of your existing Code Scanning setup:

  1. Keep Code Scanning on. The standard CodeQL pack is doing real work on the OWASP-shaped surface. Don't disable it.
  2. Add SkillAudit as a second job in CI. A separate workflow that runs the SkillAudit GitHub Action on push (Pro plan) or on every plugin install (gate via .claude/plugins.lock changes). The two checks run in parallel; the merge gate fails if either fails.
  3. Pair the public grade with the private findings. GHCS findings stay in your Security tab; SkillAudit's grade goes on the public board and the README badge. A buyer evaluating whether to install your MCP can read the grade in five seconds; your security team can drill into both feeds in the Security tab and the SkillAudit report card.

Indie developers shipping MCP servers can run SkillAudit at zero cost — the free tier covers three audits a month on public repos, and the public report card is shareable on your README the same day you publish. Buyers gate adoption with the public grade; authors embed the badge to win listings.

Try SkillAudit on your repo — free

Paste any GitHub URL on the home page, get a graded report card in 60 seconds. Your repo joins the public board only if you opt in; private repos audit through a single-repo OAuth scope, never org-wide.

Audit my repo