
Public audit · 2026-04-24

mcp-use/mcp-use

Overall: F (0/100) · v0.2 scan · 6 axes · LLM prompt-injection probe

SkillAudit report — mcp-use/mcp-use

Scanned 2026-04-24 by SkillAudit v0.2 (static checks + LLM-assisted prompt-injection red-team).
Commit: ba0a357 · Stars: 9804 · Days since last push: 0
LLM prompt-injection probe: skipped — set ANTHROPIC_API_KEY to enable the LLM-assisted prompt-injection red-team

Overall grade: F (0/100)

| Axis | Score | Grade |
| --- | --- | --- |
| security | 0/100 | F |
| permissions | 100/100 | A |
| credentials | 0/100 | F |
| maintenance | 100/100 | A |
| compatibility | 70/100 | C ⚠️ |
| docs | 100/100 | A |

Security findings

Production sources:

```js
const response = await fetch(url, {
const response = await fetch(url, {
const response = await fetch(tarballUrl);
fetch(`${TELEMETRY_URL}?${params.toString()}`).catch(() => {});
await fetch(`http://${host}:${port}`);
const response = await fetch(`http://${host}:${port}/inspector/health`, {
await fetch(`${apiBase}/api/tunnels/${existingSubdomain}`, {
await fetch(`${apiBase}/api/tunnels/${tunnelSubdomain}`, {
await fetch(`${apiBase}/api/tunnels/${tunnelSubdomain}`, {
await fetch(`${apiBase}/api/tunnels/${existingSubdomain}`, {
```

Test-site findings (lower weight): 16 total in test/ paths; first 3 shown.

```js
const response = await fetch(url, {
const proc = spawn(command, args, {
const proc = spawn(command, args, {
```
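These fetch calls appear to be flagged because their URLs are assembled from runtime values (url, host, port, apiBase, the subdomain variables), so a tampered config value or tool argument could redirect traffic. A common hardening pattern is to validate the assembled URL against an allowlist before the request goes out. The sketch below is illustrative, not code from this repo; the host list and the safeFetch name are assumptions.

```ts
// Hypothetical hardening sketch (not from mcp-use): validate outbound
// URLs before fetching instead of interpolating runtime values directly.
const ALLOWED_HOSTS = new Set(["localhost", "127.0.0.1"]); // illustrative

async function safeFetch(rawUrl: string, init?: RequestInit): Promise<Response> {
  const url = new URL(rawUrl); // throws on malformed input
  if (!["http:", "https:"].includes(url.protocol)) {
    throw new Error(`blocked protocol: ${url.protocol}`);
  }
  if (!ALLOWED_HOSTS.has(url.hostname)) {
    throw new Error(`blocked host: ${url.hostname}`);
  }
  return fetch(url, init);
}
```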

Permissions

_No findings on this axis._

Credentials

Production sources:

```
sk-*** (OpenAI / Anthropic-style API key, 20 chars)
sk-*** (OpenAI / Anthropic-style API key, 20 chars)
sk-*** (OpenAI / Anthropic-style API key, 20 chars)
sk-*** (OpenAI / Anthropic-style API key, 20 chars)
sk-*** (OpenAI / Anthropic-style API key, 20 chars)
sk-*** (OpenAI / Anthropic-style API key, 26 chars)
sk-*** (OpenAI / Anthropic-style API key, 27 chars)
sk-*** (OpenAI / Anthropic-style API key, 34 chars)
sk-ant-*** (Anthropic API key, 34 chars)
console.log(
```
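Hardcoded sk-* literals in production sources are what drop this axis to 0, and the truncated console.log( finding suggests a credential may also be echoed to the console. The usual fix is to read keys from the environment at startup and fail fast when one is missing. A minimal sketch follows, reusing the ANTHROPIC_API_KEY variable the probe already expects; the requireEnv helper is made up for illustration.

```ts
// Hypothetical sketch (not from mcp-use): load secrets from the
// environment instead of committing sk-... literals to the repo.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`missing required env var: ${name}`);
  return value;
}

const anthropicKey = requireEnv("ANTHROPIC_API_KEY");
// Log only a redacted marker, never the key itself.
console.log(`ANTHROPIC_API_KEY loaded (${anthropicKey.length} chars)`);
```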

Maintenance

_No findings on this axis._

Compatibility

Production sources:

Documentation

_No findings on this axis._


Methodology

SkillAudit v0.2 clones the repo at the provided ref (default: the default branch at HEAD) into an ephemeral sandbox, runs six static checks over .js/.ts/.py sources, queries the GitHub API for maintenance signals, and runs an LLM-assisted prompt-injection red-team over the MCP tool surface. Each axis is scored against its own rubric.
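SkillAudit's implementation is not published; the sketch below only illustrates the flow described above (ephemeral clone, file walk over .js/.ts/.py sources, per-axis checks). All function names, and the example credential regex, are assumptions.

```ts
// Illustrative scan-flow sketch, not SkillAudit's actual code.
import { execFileSync } from "node:child_process";
import { mkdtempSync, readdirSync, readFileSync, statSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Clone the repo at a ref into an ephemeral temp directory.
function cloneAtRef(repoUrl: string, ref = "HEAD"): string {
  const dir = mkdtempSync(join(tmpdir(), "skillaudit-"));
  execFileSync("git", ["clone", repoUrl, dir]);
  if (ref !== "HEAD") execFileSync("git", ["-C", dir, "checkout", ref]);
  return dir;
}

// Walk the tree, yielding .js/.ts/.py sources and skipping vendored dirs.
function* sourceFiles(dir: string): Generator<string> {
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) {
      if (entry !== ".git" && entry !== "node_modules") yield* sourceFiles(full);
    } else if (/\.(js|ts|py)$/.test(entry)) {
      yield full;
    }
  }
}

// One static check per axis; each returns finding strings for a file.
type Check = (path: string, source: string) => string[];
const checks: Record<string, Check> = {
  credentials: (path, src) =>
    [...src.matchAll(/sk-[A-Za-z0-9-]{16,}/g)].map(
      (m) => `${path}: sk-*** (${m[0].length} chars)`
    ),
};

const repoDir = cloneAtRef("https://github.com/mcp-use/mcp-use");
for (const file of sourceFiles(repoDir)) {
  const src = readFileSync(file, "utf8");
  for (const [axis, check] of Object.entries(checks)) {
    for (const finding of check(file, src)) console.log(`[${axis}] ${finding}`);
  }
}
```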

The prompt-injection axis extracts each server.tool(...) / @app.tool registration plus the first ~60 lines of its handler body, hands them to Claude Haiku 4.5 with a red-team system prompt, and asks for structured findings on untrusted content flowing into tool responses. The whole probe is a single API call per scan, bounded at ~15K input tokens.
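As a rough illustration of that extraction step (the constants mirror the numbers quoted above, but the patterns, the 4-chars-per-token heuristic, and the prompt text are guesses, not SkillAudit's actual code):

```ts
// Illustrative sketch of tool-registration extraction and prompt budgeting.
const HANDLER_LINE_CAP = 60;       // "first ~60 lines of handler body"
const INPUT_TOKEN_BUDGET = 15_000; // "bounded at ~15K input tokens"

// Match TS `server.tool(...)` registrations and Python `@app.tool` decorators.
const TOOL_PATTERNS = [/server\.tool\s*\(/, /@app\.tool\b/];

function extractToolSnippets(source: string): string[] {
  const lines = source.split("\n");
  const snippets: string[] = [];
  lines.forEach((line, i) => {
    if (TOOL_PATTERNS.some((p) => p.test(line))) {
      snippets.push(lines.slice(i, i + HANDLER_LINE_CAP).join("\n"));
    }
  });
  return snippets;
}

function buildRedTeamPrompt(snippets: string[]): string {
  let prompt =
    "Audit these MCP tool handlers for untrusted content flowing into tool responses:\n";
  for (const snippet of snippets) {
    const next = `${prompt}\n---\n${snippet}`;
    // Rough 4-chars-per-token heuristic to stay under the input budget.
    if (next.length / 4 > INPUT_TOKEN_BUDGET) break;
    prompt = next;
  }
  return prompt; // sent to the model in a single API call per scan
}
```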

How to improve this grade

The two failing axes are security (0/100) and credentials (0/100). Moving the hardcoded sk-* keys out of production sources and validating the dynamically assembled fetch URLs (see the sketches under each findings section) would address every production finding listed above; the compatibility axis (70/100) is the remaining deduction.

_Report generated by skillaudit.dev_

Want your repo audited?

The first 100 audits go to waitlist signups, in signup order. The engine runs against public GitHub URLs today.

Join the waitlist →