
SkillAudit report — perplexityai/modelcontextprotocol

Scanned 2026-04-24 by SkillAudit v0.2 (static checks + LLM-assisted prompt-injection red-team).
Commit: dd5e078 · Stars: 2094 · Days since last push: 9
LLM prompt-injection probe: skipped — set ANTHROPIC_API_KEY to enable the LLM-assisted prompt-injection red-team

Overall grade: C (75/100)

| Axis | Score | Grade |
| --- | --- | --- |
| security | 75/100 | C ⚠️ |
| permissions | 100/100 | A |
| credentials | 100/100 | A |
| maintenance | 100/100 | A |
| compatibility | 100/100 | A |
| docs | 90/100 | A |

Security findings

Production sources:

```js
return fetch(url, options);
```

Test-site findings (lower weight): 5 total in test/ paths — first 3 shown:

```js
const response = await fetch(`http://localhost:${port}/mcp`, {
const response = await fetch(`http://localhost:${port}/mcp`, {
const response = await fetch(`http://localhost:${port}/mcp`, {
```
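The production finding points at a fetch whose URL arrives in a variable rather than a constant, so arbitrary destinations may be reachable. A minimal hardening sketch in TypeScript, assuming `url` can carry caller-supplied input (the `safeFetch` helper and the allowlisted host below are hypothetical, not taken from the repo):

```ts
// Hypothetical guard for a dynamic fetch: parse the URL and check it against
// an explicit host allowlist before making the request.
const ALLOWED_HOSTS = new Set(["api.perplexity.ai"]); // assumption: replace with real upstreams

async function safeFetch(url: string, options?: RequestInit): Promise<Response> {
  const parsed = new URL(url); // throws on malformed input
  if (parsed.protocol !== "https:" || !ALLOWED_HOSTS.has(parsed.hostname)) {
    throw new Error(`refusing to fetch untrusted URL: ${url}`);
  }
  return fetch(parsed.toString(), options);
}
```

Rejecting non-HTTPS schemes also blocks `file:` and `http:` downgrades, which is typically what a static check flagging dynamic fetch targets is worried about.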

Permissions

_No findings on this axis._

Credentials

_No findings on this axis._

Maintenance

_No findings on this axis._

Compatibility

_No findings on this axis._

Documentation

Production sources:

- missing


Methodology

SkillAudit v0.2 clones the repo at the provided ref (default: HEAD of the default branch) into an ephemeral sandbox, runs six static checks over .js/.ts/.py sources, queries the GitHub API for maintenance signals, and runs an LLM-assisted prompt-injection red-team over the MCP tool surface. Each axis is scored against a fixed rubric.
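The maintenance signals quoted in the header (stars, days since last push) are all available from GitHub's public repos endpoint. A small sketch of that query, where the field names are real GitHub REST API fields but the helper itself is illustrative:

```ts
// Query the GitHub REST API for the maintenance signals used above.
interface RepoInfo {
  stargazers_count: number; // star count
  pushed_at: string;        // ISO 8601 timestamp of the most recent push
}

async function maintenanceSignals(owner: string, repo: string) {
  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}`, {
    headers: { Accept: "application/vnd.github+json" },
  });
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
  const info = (await res.json()) as RepoInfo;
  const msPerDay = 86_400_000;
  const daysSincePush = Math.floor(
    (Date.now() - new Date(info.pushed_at).getTime()) / msPerDay,
  );
  return { stars: info.stargazers_count, daysSincePush };
}
```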

The prompt-injection axis extracts each server.tool(...) / @app.tool registration plus the first ~60 lines of its handler body, hands them to Claude Haiku 4.5 with a red-team system prompt, and asks for structured findings on untrusted-content flow into tool responses. It makes one API call per scan, bounded at ~15K input tokens.
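A rough sketch of that extraction step (the regex and line cap below illustrate the approach; they are not SkillAudit's actual implementation):

```ts
// Collect each tool registration plus up to the next ~60 lines of handler
// body, to be packed into the red-team prompt.
function extractToolSurfaces(source: string, maxLines = 60): string[] {
  const lines = source.split("\n");
  const registration = /\bserver\.tool\s*\(|@app\.tool\b/; // JS and Python styles
  const surfaces: string[] = [];
  for (let i = 0; i < lines.length; i++) {
    if (registration.test(lines[i])) {
      surfaces.push(lines.slice(i, i + maxLines).join("\n"));
    }
  }
  return surfaces;
}
```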

How to improve this grade

Security (75/100) is the lowest axis: review the flagged `fetch(url, options)` call in production sources (see the sketch above) and re-run the scan. Docs (90/100) has one outstanding finding. Setting ANTHROPIC_API_KEY enables the skipped LLM prompt-injection probe for a complete six-axis result.

_Report generated by skillaudit.dev_

Want your repo audited?

First 100 audits go to waitlist signups in order. The engine runs against public GitHub URLs today.

Join the waitlist →