A2A Vulnerability Scanner
by Khaledayman9
A2A Vulnerability Scanner — AI-Powered Security Analysis
A multi-agent AI system that scans websites for security vulnerabilities. It uses the A2A (Agent-to-Agent) protocol for orchestration and MCP (Model Context Protocol) for tool integration. Specialized agents work together: they gather threat intel, crawl the site, analyze vulnerabilities, and produce a report.
Table of Contents
- Overview
- Protocols & Standards (A2A, MCP)
- Architecture
- Scan Pipeline
- User Interface
- Prerequisites
- How to Run
- Example Execution
- Configuration
- Project Structure
- MCP Servers
- License
Overview
- Multi-agent setup: 5 AI agents (orchestrator, web_search, web_scanner, vulnerability_analyzer, report_generator) discovered at runtime.
- Orchestrated workflow: Orchestrator runs the pipeline and calls the other agents in sequence via A2A.
- Tooling: Agents such as web_search use MCP servers (e.g. DuckDuckGo, CVE search) for threat intelligence.
- API: FastAPI gateway on port 8000; scan and report endpoints.
- Frontend: Next.js app with a scan form, live status, results, and PDF export.
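For illustration, the scan request body and the polling terminal states might look like the sketch below. The field names target_url, scan_depth, and include_subdomains are assumptions, not the project's documented API schema:

```python
# Hypothetical helpers for the two gateway endpoints. The request field
# names below are assumptions, not taken from the project's API schema.
def build_scan_request(target_url: str, depth: int = 3,
                       subdomains: bool = False) -> dict:
    """Body for POST /api/scan (field names assumed)."""
    return {
        "target_url": target_url,
        "scan_depth": depth,
        "include_subdomains": subdomains,
    }

def is_terminal(status: str) -> bool:
    """The frontend polls GET /api/scan/{scan_id} until one of these."""
    return status in ("completed", "failed")
```

A caller would POST the built body, then poll the scan endpoint until is_terminal returns True.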
Tech stack
- Backend: Python 3.11, FastAPI, Uvicorn.
- Agent layer: A2A (Agent-to-Agent) protocol for inter-agent communication; MCP (Model Context Protocol) and FastMCP for tool servers; LangChain/LangGraph for orchestration.
- Frontend: Next.js, TypeScript, Tailwind CSS.
Protocols & Standards (A2A, MCP)
This project is built on A2A and MCP:
- A2A (Agent-to-Agent) — The orchestrator and all agents communicate over HTTP using the A2A protocol. Each agent exposes endpoints (e.g. /execute, /search, /scan, /analyze, /generate); the orchestrator discovers agents at runtime and invokes them in sequence to run the scan pipeline. This keeps agents decoupled and reusable.
- MCP (Model Context Protocol) — Agents that need external tools (e.g. web search, CVE lookup) connect to MCP servers. The web_search agent uses MCP to call tools such as DuckDuckGo HTML search; the MCP client loads tools dynamically, and the AI decides when and how to use them. The project includes an mcp_servers module and MCP client management for this.
- FastMCP — MCP servers in this codebase are implemented with FastMCP where applicable.
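In practice, an A2A-style invocation is just a JSON POST to an agent endpoint. A minimal stdlib sketch (the payload shape is an assumption, not the project's actual task schema):

```python
# Sketch of how the orchestrator might invoke one agent over A2A.
# Endpoint paths come from the README; the JSON shape is an assumption.
import json
import urllib.request

def call_agent(base_url: str, path: str, payload: dict,
               timeout: float = 30.0) -> dict:
    """POST a JSON task to an agent endpoint such as /search or /analyze
    and return the decoded JSON result."""
    req = urllib.request.Request(
        base_url + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Because every agent speaks the same request/response convention, the orchestrator can treat them interchangeably and only vary the URL and path.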
Architecture
Agents talk to each other over A2A; the web_search agent uses MCP to access external tools.
API Gateway (port 8000)
/api/scan, /api/scan/{id}
│
▼
Orchestrator (8001)
Runs the 4-step pipeline
│
┌─────────────────────┼─────────────────────┐
▼ ▼ ▼
Web Search (8005) Web Scanner (8002) Vulnerability Analyzer (8003)
Threat intel, CVE Crawl, subdomains Headers, SSL, XSS, SQLi, etc.
│ │ │
└─────────────────────┼─────────────────────┘
▼
Report Generator (8004)
Summary, findings, PDF-ready report
- API Gateway (8000): Accepts POST /api/scan and GET /api/scan/{scan_id}.
- Orchestrator (8001): Coordinates the four steps via A2A; discovers and calls each agent’s HTTP endpoint.
- Web Search (8005): Uses MCP to connect to tool servers (e.g. DuckDuckGo HTML search, CVE search) for threat/CVE context.
- Web Scanner (8002): Fetches the target and linked pages, optional subdomain discovery.
- Vulnerability Analyzer (8003): Security headers, SSL, forms, XSS, SQLi, outdated tech.
- Report Generator (8004): Executive summary, findings, recommendations, PDF structure.
Scan Pipeline
When you start a scan, the orchestrator runs four steps in order:
- Web Search — Gathers public info and CVEs about the target/technology.
- Web Scanner — Crawls the target URL and discovers pages (and optionally subdomains).
- Vulnerability Analyzer — Evaluates headers, SSL, forms, XSS, SQLi, and tech stack.
- Report Generator — Builds executive summary, detailed findings, and recommendations.
The frontend polls GET /api/scan/{scan_id} until status is completed or failed, then shows the result.
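The four steps above can be sketched as a sequential loop. The agent URLs come from the Architecture section; the context/payload field names are illustrative assumptions:

```python
# Sketch of the orchestrator's 4-step sequence. URLs match the architecture
# diagram; payload and result field names are illustrative assumptions.
AGENT_STEPS = [
    ("web_search", "http://localhost:8005/search"),
    ("web_scanner", "http://localhost:8002/scan"),
    ("vulnerability_analyzer", "http://localhost:8003/analyze"),
    ("report_generator", "http://localhost:8004/generate"),
]

def run_pipeline(target_url, call_agent):
    """Run the four steps in order, feeding each agent the accumulated
    context. call_agent(url, payload) -> dict is injected so the HTTP
    transport can be swapped out (e.g. stubbed in tests)."""
    context = {"target_url": target_url, "status": "running"}
    for step_name, url in AGENT_STEPS:
        context[step_name] = call_agent(url, dict(context))
    context["status"] = "completed"
    return context
```

Injecting the transport keeps the sequencing logic independent of how agents are reached, which mirrors the decoupling A2A is meant to provide.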
User Interface
The UI is a single-page app with these parts (no logic here, only structure):
- Header
  - App name (e.g. “SecureScan AI”), short tagline, small “Secure” and “Deep Scan” badges.
- Info banner
  - “Educational Purpose Only” with a note to only scan targets you’re allowed to test.
- Scan form (Start New Scan)
  - Target URL — Text input, e.g. https://example.com.
  - Scan depth — Slider (e.g. 1–5); low = faster, high = more thorough.
  - Include subdomain discovery — Checkbox.
  - Start Security Scan — Submit; while running it shows “Scanning in Progress…”.
- Results (after a scan is started)
  - In progress: spinner, “Scan in Progress”, current status, and short messages like “Analyzing website…”, “Checking security…”, “Generating report…”.
  - Failed: error icon and message.
  - Completed:
    - Success header — Target URL.
    - Vulnerability summary — Counts by severity (e.g. Critical, High, Medium, Low, Info).
    - Executive summary — Text block.
    - Vulnerabilities list — Each with title, severity, description, affected URL, evidence, remediation, and CWE if present.
    - Recommendations — Numbered list.
    - Export PDF — Button to download a report PDF (client-side generation from the same result data).
- Empty state
  - Shown when no scan has been started: “Ready to Scan” and short instructions.
- Footer
  - Copyright, “Educational Use Only”, and placeholders for Privacy, Terms, Docs.
Prerequisites
- Docker and Docker Compose (for the AI service).
- Node.js 18+ and npm (for the frontend, if you run it locally).
- API keys:
  - OPENAI_API_KEY (required for LLM calls).
  - OPENAI_BASE_URL (optional; default https://api.openai.com/v1; can point to Azure or another compatible endpoint).
  - GOOGLE_API_KEY (optional; for some agents/tools).
How to Run
1. AI service (backend + agents)
From the project root:
cd ai-service
Create env from the example:
cp .env.example .env
Edit .env and set at least:
- OPENAI_API_KEY=your_key
- OPENAI_BASE_URL (if you use a custom endpoint)
- GOOGLE_API_KEY (optional)
Start the stack:
docker compose up
- API: http://localhost:8000
- Docs: http://localhost:8000/docs
You may see the warning:
the attribute version is obsolete
It comes from the version field in docker-compose.yml and is safe to ignore; remove the version field to clear the warning.
2. Frontend (optional, for the UI)
In another terminal, from the project root:
cd frontend
Create env (so the app knows where the API is):
cp .env.example .env.local
Ensure .env.local has:
NEXT_PUBLIC_API_URL=http://localhost:8000
Install and run:
npm install
npm run dev
Open http://localhost:3000. The Next.js app rewrites /api/* to http://localhost:8000/api/*, so “Start Security Scan” will call the AI service.
3. Run order
- Start the AI service first: cd ai-service && docker compose up
- When agents are up, start the frontend: cd frontend && npm run dev
- Use the UI at http://localhost:3000 or call the API directly.
Example Execution
When the AI service starts, it loads agents and registers them:
INFO: Starting Vulnerability Scanner Backend
INFO: Loading and starting agents...
INFO: Discovered 5 agents: ['orchestrator', 'report_generator', 'vulnerability_analyzer', 'web_scanner', 'web_search']
INFO: → orchestrator will run at: http://0.0.0.0:8001
INFO: → report_generator will run at: http://0.0.0.0:8004
INFO: → vulnerability_analyzer will run at: http://0.0.0.0:8003
INFO: → web_scanner will run at: http://0.0.0.0:8002
INFO: → web_search will run at: http://0.0.0.0:8005
INFO: ✓ Started 5/5 agents
INFO: Application started successfully
When a scan is started (e.g. from the UI):
INFO: Started scan <scan_id> for https://www.example.com
INFO: Starting dynamic scan ... for https://www.example.com
INFO: Step 1: Web Search Intelligence
INFO: Calling web_search at http://localhost:8005/search
...
INFO: Search agent completed for https://www.example.com
INFO: Step 2: Web Scanner
INFO: Calling web_scanner at http://localhost:8002/scan
...
INFO: Web scanner completed for https://www.example.com: 50 pages discovered
INFO: Step 3: Vulnerability Analyzer
INFO: Calling vulnerability_analyzer at http://localhost:8003/analyze
...
INFO: Vulnerability analysis completed: 8 vulnerabilities found
INFO: Step 4: Report Generator
INFO: Calling report_generator at http://localhost:8004/generate
...
INFO: Report generation completed for scan <scan_id>
INFO: Scan <scan_id> completed successfully
The API returns the full report on GET /api/scan/{scan_id} when status is completed.
Configuration
AI service (.env in ai-service/)
| Variable | Required | Description |
|---|---|---|
| OPENAI_API_KEY | Yes | OpenAI (or compatible) API key |
| OPENAI_BASE_URL | No | Base URL for chat API (default: https://api.openai.com/v1) |
| GOOGLE_API_KEY | No | For agents that use Google APIs |
| LOG_DIR | No | Log directory (default: logs) |
| API_GATEWAY_HOST | No | Bind host (default: 0.0.0.0) |
| API_GATEWAY_PORT | No | API port (default: 8000) |
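A sketch of how the AI service might resolve these variables with their documented defaults (the project's actual settings code may differ):

```python
# Hypothetical settings loader: resolves the documented env vars with the
# defaults from the configuration table. The function name and return shape
# are assumptions, not the project's actual settings module.
import os

def load_settings(env=os.environ):
    if "OPENAI_API_KEY" not in env:
        raise RuntimeError("OPENAI_API_KEY is required")
    return {
        "openai_api_key": env["OPENAI_API_KEY"],
        "openai_base_url": env.get("OPENAI_BASE_URL",
                                   "https://api.openai.com/v1"),
        "google_api_key": env.get("GOOGLE_API_KEY"),   # optional
        "log_dir": env.get("LOG_DIR", "logs"),
        "api_gateway_host": env.get("API_GATEWAY_HOST", "0.0.0.0"),
        "api_gateway_port": int(env.get("API_GATEWAY_PORT", "8000")),
    }
```

Failing fast on the one required key surfaces misconfiguration at startup rather than on the first LLM call.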
Frontend (.env.local in frontend/)
| Variable | Description |
|---|---|
| NEXT_PUBLIC_API_URL | Backend base URL, e.g. http://localhost:8000 |
Project Structure
Vulnerability Scanner/
├── ai-service/
│ ├── agents/ # A2A agents (orchestrator, web_search, web_scanner, etc.)
│ │ ├── orchestrator/ # Runs the 4-step pipeline via A2A
│ │ ├── web_search/ # Threat intel via MCP (DuckDuckGo, CVE, etc.)
│ │ ├── web_scanner/ # Crawling, subdomains
│ │ ├── vulnerability_analyzer/
│ │ └── report_generator/
│ ├── mcp_servers/ # MCP client manager, tool servers (FastMCP)
│ ├── api/ # FastAPI app, /api/scan routes
│ ├── shared_utils/ # Base agents, models, tools
│ ├── main.py # Uvicorn app entry
│ ├── docker-compose.yml
│ ├── Dockerfile
│ └── .env.example
├── frontend/
│ ├── app/ # Next.js pages (page.tsx, layout, globals.css)
│ ├── components/ # ScanForm, ResultsDisplay, PDFExport
│ ├── next.config.ts # /api/* → backend
│ └── .env.example
└── README.md
MCP Servers
- MCP config is in ai-service/mcp_servers/servers.json.
Adding a custom MCP server (example: a Weather MCP entry in servers.json):
{
"Weather": {
"command": "python",
"args": ["-m", "a2a_server.mcp.servers.weather"],
"transport": "stdio"
},
"Weather (UV)": {
"command": "uv",
"args": ["run", "python", "-m", "a2a_server.mcp.servers.weather"],
"transport": "stdio"
}
}
External MCP Servers:
Example: DuckDuckGo (MCP):
The web_search agent uses the DuckDuckGo MCP server for threat and CVE search:
"ddg-search": {
"command": "uvx",
"args": ["duckduckgo-mcp-server"],
"transport": "stdio"
}
- Runtime: uvx duckduckgo-mcp-server requires uv in the environment (the Docker image installs it); uvx fetches and runs the duckduckgo-mcp-server package on first use.
- No API key: DuckDuckGo HTML search is used, so no key or env vars are needed.
- Customizing: to change the command or args, edit mcp_servers/servers.json. The web_search agent loads the ddg-search entry from that file.
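Given an entry like the ones above, a stdio-transport MCP client ultimately spawns the configured command. A small sketch of that mapping (the helper name is hypothetical, not part of the project):

```python
# Hypothetical helper: turn a servers.json entry into the command line an
# MCP client would spawn for a stdio transport. Entry shape matches the
# examples in this section ("command", "args", "transport").
def stdio_command(entry: dict) -> list[str]:
    """Full argv for a stdio-transport MCP server entry."""
    if entry.get("transport") != "stdio":
        raise ValueError("this sketch only covers stdio transports")
    return [entry["command"], *entry.get("args", [])]
```

For the ddg-search entry above, this yields the argv ["uvx", "duckduckgo-mcp-server"].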
License
This project is for educational and authorized testing only. Only scan systems you own or have permission to test.
See the LICENSE file for terms. Parts of this project may use code under the Apache License 2.0 and other open-source licenses (e.g. A2A, LangGraph, MCP, FastMCP).