
Ziran


by taoq-ai

Ziran uses advanced attack methodologies including multi-phase trust exploitation and knowledge graph analysis to uncover vulnerabilities in AI agents before attackers do.

2 stars · Updated 2026-02-14 · Apache-2.0

Getting Started

1. Clone the repository
   $ git clone https://github.com/taoq-ai/ziran
2. Navigate to the project
   $ cd ziran
3. Install dependencies
   $ pip install -r requirements.txt
4. Run the agent
   $ python main.py

Or connect to the hosted endpoint: https://taoq-ai.github.io/ziran/


ZIRAN 🧘

AI Agent Security Testing

Requires Python 3.11+.

Find vulnerabilities in AI agents — not just LLMs, but agents with tools, memory, and multi-step reasoning.

ZIRAN Demo

Install · Quick Start · Examples · Docs


Why ZIRAN?

Most security tools test individual prompts or tools in isolation. ZIRAN discovers how tool combinations create attack paths — an agent with read_file and http_request has a critical data exfiltration vulnerability, even if neither tool is dangerous alone.
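The idea can be sketched in a few lines. This is illustrative only: ZIRAN builds a NetworkX graph and its real pattern set is far larger, but a plain adjacency dict keeps the sketch self-contained.

```python
# Illustrative sketch of graph-based tool-chain discovery. Tools are nodes;
# an edge means one tool's output can plausibly feed the other's input.
# A dangerous chain is any path from a data source to an external sink.
EDGES = {
    "read_file": ["http_request"],    # file contents -> outbound request
    "sql_query": ["execute_code"],    # query results  -> code execution
    "http_request": [],
    "execute_code": [],
}
SOURCES = ["read_file", "sql_query"]        # tools that read sensitive data
SINKS = {"http_request", "execute_code"}    # tools with external side effects

def chains_from(tool, path=()):
    """Depth-first walk collecting every source -> sink path."""
    path = path + (tool,)
    if tool in SINKS:
        yield list(path)
    for nxt in EDGES.get(tool, []):
        if nxt not in path:                 # avoid cycles
            yield from chains_from(nxt, path)

dangerous = [chain for src in SOURCES for chain in chains_from(src)]
print(dangerous)
# [['read_file', 'http_request'], ['sql_query', 'execute_code']]
```

Neither `read_file` nor `http_request` is flagged on its own; the finding only exists because a path connects them.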

Capability ZIRAN Promptfoo Invariant (Snyk) Garak PyRIT Inspect AI
Tool chain discovery (graph-based) Yes Policy-based
Side-effect detection (execution-level) Yes Trace-based Sandbox
Multi-phase campaigns w/ graph feedback Yes Turn-level Flow analysis Composable Multi-turn
Autonomous pentesting agent Yes
Multi-agent coordination Yes
Knowledge graph tracking Yes Policy lang.
Agent-aware (tools + memory) Yes Partial Yes Partial
A2A protocol support Yes
MCP protocol support Yes Partial Yes
Encoding/obfuscation attacks Yes (12+)
Industry compliance plugins Yes (46)
Streaming (SSE/WebSocket) Yes
CI/CD quality gate Yes Yes
Open source Apache-2.0 MIT Partial Apache-2.0 MIT MIT

Key differentiators:

  • Tool Chain Discovery — Automatically detects dangerous tool combinations via NetworkX graph analysis (read_file → http_request = data exfiltration). Discovery-based, not policy-based — finds what you didn't know to look for.
  • Side-Effect Detection — Catches when agents refuse in text but execute dangerous tools anyway. Priority-based conflict resolution between detectors gives execution-level visibility that text-only evaluation misses.
  • Multi-Phase Campaigns with Knowledge Graph Feedback — 8-phase trust exploitation where each phase updates a live knowledge graph, and results from phase N inform attack selection in phase N+1.
  • Autonomous Pentesting Agent — An LLM-driven agent that plans, executes, and adapts attack campaigns autonomously, with finding deduplication and interactive red-team mode.
  • Multi-Agent Coordination — Discovers topologies (supervisor, router, peer-to-peer) and tests cross-agent trust boundaries and delegation patterns.
  • A2A + MCP Protocol Depth — First security tool to test Agent-to-Agent agents, including Agent Card discovery, task lifecycle attacks, and multi-turn manipulation.
  • Framework Agnostic — LangChain, CrewAI, Bedrock, MCP, browser-based chat UIs, remote HTTPS agents, or write your own adapter.
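As a rough sketch of what writing your own adapter involves: the `AgentAdapter` base class and method names below are assumptions for illustration, not ZIRAN's published interface (see the LangChainAdapter import path in the Python API example for the real module layout).

```python
# Hypothetical adapter sketch. The interface below is an assumption for
# illustration: an adapter only needs to expose the wrapped agent's tools
# and forward attack messages to it.
class AgentAdapter:
    """Minimal contract a framework adapter might satisfy."""
    def list_tools(self) -> list[str]:
        raise NotImplementedError
    def send(self, message: str) -> str:
        raise NotImplementedError

class MyFrameworkAdapter(AgentAdapter):
    """Wraps an agent object from your framework of choice."""
    def __init__(self, agent):
        self.agent = agent
    def list_tools(self) -> list[str]:
        # expose tool names so the scanner can build its knowledge graph
        return [tool.name for tool in self.agent.tools]
    def send(self, message: str) -> str:
        # forward an attack message and return the agent's reply
        return self.agent.run(message)
```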

What ZIRAN Is / What ZIRAN Is Not

ZIRAN is an agent security scanner that discovers dangerous tool compositions via graph analysis, detects execution-level side effects, and runs multi-phase campaigns that model real attacker behavior.

ZIRAN is not:

  • An LLM safety/alignment tool — for prompt injection breadth, jailbreak templates, encoding attacks, and compliance testing, use Promptfoo or Garak
  • A runtime guardrail — for real-time input/output protection, use NeMo Guardrails, Lakera Guard, or LLM Guard
  • A general-purpose eval framework — for model evaluation and benchmarking, use Inspect AI or Deepeval

Works With

ZIRAN is complementary to other tools in the AI security ecosystem:

  • Promptfoo for attack breadth (encoding strategies, jailbreak templates, compliance plugins) + ZIRAN for agent depth (tool chains, side-effects, campaigns)
  • Garak for LLM-layer vulnerability scanning + ZIRAN for agent-layer tool chain analysis
  • NeMo Guardrails / Lakera for runtime protection + ZIRAN for pre-deployment testing

Install

pip install ziran

# with framework adapters
pip install ziran[langchain]    # LangChain support
pip install ziran[crewai]       # CrewAI support
pip install ziran[a2a]          # A2A protocol support
pip install ziran[streaming]    # SSE/WebSocket streaming
pip install ziran[pentest]      # autonomous pentesting agent
pip install ziran[all]          # everything

Quick Start

CLI

# scan a LangChain agent (in-process)
ziran scan --framework langchain --agent-path my_agent.py

# scan a remote agent over HTTPS
ziran scan --target target.yaml

# adaptive campaign with LLM-driven strategy
ziran scan --target target.yaml --strategy llm-adaptive

# stream responses in real-time
ziran scan --target target.yaml --streaming

# scan a multi-agent system
ziran multi-agent-scan --target target.yaml

# discover capabilities of a remote agent
ziran discover --target target.yaml

# autonomous pentesting agent
ziran pentest --target target.yaml

# interactive red-team mode
ziran pentest --target target.yaml --interactive

# view the interactive HTML report
open reports/campaign_*_report.html

Python API

import asyncio
from ziran.application.agent_scanner.scanner import AgentScanner
from ziran.application.attacks.library import AttackLibrary
from ziran.infrastructure.adapters.langchain_adapter import LangChainAdapter

adapter = LangChainAdapter(agent=your_agent)
scanner = AgentScanner(adapter=adapter, attack_library=AttackLibrary())

result = asyncio.run(scanner.run_campaign())
print(f"Vulnerabilities found: {result.total_vulnerabilities}")
print(f"Dangerous tool chains: {len(result.dangerous_tool_chains)}")

See examples/ for 19 runnable demos — from static analysis to autonomous pentesting.


Remote Agent Scanning

ZIRAN can test any published agent over HTTPS — no source code or in-process access required. Define your target in a YAML file and ZIRAN handles the rest:

# target.yaml
name: my-agent
url: https://agent.example.com
protocol: auto  # auto | rest | openai | mcp | a2a

auth:
  type: bearer
  token_env: AGENT_API_KEY

tls:
  verify: true

Supported protocols:

┌───────────────────┬──────────────────────────────────────────────┬─────────────────────────┐
│ Protocol          │ Use Case                                     │ Auto-detected via       │
├───────────────────┼──────────────────────────────────────────────┼─────────────────────────┤
│ REST              │ Generic HTTP endpoints                       │ Fallback default        │
│ OpenAI-compatible │ Chat completions API (/v1/chat/completions)  │ Path probing            │
│ MCP               │ Model Context Protocol agents (JSON-RPC 2.0) │ JSON-RPC response       │
│ A2A               │ Google Agent-to-Agent protocol               │ /.well-known/agent.json │
└───────────────────┴──────────────────────────────────────────────┴─────────────────────────┘

# auto-detect protocol and scan
ziran scan --target target.yaml

# force a specific protocol
ziran scan --target target.yaml --protocol openai

# A2A agent with Agent Card discovery
ziran scan --target a2a_target.yaml --protocol a2a

See examples/15-remote-agent-scan/ for ready-to-use target configurations.
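The detection order implied by the table can be sketched as follows. This is illustrative: the probe mechanics and the function name are assumptions, not ZIRAN's code; the point is that the most specific protocol wins and REST is the fallback.

```python
# Illustrative protocol auto-detection, given probe results for a base URL.
# Most specific signal first, generic REST as the fallback.
def detect_protocol(has_agent_card: bool, answers_jsonrpc: bool,
                    has_chat_completions: bool) -> str:
    if has_agent_card:          # /.well-known/agent.json found -> A2A
        return "a2a"
    if answers_jsonrpc:         # JSON-RPC 2.0 response -> MCP
        return "mcp"
    if has_chat_completions:    # /v1/chat/completions exists -> OpenAI-compatible
        return "openai"
    return "rest"               # generic fallback

print(detect_protocol(False, False, True))  # openai
```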


What ZIRAN Finds

Prompt-level — injection, system prompt extraction, memory poisoning, chain-of-thought manipulation.

Tool-level — tool manipulation, privilege escalation, data exfiltration chains.

Tool chains (unique to ZIRAN) — automatic graph analysis of dangerous tool compositions:

┌──────────┬─────────────────────┬─────────────────────────────┬──────────────────────────────────────┐
│ Risk     │ Type                │ Tools                       │ Description                          │
├──────────┼─────────────────────┼─────────────────────────────┼──────────────────────────────────────┤
│ critical │ data_exfiltration   │ read_file → http_request    │ File contents sent to external server│
│ critical │ sql_to_rce          │ sql_query → execute_code    │ SQL results executed as code         │
│ high     │ pii_leakage         │ get_user_info → external_api│ User PII sent to third-party API     │
└──────────┴─────────────────────┴─────────────────────────────┴──────────────────────────────────────┘

How It Works

flowchart LR
    subgraph agent["🤖 Your Agent"]
        direction TB
        T["🔧 Tools"]
        M["🧠 Memory"]
        P["🔑 Permissions"]
    end

    agent -->|"adapter layer"| D

    subgraph ziran["⛩️ ZIRAN Pipeline"]
        direction TB
        D["1 · DISCOVER\nProbe tools, permissions,\ndata access"]
        MAP["2 · MAP\nBuild knowledge graph\n(NetworkX MultiDiGraph)"]
        A["3 · ANALYZE\nWalk graph for dangerous\nchains (30+ patterns)"]
        ATK["4 · ATTACK\nMulti-phase exploits\ninformed by the graph"]
        R["5 · REPORT\nScored findings with\nremediation guidance"]
        D --> MAP --> A --> ATK --> R
    end

    R --> HTML["📊 HTML\nInteractive graph"]
    R --> MD["📝 Markdown\nCI/CD tables"]
    R --> JSON["📦 JSON\nMachine-parseable"]

    style agent fill:#1a1a2e,stroke:#e94560,color:#fff,stroke-width:2px
    style ziran fill:#0f3460,stroke:#e94560,color:#fff,stroke-width:2px
    style D fill:#16213e,stroke:#0ea5e9,color:#fff
    style MAP fill:#16213e,stroke:#0ea5e9,color:#fff
    style A fill:#16213e,stroke:#0ea5e9,color:#fff
    style ATK fill:#16213e,stroke:#e94560,color:#fff
    style R fill:#16213e,stroke:#10b981,color:#fff
    style HTML fill:#1e293b,stroke:#10b981,color:#fff
    style MD fill:#1e293b,stroke:#10b981,color:#fff
    style JSON fill:#1e293b,stroke:#10b981,color:#fff
    style T fill:#2d2d44,stroke:#e94560,color:#fff
    style M fill:#2d2d44,stroke:#e94560,color:#fff
    style P fill:#2d2d44,stroke:#e94560,color:#fff

Multi-Phase Trust Exploitation

┌─────────────────────────┬──────────────────────────────────────────┐
│ Phase                   │ Goal                                     │
├─────────────────────────┼──────────────────────────────────────────┤
│ Reconnaissance          │ Discover capabilities and data sources   │
│ Trust Building          │ Establish rapport with the agent         │
│ Capability Mapping      │ Map tools, permissions, data access      │
│ Vulnerability Discovery │ Identify attack paths                    │
│ Exploitation Setup      │ Position without triggering defences     │
│ Execution               │ Execute the exploit chain                │
│ Persistence             │ Maintain access across sessions (opt-in) │
│ Exfiltration            │ Extract sensitive data (opt-in)          │
└─────────────────────────┴──────────────────────────────────────────┘

Each phase builds on the knowledge graph from previous phases.
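A minimal sketch of that feedback loop, with hypothetical phase functions and a plain dict standing in for the knowledge graph:

```python
# Illustrative phase feedback: each phase writes into a shared knowledge
# store, and later phases read it to decide what to attack.
knowledge = {"tools": set(), "findings": []}

def reconnaissance(kg):
    # phase 1: record discovered capabilities
    kg["tools"].update({"read_file", "http_request"})

def vulnerability_discovery(kg):
    # phase 4: only flag the exfiltration chain if reconnaissance
    # actually observed both tools on this agent
    if {"read_file", "http_request"} <= kg["tools"]:
        kg["findings"].append("data_exfiltration: read_file -> http_request")

for phase in (reconnaissance, vulnerability_discovery):
    phase(knowledge)

print(knowledge["findings"])
```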

Campaign Strategies

Strategy Description
fixed Sequential phases in order (default)
adaptive Rule-based adaptation — skips, repeats, or re-orders phases based on knowledge graph state
llm-adaptive LLM-driven strategy — uses an LLM to analyze findings and plan the next phase dynamically

ziran scan --target target.yaml --strategy adaptive
ziran scan --target target.yaml --strategy llm-adaptive

Autonomous Pentesting Agent

An LLM-powered agent that autonomously plans, executes, and adapts penetration testing campaigns:

# fully autonomous mode
ziran pentest --target target.yaml --max-iterations 5

# interactive red-team mode — collaborate with the agent
ziran pentest --target target.yaml --interactive

The pentesting agent:

  • Plans attack strategies using LLM reasoning and knowledge graph state
  • Executes multi-step exploit chains with real-time adaptation
  • Deduplicates findings using LLM embeddings to cluster related vulnerabilities
  • Reports with detailed HTML reports including OWASP LLM Top 10 mapping

See examples/19-pentesting-agent/ for a complete walkthrough.
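The deduplication step can be sketched with cosine similarity over embeddings. The 3-d vectors below are stand-ins for real LLM embeddings, and the 0.95 threshold is arbitrary; the mechanism (greedy clustering against one representative per cluster) is the point.

```python
# Illustrative embedding-based dedup: findings whose embeddings are nearly
# parallel to an existing cluster's representative are merged into it.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

findings = [
    ("prompt injection via footer", [0.9, 0.1, 0.0]),
    ("prompt injection via header", [0.88, 0.12, 0.01]),
    ("sql-to-rce chain",            [0.0, 0.2, 0.95]),
]

clusters: list[list[str]] = []
reps: list[list[float]] = []          # one representative embedding per cluster
for text, emb in findings:
    for i, rep in enumerate(reps):
        if cosine(emb, rep) > 0.95:   # duplicate of an existing cluster
            clusters[i].append(text)
            break
    else:
        clusters.append([text])       # no close cluster: start a new one
        reps.append(emb)

print(len(clusters))  # 2 clusters from 3 findings
```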

Multi-Agent Scanning

Test coordinated multi-agent systems — supervisors, routers, peer-to-peer networks:

ziran multi-agent-scan --target target.yaml

ZIRAN discovers the agent topology, scans each agent individually, then runs cross-agent attacks targeting trust boundaries and delegation patterns.

Streaming

Monitor attack responses in real-time via SSE or WebSocket:

ziran scan --target target.yaml --streaming

Reports

Three output formats, generated automatically:

  • HTML — Interactive knowledge graph with attack path highlighting
  • Markdown — CI/CD-friendly summary tables
  • JSON — Machine-parseable for programmatic consumption

CI/CD Integration

Use ZIRAN as a quality gate in your pipeline:

Live scan (runs the full attack suite against your agent)

# .github/workflows/security.yml
- uses: taoq-ai/ziran@v0
  with:
    command: scan
    framework: langchain        # langchain | crewai | bedrock
    agent-path: my_agent.py     # OR use target: target.yaml for remote agents
    coverage: standard           # essential | standard | comprehensive
    gate-config: gate_config.yaml
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}   # or ANTHROPIC_API_KEY, etc.

Offline CI gate (evaluate a previous scan result)

- uses: taoq-ai/ziran@v0
  with:
    command: ci
    result-file: scan_results/campaign_report.json
    gate-config: gate_config.yaml

Outputs: status (passed/failed), trust-score, total-findings, critical-findings, sarif-file.

See the full example workflow or use the Python API.
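A sketch of what an offline gate evaluates. The report schema and gate-config keys below are assumptions for illustration, not ZIRAN's actual formats; the gate fails the build when findings exceed configured thresholds.

```python
# Illustrative quality-gate evaluation over a prior scan result.
# Field names are hypothetical stand-ins for the real report schema.
report = {"trust_score": 72, "findings": [
    {"severity": "critical"}, {"severity": "high"}, {"severity": "low"},
]}
gate = {"min_trust_score": 70, "max_critical": 0}

def evaluate_gate(report: dict, gate: dict) -> str:
    criticals = sum(1 for f in report["findings"] if f["severity"] == "critical")
    if criticals > gate["max_critical"]:
        return "failed"                     # hard stop on critical findings
    if report["trust_score"] < gate["min_trust_score"]:
        return "failed"                     # overall score below threshold
    return "passed"

print(evaluate_gate(report, gate))  # failed (1 critical > max 0)
```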


Development

git clone https://github.com/taoq-ai/ziran.git && cd ziran
uv sync --group dev

uv run ruff check .            # lint
uv run mypy ziran/             # type-check
uv run pytest --cov=ziran      # test

Contributing

See CONTRIBUTING.md for ways to help.


License

Apache License 2.0 — See NOTICE for third-party attributions.

Capabilities

Streaming · Push Notifications · Multi-Turn · Auth: none

Topics: agent-security, ai-security, crewai, langchain, llm-security, mcp, owasp, penetration-testing, red-teaming
