
Multi Agent ADK


by TrongNV2003

A multi-agent e-commerce system built with Google ADK and MCP. Features microservice architecture with A2A protocol communication, enabling customer consultation, real-time inventory lookup, and automated order processing.

1 star · Updated 2025-12-15

Getting Started

1. Clone the repository
$ git clone https://github.com/TrongNV2003/Multi-Agent-ADK
2. Navigate to the project
$ cd Multi-Agent-ADK
3. Install dependencies
$ pip install -r requirements.txt
4. Run the agent
$ python main.py

README

Multi-Agent System with MCP SSE Server

Python 3.11+ · License: MIT

A multi-agent system for e-commerce consultation and order processing, powered by Google ADK and the Model Context Protocol (MCP) with a Server-Sent Events (SSE) architecture.

  • A2A Pipeline – The A2A pipeline follows the microservice architecture recommended by Google ADK: every specialist agent is deployed as its own uvicorn service via to_a2a(), and the orchestrator talks to them over the official A2A protocol using RemoteA2aAgent.

🎯 Overview

This system implements a sales agent using a sequential pattern in which multiple agents communicate over the A2A protocol and collaborate to:

  • Analyze customer inquiries and product requirements
  • Check real-time inventory availability and pricing
  • Process and persist customer orders
  • Provide natural language consultation

The agents communicate with an MCP SSE server that interfaces with MongoDB for product inventory and order management.
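
The data model behind that server can be pictured with a couple of plain documents. The sketch below shows a hypothetical shape for the product collection and an in-memory availability lookup; the field names (`storage`, `color`, `stock`) are assumptions, not the repository's actual schema.

```python
# Hypothetical product documents and an in-memory inventory check.
# Field names are illustrative; the repository's MongoDB schema may differ.

def check_inventory(products, name, storage=None, color=None):
    """Return products matching the query that are in stock."""
    matches = []
    for p in products:
        if name.lower() not in p["name"].lower():
            continue
        if storage and p.get("storage") != storage:
            continue
        if color and p.get("color") != color:
            continue
        if p.get("stock", 0) > 0:
            matches.append(p)
    return matches

products = [
    {"name": "iPhone 15", "storage": "128GB", "color": "black", "price": 799, "stock": 4},
    {"name": "iPhone 15", "storage": "256GB", "color": "blue", "price": 899, "stock": 0},
]

print(check_inventory(products, "iphone 15", storage="128GB"))
```

In the real system the same query shape is sent to the MCP tools, which run it against MongoDB instead of a Python list.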

🏗️ Architecture

Agent Pipelines

Multi-Agent Workflow

(workflow diagram)

Microservice Agent (A2A) Pattern

(architecture diagram)

Each agent runs in a separate uvicorn microservice, generating agent cards at /.well-known/agent-card.json.

The Orchestrator uses RemoteA2aAgent to send the A2A payload to each service, and the agent itself is responsible for calling the MCP tools.

Core Agents

  1. Intent Classification Agent: Parses customer intent and extracts product details
  2. Storage Management Agent: Extracts product parameters; in A2A mode the handler calls check_inventory_detail
  3. Order Placement Agent: Prepares order payload; in A2A mode the handler calls create_customer_order
  4. Consultant Agent: Generates natural language responses for customers
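
The sequential hand-off between these four agents can be sketched with each remote service mocked as a plain function. All names and payloads below are illustrative stand-ins, not the repository's code; in the real system each step is a uvicorn service reached over A2A.

```python
# Minimal sketch of the sequential four-agent pipeline, with each remote
# agent mocked as a local function (illustrative only).

def intent_agent(message):
    # Would call an LLM to parse intent; faked here.
    return {"intent": "buy", "product": "iPhone 15", "storage": "128GB"}

def storage_agent(intent):
    # In A2A mode the handler calls the MCP tool check_inventory_detail.
    return {"available": True, "price": 799, **intent}

def order_agent(inventory):
    # In A2A mode the handler calls create_customer_order.
    return {"order_id": "ORD-001", "status": "created", **inventory}

def consultant_agent(order):
    # Turns the structured result into a customer-facing reply.
    return f"Your order {order['order_id']} for the {order['product']} is {order['status']}."

def run_pipeline(message):
    # The orchestrator forwards each step's output to the next agent.
    return consultant_agent(order_agent(storage_agent(intent_agent(message))))

print(run_pipeline("I want an iPhone 15, 128GB"))
```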

Technology Stack

  • Agent Framework: Google ADK (Agent Development Kit)
  • LLM Integration: LiteLLM with vLLM backend
  • Protocol: Model Context Protocol (MCP) with SSE transport
  • Database: MongoDB for inventory and order storage
  • UI: Streamlit for interactive chat interface
  • Containerization: Docker & Docker Compose
  • Agent-to-Agent: Agent Card registry with tool handlers invoking MCP directly

🚀 Features

  • A2A Communication: AgentRegistry + RemoteA2aAgent act as the phone book/client to the remote microservices (each service exposes an agent card via to_a2a).
  • Deterministic Tool Calls: Remote inventory/order services own the MCP calls, ensuring every request hits the SSE server the same way
  • MCP SSE Integration: Async communication with MCP server via Server-Sent Events
  • Real-time Inventory Lookup: Query product availability, pricing, and stock quantities
  • Order Management: Create, persist, and track customer orders
  • Robust Parsing: Resilient JSON/function-call extraction with structured fallbacks
  • Session Management: Maintain conversation context across multiple turns
  • Streamlit UI: User-friendly chat interface with agent traces and order cards
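
The "Robust Parsing" feature above can be illustrated with a minimal extractor: try strict JSON first, then the first braced block, then a structured fallback so downstream agents always receive a dict. This is a sketch of the general technique, not the repository's parser.

```python
import json
import re

def extract_json(text):
    """Pull a JSON object out of an LLM reply, with a structured fallback
    when parsing fails (illustrative, not the repo's implementation)."""
    # 1) Try the whole string as JSON.
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # 2) Try the first {...} span embedded in surrounding prose.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    # 3) Structured fallback so the pipeline never crashes on bad output.
    return {"error": "unparseable", "raw": text}

print(extract_json('Sure! {"intent": "buy", "product": "iPhone 15"} Anything else?'))
```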

📋 Prerequisites

  • Python 3.11+
  • Docker & Docker Compose (for containerized deployment)
  • vLLM server running at http://localhost:8000/v1 (or configure your own endpoint)

🛠️ Installation

Local Development

  1. Clone the repository
git clone <your-repo-url>
cd agentADK
  2. Create and activate a conda environment
conda create -n agentadk python=3.11
conda activate agentadk
  3. Install dependencies
pip install -r requirements.txt
  4. Start MongoDB
sudo systemctl start mongod
  5. Insert sample products into MongoDB
python -m src.db.insert_data
  6. Configure the environment (optional)
cp .env.example .env
# Edit .env with your settings
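
A minimal `.env` might look like the fragment below. Only the `A2A_*` port variables are referenced in this README; every other variable name here is an assumption, so check `.env.example` for the real keys.

```
# Illustrative .env; names other than the A2A_* ports are assumptions.
VLLM_API_BASE=http://localhost:8000/v1
MONGODB_URI=mongodb://localhost:27017
A2A_ANALYSIS_PORT=9101
A2A_INVENTORY_PORT=9102
A2A_ORDER_PORT=9103
A2A_CONSULTANT_PORT=9104
```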

Docker Deployment

  1. Build and run all services
docker-compose up --build

This will start:

  • MongoDB on localhost:27017
  • MCP SSE Server on localhost:8000
  • Streamlit UI on localhost:8501
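
A compose file wiring those three services together might look roughly like this; the service names, image, and commands are assumptions for illustration, not the repository's actual `docker-compose.yml`.

```yaml
# Illustrative docker-compose excerpt (names and commands are assumptions).
services:
  mongodb:
    image: mongo:7
    ports:
      - "27017:27017"
  mcp-server:
    build: .
    command: python mcp_server.py
    ports:
      - "8000:8000"
    depends_on:
      - mongodb
  ui:
    build: .
    command: streamlit run src/ui/app.py
    ports:
      - "8501:8501"
```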

🚀 Usage

Start MCP SSE Server

conda activate agentadk && python mcp_server.py

The MCP server exposes:

  • SSE endpoint: http://localhost:8000/sse
  • Tools:
    • get_product_info: Query inventory by product name, storage, color
    • create_order: Persist customer orders
    • get_order: Retrieve order details
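
Under the hood, MCP tool invocations are JSON-RPC 2.0 messages. The sketch below builds the `tools/call` request a client would send for `get_product_info` (message shape per the MCP spec; the argument names are illustrative assumptions).

```python
import json
from itertools import count

_ids = count(1)  # JSON-RPC request ids must be unique per session

def build_tool_call(tool_name, arguments):
    """Build the JSON-RPC 2.0 request an MCP client sends to invoke a tool.
    Argument keys below are illustrative, not the server's exact schema."""
    return {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

req = build_tool_call("get_product_info", {"product_name": "iPhone 15", "storage": "128GB"})
print(json.dumps(req, indent=2))
```

The MCP Python SDK constructs and transports these messages for you over the SSE endpoint; the sketch only shows what travels on the wire.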

Start Remote A2A Microservices

Run each agent in a separate terminal. The default ports can be changed via the A2A_* environment variables in .env.

Terminal 1 – Intent Classification Agent:

conda activate agentadk && python -m uvicorn src.a2a_services.analysis_agent:app --host 0.0.0.0 --port 9101

Terminal 2 – Storage Management Agent:

conda activate agentadk && python -m uvicorn src.a2a_services.inventory_agent:app --host 0.0.0.0 --port 9102

Terminal 3 – Order Placement Agent:

conda activate agentadk && python -m uvicorn src.a2a_services.order_agent:app --host 0.0.0.0 --port 9103

Terminal 4 – Consultant Agent:

conda activate agentadk && python -m uvicorn src.a2a_services.consultant_agent:app --host 0.0.0.0 --port 9104

Check Agent Card:

# Analysis Agent card
curl http://localhost:9101/.well-known/agent-card.json
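
The response is an A2A agent card. The fragment below shows roughly the shape defined by the A2A specification; all values here are illustrative, not this repository's actual card.

```json
{
  "name": "intent_classification_agent",
  "description": "Parses customer intent and extracts product details",
  "url": "http://localhost:9101",
  "version": "1.0.0",
  "capabilities": {
    "streaming": true,
    "pushNotifications": false
  },
  "defaultInputModes": ["text"],
  "defaultOutputModes": ["text"],
  "skills": []
}
```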

Run Multi-Agent Pipelines

Make sure the MCP SSE server and all A2A microservices are running before starting the pipeline.

conda activate agentadk && python main.py

Run Streamlit UI

conda activate agentadk && python -m streamlit run src/ui/app.py

Navigate to http://localhost:8501 in your browser.

UI Features:

  • Real-time chat with agent (uses Agent Card pipeline by default)
  • Order details display panel with MCP output
  • Agent trace expander showing per-agent JSON payloads
  • Session persistence across page refreshes

Acknowledgments

  • Google ADK: Agent framework and orchestration
  • Model Context Protocol (MCP): Standardized tool-calling protocol
  • LiteLLM: Unified LLM API interface
  • vLLM: High-performance inference server
  • Streamlit: UI prototyping

Output

(example output screenshot)
