# A2A Retail Demo 🛍️

## 📋 Project Purpose
This project demonstrates the power of Google's Agent-to-Agent (A2A) protocol in building sophisticated multi-agent AI systems. It showcases how specialized AI agents can collaborate seamlessly to solve complex business problems in a retail context.
### Why This Matters
Traditional single-agent AI systems often struggle with diverse tasks that require different types of expertise. This demo illustrates how the A2A protocol enables:

- **Modular AI Architecture**: Each agent specializes in its domain (inventory management vs. customer service)
- **Intelligent Orchestration**: A host agent dynamically routes queries to the right specialist
- **Parallel Processing**: Complex queries are handled by multiple agents simultaneously
- **Real-world Integration**: Demonstrates integration with Google's Vertex AI Search for production-ready capabilities

This approach mirrors how human organizations work: specialists handle their own areas of expertise while collaborating on complex tasks.
## 🚀 Features

- **Multi-Agent Architecture**: Three specialized agents working in harmony
  - **Host Agent**: Intelligent query routing and orchestration
  - **Inventory Agent**: Product search and stock management powered by Vertex AI Search
  - **Customer Service Agent**: Handles store info, policies, and general inquiries
- **Advanced Capabilities**:
  - Semantic product search using Vertex AI Search
  - Real-time inventory tracking
  - Parallel agent execution for complex queries
  - Streaming responses for better UX
  - A2A protocol implementation for agent communication
## 🏗️ Architecture

```text
Frontend  →  Host Agent  →  Inventory Agent
(Mesop)      (Port 8000)     (Port 8001)
   ↓                         - Vertex AI Search
   ↓                         - ADK Framework
   ↓
   └──────────→  Customer Service Agent
                 (Port 8002)
                 - LangGraph
                 - Gemini Model
```
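To make the routing idea concrete, here is an illustrative keyword-based router. This is a hypothetical sketch, not the project's actual host-agent code (the real host agent performs more sophisticated routing via ADK); the endpoint names and keyword set are assumptions, while the ports match the diagram above.

```python
import re

# Hypothetical endpoints matching the ports in the architecture diagram.
AGENT_ENDPOINTS = {
    "inventory": "http://localhost:8001",
    "customer_service": "http://localhost:8002",
}

# Illustrative keyword set; the actual host agent routes queries intelligently.
INVENTORY_KEYWORDS = {"stock", "price", "product", "available", "inventory"}

def route(query: str) -> str:
    """Return the name of the agent a query should be forwarded to."""
    words = set(re.findall(r"[a-z]+", query.lower()))
    return "inventory" if words & INVENTORY_KEYWORDS else "customer_service"

print(route("Do you have this product in stock?"))  # inventory
print(route("What are your store hours?"))          # customer_service
```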
## 📋 Prerequisites

- Python 3.11+
- Google Cloud Project with:
  - Vertex AI Search API enabled
  - Gemini API access
  - Application Default Credentials configured
- Environment variables:
  - `GOOGLE_API_KEY`: Your Gemini API key
  - `VERTEX_SEARCH_SERVING_CONFIG`: Your Vertex AI Search serving config path
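A minimal preflight check for these two variables can be sketched as follows (a standalone illustration; the project's own `make check-setup` target is the authoritative verification):

```python
import os

REQUIRED_VARS = ("GOOGLE_API_KEY", "VERTEX_SEARCH_SERVING_CONFIG")

def missing_env_vars(env=os.environ) -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

missing = missing_env_vars()
if missing:
    print("Missing required environment variables:", ", ".join(missing))
else:
    print("All required environment variables are set.")
```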
## 🔐 Google Cloud Authentication

### Setting up Authentication

1. **Install the Google Cloud CLI** (if not already installed):

   ```bash
   # macOS
   brew install google-cloud-sdk

   # Linux/WSL
   curl https://sdk.cloud.google.com | bash
   ```

2. **Authenticate with Google Cloud:**

   ```bash
   # Log in to your Google account
   gcloud auth login

   # Set your project
   gcloud config set project YOUR_PROJECT_ID

   # Set up Application Default Credentials
   gcloud auth application-default login
   ```

3. **Enable the required APIs:**

   ```bash
   # Enable Vertex AI Search
   gcloud services enable discoveryengine.googleapis.com

   # Enable other required APIs
   gcloud services enable aiplatform.googleapis.com
   ```
### Important Authentication Notes

- **Application Default Credentials (ADC)**: The `gcloud auth application-default login` command creates credentials that applications can use to authenticate as your user account
- **Service Account (Production)**: For production deployments, use a service account with appropriate permissions instead of user credentials
- **Credentials Location**: ADC credentials are stored at:
  - macOS/Linux: `~/.config/gcloud/application_default_credentials.json`
  - Windows: `%APPDATA%\gcloud\application_default_credentials.json`
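The credentials location can also be resolved programmatically; here is a minimal cross-platform sketch (honoring the standard `GOOGLE_APPLICATION_CREDENTIALS` override that Google client libraries recognize):

```python
import os
import sys
from pathlib import Path

def adc_path() -> Path:
    """Return the Application Default Credentials path for this OS."""
    # GOOGLE_APPLICATION_CREDENTIALS overrides the default location if set.
    override = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if override:
        return Path(override)
    if sys.platform == "win32":
        return Path(os.environ["APPDATA"]) / "gcloud" / "application_default_credentials.json"
    return Path.home() / ".config" / "gcloud" / "application_default_credentials.json"

print(adc_path())
```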
## 📊 Setting Up Vertex AI Search
Before running the demo, you need to populate the Vertex AI Search engine with inventory data. The inventory agent relies on this search data to function properly.
### Quick Setup

1. **Generate and upload inventory data:**

   ```bash
   # Navigate to the utils directory
   cd backend/utils

   # Follow the detailed instructions in the script
   python generate_inventory_jsonl.py --help
   ```

2. **Detailed instructions:**
   - See `backend/utils/generate_inventory_jsonl.py` for complete setup instructions
   - The script includes detailed usage examples and authentication steps
   - It will generate sample products with embeddings and upload them to your Vertex AI Search data store

3. **Get your serving config:** After setting up the data store, you'll need the serving config path for your `.env` file, in the format:

   ```text
   projects/{project}/locations/{location}/collections/{collection}/dataStores/{datastore}/servingConfigs/{config}
   ```
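A quick way to sanity-check the serving config string before committing it to `.env` is a format check against the pattern above (a standalone sketch; the project and data-store names in the example are hypothetical):

```python
import re

# Matches the resource-path format expected in VERTEX_SEARCH_SERVING_CONFIG.
SERVING_CONFIG_RE = re.compile(
    r"^projects/[^/]+/locations/[^/]+/collections/[^/]+"
    r"/dataStores/[^/]+/servingConfigs/[^/]+$"
)

def is_valid_serving_config(path: str) -> bool:
    """Return True if the path matches the serving-config resource format."""
    return SERVING_CONFIG_RE.match(path) is not None

example = (
    "projects/my-project/locations/global/collections/default_collection"
    "/dataStores/retail-inventory/servingConfigs/default_config"
)
print(is_valid_serving_config(example))              # True
print(is_valid_serving_config("projects/my-project"))  # False
```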
## 🛠️ Installation

1. **Clone the repository:**

   ```bash
   git clone https://github.com/abdulzedan/a2a-retail-demo.git
   cd a2a-retail-demo
   ```

2. **Set up the environment:**

   ```bash
   make setup
   ```

   This will:
   - Create a virtual environment
   - Install all dependencies
   - Create a `.env` file from the example

3. **Configure environment variables:**

   ```bash
   # Edit .env file
   vim .env

   # Add your configurations:
   GOOGLE_API_KEY=your-gemini-api-key
   VERTEX_SEARCH_SERVING_CONFIG=projects/YOUR_PROJECT/locations/YOUR_LOCATION/collections/default_collection/dataStores/YOUR_DATASTORE/servingConfigs/default_config
   ```

4. **Verify the setup:**

   ```bash
   make check-setup
   ```
## 🚀 Running the Demo

### Quick Start

```bash
make start
```

This starts all agents and the frontend automatically.

### Individual Components

```bash
# Start specific agents
make start-host               # Host agent on port 8000
make start-inventory          # Inventory agent on port 8001
make start-customer-service   # Customer service agent on port 8002
make start-frontend           # Frontend on port 8080
```

### Access the Application

Open your browser to: http://localhost:8080
## 🧪 Testing

```bash
# Run all tests
make test

# Run specific test suites
make test-unit          # Unit tests only
make test-integration   # Integration tests
make test-coverage      # With coverage report

# Test A2A communication
make test-a2a
```
## 📁 Project Structure

```text
a2a-retail-demo/
├── backend/
│   ├── agents/
│   │   ├── host_agent/              # Orchestrator using ADK
│   │   ├── inventory_agent_a2a/     # Inventory with Vertex Search
│   │   └── customer_service_a2a/    # Customer service with LangGraph
│   ├── utils/
│   │   └── vector_search_store.py   # Vertex AI Search integration
│   └── tests/                       # Test suites
├── frontend/
│   └── app.py                       # Mesop UI application
├── scripts/
│   ├── start_a2a_demo.sh            # Startup script
│   └── test_a2a_agents.py           # A2A testing utility
├── .env.example                     # Environment template
├── Makefile                         # Task automation
└── requirements.txt                 # Python dependencies
```
## 🔧 Development

### Code Quality

```bash
# Lint code
make lint

# Format code
make format
```

### Adding New Agents

1. Create a new agent directory under `backend/agents/`
2. Implement the A2A protocol interface
3. Register with the host agent
4. Add tests

### Debugging

- Check agent logs in the console
- Use `make check-setup` to verify configuration
- Test individual agents with `scripts/test_a2a_agents.py`
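A new agent's first job is describing itself to the host. The sketch below shows a hypothetical agent-card payload for an added agent; the agent name, port, and skill are invented for illustration, and the exact field set is governed by the A2A specification rather than this example.

```python
import json

# Hypothetical agent card for a new "returns" agent. Real A2A agent cards
# are served by the agent itself and must follow the A2A specification.
agent_card = {
    "name": "returns_agent",         # hypothetical new agent
    "description": "Handles product return and refund requests",
    "url": "http://localhost:8003",  # next free port after the existing agents
    "version": "0.1.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "process_return",
            "name": "Process Return",
            "description": "Start a return for a purchased item",
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```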
## 🤝 Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Run tests and linting
5. Submit a pull request
## 📝 Troubleshooting

### Common Issues

- **"VERTEX_SEARCH_SERVING_CONFIG not set"**
  - Ensure you've configured the `.env` file
  - Format: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{datastore}/servingConfigs/{config}`
- **Authentication errors**
  - Run `gcloud auth application-default login`
  - Ensure your project has the necessary APIs enabled
- **Import errors**
  - Verify you're using Python 3.11+
  - Run `make setup` to install dependencies
- **Agents not responding**
  - Check if all agents are running: `make check-setup`
  - Verify ports 8000-8002 are not in use
## 📜 License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
## 🙏 Acknowledgments
- Google's A2A Protocol and ADK teams
- Vertex AI Search team
- LangGraph and LangChain communities
Built with ❤️ using Google's Agent-to-Agent protocol