A2A StoryLab
by dondetir
A2A StoryLab - Learn Google's A2A Protocol by Example
An educational demonstration of Google's Agent-to-Agent (A2A) protocol v1 through a multi-agent storytelling system. Three AI agents collaborate iteratively to adapt and enhance stories, showcasing real-world A2A communication patterns.
🎓 Educational Purpose: This project is designed to teach the A2A protocol through hands-on exploration. Not intended for production use.
📚 What You'll Learn
This demo teaches A2A protocol concepts through working code:
- ✅ A2A Message Envelopes - Standardized communication format
- ✅ Conversation Threading - Tracking related messages with `conversation_id`
- ✅ Message Chains - Linking requests and responses with `in_reply_to`
- ✅ Agent Identity - How agents identify themselves
- ✅ Agent Cards - Discovery endpoint at `/.well-known/agent.json`
- ✅ Task States - Official A2A lifecycle (SUBMITTED → WORKING → COMPLETED)
- ✅ Message Parts - Typed content (TextPart, DataPart, FilePart)
- ✅ Multi-Agent Orchestration - Coordinating multiple agents
- ✅ Message Audit Trail - Logging and debugging agent communication
- ✅ Iterative Workflows - Building feedback loops between agents
**What this is:** A learning tool demonstrating A2A protocol implementation.

**What this isn't:** A production-ready storytelling application.
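The task lifecycle listed above (SUBMITTED → WORKING → COMPLETED) can be sketched as a tiny state machine. The state names follow the A2A lifecycle; the transition table here is an illustrative simplification, not the project's actual implementation:

```python
from enum import Enum

class TaskState(Enum):
    """A2A task lifecycle states (subset used in this demo)."""
    SUBMITTED = "submitted"
    WORKING = "working"
    COMPLETED = "completed"
    FAILED = "failed"

# Illustrative transition table: which states may follow each state.
ALLOWED = {
    TaskState.SUBMITTED: {TaskState.WORKING},
    TaskState.WORKING: {TaskState.COMPLETED, TaskState.FAILED},
    TaskState.COMPLETED: set(),
    TaskState.FAILED: set(),
}

def advance(current: TaskState, nxt: TaskState) -> TaskState:
    """Move a task to the next state, rejecting illegal jumps."""
    if nxt not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.value} -> {nxt.value}")
    return nxt

state = TaskState.SUBMITTED
state = advance(state, TaskState.WORKING)
state = advance(state, TaskState.COMPLETED)
```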
🚀 Quick Start
Setup and Installation
```bash
# Clone the repository
git clone https://github.com/rdondeti/A2A-StoryLab.git
cd A2A-StoryLab

# 1. Prerequisites
# - Python 3.9+
# - Ollama with gemma3:1b model

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
ollama serve
ollama pull gemma3:1b

# 2. Set up the Python environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt

# 3. Start all services
./start_all.sh

# 4. Verify services are running
curl http://localhost:8000/health

# 5. Stop services when done
./stop_all.sh
```
Services running:
- 🎯 Orchestrator: http://localhost:8000
- ✏️ Creator Agent: http://localhost:8001
- 🔍 Critic Agent: http://localhost:8002
- 🧠 Ollama LLM: http://localhost:11434
Try the Demo
Send a story adaptation request:
```bash
curl -X POST http://localhost:8000/adapt-story \
  -H "Content-Type: application/json" \
  -d '{
    "base_story_id": "bear_loses_roar",
    "variation_request": "scientist who lost his formulas"
  }'
```
Watch what happens: The system demonstrates A2A protocol as agents collaborate through multiple iterations.
🏗️ Architecture: Educational Overview
System Design
This demo implements a three-agent system using A2A protocol for all communication:
```
┌─────────────────────────────────────────────────────────────────┐
│                          USER REQUEST                           │
│         "Adapt story as scientist who lost formulas"            │
└────────────────────────────┬────────────────────────────────────┘
                             │
                             ▼
              ┌────────────────────────────────────────┐
              │       ORCHESTRATOR (Port 8000)         │
              │       Coordinates the workflow         │
              │   Demonstrates: Agent coordination     │
              └──────┬─────────────────────┬───────────┘
                     │                     │
                     │ A2A Messages        │ A2A Messages
                     ▼                     ▼
              ┌──────────────┐      ┌──────────────┐
              │   CREATOR    │      │    CRITIC    │
              │ (Port 8001)  │      │ (Port 8002)  │
              │              │      │              │
              │ Demonstrates:│      │ Demonstrates:│
              │ • Requests   │      │ • Responses  │
              │ • Refinement │      │ • Feedback   │
              └──────┬───────┘      └──────┬───────┘
                     │                     │
                     └─────────┬───────────┘
                               ▼
                        ┌──────────────┐
                        │  OLLAMA LLM  │
                        │ (Port 11434) │
                        └──────────────┘
```
Agent Roles (Learning Purposes)
| Agent | Port | Educational Purpose |
|---|---|---|
| Orchestrator | 8000 | Demonstrates workflow coordination and multi-agent management |
| Creator | 8001 | Shows request handling and iterative refinement patterns |
| Critic | 8002 | Illustrates response generation and feedback provision |
| Ollama | 11434 | LLM backend (not part of A2A - just provides AI capability) |
A2A Message Flow Example
This is what you'll see in the logs (logs/a2a_messages.log):
```
conversation_id: "conv_abc123"  ← Links all messages in this workflow
│
├─ msg_001: Orchestrator → Creator (remix_story)
│   └─ msg_002: Creator → Orchestrator (story v1.0)
│       └─ msg_003: Orchestrator → Critic (evaluate_story)
│           └─ msg_004: Critic → Orchestrator (score: 6.5/10)
│
├─ msg_005: Orchestrator → Creator (refine_story)  ← in_reply_to: msg_004
│   └─ msg_006: Creator → Orchestrator (story v2.0)
│       └─ msg_007: Orchestrator → Critic (evaluate_story)
│           └─ msg_008: Critic → Orchestrator (score: 8.5/10 ✓)
│
└─ Complete: 8 messages demonstrate full A2A protocol features
```
Key Learning Points:
- Every message has a unique `message_id`
- All messages share the same `conversation_id`
- Responses link back via `in_reply_to`
- Complete audit trail for debugging
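Reconstructing a thread from the flat log boils down to grouping records by `conversation_id` and walking the `in_reply_to` chain. A standalone sketch with hypothetical log records shaped like the flow above:

```python
from collections import defaultdict

# Hypothetical flat log records, shaped like the message flow above.
log = [
    {"message_id": "msg_001", "conversation_id": "conv_abc123", "in_reply_to": None},
    {"message_id": "msg_002", "conversation_id": "conv_abc123", "in_reply_to": "msg_001"},
    {"message_id": "msg_003", "conversation_id": "conv_abc123", "in_reply_to": "msg_002"},
    {"message_id": "msg_099", "conversation_id": "conv_other", "in_reply_to": None},
]

# Group messages into threads by conversation_id.
threads = defaultdict(list)
for record in log:
    threads[record["conversation_id"]].append(record)

# Index by message_id so we can follow in_reply_to links backwards.
by_id = {r["message_id"]: r for r in log}

def chain(message_id: str) -> list:
    """Return the reply chain from the root message down to message_id."""
    path = []
    current = by_id.get(message_id)
    while current is not None:
        path.append(current["message_id"])
        current = by_id.get(current["in_reply_to"])
    return list(reversed(path))
```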
🧪 Testing & Exploring the System
Explore A2A Protocol in Action
1. Watch the message audit trail:
```bash
# Start services
./start_all.sh

# In another terminal, watch A2A messages in real time
tail -f logs/a2a_messages.log

# In another terminal, make a request
curl -X POST http://localhost:8000/adapt-story \
  -H "Content-Type: application/json" \
  -d '{"base_story_id": "bear_loses_roar", "variation_request": "test"}'
```
You'll see every A2A message logged with:
- Message ID
- Conversation ID
- Sender/Recipient
- Timestamp
- Message type
2. Run the test suite:
```bash
# See 190+ tests demonstrating A2A patterns
pytest tests/ -v

# Focus on A2A protocol tests
pytest tests/ -k "a2a" -v

# See A2A compliance tests
pytest tests/ -k "compliance" -v
```
3. Test individual components:
```bash
# Health checks (all agents respond)
curl http://localhost:8000/health
curl http://localhost:8001/health
curl http://localhost:8002/health

# Discover agent capabilities via Agent Card
curl http://localhost:8001/.well-known/agent.json
```
Learning Exercises
Exercise 1: Trace a Conversation
```bash
# Make a request and note the session_id in the response
curl -X POST http://localhost:8000/adapt-story ...

# Search logs for that conversation
grep "conv_SESSION_ID" logs/a2a_messages.log

# Count messages
grep "conv_SESSION_ID" logs/a2a_messages.log | wc -l
```
Exercise 2: Examine Message Structure
```bash
# View A2A message examples in the code
grep -A 20 "def create_request_message" src/common/a2a_utils.py
grep -A 20 "def create_response_message" src/common/a2a_utils.py
```
Exercise 3: Modify and Experiment
- Change the quality threshold in `src/orchestrator/main.py` (line ~60)
- Add logging in `src/common/a2a_utils.py`
- Observe how agents communicate through breakpoints
📡 A2A Protocol - What You'll Learn
What is A2A Protocol?
Google's A2A protocol provides a standard way for AI agents to communicate, similar to how HTTP standardizes web requests.
Why standardize agent communication?
- 🔍 Traceability - Debug conversations by following message IDs
- 🧵 Threading - Group related messages together
- 🔗 Chaining - Link requests to responses
- 🤝 Interoperability - Any A2A-compliant agent can communicate
- 📊 Auditability - Complete logs for analysis
A2A Message Anatomy
Every agent communication uses this structure:
```jsonc
{
  "protocol": "google.a2a.v1",              // ← Protocol version
  "message_id": "msg_abc123xyz789",         // ← Unique ID for THIS message
  "conversation_id": "conv_session_001",    // ← Groups related messages
  "timestamp": "2025-10-29T10:30:45Z",      // ← When sent
  "sender": {                               // ← Who's sending
    "agent_id": "orchestrator-001",
    "agent_type": "orchestrator",
    "instance": "http://localhost:8000"
  },
  "recipient": {                            // ← Who should receive
    "agent_id": "creator-agent-001",
    "agent_type": "creator",
    "instance": "http://localhost:8001"
  },
  "message_type": "request",                // ← request/response/error
  "in_reply_to": null,                      // ← Links to parent message
  "payload": {                              // ← Your actual data
    "action": "remix_story",
    "parameters": {
      "story_id": "bear_loses_roar",
      "variation": "scientist who lost formulas"
    }
  }
}
```
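A minimal sketch of constructing such an envelope in Python. The field names mirror the example above; this standalone helper is illustrative, not the project's actual `create_request_message`:

```python
import uuid
from datetime import datetime, timezone
from typing import Optional

def create_request_message(sender: dict, recipient: dict, conversation_id: str,
                           payload: dict, in_reply_to: Optional[str] = None) -> dict:
    """Wrap a payload in an A2A-style envelope like the example above."""
    return {
        "protocol": "google.a2a.v1",
        "message_id": f"msg_{uuid.uuid4().hex[:12]}",  # unique per message
        "conversation_id": conversation_id,            # shared across the workflow
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "sender": sender,
        "recipient": recipient,
        "message_type": "request",
        "in_reply_to": in_reply_to,                    # None for the first message
        "payload": payload,
    }

msg = create_request_message(
    sender={"agent_id": "orchestrator-001", "agent_type": "orchestrator",
            "instance": "http://localhost:8000"},
    recipient={"agent_id": "creator-agent-001", "agent_type": "creator",
               "instance": "http://localhost:8001"},
    conversation_id="conv_session_001",
    payload={"action": "remix_story",
             "parameters": {"story_id": "bear_loses_roar",
                            "variation": "scientist who lost formulas"}},
)
```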
Core Concepts Demonstrated
1. Message Envelope
- Wraps your data in standardized metadata
- Like addressing a letter: envelope (A2A) contains letter (your data)
2. Conversation Threading
- `conversation_id` groups related messages
- Like email threads - keeps everything together
3. Message Chains
- `in_reply_to` links responses to requests
- Trace causality: "This response answers that request"
4. Agent Identity
- Each agent has unique ID, type, and location
- Clear accountability: who said what
5. Message Types
- REQUEST: Ask another agent to do something
- RESPONSE: Return results
- ERROR: Report problems
- HEALTH_CHECK: Verify agent is alive
Explore the Implementation
See A2A in code:
- `src/common/a2a_utils.py` - Complete A2A utilities (550+ lines)
- `src/common/conversation_manager.py` - Tracks conversations (450+ lines)
- `tests/test_a2a_compliance.py` - Protocol validation tests
- `logs/a2a_messages.log` - Live message audit trail
Learn more:
- A2A Protocol Guide - Complete implementation guide
- Testing Guide - Testing patterns and examples
📚 API Documentation
Interactive Documentation
Once services are running, explore the auto-generated API docs:
- Orchestrator: http://localhost:8000/docs
- Creator Agent: http://localhost:8001/docs
- Critic Agent: http://localhost:8002/docs
Example: Adapt a Story
User-Facing Endpoint (simplified - for demo ease):
```http
POST http://localhost:8000/adapt-story

{
  "base_story_id": "bear_loses_roar",
  "variation_request": "developer who lost motivation"
}
```
Behind the Scenes (A2A messages):
- Orchestrator sends A2A REQUEST to Creator
- Creator sends A2A RESPONSE with story
- Orchestrator sends A2A REQUEST to Critic
- Critic sends A2A RESPONSE with evaluation
- Loop continues until quality threshold met
Available Stories (For Demo)
| Story ID | Theme |
|---|---|
| `squirrel_and_owl` | Seeking wisdom |
| `bear_loses_roar` | Inner strength |
| `turtle_wants_to_fly` | Appreciating uniqueness |
| `lonely_firefly` | Finding community |
| `rabbit_and_carrot` | Discovery |
🔬 How It Works - Learning by Doing
The Demo Workflow
This system demonstrates a complete A2A workflow:
1. User Request → System receives story adaptation request
2. Orchestrator (Coordinator)
- Creates a unique `conversation_id`
- Sends A2A REQUEST to Creator agent
3. Creator (Worker Agent)
- Receives A2A REQUEST
- Generates story
- Sends A2A RESPONSE back
4. Orchestrator → Critic
- Receives Creator's story
- Sends A2A REQUEST to Critic agent
5. Critic (Evaluator Agent)
- Receives A2A REQUEST
- Evaluates story (score 0-10)
- Sends A2A RESPONSE with feedback
6. Iteration Decision
- If score < 8.0: Return to step 2 with feedback
- If score ≥ 8.0: Complete
7. Result → Return final story to user
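Steps 2–6 reduce to a create/evaluate loop. This sketch stands in plain `create` and `evaluate` callables for the real A2A round-trips to the Creator and Critic agents; the threshold and iteration cap match the demo's stated values, everything else is illustrative:

```python
QUALITY_THRESHOLD = 8.0  # matches the demo's threshold
MAX_ITERATIONS = 5       # matches the demo's iteration cap

def adapt_story(create, evaluate):
    """Run the create -> evaluate loop until the score meets the threshold.

    `create(feedback)` and `evaluate(story)` are stand-ins for the A2A
    round-trips to the Creator and Critic agents.
    """
    feedback = None
    for iteration in range(1, MAX_ITERATIONS + 1):
        story = create(feedback)           # steps 2-3: Creator round-trip
        score, feedback = evaluate(story)  # steps 4-5: Critic round-trip
        if score >= QUALITY_THRESHOLD:     # step 6: iteration decision
            return story, score, iteration
    return story, score, MAX_ITERATIONS

# Toy stand-ins: the score improves by 1.0 per revision.
scores = iter([6.5, 7.5, 8.5])
story_out, final_score, rounds = adapt_story(
    create=lambda fb: f"story (feedback={fb})",
    evaluate=lambda s: (next(scores), "tighten the ending"),
)
```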
Quality Evaluation (Demo Feature)
The Critic evaluates on 4 dimensions to demonstrate iterative improvement:
| Dimension | Weight | Purpose in Demo |
|---|---|---|
| Moral Preservation | 30% | Shows validation logic |
| Structure Quality | 25% | Demonstrates scoring |
| Creativity | 25% | Illustrates subjective evaluation |
| Coherence | 20% | Tests consistency checking |
Threshold: 8.0/10 | Max Iterations: 5
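The overall score works out to a weighted average of the four dimensions. A sketch with illustrative dimension scores (the weights match the table above; the sample values are made up):

```python
# Weights from the evaluation table above; they sum to 1.0.
WEIGHTS = {
    "moral_preservation": 0.30,
    "structure_quality": 0.25,
    "creativity": 0.25,
    "coherence": 0.20,
}

def overall_score(dimension_scores: dict) -> float:
    """Weighted average of the four dimensions, each scored 0-10."""
    return round(sum(dimension_scores[d] * w for d, w in WEIGHTS.items()), 2)

# Illustrative evaluation: strong moral fidelity, weaker coherence.
score = overall_score({
    "moral_preservation": 9.0,
    "structure_quality": 8.0,
    "creativity": 8.0,
    "coherence": 7.0,
})  # 9.0*0.30 + 8.0*0.25 + 8.0*0.25 + 7.0*0.20 = 8.1
```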
🛠️ Project Structure
```
A2A-StoryLab/
├── src/
│   ├── common/
│   │   ├── a2a_utils.py             # ⭐ Core A2A implementation
│   │   ├── conversation_manager.py  # ⭐ Conversation tracking
│   │   ├── models.py                # Data models
│   │   └── ollama_client.py         # LLM integration
│   ├── orchestrator/main.py         # Coordinator agent
│   ├── creator_agent/main.py        # Worker agent example
│   └── critic_agent/main.py         # Evaluator agent example
├── tests/
│   ├── test_a2a_utils.py            # ⭐ A2A protocol tests
│   ├── test_conversation_manager.py
│   └── test_integration_a2a.py      # ⭐ End-to-end A2A tests
├── docs/
│   ├── A2A_PROTOCOL_GUIDE.md        # ⭐ Complete A2A guide
│   └── ARCHITECTURE_AND_USER_GUIDE.md
├── logs/
│   └── a2a_messages.log             # ⭐ View all A2A messages here
├── requirements.txt                 # Python dependencies
├── start_all.sh                     # Start all services
├── stop_all.sh                      # Stop all services
└── README.md                        # You are here
```
⭐ = Essential for learning A2A protocol
📖 Documentation
Learning Resources
Start here:
- A2A Protocol Guide - Complete tutorial
- Architecture Guide - System design
- Testing Guide - Test patterns and examples
Code Examples
Key files to study:
src/common/a2a_utils.py- All A2A utilitiessrc/orchestrator/main.py- Coordinator patterntests/test_a2a_compliance.py- Protocol validationlogs/a2a_messages.log- Real message examples
🛠️ Technology Stack
| Component | Technology | Educational Purpose |
|---|---|---|
| Protocol | Google A2A v1 | ⭐ Main learning focus |
| Language | Python 3.9+ | Clear, readable implementation |
| Framework | FastAPI | Modern async Python, auto-docs |
| LLM | Ollama (gemma3:1b) | Local, no API keys needed |
| Testing | pytest | 190+ tests show proper usage |
🐛 Troubleshooting
Common Issues
Services won't start:
```bash
# Free the service ports by killing any processes using them
lsof -ti:8000 -ti:8001 -ti:8002 -ti:11434 | xargs kill -9
```
Ollama not responding:
```bash
# Check Ollama status
curl http://localhost:11434/api/tags

# Start if needed
ollama serve

# Pull the model
ollama pull gemma3:1b
```
Can't see A2A messages:
```bash
# Ensure the logs directory exists
mkdir -p logs

# Check file permissions
ls -la logs/

# Watch in real time
tail -f logs/a2a_messages.log
```
🤝 Learning & Contributing
Suggested Learning Path
- Start: Run the system, make requests, watch logs
- Explore: Read `src/common/a2a_utils.py`
- Experiment: Modify code, add logging, test changes
- Study: Review test files to see proper A2A usage
- Build: Try creating a new agent type
Contribute Learning Materials
We welcome contributions that help others learn:
- 📝 Improved documentation
- 🎓 Tutorial additions
- 🧪 More test examples
- 💡 Better code comments
- 🐛 Bug fixes
See CONTRIBUTING.md for guidelines.
📊 Project Stats
Purpose: Educational A2A Protocol Demonstration
Metrics:
- 📝 ~4,000 lines of Python code
- 🧪 190+ tests demonstrating proper A2A usage
- 📚 Comprehensive documentation and examples
- 🔧 Core A2A protocol features (Agent Cards, Task States, Message Parts)
- ⭐ Focus: Teaching, not production
Note: This is an educational implementation focusing on core A2A concepts. For full protocol compliance (JSON-RPC 2.0, streaming, push notifications), see the official A2A specification.
📄 License
MIT License - See LICENSE file.
TL;DR: Free to use for learning and teaching. Modify as needed for educational purposes.
🙏 Credits
- Base Stories: SimpleMCP project
- A2A Protocol: Google A2A Protocol Specification
- LLM Backend: Ollama for local inference
📞 Questions & Discussion
- 🐛 Issues: Report bugs or problems
- 💬 Discussions: Ask questions about A2A protocol
- 📖 Documentation: See docs/ directory
Built to Teach Google's A2A Protocol Through Working Code
Learn by doing - explore, experiment, understand