StackA2A
enterpriseOfficialgoogle-adkpython

Adk Expense Reimbursement (Official Sample)

59

by A2A Project

Official A2A python sample agent: Adk Expense Reimbursement

1,329 starsUpdated 2026-02-22apache-2.0
Quality Score59/100
Community
70
Freshness
100
Official
100
Skills
10
Protocol
30
🔒 Security
20

Getting Started

1Clone the repository
$ git clone https://github.com/a2aproject/a2a-samples
2Navigate to the project
$ cd a2a-samples/samples/python/agents/adk_expense_reimbursement
3Install dependencies
$ pip install -r requirements.txt
4Run the agent
$ python main.py

README

ADK Expense Reimbursement Agent

This sample uses the Agent Development Kit (ADK) to create a simple "Expense Reimbursement" agent that is hosted as an A2A server.

This agent takes text requests from the client and, if any details are missing, returns a webform for the client (or its user) to fill out. After the client fills out the form, the agent will complete the task.

Prerequisites

  • Python 3.9 or higher
  • UV
  • Access to an LLM and API Key

Running the Sample

  1. Navigate to the samples directory:

    cd samples/python/agents/adk_expense_reimbursement
    
  2. Create an environment file with your API key:

    echo "GEMINI_API_KEY=your_api_key_here" > .env
    
  3. Run an agent:

    uv run .
    
  4. In a separate terminal, run the A2A client:

    # Connect to the agent (specify the agent URL with correct port)
    cd samples/python/hosts/cli
    uv run . --agent http://localhost:10002
    
    # If you changed the port when starting the agent, use that port instead
    # uv run . --agent http://localhost:YOUR_PORT
    

Disclaimer

Important: The sample code provided is for demonstration purposes and illustrates the mechanics of the Agent-to-Agent (A2A) protocol. When building production applications, it is critical to treat any agent operating outside of your direct control as a potentially untrusted entity.

All data received from an external agent—including but not limited to its AgentCard, messages, artifacts, and task statuses—should be handled as untrusted input. For example, a malicious agent could provide an AgentCard containing crafted data in its fields (e.g., description, name, skills.description). If this data is used without sanitization to construct prompts for a Large Language Model (LLM), it could expose your application to prompt injection attacks. Failure to properly validate and sanitize this data before use can introduce security vulnerabilities into your application.

Developers are responsible for implementing appropriate security measures, such as input validation and secure handling of credentials to protect their systems and users.

Capabilities

StreamingPush NotificationsMulti-TurnAuth: none
official-samplepython

Part of these stacks

View on GitHub