StackA2A
travelOfficialpython

Travel Planner Agent (Official Sample)

59

by A2A Project

Official A2A python sample agent: Travel Planner Agent

1,329 starsUpdated 2026-02-22apache-2.0
Quality Score59/100
Community
70
Freshness
100
Official
100
Skills
10
Protocol
30
🔒 Security
20

Getting Started

1Clone the repository
$ git clone https://github.com/a2aproject/a2a-samples
2Navigate to the project
$ cd a2a-samples/samples/python/agents/travel_planner_agent
3Install dependencies
$ pip install -r requirements.txt
4Run the agent
$ python main.py

README

travel planner example

This is a Python implementation that adheres to the A2A (Agent2Agent) protocol. It is a travel assistant in line with the specifications of the OpenAI model, capable of providing you with travel planning services. A travel assistant demo implemented based on Google's official a2a-python SDK.

Getting started

  1. update config.json with your own OpenAI API key etc.

You need to modify the values corresponding to model_name and base_url.

{
  "model_name":"qwen3-32b", //defaults to gpt-4o if empty
  "api_key": "API_KEY",
  "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1" //defaults to ChatGPT if empty
}
  1. Create an environment file with your API key:

You need to set the value corresponding to API_KEY.

echo "API_KEY=your_api_key_here" > .env
  1. Start the server

    uv run .
    
  2. Run the loop client

    uv run loop_client.py
    

License

This project is licensed under the terms of the Apache 2.0 License.

Contributing

See CONTRIBUTING.md for contribution guidelines.

Disclaimer

Important: The sample code provided is for demonstration purposes and illustrates the mechanics of the Agent-to-Agent (A2A) protocol. When building production applications, it is critical to treat any agent operating outside of your direct control as a potentially untrusted entity.

All data received from an external agent—including but not limited to its AgentCard, messages, artifacts, and task statuses—should be handled as untrusted input. For example, a malicious agent could provide an AgentCard containing crafted data in its fields (e.g., description, name, skills.description). If this data is used without sanitization to construct prompts for a Large Language Model (LLM), it could expose your application to prompt injection attacks. Failure to properly validate and sanitize this data before use can introduce security vulnerabilities into your application.

Developers are responsible for implementing appropriate security measures, such as input validation and secure handling of credentials to protect their systems and users.

Capabilities

StreamingPush NotificationsMulti-TurnAuth: none
official-samplepython

Part of these stacks

View on GitHub