# Grafana Agent

A2A agent server for Grafana dashboards automation tasks.

An enterprise-ready Agent-to-Agent (A2A) server that provides AI-powered capabilities through a standardized protocol. A hosted endpoint is available at https://registry.inference-gateway.com/.
## Quick Start

```bash
# Run the agent
go run .

# Or with Docker
docker build -t grafana-agent .
docker run -p 8080:8080 grafana-agent
```
## Quick Install

Add this agent to your Inference Gateway CLI:

```bash
infer agents add grafana-agent http://localhost:8080 \
  --oci ghcr.io/inference-gateway/grafana-agent:latest \
  --run
```
## Features
- ✅ A2A protocol compliant
- ✅ AI-powered capabilities
- ✅ Streaming support
- ✅ Enterprise-ready
- ✅ Minimal dependencies
## Endpoints

- `GET /.well-known/agent-card.json` - Agent metadata and capabilities
- `GET /health` - Health check endpoint
- `POST /a2a` - A2A protocol endpoint
## Available Skills

| Skill | Description | Parameters |
|---|---|---|
| `discover_metrics` | Discovers available metrics from a Prometheus endpoint with optional filtering | `metric_type`, `name_pattern`, `prometheus_url` |
| `generate_promql_queries` | Generates PromQL query suggestions for given metric names by querying Prometheus metadata | `metric_names`, `prometheus_url` |
| `validate_promql_query` | Validates a PromQL query against a Prometheus server | `prometheus_url`, `query` |
| `create_dashboard` | Creates a Grafana dashboard with specified panels, queries, and configurations | `dashboard_title`, `deploy`, `description`, `grafana_url`, `panels`, `refresh_interval`, `tags`, `time_range`, `variables` |
| `deploy_dashboard` | Deploys a dashboard JSON to Grafana (Cloud or self-hosted) | `dashboard_json`, `folder_uid`, `grafana_url`, `message`, `overwrite` |
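Once the agent is running, a skill can be exercised by sending a natural-language request to the `/a2a` endpoint. The JSON-RPC body below is a sketch of an A2A `message/send` request; the exact field names depend on the A2A protocol version the agent implements, and the Prometheus URL is a placeholder:

```bash
# Sketch of an A2A message/send request body (field names may vary by
# protocol version). The agent routes the text to a skill such as
# discover_metrics.
cat > /tmp/a2a-request.json <<'EOF'
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "message/send",
  "params": {
    "message": {
      "role": "user",
      "messageId": "msg-1",
      "parts": [
        { "kind": "text", "text": "Discover CPU metrics from http://localhost:9090" }
      ]
    }
  }
}
EOF

# Send it to the agent (assumes the agent is listening on port 8080):
# curl -s -X POST http://localhost:8080/a2a \
#   -H "Content-Type: application/json" \
#   -d @/tmp/a2a-request.json
cat /tmp/a2a-request.json
```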
## Configuration

Configure the agent via environment variables.

### Custom Configuration

The following custom configuration variables are available:

| Category | Variable | Description | Default |
|---|---|---|---|
| Grafana | `GRAFANA_API_KEY` | Grafana API key | - |
| Grafana | `GRAFANA_DEPLOY_ENABLED` | Enable dashboard deployment | `false` |
| Grafana | `GRAFANA_ORG_ID` | Grafana organization ID | - |
| Grafana | `GRAFANA_URL` | Grafana instance URL | - |
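For example, a self-hosted setup might export these variables before starting the agent; every value shown (URL, token, org ID) is a placeholder:

```bash
# Example Grafana settings for a self-hosted instance (placeholder values).
export GRAFANA_URL="http://localhost:3000"
export GRAFANA_API_KEY="glsa_xxxxxxxxxxxx"   # service account token
export GRAFANA_ORG_ID="1"
export GRAFANA_DEPLOY_ENABLED="true"         # allow deploy_dashboard to apply changes
```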
| Category | Variable | Description | Default |
|---|---|---|---|
| Server | `A2A_PORT` | Server port | `8080` |
| Server | `A2A_DEBUG` | Enable debug mode | `false` |
| Server | `A2A_AGENT_URL` | Agent URL for internal references | `http://localhost:8080` |
| Server | `A2A_STREAMING_STATUS_UPDATE_INTERVAL` | Streaming status update frequency | `1s` |
| Server | `A2A_SERVER_READ_TIMEOUT` | HTTP server read timeout | `120s` |
| Server | `A2A_SERVER_WRITE_TIMEOUT` | HTTP server write timeout | `120s` |
| Server | `A2A_SERVER_IDLE_TIMEOUT` | HTTP server idle timeout | `120s` |
| Server | `A2A_SERVER_DISABLE_HEALTHCHECK_LOG` | Disable logging for health check requests | `true` |
| Agent Metadata | `A2A_AGENT_CARD_FILE_PATH` | Path to agent card JSON file | `.well-known/agent-card.json` |
| LLM Client | `A2A_AGENT_CLIENT_PROVIDER` | LLM provider (`openai`, `anthropic`, `azure`, `ollama`, `deepseek`) | - |
| LLM Client | `A2A_AGENT_CLIENT_MODEL` | Model to use | - |
| LLM Client | `A2A_AGENT_CLIENT_API_KEY` | API key for LLM provider | - |
| LLM Client | `A2A_AGENT_CLIENT_BASE_URL` | Custom LLM API endpoint | - |
| LLM Client | `A2A_AGENT_CLIENT_TIMEOUT` | Timeout for LLM requests | `30s` |
| LLM Client | `A2A_AGENT_CLIENT_MAX_RETRIES` | Maximum retries for LLM requests | `3` |
| LLM Client | `A2A_AGENT_CLIENT_MAX_CHAT_COMPLETION_ITERATIONS` | Max chat completion rounds | `10` |
| LLM Client | `A2A_AGENT_CLIENT_MAX_TOKENS` | Maximum tokens for LLM responses | `4096` |
| LLM Client | `A2A_AGENT_CLIENT_TEMPERATURE` | Controls randomness of LLM output | `0.7` |
| Capabilities | `A2A_CAPABILITIES_STREAMING` | Enable streaming responses | `true` |
| Capabilities | `A2A_CAPABILITIES_PUSH_NOTIFICATIONS` | Enable push notifications | `false` |
| Capabilities | `A2A_CAPABILITIES_STATE_TRANSITION_HISTORY` | Track state transitions | `false` |
| Task Management | `A2A_TASK_RETENTION_MAX_COMPLETED_TASKS` | Max completed tasks to keep (0 = unlimited) | `100` |
| Task Management | `A2A_TASK_RETENTION_MAX_FAILED_TASKS` | Max failed tasks to keep (0 = unlimited) | `50` |
| Task Management | `A2A_TASK_RETENTION_CLEANUP_INTERVAL` | Cleanup frequency (0 = manual only) | `5m` |
| Storage | `A2A_QUEUE_PROVIDER` | Storage backend (`memory` or `redis`) | `memory` |
| Storage | `A2A_QUEUE_URL` | Redis connection URL (when using Redis) | - |
| Storage | `A2A_QUEUE_MAX_SIZE` | Maximum queue size | `100` |
| Storage | `A2A_QUEUE_CLEANUP_INTERVAL` | Task cleanup interval | `30s` |
| Artifacts | `ARTIFACTS_ENABLE` | Enable artifacts support | `false` |
| Artifacts | `ARTIFACTS_SERVER_HOST` | Artifacts server host | `localhost` |
| Artifacts | `ARTIFACTS_SERVER_PORT` | Artifacts server port | `8081` |
| Artifacts | `ARTIFACTS_STORAGE_PROVIDER` | Storage backend (`filesystem` or `minio`) | `filesystem` |
| Artifacts | `ARTIFACTS_STORAGE_BASE_PATH` | Base path for filesystem storage | `./artifacts` |
| Artifacts | `ARTIFACTS_STORAGE_BASE_URL` | Override base URL for direct downloads | (auto-generated) |
| Artifacts | `ARTIFACTS_STORAGE_ENDPOINT` | MinIO/S3 endpoint URL | - |
| Artifacts | `ARTIFACTS_STORAGE_ACCESS_KEY` | MinIO/S3 access key | - |
| Artifacts | `ARTIFACTS_STORAGE_SECRET_KEY` | MinIO/S3 secret key | - |
| Artifacts | `ARTIFACTS_STORAGE_BUCKET_NAME` | MinIO/S3 bucket name | `artifacts` |
| Artifacts | `ARTIFACTS_STORAGE_USE_SSL` | Use SSL for MinIO/S3 connections | `true` |
| Artifacts | `ARTIFACTS_RETENTION_MAX_ARTIFACTS` | Max artifacts per task (0 = unlimited) | `5` |
| Artifacts | `ARTIFACTS_RETENTION_MAX_AGE` | Max artifact age (0 = no age limit) | `168h` |
| Artifacts | `ARTIFACTS_RETENTION_CLEANUP_INTERVAL` | Cleanup frequency (0 = manual only) | `24h` |
| Authentication | `A2A_AUTH_ENABLE` | Enable OIDC authentication | `false` |
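As an illustration, a Redis-backed deployment talking to an OpenAI-compatible provider might combine the variables like this; the model name, API key, and URLs are placeholders, not recommendations:

```bash
# Example production-leaning configuration (placeholder values):
# an OpenAI-compatible LLM client plus a Redis task queue.
export A2A_AGENT_CLIENT_PROVIDER="openai"
export A2A_AGENT_CLIENT_MODEL="gpt-4o"            # placeholder model name
export A2A_AGENT_CLIENT_API_KEY="sk-placeholder"
export A2A_QUEUE_PROVIDER="redis"
export A2A_QUEUE_URL="redis://localhost:6379"
export A2A_CAPABILITIES_STREAMING="true"
```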
## Development

```bash
# Generate code from ADL
task generate

# Run tests
task test

# Build the application
task build

# Run linter
task lint

# Format code
task fmt
```
## Debugging

Use the A2A Debugger to test and debug your A2A agent during development. It provides a web interface for sending requests to your agent and inspecting responses, making it easier to troubleshoot issues and validate your implementation.

```bash
# Submit a task
docker run --rm -it --network host ghcr.io/inference-gateway/a2a-debugger:latest \
  --server-url http://localhost:8080 tasks submit "What are your skills?"

# List tasks
docker run --rm -it --network host ghcr.io/inference-gateway/a2a-debugger:latest \
  --server-url http://localhost:8080 tasks list

# Get a specific task
docker run --rm -it --network host ghcr.io/inference-gateway/a2a-debugger:latest \
  --server-url http://localhost:8080 tasks get <task ID>
```
## Deployment

### Docker

The Docker image can be built with custom version information using build arguments:

```bash
# Build with default values from ADL
docker build -t grafana-agent .

# Build with custom version information
docker build \
  --build-arg VERSION=1.2.3 \
  --build-arg AGENT_NAME="My Custom Agent" \
  --build-arg AGENT_DESCRIPTION="Custom agent description" \
  -t grafana-agent:1.2.3 .
```

**Available build arguments:**

- `VERSION` - Agent version (default: `0.1.0`)
- `AGENT_NAME` - Agent name (default: `grafana-agent`)
- `AGENT_DESCRIPTION` - Agent description (default: `A2A agent server for grafana dashboards automation tasks`)
These values are embedded into the binary at build time using linker flags, making them accessible at runtime without requiring environment variables.
## License

MIT License - see the LICENSE file for details.