
Integrate A2A-exposed agents using the A2AClientAgent

You can use AI Refinery as the platform to communicate with agents exposed over the A2A protocol and integrate them into your agentic teams and workflows. In this tutorial, we provide an example of using the A2AClientAgent to connect to a server that exposes an agent with currency-conversion capabilities.

Objective

By following this tutorial, you will learn how to host an A2A server locally, configure an A2AClientAgent in AI Refinery to connect to that server, and integrate the agent into your workflow.

Tutorial Description

The tutorial walks you through the end-to-end process of consuming an agent over A2A: spinning up a local server that exposes the agent, creating an A2AClientAgent instance in AI Refinery that connects to the server, and communicating with the agent to use its capabilities.

Tutorial Workflow

The tutorial consists of the following steps:

  1. Server Setup: Spinning up a local server that exposes an agent over the A2A protocol.

  2. Client Setup and Utilization: Building an instance of the A2AClientAgent and connecting it to the running server to test its functionality.

Server Setup

In this tutorial, we will use an agent provided in the official A2A repository. This agent is a LangGraph-based ReAct agent that provides simple currency-conversion capabilities.

Step 1: Set up environment

1. Install dependencies

To run the server, you first need to install the dependencies it relies on. We recommend using a dedicated virtual environment for the server's dependencies. Run the following commands in your terminal to create and activate one:

python -m venv env_a2a_server
source env_a2a_server/bin/activate

Then, save the following pinned library versions in a file named requirements.txt in the folder that will contain the server files:

a2a-sdk==0.2.8
httpx>=0.28.1
langchain-google-genai>=2.0.10
langgraph>=0.3.18
langchain-openai>=0.1.0
pydantic>=2.10.6
python-dotenv>=1.1.0
uvicorn>=0.34.2

Then, install these dependencies into the new virtual environment by running the following command in your terminal:

pip install -r requirements.txt
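
If you want to confirm that everything installed correctly, a quick stdlib-only check like the sketch below can list anything that is still missing from the active environment (the package names are taken from the requirements.txt above):

```python
# Sanity check: verify that the pinned dependencies from requirements.txt
# are visible in the active virtual environment.
from importlib.metadata import version, PackageNotFoundError

REQUIRED = [
    "a2a-sdk",
    "httpx",
    "langchain-google-genai",
    "langgraph",
    "langchain-openai",
    "pydantic",
    "python-dotenv",
    "uvicorn",
]

def check_deps(names):
    """Return the subset of `names` that is not installed."""
    missing = []
    for name in names:
        try:
            version(name)  # raises PackageNotFoundError if absent
        except PackageNotFoundError:
            missing.append(name)
    return missing

if __name__ == "__main__":
    missing = check_deps(REQUIRED)
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("All server dependencies are installed.")
```

If any package is reported missing, re-run the pip install command inside the activated virtual environment.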

2. Set up credentials as environment variables

With the dependencies installed, you now need to set up credentials so that your server can access an LLM backbone. You can use either a Google Gemini model (you can get a free API key following the instructions here), an OpenAI model, or even a local LLM. Save those credentials in a .env file in the folder with the server files.

  • If you're using a Google Gemini model (gemini-pro, etc.):

    echo "GOOGLE_API_KEY=your_api_key_here" > .env
    

  • If you're using OpenAI or any compatible API (e.g., local LLM via Ollama, LM Studio, etc.):

    echo "LLM_API_KEY=your_api_key_here" > .env  # only if your endpoint requires an API key
    echo "TOOL_LLM_URL=your_llm_url" >> .env
    echo "TOOL_LLM_NAME=your_llm_name" >> .env
    
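
Before launching the server, you can sanity-check the .env file. The sketch below is a simplified, stdlib-only stand-in for what python-dotenv does at startup (it does not handle quoting or multi-line values); the variable names are the ones used in the commands above:

```python
# Minimal .env sanity check (stdlib only): confirms that the credentials
# the server expects are present before you launch it.
from pathlib import Path

def read_env(path):
    """Parse simple KEY=value lines from a .env file into a dict."""
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

if __name__ == "__main__":
    if not Path(".env").exists():
        print("No .env file found in the current folder.")
    else:
        env = read_env(".env")
        if "GOOGLE_API_KEY" in env:
            print("Configured for a Google Gemini model.")
        elif "TOOL_LLM_URL" in env and "TOOL_LLM_NAME" in env:
            print("Configured for an OpenAI-compatible endpoint.")
        else:
            print("No usable credentials found in .env.")
```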

Step 2: Server launching and testing

First, you need to download the files that run the server exposing the agent. To do so, download the following files from the corresponding subfolder in the A2A repository and save them in a folder:

  • __main__.py: The file that launches the server.
  • agent.py: The class containing the main logic of the agent.
  • agent_executor.py: The class containing the wrappers for the agent's functions.
  • test_client.py: A test script to verify that the server is running, accepting requests, and publishing responses.

After you have set up the dependencies and the environment variables, you can launch the server with the following command:

python __main__.py

If the server launches successfully, output similar to the following will appear in your terminal window:

INFO:     Started server process [1234]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:10000 (Press CTRL+C to quit)

By default, the server will start on http://localhost:10000.
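
To confirm the server is actually reachable, you can fetch its public agent card, which A2A servers publish at a well-known path (the same path that appears in the A2AClientAgent configuration further below). A minimal stdlib-only probe might look like this:

```python
# Optional reachability probe (stdlib only): fetches the public agent card
# that the A2A server publishes, so you can confirm the server is up
# before wiring up the client.
import json
import urllib.error
import urllib.request

AGENT_CARD_PATH = "/.well-known/agent.json"

def fetch_agent_card(base_url, timeout=5):
    """Return the agent card as a dict, or None if the server is unreachable."""
    url = base_url.rstrip("/") + AGENT_CARD_PATH
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except (urllib.error.URLError, TimeoutError, json.JSONDecodeError):
        return None

if __name__ == "__main__":
    card = fetch_agent_card("http://localhost:10000")
    if card is None:
        print("Server not reachable at http://localhost:10000")
    else:
        print(f"Found agent: {card.get('name')}")
```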

After you launch the server, you can use the script test_client.py that you downloaded above to test its responsiveness. In a separate terminal, run the script to send a sample query to the agent:

python test_client.py

If the server works as expected, the test script should give you a JSON-formatted response in the terminal window that resembles the following:

{'id': 'a015b565-2ce4-44a3-bfeb-c03c619b55d0', 
'jsonrpc': '2.0', 
'result': {'artifacts': 
    [{'artifactId': '77430ce0-54c2-48ea-88a5-0d4308e98e5f', 
    'name': 'conversion_result', 
    'parts': 
        [{'kind': 'text', 
          'text': 'As of the latest available data, the exchange rate from USD to INR is 87.65. Therefore, 10 USD would be approximately 876.5 INR.'}]}], 
...
'status': {'state': 'completed', 'timestamp': '2025-08-04T17:29:29.735525+00:00'}}
}
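
If you want to post-process such a response programmatically, a helper like the one below (a hypothetical convenience function, not part of the A2A SDK) walks the artifacts and collects the text parts; the sample dict mirrors the shape of the output above:

```python
# Pull the agent's answer text out of a completed-task response
# (dict shape based on the sample output above).
def extract_text_parts(response):
    """Collect all text parts from the artifacts of an A2A task result."""
    texts = []
    for artifact in response.get("result", {}).get("artifacts", []):
        for part in artifact.get("parts", []):
            if part.get("kind") == "text":
                texts.append(part["text"])
    return texts

sample = {
    "result": {
        "artifacts": [
            {
                "name": "conversion_result",
                "parts": [{"kind": "text", "text": "10 USD is approximately 876.5 INR."}],
            }
        ],
        "status": {"state": "completed"},
    }
}

print(extract_text_parts(sample))  # → ['10 USD is approximately 876.5 INR.']
```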

Client Setup and Utilization

After you have successfully set up the A2A server, you can now configure a client to communicate with it and use its capabilities. To do so, you can use the A2AClientAgent of the AI Refinery.

A sample configuration of such an agent that connects to the above server is shown below:

orchestrator:
  agent_list:
    - agent_name: "Currency Converter"

utility_agents:
  - agent_class: A2AClientAgent
    agent_name: "Currency Converter"
    agent_description: "A currency-converter agent. Forward all currency-related queries to this agent."
    config:
      base_url: 'http://0.0.0.0:10000' # Required, URL where the server is hosted
      agent_card:
        public:
          # Required, location where the agent card can be found 
          public_agent_card_path: "/.well-known/agent.json"
          # Required, RPC URL of the server, could be different than base_url  
          rpc_url: "http://0.0.0.0:10000"
      # Optional, response preferences for the agent such as tracing intermediate responses and streaming output
      response_prefs:
        tracing: False
        streaming: False
      wait_time: 300 # Optional, time in seconds to wait for an agent's response
      contexts: # Optional, additional contexts for the agent.
        - "date"
        - "chat_history"

With the above configuration, your A2AClientAgent is listed under the orchestrator of AI Refinery. If a query is identified as suitable for the agent exposed over A2A, the orchestrator routes the query to the A2AClientAgent, which in turn forwards it to the server where the agent is exposed.

After you configure your A2AClientAgent, you are ready to interact with the A2A-exposed agent through the AI Refinery platform. To do so, you can simply run the following code:

import asyncio
import os
from dotenv import load_dotenv
from air import DistillerClient, login

load_dotenv() # This loads your environment variables from the '.env' file

# Authenticate using the credentials stored in your environment variables.
AUTH = login(
    account=str(os.getenv("ACCOUNT")),
    api_key=str(os.getenv("API_KEY")),
)

async def a2a_client_agent_demo():
    """
    Simple demo of communication between AIR and an A2A-exposed agent.
    The agent has currency conversion capabilities.
    """
    # Initialize an instance of the distiller client
    distiller_client = DistillerClient()
    distiller_client.create_project(config_path="example.yaml", project="example-a2a")

    # Define queries
    queries = [
        "How much is 10 euros in canadian dollars?",
    ]
    async with distiller_client(
        project="example-a2a",
        uuid="test_user",
    ) as dc:
        for query in queries: # Send in queries one by one
            responses = await dc.query(query=query)
            print(f"----\nQuery: {query}")
            async for response in responses:
                print(f"Response ({response['role']}): {response['content']}")


if __name__ == "__main__":
    print("A2A Client Agent Demo")
    asyncio.run(a2a_client_agent_demo())