Enable Human-in-the-Loop Capabilities in Your AI Assistant¶
Overview¶
Human-in-the-loop interaction is an essential feature for building AI assistants that are adaptable and responsive to user intent. It enables systems to incorporate human feedback. In our framework, this is supported through the HumanAgent, a built-in utility designed to gather feedback from users and pass it along to downstream agents in the pipeline. This tutorial will guide you through configuring and using the HumanAgent to integrate human feedback into your assistant's workflow.
Goals¶
This tutorial will guide you through the following steps:
- Get an overview of HumanAgent and its role in integrating human feedback into your AI assistant.
- Create or modify a YAML configuration file.
- Develop your assistant and observe how the HumanAgent:
    - Queries the user for feedback,
    - Collects the feedback,
    - Passes it to downstream agents.
HumanAgent Workflow¶
The HumanAgent consists of two main components: preparing questions for the user and collecting user feedback.
For question preparation, it supports two modes:
- Structure Mode: A question schema is defined in the configuration, and the HumanAgent generates user-facing questions dynamically based on both the schema and the current context in the pipeline.
- Free-form Mode: The query is a natural-language question, without a predefined schema. It is composed by an upstream agent (an agent at a preceding stage in the pipeline that invokes the HumanAgent).
For feedback collection, the HumanAgent currently supports two input methods:
- Terminal: Prompts the user for input directly via the command line.
- Custom: Enables integration with customized external input interfaces (e.g., a web UI).
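Conceptually, a custom input method is just an async callable that receives the query text shown to the user and returns their feedback as a string (a full registration example appears later in this tutorial). The sketch below is illustrative only, and the function name is hypothetical, not part of the framework:

```python
import asyncio

# Illustrative sketch only: a custom input method is an async callable that
# receives the query text and returns the user's feedback as a string.
# A real implementation might forward the query to a web UI and await the reply.
async def my_web_ui_input(query: str) -> str:
    # Placeholder: return a canned reply instead of contacting a real web UI.
    await asyncio.sleep(0)
    return f"Feedback for: {query}"

print(asyncio.run(my_web_ui_input("Is the answer provided correct?")))
# Prints: Feedback for: Is the answer provided correct?
```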
Configuration¶
To leverage human feedback in your assistant, you need to define a HumanAgent in the YAML configuration. This configuration specifies how queries are prepared for the user and how user responses are collected.
Configuration Parameters¶
- `config`: Configuration for query preparation and feedback collection.
    - `user_input_method`: Specifies how user responses are collected. Options: `"Terminal"` or `"Custom"`.
    - `wait_time`: Maximum time, in seconds, to wait for user feedback.
    - `feedback_interpreter` (optional): If `true`, structured feedback will be interpreted into natural language before being passed to downstream agents.
    - `feedback_schema` (required for Structure Mode): A schema defining structured questions. Each question in the schema includes:
        - `type`: The expected response type. Options: `"bool"`, `"str"`, `"int"`, `"float"`.
        - `description`: A brief description of the query.
        - `required`: Whether a response is required for this question.

If no schema is defined in the YAML configuration, the HumanAgent defaults to Free-form Mode.
Here’s an example configuration (`config_structure.yaml`) for Structure Mode:
- agent_class: HumanAgent
agent_name: "Human Reviewer"
agent_description: "This agent interacts with the user to get feedback or additional information."
config:
user_input_method: "Terminal" # How the agent collects user feedback
wait_time: 300 # Maximum time in seconds to wait for user feedback
feedback_interpreter: true # Optional. If true, converts structured feedback into natural language
feedback_schema: # Schema definition for structured feedback (required if using Structure Mode)
is_answer_correct: # Question identifier
type: "bool" # Type of expected feedback
description: "Is the answer provided correct?" # Description of the question
required: true # Whether required
need_more_detail: # Question identifier
type: "bool" # Type of expected feedback
description: "Does the answer need more detail?" # Description of the question
required: true # Whether required
optional_comment: # Question identifier
type: "str" # Type of expected feedback
description: "Any additional comments or suggestions" # Description of the question
required: false # Whether required
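To make the `feedback_interpreter` option concrete, here is a minimal, illustrative sketch of what interpreting structured feedback into natural language can look like. This is not the framework's actual implementation, and the function and variable names are hypothetical; it only shows the idea of turning schema-keyed answers into a sentence for downstream agents:

```python
# Toy interpreter (illustrative only): turns structured answers keyed by
# question identifier into a natural-language summary for downstream agents.
def interpret_feedback(schema: dict, answers: dict) -> str:
    sentences = []
    for name, spec in schema.items():
        if name not in answers:
            continue  # unanswered optional questions are simply skipped
        value = answers[name]
        if spec["type"] == "bool":
            verdict = "Yes" if value else "No"
            sentences.append(f'{spec["description"]} {verdict}.')
        else:
            sentences.append(f'{spec["description"]}: {value}')
    return " ".join(sentences)

schema = {
    "is_answer_correct": {"type": "bool", "description": "Is the answer provided correct?", "required": True},
    "optional_comment": {"type": "str", "description": "Any additional comments or suggestions", "required": False},
}
answers = {"is_answer_correct": True, "optional_comment": "add more discussion of fairness"}
print(interpret_feedback(schema, answers))
# Prints: Is the answer provided correct? Yes. Any additional comments or suggestions: add more discussion of fairness
```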
Here’s an example configuration (`config_free_form.yaml`) for Free-form Mode:
- agent_class: HumanAgent
agent_name: "User Feedback Agent"
agent_description: "Asks for user feedback on the proposed dinner plan."
config:
user_input_method: "Terminal" # How the agent collects user feedback
wait_time: 300 # Maximum time in seconds to wait for user feedback
Example Usage¶
This section demonstrates how to use the HumanAgent in your AI assistant through code examples.
1. YAML Configuration File¶
To enable the HumanAgent, you need to create a YAML file. Here are sample configuration files for Structure Mode and Free-form Mode:
a. Structure Mode¶
This configuration example supports the following scenario: a user conducts research, and after the initial phase is complete, the HumanAgent engages the user to evaluate the answer and provide suggestions. The feedback is then used to guide follow-up research, making human input an essential step in refining the final result.
Collecting feedback from the terminal¶
orchestrator:
agent_list:
- agent_name: "Human in the Loop Advisor"
utility_agents:
- agent_class: SearchAgent
agent_name: "Initial Research Agent"
agent_description: "Performs the first phase of research."
- agent_class: HumanAgent
agent_name: "Human Reviewer"
agent_description: "This agent interacts with the user to get feedback or additional information."
config:
user_input_method: "Terminal" # How the agent collects user feedback
wait_time: 300 # Maximum time in seconds to wait for user feedback
feedback_interpreter: true # Optional. If true, converts structured feedback into natural language
feedback_schema: # Schema definition for structured feedback (required if using Structure Mode)
is_answer_correct: # Question identifier
type: "bool" # Type of expected feedback
description: "Is the answer provided correct?" # Description of the question
required: true # Whether required
need_more_detail: # Question identifier
type: "bool" # Type of expected feedback
description: "Does the answer need more detail?" # Description of the question
required: true # Whether required
optional_comment: # Question identifier
type: "str" # Type of expected feedback
description: "Any additional comments or suggestions" # Description of the question
required: false # Whether required
- agent_class: SearchAgent
agent_name: "Follow-up Research Agent"
agent_description: "Performs additional research based on human input."
super_agents:
- agent_class: FlowSuperAgent
agent_name: "Human in the Loop Advisor"
agent_description: "An advisor that incorporates human feedback into the research process."
config:
goal: "To conduct research, get human feedback, and then write a final report."
agent_list:
# Required. The list of agents to be added in the agent pool. Each agent listed here must be configured under `utility_agents` in the root of the project YAML file.
- agent_name: "Initial Research Agent" # Required.
next_step: # User design. Specifies next steps to run after this agent.
- "Human Reviewer"
- agent_name: "Human Reviewer" # Required.
next_step: # User design. Specifies next steps to run after this agent.
- "Follow-up Research Agent"
- agent_name: "Follow-up Research Agent" # Required. Exit agent that produces the summary output.
Collecting feedback from a customized input method¶
orchestrator:
agent_list:
- agent_name: "Human in the Loop Advisor"
utility_agents:
- agent_class: SearchAgent
agent_name: "Initial Research Agent"
agent_description: "Performs the first phase of research."
- agent_class: HumanAgent
agent_name: "Human Reviewer"
agent_description: "This agent interacts with the user to get feedback or additional information."
config:
user_input_method: "Custom" # How the agent collects user feedback
wait_time: 300 # Maximum time in seconds to wait for user feedback
feedback_interpreter: true # Optional. If true, converts structured feedback into natural language
feedback_schema: # Schema definition for structured feedback (required if using Structure Mode)
is_answer_correct: # Question identifier
type: "bool" # Type of expected feedback
description: "Is the answer provided correct?" # Description of the question
required: true # Whether required
need_more_detail: # Question identifier
type: "bool" # Type of expected feedback
description: "Does the answer need more detail?" # Description of the question
required: true # Whether required
optional_comment: # Question identifier
type: "str" # Type of expected feedback
description: "Any additional comments or suggestions" # Description of the question
required: false # Whether required
- agent_class: SearchAgent
agent_name: "Follow-up Research Agent"
agent_description: "Performs additional research based on human input."
super_agents:
- agent_class: FlowSuperAgent
agent_name: "Human in the Loop Advisor"
agent_description: "An advisor that incorporates human feedback into the research process."
config:
goal: "To conduct research, get human feedback, and then write a final report."
agent_list:
# Required. The list of agents to be added in the agent pool. Each agent listed here must be configured under `utility_agents` in the root of the project YAML file.
- agent_name: "Initial Research Agent" # Required.
next_step: # User design. Specifies next steps to run after this agent.
- "Human Reviewer"
- agent_name: "Human Reviewer" # Required.
next_step: # User design. Specifies next steps to run after this agent.
- "Follow-up Research Agent"
- agent_name: "Follow-up Research Agent" # Required. Exit agent that produces the summary output.
b. Free-form Mode¶
This configuration example supports the following scenario: a user requests a dinner plan. The system generates an initial plan, gathers user feedback through the HumanAgent, and refines the plan accordingly.
Just like in Structure Mode, the feedback collection method can be modified as needed. The following example shows how to configure feedback collection via the terminal. To use a custom input method instead, change `user_input_method: "Terminal"` to `user_input_method: "Custom"` and define the customized input method in the corresponding Python file.
orchestrator:
agent_list:
- agent_name: "Human in the Loop Dinner Planner"
utility_agents:
- agent_class: PlanningAgent
agent_name: "Dinner Planner Agent"
agent_description: "Generates a dinner plan."
- agent_class: HumanAgent
agent_name: "User Feedback Agent"
agent_description: "Asks for user feedback on the proposed dinner plan."
config:
user_input_method: "Terminal" # How the agent collects user feedback
wait_time: 300 # Maximum time in seconds to wait for user feedback
- agent_class: PlanningAgent
agent_name: "Dinner Planner Refinement Agent"
agent_description: "Refine the dinner plan with human feedback."
super_agents:
- agent_class: FlowSuperAgent
agent_name: "Human in the Loop Dinner Planner"
agent_description: "Plans a dinner with initial proposal and refinement after human feedback."
config:
goal: "To generate dinner plan, give an initial plan, get user feedback, and then write a final plan."
agent_list:
# Required. The list of agents to be added in the agent pool. Each agent listed here must be configured under `utility_agents` in the root of the project YAML file.
- agent_name: "Dinner Planner Agent" # Required.
next_step: # User design. Specifies next steps to run after this agent.
- "User Feedback Agent"
- agent_name: "User Feedback Agent" # Required.
next_step: # User design. Specifies next steps to run after this agent.
- "Dinner Planner Refinement Agent"
- agent_name: "Dinner Planner Refinement Agent" # Required. Exit agent that produces the summary output.
2. Python File¶
Now you can start developing your assistant using the following code:
Python Code for Collecting Feedback from Terminal¶
import asyncio
import os
from air import DistillerClient, login
from air.utils import async_print
from dotenv import load_dotenv
load_dotenv()
# Authenticate credentials
auth = login(
account=str(os.getenv("ACCOUNT")),
api_key=str(os.getenv("API_KEY")),
)
async def main():
"""
Runs the human-in-the-loop demo.
"""
client = DistillerClient()
project_name = "human_in_the_loop_project"
session_uuid = f"session_{os.getpid()}"
client.create_project(config_path="config.yaml", project=project_name)
async with client(project=project_name, uuid=session_uuid) as dc:
query = "What are the latest advancements in LLMs?"
responses = await dc.query(query=query)
print(f"--- Running Query: {query} ---")
async for response in responses:
await async_print(
f"Response from {response['role']}: {response['content']}"
)
await dc.reset_memory()
await async_print("--- Session Complete ---")
if __name__ == "__main__":
asyncio.run(main())
Python Code for Collecting Feedback from Customized Input Method¶
A customized input method can be defined to collect user feedback. The example below demonstrates a dummy implementation that reads feedback from a file. This can be easily adapted to suit real-world applications. The function is expected to return a string representing the user's feedback.
import asyncio
import os
from air import DistillerClient, login
from air.utils import async_print
from dotenv import load_dotenv
load_dotenv()
# Authenticate credentials
auth = login(
account=str(os.getenv("ACCOUNT")),
api_key=str(os.getenv("API_KEY")),
)
async def custom_input_method_from_file(query: str) -> str:
"""
Custom input method that reads user feedback from a file.
This function demonstrates a dummy implementation of a customized input method
for collecting human feedback. Given a query string, it asynchronously reads
the content from a local file named `custom_dummy_response.txt` and returns
the contents as a string.
Args:
query (str): The prompt or question to be presented to the user
(not used in this implementation but kept for consistency
with the input method interface).
Returns:
str: The content of the `custom_dummy_response.txt` file, or
"[No input found]" if the file does not exist.
"""
loop = asyncio.get_running_loop()
def read_file():
if not os.path.exists("custom_dummy_response.txt"):
return "[No input found]"
with open("custom_dummy_response.txt", "r", encoding="utf-8") as file:
return file.read()
return await loop.run_in_executor(None, read_file)
async def main():
"""
Runs the human-in-the-loop demo.
"""
client = DistillerClient()
project_name = "human_in_the_loop_project"
session_uuid = f"session_{os.getpid()}"
executor_dict = {"Human Reviewer": custom_input_method_from_file}
client.create_project(config_path="custom_example.yaml", project=project_name)
async with client(
project=project_name, uuid=session_uuid, executor_dict=executor_dict
) as dc:
query = "What are the latest advancements in LLMs?"
responses = await dc.query(query=query)
print(f"--- Running Query: {query} ---")
async for response in responses:
await async_print(
f"Response from {response['role']}: {response['content']}"
)
await dc.reset_memory()
await async_print("--- Session Complete ---")
if __name__ == "__main__":
asyncio.run(main())
Sample Outputs¶
a. Structure Mode Samples¶
--- Running Query: What are the latest advancements in LLMs? ---
Response from Human in the Loop Advisor: Search for the latest research papers and breakthroughs in Large Language Models (LLMs) within the past year, focusing on advancements in natural language understanding, generation capabilities, and applications.
Response from Initial Research Agent: Searching over Web Search
Response from Initial Research Agent:
# Recent Advancements in Large Language Models (LLMs)
## Natural Language Understanding
Recent research has made significant strides in enhancing the natural language understanding capabilities of LLMs. A study published in August 2023 [1] evaluated the confidence level process of LLMs, reflecting human self-assessment stages to guide accurate text interpretation and better judgment formation. This research highlights the importance of metacognitive stages in LLMs, enabling them to grasp semantics and nuances of human language more effectively.
[Output abbreviated]
Response from Human in the Loop Advisor: Please review the provided research on recent advancements in Large Language Models (LLMs) and provide feedback or additional information that can help guide further research, specifically highlighting areas that require more in-depth exploration or clarification.
We're conducting research on recent advancements in Large Language Models (LLMs) and would appreciate your feedback to guide further exploration. Please take a moment to review the provided information and answer the following questions:
Is the answer provided correct? (yes/no)
Does the answer need more detail? (yes/no)
Do you have any additional comments or suggestions about the answer?
> yes, no, add more discussions about the fairness concerns
Response from Human in the Loop Advisor: Perform additional research on the latest advancements in LLMs, focusing on fairness concerns and potential biases, to supplement the existing research findings.
Response from Follow-up Research Agent: Searching over Web Search
Response from Follow-up Research Agent:
# Fairness Concerns and Potential Biases in Large Language Models (LLMs)
Recent advancements in Large Language Models (LLMs) have led to significant improvements in natural language understanding and generation capabilities. However, these models also raise concerns about fairness and potential biases. This report aims to supplement existing research findings by exploring the latest developments in LLMs, with a focus on fairness concerns and potential biases.
## Fairness Notions and Bias Evaluation Metrics
Research has highlighted the importance of formulating fairness notions and bias evaluation metrics for LLMs [1]. A study published in 2024 proposed a taxonomy of fairness notions and bias evaluation metrics, categorizing them into three levels: embeddings, probabilities, and generated text [2]. This taxonomy provides a comprehensive framework for understanding and evaluating bias in LLMs.
[Output abbreviated]
Response from Human in the Loop Advisor: The original query "What are the latest advancements in LLMs?" has been completed. Please let me know if there is anything else that I can help you with.
--- Session Complete ---
b. Free-form Mode Samples¶
--- Running Query: What should I make for weekend dinner? ---
Response from Human in the Loop Dinner Planner: Generate a dinner plan based on popular weekend dinner options, considering a variety of cuisines and dietary preferences, to be used as an initial proposal for user feedback.
Response from Dinner Planner Agent: I'd love to help you plan a delicious dinner for the weekend. Here's an initial proposal that incorporates a variety of cuisines and dietary preferences. Feel free to give me your feedback, and we can adjust accordingly.
For a weekend dinner plan, I've considered a mix of popular options that cater to different tastes and dietary needs. Here are a few ideas:
**Option 1: Italian Night**
- Starter: Bruschetta with fresh tomatoes and basil (vegetarian, gluten-free option available)
- Main Course: Choose between classic spaghetti Bolognese, vegetarian lasagna, or gluten-free pasta with marinara sauce and roasted vegetables
- Dessert: Tiramisu or fresh fruit salad with whipped cream (dairy-free alternative available)
[Output abbreviated]
Response from Human in the Loop Dinner Planner: Based on the provided dinner plan with 5 options, please provide your feedback by selecting one of the options or describing any changes you would like to make to the plan.
Based on the provided dinner plan with 5 options, please provide your feedback by selecting one of the options or describing any changes you would like to make to the plan.
> add more diverse vegetables and fruits
Response from Human in the Loop Dinner Planner: Refine the initial dinner plan by incorporating diverse vegetables and fruits based on user feedback.
Response from Dinner Planner Refinement Agent: I'm glad we got some great feedback from our users on the initial dinner plan. Based on their suggestions, I think we can definitely incorporate a variety of colorful vegetables and fruits to make the meal more exciting and nutritious.
Let's start with the main course. Instead of just having a plain roasted chicken, we can add a medley of roasted vegetables like Brussels sprouts, sweet potatoes, and red onions. We can also toss in some fresh herbs like thyme and rosemary to give it a nice aroma.
For the sides, we can have a mixed greens salad with a variety of fruits like strawberries, blueberries, and pineapple. This will not only add natural sweetness but also provide a refreshing contrast to the rich flavors of the main course.
[Output abbreviated]
Response from Human in the Loop Dinner Planner: The original query "What should I make for weekend dinner?" has been completed. Please let me know if there is anything else that I can help you with.
--- Session Complete ---