Author Agent¶
The AuthorAgent is a built-in utility agent within the AI Refinery SDK, specifically designed to format and refine publishable content based on the information you have collected so far. For instance, if you request, "Hey AuthorAgent, write me a good draft," the agent, leveraging the shared memory of all other agents, will generate a draft report.
Workflow Overview¶
The workflow of AuthorAgent comprises three key components:
- Leading Questions: Leading questions are pairs of questions and prompts that you specify within the AuthorAgent configuration (see the next section below). These questions outline the content structure for your draft. By guiding the AuthorAgent with these questions, you provide a clear framework for the draft.
- Memory Retrieval: The AI Refinery service maintains multiple memory modules that are accessible to various agents in your project. Using the leading questions as a guide, the AuthorAgent retrieves pertinent information from these shared memory modules and uses it to generate the draft.
- Storing the Response: Once the draft is generated, it is stored in memory. If more information is gathered later (e.g., through the SearchAgent) and you request a new draft, the AuthorAgent will retrieve its previous response, along with all other relevant information from memory, as context. This ensures that the draft is enriched with both new and previously stored relevant information.
By following this workflow, the AuthorAgent efficiently produces well-structured, refined drafts tailored to the information and configurations provided.
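The three components above can be sketched as a short Python loop. Note that every class and function name here (`SharedMemory`, `author_draft`, the `generate` callable) is a hypothetical stand-in used purely to illustrate the flow; none of them are actual AI Refinery SDK APIs.

```python
# Illustrative sketch of the AuthorAgent workflow. All names below are
# hypothetical stand-ins, NOT real AI Refinery SDK calls.

class SharedMemory:
    """Toy stand-in for the shared memory modules maintained by the service."""
    def __init__(self):
        self.store_ = {}

    def retrieve(self, query):
        # A real implementation would search across agents' shared memory.
        return f"facts relevant to: {query}"

    def store(self, key, value):
        self.store_[key] = value


def author_draft(leading_questions, memory, generate):
    sections = []
    for item in leading_questions:
        # 1. Each leading question guides retrieval from shared memory.
        context = memory.retrieve(item["question"])
        # 2. The question's prompt plus retrieved context yield one section.
        sections.append(generate(item["prompt"], context))
    draft = "\n\n".join(sections)
    # 3. The finished draft is stored back under the memory attribute key,
    #    so a future request can build on it.
    memory.store("plan", draft)
    return draft
```

On a later request, the stored `"plan"` entry would be retrieved alongside any newly gathered information, which is how redrafting incorporates both old and new context.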
Usage¶
As a built-in utility agent in the AI Refinery SDK, the AuthorAgent can be easily integrated into your project by adding the necessary configurations to your project YAML file. Specifically, ensure the following configurations are included:
- Add a utility agent with agent_class: AuthorAgent under utility_agents.
- Ensure the agent_name you chose for your AuthorAgent is listed in the agent_list under orchestrator.
Quickstart¶
To quickly set up a project with an AuthorAgent, use the following YAML configuration. You can add more agents and/or leading questions as needed. Refer to the next section for a detailed overview of configurable options for AuthorAgent.
utility_agents:
- agent_class: AuthorAgent
  agent_name: "My Author Agent" # Required. A name that you choose for your author agent. This needs to be listed under the orchestrator.
  config:
    memory_attribute_key: "plan" # Required. The author agent will save its output under this memory attribute key.
    leading_questions:
      # Required. A list of <question, prompt> pairs serving as the outline of the draft to be generated.
      - question: "What is the name of the project?" # Example question 1
        prompt: "Project name. This is usually specified by the background information." # Corresponding prompt for example question 1
      - question: "Who is the audience?" # Example question 2
        prompt: "Who exactly are we targeting? Detail the specific demographics, industries, or roles we aim to reach, emphasizing how our project aligns with their interests and needs." # Corresponding prompt for example question 2

orchestrator:
  agent_list:
    - agent_name: "My Author Agent" # The name you chose for your AuthorAgent above.
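A common mistake with this setup is choosing an agent_name under utility_agents that never gets listed in the orchestrator's agent_list. A quick sanity check of the loaded configuration can catch this; the helper below is purely illustrative and not part of the SDK (the config is shown as an already-parsed Python dict to keep the sketch dependency-free).

```python
# Illustrative sanity check (not an SDK feature): every utility agent's
# agent_name should also appear in the orchestrator's agent_list.

config = {
    "utility_agents": [
        {"agent_class": "AuthorAgent", "agent_name": "My Author Agent"},
    ],
    "orchestrator": {
        "agent_list": [{"agent_name": "My Author Agent"}],
    },
}

def missing_agents(cfg):
    """Return the names of utility agents not registered with the orchestrator."""
    listed = {a["agent_name"] for a in cfg["orchestrator"]["agent_list"]}
    return [a["agent_name"] for a in cfg["utility_agents"]
            if a["agent_name"] not in listed]
```

An empty result from `missing_agents(config)` means every agent is registered; any names it returns need to be added to the orchestrator's agent_list.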
Template YAML Configuration of AuthorAgent¶
In addition to the configurations shown in the example above, the AuthorAgent supports several other configurable options. See the template YAML configuration below for all available settings.
agent_class: AuthorAgent
agent_name: <name of the agent> # A name that you choose for your author agent
config:
  memory_attribute_key: <must be `plan`> # Required. The author agent will save its output under this `memory_attribute_key`. More options for the `memory_attribute_key` will be added in the future.
  title: <title of the generated output> # Optional. The title of the generated draft.
  section_by_section: <True or False> # Optional. Whether to write the response section by section, i.e., separated by each leading question.
  leading_questions:
    # Required. A list of <question, prompt> pairs serving as the outline of the draft to be generated.
    - question: "<Question 1>" # Example question 1
      prompt: "<Prompt 1>" # Corresponding prompt for example question 1
    - question: "<Question 2>" # Example question 2
      prompt: "<Prompt 2>" # Corresponding prompt for example question 2
  output_style: <"markdown" or "conversational" or "html"> # Optional field
  contexts: # Optional field
    - "date"
    - "chat_history"
    - "chat_summary"
  llm_config:
    # Optional. Customized LLM config (if you want the author agent to use a different LLM than the one in your base config)
    model: <model_name>