Base Utility Agent

The UtilityAgent is a built-in agent in the AI Refinery SDK for general-purpose tasks. It uses a customizable magic_prompt to guide its behavior for simple use cases such as summarization or formatting.

Although the agent's behavior is defined by the user through its configuration, it works like every other built-in agent: it executes on the AI Refinery server and returns results to the SDK.

The UtilityAgent serves as a lightweight and adaptable tool — ideal for single-task prompts that require minimal structure but still benefit from memory access and customizable output formatting.

Workflow Overview

The workflow of the UtilityAgent is simple and adaptable:

  1. Magic Prompt Construction: The core of the UtilityAgent is the magic_prompt, a templated prompt string that guides the agent's behavior. At run time, placeholders in the template (such as {Query} in the quickstart below) are filled in to build the final prompt. The prompt is typically written to ask the agent to perform a specific action (e.g., "Please summarize the following content" or "Explain this concept in simple terms").

  2. Response Generation: The completed prompt is sent to the configured language model, and the resulting output is formatted according to the specified output_style (e.g., markdown, HTML, or conversational). Optional context, such as the chat history, environment variables, or the current date, can also be included, as in the fragment below.
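For example, enabling the optional output and context settings for an agent uses the following fields (all field names come from the template later on this page; the values shown are illustrative):

config:
  magic_prompt: <Your magic prompt string here>
  output_style: "markdown"   # Format the response as markdown
  contexts:
    - "date"                 # Give the agent today's date
    - "chat_history"         # Include recent rounds of the conversation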

This lightweight, configurable workflow makes the UtilityAgent a versatile component of the AI Refinery platform.

Usage

As with other built-in agents in the AI Refinery SDK, UtilityAgent can be included by adding its configuration to your project YAML file. At minimum, you need to specify the agent_class, agent_name, and a magic_prompt string.

Quickstart

To quickly add a UtilityAgent to your project, here's a simple YAML example that creates a summarization agent:

utility_agents:
  - agent_class: UtilityAgent
    agent_name: "Summarization Agent"  # Required. Name of the agent, referenced in the orchestrator.
    config:
      magic_prompt: |
        Please help me write a summary based on the user query.

        [ User Query ]
        {Query}

orchestrator:
  agent_list:
    - agent_name: "Summarization Agent"
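
The same pattern adapts to the other simple use cases mentioned earlier. For instance, an explanation agent needs only a different magic_prompt and, optionally, an output_style; the agent name and prompt wording below are illustrative:

utility_agents:
  - agent_class: UtilityAgent
    agent_name: "Explanation Agent"  # Illustrative name, referenced in the orchestrator below
    config:
      magic_prompt: |
        Please explain the concept in the user query in simple terms.

        [ User Query ]
        {Query}
      output_style: "conversational"  # Optional; return a plain conversational response

orchestrator:
  agent_list:
    - agent_name: "Explanation Agent"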

Template YAML Configuration of UtilityAgent

The UtilityAgent also supports additional settings. See the template YAML below for all available options:

utility_agents:
  # Required
  - agent_class: UtilityAgent  # Required.
    agent_name: <A name that you choose for this agent, e.g., "Utility Agent".> # Required
    agent_description: <Description of the agent>  # Optional
    config:
      # Required. The main prompt the agent uses to generate a response.
      magic_prompt: <Your magic prompt string here>

      # Optional. Configuration options for the agent.
      output_style: <"markdown" or "conversational" or "html">  # Optional
      contexts:  # Optional list of memory contexts
        - "date"
        - "chat_history"  # The chat history up to a certain number (configured using memory_config) of rounds
        - "env_variable"
        - "relevant_chat_history"  # The chat history that is relevant to the current query

      llm_config:  # Optional. The LLM the agent should use. Defaults to base_config.llm_config if not provided.
        model: <An LLM from the model catalog>
        temperature: <A temperature value for the LLM inference>  # Optional. Defaults to 0.5
        top_p: <Top-p sampling value>  # Optional. Defaults to 1
        max_tokens: <Maximum token limit>  # Optional. Defaults to 2048

      self_reflection_config:  # Optional. Configuration for self-reflection.
        self_reflection: <true or false>  # Enable or disable self-reflection. Defaults to false.
        max_attempts: <number>  # Max times the agent may reflect. Defaults to 2.
        response_selection_mode: <"best" | "aggregate" | "auto">  # Strategy for final output. Defaults to "auto".
        return_internal_reflection_msg: <true or false>  # Whether to return internal messages. Defaults to false.
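
Putting the options together, a fully specified UtilityAgent configuration might look like the following. All field names and defaults come from the template above; the model name is left as a placeholder for an entry from your model catalog, and the remaining values are illustrative:

utility_agents:
  - agent_class: UtilityAgent
    agent_name: "Utility Agent"
    agent_description: "General-purpose agent for summarization and formatting tasks."  # Illustrative description
    config:
      magic_prompt: |
        Please help me write a summary based on the user query.

        [ User Query ]
        {Query}
      output_style: "markdown"
      contexts:
        - "date"
        - "relevant_chat_history"
      llm_config:
        model: <An LLM from the model catalog>  # Placeholder; choose a model from your catalog
        temperature: 0.5
        top_p: 1
        max_tokens: 2048
      self_reflection_config:
        self_reflection: true
        max_attempts: 2
        response_selection_mode: "best"
        return_internal_reflection_msg: false

orchestrator:
  agent_list:
    - agent_name: "Utility Agent"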