Advanced Features

Burro LLM Settings

Learn how to configure LLM providers (Google, OpenAI, Anthropic) for your agents, how the secret resolution hierarchy works, and how tasks are mapped to agent brains.

Burros are the "brains" of your agency. For an agent to function, it needs to be specialized with an LLM provider and model. This page explains how to configure these settings and how the system resolves them at runtime.

1. Specializing an Agent (Expert Brain)

When you first register a Burro, it is a "generalist" with no specific AI provider configured. You specialize it during the onboarding wizard or via the Burro Settings dialog.

  • Provider: Choose Google (Gemini), OpenAI (GPT), or Anthropic (Claude).
  • Model: Specify the model name (e.g., `gemini-1.5-pro`, `gpt-4o`).
  • API Key: Provide the credential required to access the provider.
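
Taken together, a fully specialized brain boils down to these three values. The sketch below shows the shape of that configuration as a plain dictionary; the field names are illustrative assumptions, not the actual Burros.AI schema:

```python
# Illustrative shape of a specialized Burro's brain settings (field names are assumptions).
burro_settings = {
    "provider": "google",              # one of: google, openai, anthropic
    "model": "gemini-1.5-pro",         # provider-specific model ID
    "api_key": "${GEMINI_PROD_KEY}",   # a vault reference, resolved at runtime (see below)
}
```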

2. Secret Resolution Hierarchy

Burros.AI uses cascading resolution logic to determine which LLM settings and API keys to use. This allows global defaults to be overridden at more granular levels.

The resolution order is:

Burro Vault (Agent) > Managed Vault (Corral) > Managed Vault (Organization) > Host Environment (.env) > Blueprint Defaults

| Source | Description |
| --- | --- |
| Burro Vault | Settings saved directly to a specific agent's private vault. Highest priority. |
| Corral Vault | Shared secrets available to all agents in a specific department/corral. |
| Org Vault | Global secrets shared across your entire organization. |
| Host Environment | Variables set directly on the machine running the Burro agent (e.g., in a `.env` file). |
| Blueprint Defaults | Hardcoded defaults defined in the mission's Playbook/Blueprint. |
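
The cascade is easiest to picture as a first-match lookup over an ordered list of sources. The following Python sketch is a minimal illustration under that assumption; the function and vault names are hypothetical, not the actual Burros.AI internals:

```python
import os
from typing import Optional

def resolve_setting(key: str, sources: list[dict]) -> Optional[str]:
    """Return the first value found, walking sources from highest to lowest priority."""
    for source in sources:
        if source.get(key) is not None:
            return source[key]
    return None

# Priority order mirrors the table above: agent vault first, blueprint defaults last.
burro_vault = {}                                 # nothing saved on this agent
corral_vault = {"OPENAI_API_KEY": "sk-corral"}   # shared across the corral
org_vault = {"OPENAI_API_KEY": "sk-org"}         # organization-wide
host_env = dict(os.environ)                      # host machine / .env variables
blueprint_defaults = {"MODEL_NAME": "gpt-4o"}

sources = [burro_vault, corral_vault, org_vault, host_env, blueprint_defaults]
print(resolve_setting("OPENAI_API_KEY", sources))  # -> "sk-corral": the corral vault wins
```

Because the walk stops at the first hit, a key saved in the Burro Vault always shadows the same key defined anywhere lower in the hierarchy.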

#### Using Vault References

In the API Key field, you can use the syntax `${SECRET_NAME}`. The system will automatically look up `SECRET_NAME` in the Org or Corral vaults at runtime. This is the recommended way to manage keys securely without duplicating them across agents.
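
A minimal sketch of how such a reference might be resolved at runtime follows; the regex and function name are assumptions for illustration, not the real resolver:

```python
import re

# Matches a whole-field vault reference such as ${OPENAI_PROD_KEY}.
VAULT_REF = re.compile(r"^\$\{([A-Za-z0-9_]+)\}$")

def resolve_api_key(field_value: str, vault: dict) -> str:
    """Look up ${...} references in the vault; pass literal keys through unchanged."""
    match = VAULT_REF.match(field_value.strip())
    if match:
        return vault[match.group(1)]  # raises KeyError if the secret is missing
    return field_value

vault = {"OPENAI_PROD_KEY": "sk-example"}
print(resolve_api_key("${OPENAI_PROD_KEY}", vault))  # -> "sk-example"
print(resolve_api_key("sk-literal", vault))          # -> "sk-literal" (used verbatim)
```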

3. Agentic Task Resolution

When a mission dispatches a task (e.g., "Write the API implementation"), the system must decide which Burro should execute it. This process follows these steps (a code sketch follows the list):

  1. Role Matching: The mission looks for any online Burro that has the required Role (e.g., `developer`).
  2. Capability Check: It verifies the Burro has the necessary technical capabilities (e.g., `file_editor`).
  3. Brain Injection: Once a Burro picks up the task, the portal injects the resolved LLM configuration (Provider, Model, and API Key) into the task's execution context.
  4. Local Execution: The Burro uses these settings to initialize its local AI engine and begins executing the instruction.
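
The sketch below models those four steps end to end. The class and function names (`Burro`, `dispatch`) are illustrative assumptions, not the portal's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Burro:
    name: str
    role: str
    capabilities: set[str]
    online: bool = True
    llm_config: dict = field(default_factory=dict)  # filled in at step 3

def dispatch(task_role: str, required_caps: set[str], fleet: list[Burro], llm_config: dict):
    for burro in fleet:
        if not burro.online or burro.role != task_role:   # 1. role matching
            continue
        if not required_caps <= burro.capabilities:       # 2. capability check
            continue
        burro.llm_config = llm_config                     # 3. brain injection
        return burro                                      # 4. local execution begins with this config
    return None

fleet = [Burro("dev-1", "developer", {"file_editor", "shell"})]
picked = dispatch("developer", {"file_editor"}, fleet,
                  {"provider": "openai", "model": "gpt-4o", "api_key": "${OPENAI_PROD_KEY}"})
print(picked.name if picked else "no eligible Burro")  # -> "dev-1"
```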

4. Configuration via Environment Variables

For advanced deployments, you can skip the portal UI and configure your agent's brain directly on the host machine using these environment variables (an example `.env` file appears at the end of this section):

  • `LLM_PROVIDER`: `google`, `openai`, or `anthropic`.
  • `MODEL_NAME`: The specific model ID.
  • `GEMINI_API_KEY`: Your Google AI Studio or Vertex AI key.
  • `OPENAI_API_KEY`: Your OpenAI API key.
  • `ANTHROPIC_API_KEY`: Your Anthropic API key.

Note: If a setting is provided via the Portal UI (Burro Vault), it will override any local environment variables on the host machine.
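
For example, a `.env` file for a Google-backed agent might look like this (the key value is a placeholder; only the `*_API_KEY` matching your chosen provider is required):

```bash
# Example .env on the agent host (hypothetical values)
LLM_PROVIDER=google
MODEL_NAME=gemini-1.5-pro
GEMINI_API_KEY=your-google-ai-studio-key
```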