OpenRemedy

Agents


LLM-backed entities that run the incident pipeline. Each agent has a trust level, a role set, an LLM provider/model binding, and a system prompt.

List

Route: /agents. Role gating: read for all; create, edit, and delete require admin.

Card grid with name, enabled toggle, LLM provider/model, trust level, assigned roles, and a token-usage progress bar against the configured budget.

Create

Modal collects:

  • Name and description.
  • LLM provider (openai / deepseek / anthropic / kimi / llamacpp / vllm) and model.
  • Trust level: autonomous, supervised, or manual. Determines the approval gate; see Trust gate below.
  • Role checkboxes — which pipeline stages this agent can handle: triage, diagnose, validate, execute, review.
  • Patrol interval (minutes).
  • Monthly token budget.
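The fields above can be validated server-side before an agent is created. The sketch below is illustrative only: the function name, payload keys, and error strings are assumptions, not the actual OpenRemedy schema; the allowed-value sets come from this page.

```python
# Illustrative validation sketch for the create-agent form.
# Field names (name, provider, trust_level, roles, patrol_interval_minutes,
# token_budget) are hypothetical; the value sets match the documented options.
ALLOWED_PROVIDERS = {"openai", "deepseek", "anthropic", "kimi", "llamacpp", "vllm"}
TRUST_LEVELS = {"autonomous", "supervised", "manual"}
ROLES = {"triage", "diagnose", "validate", "execute", "review"}

def validate_agent(payload: dict) -> list[str]:
    """Return a list of validation errors; empty means the payload is acceptable."""
    errors = []
    if not payload.get("name"):
        errors.append("name is required")
    if payload.get("provider") not in ALLOWED_PROVIDERS:
        errors.append("unknown provider")
    if payload.get("trust_level") not in TRUST_LEVELS:
        errors.append("unknown trust level")
    if not set(payload.get("roles", [])) <= ROLES:
        errors.append("unknown role")
    if payload.get("patrol_interval_minutes", 1) <= 0:
        errors.append("patrol interval must be positive")
    if payload.get("token_budget", 1) <= 0:
        errors.append("token budget must be positive")
    return errors
```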

Detail

Route: /agents/{id}. Role gating: admin.

Configuration panel with the following sections.

Identity

Name, description.

LLM

Provider, model, a Refresh Models button (pulls the live model list from the provider), temperature, max tokens, and a reasoning toggle for reasoning-capable models.

Pipeline

Trust level, role checkboxes, patrol interval, token budget.

System prompt

Free-form custom prompt prepended to every stage. If set, this overrides the default Jinja templates for this agent.

Skills

Markdown knowledge modules currently assigned to the agent. Remove button per row plus a dropdown to add more.

Stats (read-only)

  • Incidents resolved.
  • Currently active incidents.
  • Tokens used this period.
  • Budget remaining.

Trust gate

Trust level interacts with each recipe's risk level via the should_request_approval server-side gate.

Trust × risk    low        medium     high
autonomous      auto       approval   approval
supervised      approval   approval   approval
manual          approval   approval   approval

The LLM cannot self-approve. Risk classification on a recipe is the operator's responsibility, not the agent's.
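The table reduces to a single rule: only an autonomous agent running a low-risk recipe skips approval. A minimal sketch of the gate, assuming trust and risk arrive as lowercase strings (the function name matches the one this page cites; the signature is an assumption):

```python
def should_request_approval(trust_level: str, risk_level: str) -> bool:
    """Server-side gate from the trust × risk table: every combination
    requires human approval except autonomous trust with low risk."""
    return not (trust_level == "autonomous" and risk_level == "low")
```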

Related routes

  • skills — assignable knowledge modules
  • incidents — agents are assigned to incidents
  • settings — global LLM provider configuration and prompt templates