Agent Runtime

LLM mediation, recursive execution, and lifecycle APIs that keep Achilles Agents' plans on track.

LLMAgent

LLMAgent centralises all communication with language models. It accepts an invoker strategy, which may route requests to OpenAI, Anthropic, a test stub, or any custom provider.

Constructor

new LLMAgent({
    name: 'PlanningLLM',
    invokerStrategy: async ({ prompt, context, history, mode }) => {
        // Dispatch to provider of your choice.
        return provider.invoke(prompt, { context, history, mode });
    },
});

Key Methods

The agent enforces JSON parsing where required, logs debug traces when LLMAgentClient_DEBUG=true, and supports request cancellation by cancelling the underlying invoker promise.
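For example, a test stub can be wired in through the same constructor shown above. The snippet below is a minimal sketch rather than a shipped helper: it returns canned JSON so downstream parsing succeeds, and enables debug traces via the documented environment variable.

process.env.LLMAgentClient_DEBUG = 'true';

const stubAgent = new LLMAgent({
    name: 'StubLLM',
    invokerStrategy: async ({ prompt, context, history, mode }) => {
        // Deterministic canned response; returning a JSON string keeps the
        // agent's JSON parsing path happy in tests.
        return JSON.stringify({
            echo: prompt,
            mode,
            hasContext: Boolean(context),
            turns: history?.length ?? 0,
        });
    },
});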

LLMAgent Sessions

Beyond single-call completions, LLMAgent can drive longer-lived agentic sessions that maintain context across multiple prompts while calling tools repeatedly.

These session APIs are useful when you want the LLM to iteratively choose tools, revise plans, and accumulate state rather than recomputing everything from a single natural language prompt.
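The full session API is not reproduced here; as an orientation-only sketch, a session loop conceptually looks like the code below, where startSession, send, and the tool objects are hypothetical placeholder names rather than documented LLMAgent methods.

// Hypothetical placeholders: 'startSession', 'send', and the tool objects are
// illustrative names only, not confirmed LLMAgent methods. The point is the
// shape of a session: context accumulates across turns and tools are chosen
// iteratively by the model instead of being recomputed from one prompt.
const session = llmAgent.startSession({ tools: [searchTool, writeFileTool] });

let reply = await session.send('Plan the billing-service migration.');
while (reply.toolCall) {
    const toolResult = await reply.toolCall.run();   // execute the tool the model chose
    reply = await session.send({ toolResult });      // feed the result back into the session
}
console.log(reply.text);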

RecursiveSkilledAgent

RecursiveSkilledAgent supervises complex, multi-step plans. It can execute skills directly, run orchestration scripts, or call itself recursively when LightSOPLang commands reference other skills.

Constructor

const recursiveAgent = new RecursiveSkilledAgent({
    llmAgent,              // optional LLMAgent instance (auto-created if omitted)
    llmAgentOptions: {},   // forwarded when LLMAgent is auto-created
    startDir: process.cwd(),
    searchUpwards: true,    // set false to scan only startDir + additionalSkillRoots (no parent directories)
    skillFilter,           // optional function to include/exclude skills
    dbAdapter,             // for DBTable skills
    promptReader,          // interactive input reader
    onProcessingBegin, onProcessingProgress, onProcessingEnd,
    additionalSkillRoots: [], // extra absolute/relative dirs to scan for .AchillesSkills alongside startDir
});

Lifecycle callbacks (onProcessingBegin, onProcessingProgress, onProcessingEnd) fire only for the outermost execution and are safe places to emit logs or UI signals; errors they throw are caught and logged as warnings.
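Callbacks that do nothing but log are a reasonable starting point. The payload passed to each callback is not documented in this section, so the sketch below treats it as opaque.

const loggingAgent = new RecursiveSkilledAgent({
    startDir: process.cwd(),
    // Fire only for the outermost execution; exceptions here are caught and logged as warnings.
    onProcessingBegin: (info) => console.log('[agent] processing started', info),
    onProcessingProgress: (info) => console.log('[agent] progress', info),
    onProcessingEnd: (info) => console.log('[agent] processing finished', info),
});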

Execution APIs

Execution Flow

  1. Discovers skills under .AchillesSkills/ (and additional roots) at construction time.
  2. If no skillName is provided, tries a FlexSearch-based orchestrator match first, then falls back to LLM selection across skills when no match is found.
  3. Injects the task description into a skill's default argument (or input) when arguments are missing.
  4. Routes execution to the appropriate subsystem (claude/code/interactive/mcp/orchestrator/dbtable) and forwards any args/promptReader overrides.
  5. Returns the subsystem's result along with the chosen review mode.
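In practice this means a single natural-language task is usually enough. The calls below are an orientation-only sketch: execute is a placeholder name for the execution entry point and 'incident-report' is a hypothetical skill, so check the Execution APIs listing for the real signatures.

// Placeholder method and skill names; see Execution APIs for the real entry points.
const matched = await recursiveAgent.execute({
    task: 'Summarise open incidents and file a report',   // no skillName: orchestrator match, then LLM selection
});

const named = await recursiveAgent.execute({
    skillName: 'incident-report',                          // hypothetical skill name
    args: { input: 'last 24 hours' },                      // explicit args skip the default-argument injection
});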

Code Skills Handling

When RecursiveSkilledAgent discovers a code skill (cskill.md), it automatically invokes code generation through a specialized process:

  1. Detection: During skill registration, detects skillRecord.type === 'cskill'
  2. Invoke generate-code-skill: Calls generateCode(skillRecord, llmAgent, logger) from the hardcoded generate-code-skill.mjs module
  3. Timestamp comparison: The generator compares file timestamps to determine if regeneration is needed (see the sketch after this list):
    • Finds the newest spec file (in specs/ folder)
    • Finds the oldest generated file (in src/ folder)
    • If src/ doesn't exist or newest spec is newer than oldest src → regenerate
    • Otherwise skip regeneration (fast path)
  4. Single-step LLM generation:
    • Recursively reads all .md files from specs/
    • Extracts Input Format, Output Format, Constraints, and Examples sections from cskill.md
    • Combines everything into one comprehensive prompt
    • Sends to LLM with mode: 'deep' and responseShape: 'text'
  5. Parse multi-file response: The LLM returns markdown with ## file-path: path/to/file.mjs headers, which are parsed to extract individual files (a parsing sketch appears below)
  6. Write to disk: Deletes the existing src/ folder, creates a new one, and writes all generated .mjs files
  7. Additional preparation: After code generation, calls subsystem.prepareSkill() for any subsystem-specific setup
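The regeneration rule in step 3 can be approximated with the sketch below. This is not the actual generate-code-skill.mjs source, just an illustration using standard Node fs APIs; it scans each folder flatly for brevity, whereas the real generator reads specs recursively.

import { existsSync, readdirSync, statSync } from 'node:fs';
import { join } from 'node:path';

// Regenerate when src/ is missing or the newest spec file is newer than the
// oldest generated file (flat scan for brevity).
function needsRegeneration(skillDir) {
    const specsDir = join(skillDir, 'specs');
    const srcDir = join(skillDir, 'src');
    if (!existsSync(srcDir)) return true;

    const mtimes = (dir) => readdirSync(dir).map((name) => statSync(join(dir, name)).mtimeMs);

    const newestSpec = Math.max(...mtimes(specsDir));
    const oldestSrc = Math.min(...mtimes(srcDir));
    return newestSpec > oldestSrc;
}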

This design keeps code generation logic centralized in generate-code-skill.mjs, making it reusable for all code skills while allowing the agent to handle discovery and invocation automatically.
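The multi-file convention from step 5 is simple enough to show in miniature. The parser below is a sketch, not the library's implementation; it keys files on the ## file-path: headers and omits code-fence stripping for brevity.

// Sketch of parsing the '## file-path: <path>' convention from step 5.
function parseMultiFileResponse(markdown) {
    const files = {};
    let currentPath = null;
    let buffer = [];
    for (const line of markdown.split('\n')) {
        const match = line.match(/^## file-path:\s*(.+)$/);
        if (match) {
            if (currentPath) files[currentPath] = buffer.join('\n').trim();
            currentPath = match[1].trim();
            buffer = [];
        } else if (currentPath) {
            buffer.push(line);
        }
    }
    if (currentPath) files[currentPath] = buffer.join('\n').trim();
    return files;   // e.g. { 'src/index.mjs': '...generated code...' }
}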

Review Modes

Error Handling & Cancellation

Both LLMAgent and RecursiveSkilledAgent propagate structured errors. For orchestration flows: