Agent Runtime

LLM mediation, recursive execution, and lifecycle APIs that keep Achilles Agents' plans on track.

LLMAgent

LLMAgent centralises all communication with language models. It accepts an invoker strategy, which may route requests to OpenAI, Anthropic, a test stub, or any custom provider.

Constructor

new LLMAgent({
    name: 'PlanningLLM',
    invokerStrategy: async ({ prompt, context, history, mode }) => {
        // Dispatch to provider of your choice.
        return provider.invoke(prompt, { context, history, mode });
    },
});

Key Methods

The agent enforces JSON parsing where required, logs debug traces when LLMAgentClient_DEBUG=true, and supports request cancellation by cancelling the underlying invoker promise.
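The exact cancellation hook is internal to LLMAgent, but the idea can be sketched with a standard AbortController. This is an illustrative pattern only: the signal parameter and makeCancellableInvoker helper are assumptions, not part of the documented invoker contract.

```javascript
// Hypothetical sketch: an invoker strategy whose promise can be cancelled via
// an AbortSignal. A real implementation would also pass the signal down to the
// provider's HTTP client so the network request itself is aborted.
function makeCancellableInvoker(provider) {
    return async ({ prompt, context, history, mode, signal }) => {
        if (signal?.aborted) throw new Error('Request cancelled');
        return new Promise((resolve, reject) => {
            signal?.addEventListener('abort',
                () => reject(new Error('Request cancelled')), { once: true });
            provider.invoke(prompt, { context, history, mode })
                .then(resolve, reject);
        });
    };
}
```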

LLMAgent Sessions

Beyond single-call completions, LLMAgent can drive longer-lived agentic sessions that maintain context across multiple prompts while calling tools repeatedly. For a detailed breakdown of session types, see the Agentic Sessions documentation.

These session APIs are useful when you want the LLM to iteratively choose tools, revise plans, and accumulate state rather than recomputing everything from a single natural language prompt.
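The loop behind such a session can be sketched as follows. The method names and message shapes here are hypothetical, standing in for whatever the real session API exposes; the point is only that history accumulates across turns and tool results are fed back as new prompts.

```javascript
// Illustrative sketch of an agentic session loop (names are assumptions, not
// the library's API). Each turn the model either requests a tool, whose result
// is fed back in, or produces a final answer.
async function runSession(invoke, tools, firstPrompt, maxTurns = 5) {
    const history = [];
    let prompt = firstPrompt;
    for (let turn = 0; turn < maxTurns; turn++) {
        const reply = await invoke({ prompt, history, mode: 'session' });
        history.push({ prompt, reply }); // accumulate state across turns
        if (reply.tool && tools[reply.tool]) {
            // The model asked for a tool; run it and feed the result back.
            const result = tools[reply.tool](reply.args);
            prompt = `Tool ${reply.tool} returned: ${JSON.stringify(result)}`;
            continue;
        }
        return reply; // no further tool call requested: final answer
    }
    throw new Error('Session exceeded maxTurns without a final answer');
}
```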

RecursiveSkilledAgent

RecursiveSkilledAgent supervises complex, multi-step plans. It can execute skills directly, run orchestration scripts, or call itself recursively when LightSOPLang commands reference other skills.

Constructor

const recursiveAgent = new RecursiveSkilledAgent({
    llmAgent,              // optional LLMAgent instance (auto-created if omitted)
    llmAgentOptions: {},   // forwarded when LLMAgent is auto-created
    startDir: process.cwd(),
    searchUpwards: true,    // set false to scan only startDir + additionalSkillRoots (no parent directories)
    skillFilter,           // optional function to include/exclude skills
    dbAdapter,             // for DBTable skills
    onProcessingBegin, onProcessingProgress, onProcessingEnd,
    additionalSkillRoots: [], // extra absolute/relative dirs to scan for .AchillesSkills alongside startDir
    exposeInternalSkills: false, // when true, registers internal helper skills for direct invocation
});

Lifecycle callbacks (onProcessingBegin, onProcessingProgress, onProcessingEnd) fire only for the outermost execution and are safe places to emit logs or UI signals; errors they throw are caught and logged as warnings.
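The contract above (callbacks may throw, but execution must not break) amounts to a guarded dispatch like the sketch below. The fireLifecycleHook helper is illustrative, not the library's internal implementation.

```javascript
// Sketch of the documented callback contract: anything a lifecycle hook throws
// is downgraded to a warning instead of propagating into the execution.
function fireLifecycleHook(hook, payload, logger = console) {
    if (typeof hook !== 'function') return; // hooks are optional
    try {
        hook(payload);
    } catch (err) {
        // Per the docs, callback errors are caught and logged as warnings.
        logger.warn(`lifecycle callback failed: ${err.message}`);
    }
}
```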

Execution APIs

Execution Flow

  1. Discovers skills under .AchillesSkills/ (and additional roots) at construction time.
  2. If no skillName is provided, first attempts a FlexSearch-based orchestrator match, falling back to LLM selection across skills when no match is found.
  3. Injects the task description into a skill's default argument (or input) when arguments are missing.
  4. Routes execution to the appropriate subsystem (claude/code/mcp/orchestrator/dbtable) and forwards the prompt text.
  5. Returns the subsystem's result along with the chosen review mode.
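Step 3 of the flow can be sketched as a small pure function. The defaultArgument field name is an assumption made for illustration; the real skill record shape may differ.

```javascript
// Hypothetical illustration of step 3: when the caller supplies no arguments,
// the task description is injected into the skill's default argument, falling
// back to `input`. Explicit caller arguments are passed through untouched.
function injectTaskIntoArgs(skill, task, args = {}) {
    if (Object.keys(args).length > 0) return args; // caller supplied arguments
    const key = skill.defaultArgument || 'input';
    return { [key]: task };
}
```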

Code Skills Handling

When RecursiveSkilledAgent discovers a code skill (cskill.md), it automatically invokes code generation through the shared mirror code generator (exported as generateMirrorCode):

  1. Detection: During skill registration, detects skillRecord.type === 'cskill'
  2. Invoke mirror generator: Calls generateMirrorCode(skillRecord.skillDir, llmAgent, logger)
  3. Up-to-date check: Compares the newest spec in specs/ with the oldest expected target file derived from those specs; missing targets trigger regeneration. There is no src/ convention.
  4. Single-step LLM generation:
    • Recursively reads all .md files from specs/
    • Builds one prompt from specs (the generator no longer depends on cskill.md sections)
    • Sends to the LLM with mode: 'deep' and responseShape: 'text'
  5. Parse multi-file response: Expects markdown with ## file-path: path/to/file.mjs headers and normalises paths (strips ./ or skill-name prefixes; prevents traversal).
  6. Write to disk: Writes generated files directly under the skill directory, mirroring the specs/ paths (e.g., specs/utils/foo.mjs.md → utils/foo.mjs).
  7. Additional preparation: After code generation, calls subsystem.prepareSkill() for any subsystem-specific setup.

This design keeps code generation centralized in the mirror code generator while allowing the agent to handle discovery and invocation automatically.
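Steps 5 and 6 can be sketched with two small functions: one that parses the multi-file markdown response into path/content pairs (stripping a leading ./ and rejecting traversal), and one that mirrors a spec path onto its target. The exact normalisation rules inside generateMirrorCode may differ; this is illustrative only.

```javascript
// Sketch of step 5: split a markdown response on "## file-path:" headers into
// a { path: content } map, normalising each path.
function parseMultiFileResponse(markdown) {
    const files = {};
    const parts = markdown.split(/^## file-path:\s*(.+)$/m);
    for (let i = 1; i < parts.length; i += 2) {
        const p = parts[i].trim().replace(/^\.\//, '');
        if (p.split('/').includes('..')) continue; // prevent path traversal
        files[p] = parts[i + 1].trim();
    }
    return files;
}

// Sketch of step 6: mirror a spec path onto its generated target, e.g.
// specs/utils/foo.mjs.md -> utils/foo.mjs.
function specToTarget(specPath) {
    return specPath.replace(/^specs\//, '').replace(/\.md$/, '');
}
```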

Review Modes

Internal Skills

When exposeInternalSkills: true is set, the agent registers built-in helper skills (such as mirror-code-generator) for direct invocation. These internal skills are implemented as orchestrator module skills and can be called via executeWithReviewMode() with an explicit skillName.

const agent = new RecursiveSkilledAgent({
    llmAgent,
    exposeInternalSkills: true,
});

// Invoke mirror-code-generator directly
const result = await agent.executeWithReviewMode(
    '/path/to/skill-with-specs',
    { skillName: 'mirror-code-generator' },
    'none'
);

Error Handling & Cancellation

Both LLMAgent and RecursiveSkilledAgent propagate structured errors. For orchestration flows: