Agent Runtime
LLM mediation, recursive execution, and lifecycle APIs that keep Achilles Agents' plans on track.
LLMAgent
LLMAgent centralises all communication with language models. It accepts an invoker strategy,
which may route requests to OpenAI, Anthropic, a test stub, or any custom provider.
Constructor
new LLMAgent({
name: 'PlanningLLM',
invokerStrategy: async ({ prompt, context, history, mode }) => {
// Dispatch to provider of your choice.
return provider.invoke(prompt, { context, history, mode });
},
});
Key Methods
- complete({ prompt, history = [], context = {}, mode = 'fast' }): Returns the raw model output string.
- executePrompt(promptText, { responseShape, mode, globalMemory, userMemory, sessionMemory }): Builds a task prompt with optional memory blocks and returns plain text or parsed JSON/code when a responseShape is provided.
- interpretMessage(message, { intents }): Classifies user replies into accept, cancel, update, or ideas, using heuristics first and falling back to the model.
- parseMarkdownKeyValues(markdown): Converts LLM output into key-value maps (utility for parsing structured text).
- classifyMessage(message): Lightweight intent classifier when a full completion is not needed.
The agent enforces JSON parsing where required, logs debug traces when LLMAgentClient_DEBUG=true, and supports request
cancellation by cancelling the underlying invoker promise.
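The exact markdown shape that parseMarkdownKeyValues accepts is not spelled out above; as a rough sketch, a parser of this kind might turn key-value lines into a plain object. The accepted line formats below ("**Key**: value" and "Key: value") are assumptions for illustration, not the documented behaviour:

```javascript
// Hypothetical sketch of a markdown key-value parser, similar in spirit to
// LLMAgent.parseMarkdownKeyValues. The accepted line formats ("**Key**: value"
// or "Key: value") are assumptions, not the documented behaviour.
function parseMarkdownKeyValues(markdown) {
  const result = {};
  for (const rawLine of markdown.split('\n')) {
    // Match either "**Key**: value" or a bare "Key: value" line.
    const match = rawLine.match(/^\s*(?:\*\*(.+?)\*\*|([^:*][^:]*?))\s*:\s*(.+?)\s*$/);
    if (match) {
      const key = (match[1] ?? match[2]).trim();
      result[key] = match[3].trim();
    }
  }
  return result;
}

console.log(parseMarkdownKeyValues('**Name**: Achilles\nMode: deep'));
// → { Name: 'Achilles', Mode: 'deep' }
```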
LLMAgent Sessions
Beyond single-call completions, LLMAgent can drive longer-lived agentic sessions that maintain
context across multiple prompts while calling tools repeatedly. For a detailed breakdown of session types, see the Agentic Sessions documentation.
- startLoopAgentSession(tools, initialPrompt, options): creates an AgenticSession that orchestrates arbitrary tools. The session object exposes newPrompt(prompt) to continue the conversation on the same context, and getVariables() to inspect the final values or summary state.
- startSOPLangAgentSession(skillsDescription, initialPrompt, options): creates a SOPAgenticSession that uses LightSOPLang as the internal planning language. Each prompt refines or extends the underlying SOP script instead of starting from scratch; getPlan() returns the current source.
These session APIs are useful when you want the LLM to iteratively choose tools, revise plans, and accumulate state rather than recomputing everything from a single natural language prompt.
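To make the loop-session shape concrete, here is a minimal, self-contained mock. The real session object comes from llmAgent.startLoopAgentSession(tools, initialPrompt, options) and is asynchronous; the synchronous decide function below stands in for the LLM's tool choice and is purely illustrative:

```javascript
// Illustrative mock of a loop-style agentic session. The real object is
// created via llmAgent.startLoopAgentSession(); a hard-coded decide() stands
// in for the LLM here so the example is self-contained (and synchronous).
function createMockLoopSession(tools, decide) {
  const variables = {};
  return {
    // Continue the conversation on the same accumulated context.
    newPrompt(prompt) {
      // The stand-in "planner" picks a tool, its arguments, and a variable name.
      const { tool, args, saveAs } = decide(prompt, variables);
      variables[saveAs] = tools[tool](args);
      return variables[saveAs];
    },
    // Inspect accumulated state, mirroring getVariables() on AgenticSession.
    getVariables() {
      return { ...variables };
    },
  };
}

// Usage: one tool, one scripted decision per prompt.
const tools = { add: ({ a, b }) => a + b };
const session = createMockLoopSession(tools, () => ({ tool: 'add', args: { a: 2, b: 3 }, saveAs: 'sum' }));
console.log(session.newPrompt('add 2 and 3')); // → 5
```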
RecursiveSkilledAgent
RecursiveSkilledAgent supervises complex, multi-step plans. It can execute skills directly,
run orchestration scripts, or call itself recursively when LightSOPLang commands reference other skills.
Constructor
const recursiveAgent = new RecursiveSkilledAgent({
llmAgent, // optional LLMAgent instance (auto-created if omitted)
llmAgentOptions: {}, // forwarded when LLMAgent is auto-created
startDir: process.cwd(),
searchUpwards: true, // set false to scan only startDir + additionalSkillRoots (no parent directories)
skillFilter, // optional function to include/exclude skills
dbAdapter, // for DBTable skills
onProcessingBegin, onProcessingProgress, onProcessingEnd,
additionalSkillRoots: [], // extra absolute/relative dirs to scan for .AchillesSkills alongside startDir
exposeInternalSkills: false, // when true, registers internal helper skills for direct invocation
});
Lifecycle callbacks (onProcessingBegin, onProcessingProgress, onProcessingEnd) fire only for the outermost execution and are safe places to emit logs or UI signals; errors they throw are caught and logged as warnings.
Execution APIs
- executePrompt(prompt, { skillName }): Primary entry point. Provide skillName to force a specific skill or omit to let the agent choose.
- executeWithReviewMode(prompt, options, reviewMode): Explicit review mode.
- executePromptWithReview(prompt, options) / executePromptWithHumanReview(prompt, options): Helpers for LLM or human review modes.
- cancel(): Propagates cancellation down the active stack.
Execution Flow
- Discovers skills under .AchillesSkills/ (and additional roots) at construction time.
- If no skillName is provided, tries a FlexSearch-based orchestrator match; otherwise falls back to LLM selection across skills.
- Injects the task description into a skill's default argument (or input) when arguments are missing.
- Routes execution to the appropriate subsystem (claude/code/mcp/orchestrator/dbtable) and forwards the prompt text.
- Returns the subsystem's result along with the chosen review mode.
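The argument-injection step above can be sketched roughly as follows; the record shape and property name (defaultArgument) are assumptions for illustration:

```javascript
// Rough sketch of the argument-injection step: when a skill is invoked with
// no arguments, the task description is placed into the skill's declared
// default argument, falling back to 'input'. The record shape is assumed.
function injectTaskDescription(skillRecord, args, prompt) {
  if (args && Object.keys(args).length > 0) {
    return args; // Caller supplied arguments; leave them untouched.
  }
  const target = skillRecord.defaultArgument || 'input';
  return { [target]: prompt };
}

console.log(injectTaskDescription({ defaultArgument: 'query' }, {}, 'find users'));
// → { query: 'find users' }
```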
Code Skills Handling
When RecursiveSkilledAgent discovers a code skill (cskill.md), it automatically invokes
code generation through the shared mirror code generator
(exported as generateMirrorCode):
- Detection: During skill registration, detects skillRecord.type === 'cskill'.
- Invoke mirror generator: Calls generateMirrorCode(skillRecord.skillDir, llmAgent, logger).
- Up-to-date check: Compares the newest spec in specs/ with the oldest expected target file derived from those specs; missing targets trigger regeneration. There is no src/ convention.
- Single-step LLM generation:
  - Recursively reads all .md files from specs/
  - Builds one prompt from specs (the generator no longer depends on cskill.md sections)
  - Sends to the LLM with mode: 'deep' and responseShape: 'text'
- Parse multi-file response: Expects markdown with ## file-path: path/to/file.mjs headers and normalises paths (strips ./ or skill-name prefixes; prevents traversal).
- Write to disk: Writes generated files directly under the skill directory, mirroring the specs/ paths (e.g., specs/utils/foo.mjs.md → utils/foo.mjs).
- Additional preparation: After code generation, calls subsystem.prepareSkill() for any subsystem-specific setup.
This design keeps code generation centralized in the mirror code generator while allowing the agent to handle discovery and invocation automatically.
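A parser for that multi-file response format might look roughly like the sketch below. This is not the actual generateMirrorCode implementation; it only mirrors the normalisation rules described above (strip ./, strip a skill-name prefix, reject traversal):

```javascript
// Sketch of parsing a multi-file LLM response of the form used by the mirror
// code generator: "## file-path: path/to/file.mjs" headers, each followed by
// that file's contents. Illustrative only, not the actual implementation.
function parseMultiFileResponse(markdown, skillName) {
  const files = {};
  let currentPath = null;
  let buffer = [];
  const flush = () => {
    if (currentPath) files[currentPath] = buffer.join('\n').trim();
    buffer = [];
  };
  for (const line of markdown.split('\n')) {
    const header = line.match(/^## file-path:\s*(.+?)\s*$/);
    if (header) {
      flush();
      let p = header[1].replace(/^\.\//, ''); // strip leading "./"
      if (skillName && p.startsWith(skillName + '/')) {
        p = p.slice(skillName.length + 1); // strip skill-name prefix
      }
      // Reject traversal segments rather than writing outside the skill dir.
      currentPath = p.split('/').includes('..') ? null : p;
    } else if (currentPath) {
      buffer.push(line);
    }
  }
  flush();
  return files;
}
```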
Review Modes
- none: Execute immediately without review interrupts.
- llm: Let the LLM generate a review before execution.
- human: Surface results for human review.
Internal Skills
When exposeInternalSkills: true is set, the agent registers built-in helper skills
(such as mirror-code-generator) for direct invocation. These internal skills are
implemented as orchestrator module skills and can be called via executeWithReviewMode()
with an explicit skillName.
const agent = new RecursiveSkilledAgent({
llmAgent,
exposeInternalSkills: true,
});
// Invoke mirror-code-generator directly
const result = await agent.executeWithReviewMode(
'/path/to/skill-with-specs',
{ skillName: 'mirror-code-generator' },
'none'
);
Error Handling & Cancellation
Both LLMAgent and RecursiveSkilledAgent propagate structured errors. For orchestration flows:
- Unavailable or unregistered skills raise explicit errors during selection.
- Subsystems surface their own validation or timeout errors (for example, DBTable validation failures).
- cancel() forwards cancellation to the LLM invoker via LLMAgent.cancel().
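One way to wire an invoker strategy so that cancellation actually interrupts an in-flight request is an AbortController. This is an illustrative pattern only, not the library's internal mechanism:

```javascript
// Illustrative cancellation wiring: the invoker strategy registers an abort
// handler so that cancelling rejects the in-flight promise. This shows the
// general pattern only; LLMAgent.cancel()'s internals are not documented here.
function makeCancellableInvoker() {
  const controller = new AbortController();
  const invokerStrategy = ({ prompt }) =>
    new Promise((resolve, reject) => {
      if (controller.signal.aborted) return reject(new Error('cancelled'));
      controller.signal.addEventListener('abort', () => reject(new Error('cancelled')));
      // A real strategy would forward { prompt, signal } to the provider here
      // and resolve with the provider's response.
    });
  return { invokerStrategy, cancel: () => controller.abort() };
}

const { invokerStrategy, cancel } = makeCancellableInvoker();
const pending = invokerStrategy({ prompt: 'hello' });
cancel(); // rejects the pending promise with Error('cancelled')
```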