# Agent Runtime

LLM mediation, recursive execution, and lifecycle APIs that keep Achilles Agents' plans on track.
## LLMAgent

LLMAgent centralises all communication with language models. It accepts an invoker strategy,
which may route requests to OpenAI, Anthropic, a test stub, or any custom provider.
### Constructor

```js
new LLMAgent({
  name: 'PlanningLLM',
  invokerStrategy: async ({ prompt, context, history, mode }) => {
    // Dispatch to the provider of your choice.
    return provider.invoke(prompt, { context, history, mode });
  },
});
```
### Key Methods

- `complete({ prompt, history = [], context = {}, mode = 'fast' })`: Returns the raw model output string.
- `executePrompt(promptText, { responseShape, mode, globalMemory, userMemory, sessionMemory })`: Builds a task prompt with optional memory blocks and returns plain text, or parsed JSON/code when a `responseShape` is provided.
- `interpretMessage(message, { intents })`: Classifies user replies into `accept`, `cancel`, `update`, or `ideas`, using heuristics first and falling back to the model.
- `parseMarkdownKeyValues(markdown)`: Converts LLM output into key-value maps (used for argument extraction).
- `classifyMessage(message)`: Lightweight intent classifier for when a full completion is not needed.
The agent enforces JSON parsing where required, logs debug traces when `LLMAgentClient_DEBUG=true`, and supports request
cancellation by cancelling the underlying invoker promise.
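As a concrete sketch, an invoker strategy is just an async function over the documented `{ prompt, context, history, mode }` shape. The stub providers below stand in for real OpenAI/Anthropic clients; everything except that argument shape is illustrative.

```js
// Minimal invoker strategy sketch. The stub providers stand in for real
// OpenAI/Anthropic clients; only the { prompt, context, history, mode }
// argument shape comes from the LLMAgent constructor contract.
const providers = {
  fast: async (prompt) => `fast:${prompt}`,
  deep: async (prompt) => `deep:${prompt}`,
};

const invokerStrategy = async ({ prompt, context = {}, history = [], mode = 'fast' }) => {
  // Route by mode; unknown modes fall back to the fast provider.
  const provider = providers[mode] ?? providers.fast;
  return provider(prompt, { context, history });
};
```

Passing a stub like this as `invokerStrategy` is also a convenient way to test agent logic without network calls.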
### LLMAgent Sessions
Beyond single-call completions, LLMAgent can drive longer-lived agentic sessions that maintain
context across multiple prompts while calling tools repeatedly.
- `startLoopAgentSession(tools, initialPrompt, options)`: creates an `AgenticSession` that orchestrates arbitrary tools. The session object exposes `newPrompt(prompt)` to continue the conversation on the same context, and `getVariables()` to inspect the final values or summary state.
- `startSOPLangAgentSession(skillsDescription, initialPrompt, options)`: creates a `SOPAgenticSession` that uses LightSOPLang as the internal planning language. Each prompt refines or extends the underlying SOP script instead of starting from scratch; `getPlan()` returns the current source.
These session APIs are useful when you want the LLM to iteratively choose tools, revise plans, and accumulate state rather than recomputing everything from a single natural language prompt.
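To make the session contract concrete, here is a toy stand-in that mimics only the documented surface (`newPrompt` continuing on shared context, `getVariables` exposing accumulated state). The real object comes from `startLoopAgentSession`; its tool-choosing internals are not represented here.

```js
// Toy stand-in for an AgenticSession: it only demonstrates that state
// accumulates across newPrompt() calls and is visible via getVariables().
// The real session additionally lets the LLM choose and invoke tools.
function makeToySession(initialPrompt) {
  const variables = { initialPrompt };
  let counter = 0;
  return {
    async newPrompt(prompt) {
      counter += 1;
      variables[`step${counter}`] = prompt; // state persists across prompts
      return `handled: ${prompt}`;
    },
    getVariables() {
      return { ...variables };
    },
  };
}
```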
## RecursiveSkilledAgent

RecursiveSkilledAgent supervises complex, multi-step plans. It can execute skills directly,
run orchestration scripts, or call itself recursively when LightSOPLang commands reference other skills.
### Constructor

```js
const recursiveAgent = new RecursiveSkilledAgent({
  llmAgent,                 // optional LLMAgent instance (auto-created if omitted)
  llmAgentOptions: {},      // forwarded when LLMAgent is auto-created
  startDir: process.cwd(),
  searchUpwards: true,      // set false to scan only startDir + additionalSkillRoots (no parent directories)
  skillFilter,              // optional function to include/exclude skills
  dbAdapter,                // for DBTable skills
  promptReader,             // interactive input reader
  onProcessingBegin, onProcessingProgress, onProcessingEnd,
  additionalSkillRoots: [], // extra absolute/relative dirs to scan for .AchillesSkills alongside startDir
});
```
Lifecycle callbacks (`onProcessingBegin`, `onProcessingProgress`, `onProcessingEnd`) fire only for the outermost execution and are safe places to emit logs or UI signals; errors they throw are caught and logged as warnings.
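The "caught and logged as warnings" behaviour can be pictured as a small guard around each callback. The `safeCallback` helper below is illustrative, not the library's actual code:

```js
// Illustrative guard mirroring the documented behaviour: a throwing
// lifecycle callback is downgraded to a warning instead of aborting
// the execution. safeCallback is a hypothetical helper, not library API.
function safeCallback(cb, logger = console) {
  return (...args) => {
    try {
      return cb?.(...args);
    } catch (err) {
      logger.warn(`lifecycle callback failed: ${err.message}`);
    }
  };
}
```

The practical takeaway: callbacks may log or update UI freely, but they cannot veto or crash an execution.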
### Execution APIs

- `executePrompt(prompt, { skillName, args, promptReader })`: Primary entry point. Provide `skillName` to force a specific skill, or omit it to let the agent choose.
- `executeWithReviewMode(prompt, options, reviewMode)`: Explicit review mode.
- `executePromptWithReview(prompt, options)` / `executePromptWithHumanReview(prompt, options)`: Helpers for LLM or human review modes.
- `cancel()`: Propagates cancellation down the active stack.
### Execution Flow

- Discovers skills under `.AchillesSkills/` (and additional roots) at construction time.
- If no `skillName` is provided, tries a FlexSearch-based orchestrator match; otherwise falls back to LLM selection across skills.
- Injects the task description into a skill's default argument (or `input`) when arguments are missing.
- Routes execution to the appropriate subsystem (claude/code/interactive/mcp/orchestrator/dbtable) and forwards any args/promptReader overrides.
- Returns the subsystem's result along with the chosen review mode.
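The argument-injection step above can be sketched as a pure function. The `defaultArgument` field name is an assumption about the skill record's shape; only the fallback to `input` is documented:

```js
// Sketch of default-argument injection: with no caller-supplied args,
// the task description lands in the skill's default argument, falling
// back to `input`. The `defaultArgument` field name is an assumption.
function injectDefaultArg(skill, prompt, args = {}) {
  if (Object.keys(args).length > 0) return args; // explicit caller args win
  const key = skill.defaultArgument ?? 'input';
  return { [key]: prompt };
}
```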
### Code Skills Handling

When RecursiveSkilledAgent discovers a code skill (`cskill.md`), it automatically invokes
code generation through a specialized process:
- Detection: During skill registration, the agent detects `skillRecord.type === 'cskill'`.
- Invoke generate-code-skill: Calls `generateCode(skillRecord, llmAgent, logger)` from the hardcoded `generate-code-skill.mjs` module.
- Timestamp comparison: The generator compares file timestamps to determine whether regeneration is needed:
  - Finds the newest spec file (in the `specs/` folder).
  - Finds the oldest generated file (in the `src/` folder).
  - If `src/` doesn't exist, or the newest spec is newer than the oldest src file, it regenerates.
  - Otherwise it skips regeneration (fast path).
- Single-step LLM generation:
  - Recursively reads all `.md` files from `specs/`.
  - Extracts the `Input Format`, `Output Format`, `Constraints`, and `Examples` sections from `cskill.md`.
  - Combines everything into one comprehensive prompt.
  - Sends it to the LLM with `mode: 'deep'` and `responseShape: 'text'`.
- Parse multi-file response: The LLM returns markdown with `## file-path: path/to/file.mjs` headers, which are parsed to extract individual files.
- Write to disk: Deletes the existing `src/` folder, creates a new one, and writes all generated `.mjs` files.
- Additional preparation: After code generation, calls `subsystem.prepareSkill()` for any subsystem-specific setup.
This design keeps code generation logic centralized in `generate-code-skill.mjs`, making it reusable
for all code skills while allowing the agent to handle discovery and invocation automatically.
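The timestamp fast path reduces to a simple comparison. The sketch below operates on precomputed modification times; the real generator derives them from the files under `specs/` and `src/`:

```js
// Regeneration check sketch: rebuild when src/ is absent or any spec is
// newer than the oldest generated file. Operates on mtime arrays; the
// real generator reads these from the specs/ and src/ folders on disk.
function needsRegeneration(specMtimes, srcMtimes) {
  if (!srcMtimes || srcMtimes.length === 0) return true; // no src/ yet
  const newestSpec = Math.max(...specMtimes);
  const oldestSrc = Math.min(...srcMtimes);
  return newestSpec > oldestSrc;
}
```

Comparing newest spec against oldest generated file is conservative: if even one output predates the latest spec change, everything is rebuilt.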
### Review Modes

- `none`: Execute immediately without review interrupts.
- `llm`: Let the LLM generate a review before execution.
- `human`: Surface results for human review.
### Error Handling & Cancellation
Both LLMAgent and RecursiveSkilledAgent propagate structured errors. For orchestration flows:
- Unavailable or unregistered skills raise explicit errors during selection.
- Subsystems surface their own validation or timeout errors (for example, DBTable validation or interactive module loading failures).
- `cancel()` forwards cancellation to the LLM invoker via `LLMAgent.cancel()`; prompt readers must cooperate to stop interactive input.
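One plausible way to make a custom invoker cooperate with cancellation is to thread an `AbortController` through it. The wiring below is an assumption about how you might structure your own strategy, not the library's internal mechanism:

```js
// Hypothetical cancellable invoker: cancel() aborts the in-flight call
// via AbortController. How LLMAgent wires cancellation internally may
// differ; this only shows a cooperating custom strategy.
function makeCancellableInvoker(callProvider) {
  let controller = null;
  return {
    invoke: async ({ prompt }) => {
      controller = new AbortController();
      return callProvider(prompt, { signal: controller.signal });
    },
    cancel: () => controller?.abort(),
  };
}
```

Most HTTP clients accept such a `signal`, so aborting it interrupts the underlying request rather than merely ignoring its result.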