LLMAgent
LLMAgent is the foundational mediation layer through which AchillesAgentLib interacts with language models. It is the component that turns provider-facing model calls into a reusable runtime service for prompting, output coercion, interpretation, and agentic session creation.
Controlled Model Mediation
Higher-level components in AchillesAgentLib are not expected to talk directly to providers. That work is concentrated in LLMAgent so that prompt execution, memory shaping, output coercion, and call logging remain consistent across the rest of the runtime. The architectural value of this class is therefore not just convenience: it prevents every subsystem from inventing its own model-calling conventions.
This becomes especially important once the same runtime must support plain completions, structured JSON extraction, code-oriented responses, confirmation resolution, intent detection, and long-lived sessions. In a looser architecture, each of these concerns would drift into local conventions. In AchillesAgentLib they are pulled into one shared mediation surface. MainAgent and the skill subsystems then build on that common layer rather than bypassing it. MainAgent owns skill discovery, registration, and routing, while LLMAgent remains the dedicated model mediation layer.
The practical result is that LLMAgent sits between the rest of the runtime and the underlying invoker strategy. It does not remove the probabilistic nature of the model, but it does make the path to that model more explicit, more inspectable, and easier to keep stable.
Prompt Execution, Interpretation, and Sessions
The lowest-level method is complete(). It expects a prompt string and optional history, model, tags, context, and cancellation signal; it forwards those values to the configured invoker strategy, records input and output character counts, logs per-call telemetry, and returns the provider output as a string. Model resolution happens inside the invoker strategy: when an explicit model is provided it is used directly; when only tags are provided, the invoker looks up the first tag in the agent's modelConfig mapping and uses the associated model name. If neither model nor tags resolve to a concrete name, the invoker falls back to the configured default model. This is the most direct general-purpose surface offered by the class.
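The tag-based model resolution described above can be sketched as a small standalone function. This is an illustrative approximation, not the invoker's actual code; the names resolveModel and defaultModel are assumptions introduced here:

```javascript
// Hypothetical sketch of tag-based model resolution, assuming the precedence
// described in the text: explicit model > first tag's modelConfig entry > default.
function resolveModel({ model, tags }, modelConfig, defaultModel) {
  if (model) return model;                 // an explicit model is used directly
  if (tags && tags.length > 0) {
    const mapped = modelConfig[tags[0]];   // only the first tag is looked up
    if (mapped) return mapped;
  }
  return defaultModel;                     // fall back to the configured default
}

console.log(resolveModel({ tags: ['plan'] }, { plan: 'gpt-large' }, 'gpt-base')); // 'gpt-large'
```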
executePrompt() builds on that lower layer. It can prepend memory segments such as global, user, session, or skill short memory to the prompt context and then routes the request through the task-execution helper. When responseShape is declared, it also performs bounded post-processing on the returned text. For json, it extracts JSON and throws if parsing fails. For code, it strips code fences. For json-code, it extracts a JSON object and requires that the object contain a code field. The method therefore does not “understand” arbitrary schemas, but it does centralize the output-coercion patterns used by the rest of the runtime.
const result = await llmAgent.executePrompt('Summarize the report', {
model: 'plan',
responseShape: 'json',
context: { intent: 'summarize-report' },
sessionMemory,
});
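The three responseShape coercions can be approximated in a few lines. The sketch below is an assumption about the mechanics, not LLMAgent's actual implementation; the function name coerceOutput and the regular expressions are illustrative:

```javascript
// Minimal sketch of the responseShape post-processing described above,
// assuming: 'json' extracts and parses a JSON object (throwing on failure),
// 'code' strips surrounding code fences, and 'json-code' additionally
// requires a `code` field on the parsed object.
function coerceOutput(text, responseShape) {
  if (responseShape === 'json') {
    const match = text.match(/\{[\s\S]*\}/);   // grab the outermost JSON object
    if (!match) throw new Error('No JSON object found in model output');
    return JSON.parse(match[0]);               // throws if parsing fails
  }
  if (responseShape === 'code') {
    return text
      .replace(/^```[\w-]*\n?/, '')            // strip opening fence
      .replace(/\n?```\s*$/, '');              // strip closing fence
  }
  if (responseShape === 'json-code') {
    const obj = coerceOutput(text, 'json');
    if (typeof obj.code !== 'string') throw new Error('Missing code field');
    return obj;
  }
  return text;                                 // no shape declared: pass through
}
```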
Alongside prompt execution, LLMAgent also exposes a small set of interpretation helpers. interpretMessage() converts free-form user replies into constrained outcomes such as accept, cancel, update, or idea-like payloads, using heuristics first and the model second. resolveConfirmation() resolves ambiguous yes-or-no language into yes, no, or unclear together with a confidence value. detectIntents() classifies a request against a described skill space and requires a JSON result. These methods are important because they allow natural language to be turned into bounded operational signals rather than leaving every caller to parse conversational text independently.
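A heuristics-first resolver in the spirit of resolveConfirmation() might look like the following sketch. The word lists, confidence values, and the function name resolveConfirmationHeuristic are all illustrative assumptions; only genuinely ambiguous replies would be escalated to the model:

```javascript
// Hedged sketch of a heuristics-first confirmation resolver: cheap string
// checks handle clear replies, and anything else is reported as 'unclear'
// (the real agent would then fall back to the model).
function resolveConfirmationHeuristic(reply) {
  const text = reply.trim().toLowerCase();
  const yesWords = ['yes', 'yeah', 'sure', 'ok', 'okay', 'confirm'];
  const noWords = ['no', 'nope', 'cancel', 'stop', "don't"];
  const matches = (w) => text === w || text.startsWith(w + ' ');
  if (yesWords.some(matches)) return { outcome: 'yes', confidence: 0.9 };
  if (noWords.some(matches)) return { outcome: 'no', confidence: 0.9 };
  return { outcome: 'unclear', confidence: 0.3 }; // escalate to the model
}

console.log(resolveConfirmationHeuristic('yeah, go ahead').outcome);
```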
The third major responsibility is session creation. startLoopAgentSession() creates a loop-based session over a tools object and immediately starts it with the initial prompt. startSOPLangAgentSession() creates an SOP-based session over a skills description object and likewise starts it immediately, with optional planOnly behavior. Both entry points accept an abort signal and propagate it into the active session runtime. The detailed behavior of those session types is documented on Agentic Sessions, but their common entry point is LLMAgent.
const loopSession = await llmAgent.startLoopAgentSession(tools, 'Start the workflow');
const sopSession = await llmAgent.startSOPLangAgentSession(skillsDescription, 'Run the plan', {
planOnly: false,
});
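The abort-signal propagation mentioned above can be illustrated with a self-contained loop. This is a stand-in for the session runtime, not the library's code; runSessionTurns is a hypothetical name, and the only assumption taken from the text is that an aborted signal stops the active session:

```javascript
// Illustrative sketch: a session runtime checks the propagated AbortSignal
// between turns and stops as soon as the caller aborts.
async function runSessionTurns(turns, signal) {
  const results = [];
  for (const turn of turns) {
    if (signal.aborted) throw new Error('Session aborted'); // stop between turns
    results.push(await turn());
  }
  return results;
}

const controller = new AbortController();
runSessionTurns([async () => 'turn-1'], controller.signal).then(console.log);
```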
Two additional operational details complete the picture. First, cancel() forwards cancellation to the underlying request layer and attempts to close active processing callbacks. Session runtimes also rely on this capability when the user interrupts a running turn. Second, getCallLog() returns the per-call telemetry accumulated by the agent, including input size, output size, model, requested tags, matched tags, duration, and intent. This makes LLMAgent not only a prompt wrapper, but the general LLM-facing runtime service on top of which the rest of the library is built.
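The kind of per-call telemetry getCallLog() is described as returning can be sketched as a small recorder. The field names below mirror the text, but the exact entry shape and the CallLog class are assumptions introduced for illustration:

```javascript
// Minimal sketch of per-call telemetry accumulation, assuming one entry per
// model call with the fields named in the text: input size, output size,
// model, requested tags, matched tags, duration, and intent.
class CallLog {
  constructor() {
    this.entries = [];
  }
  record({ prompt, output, model, tags, matchedTags, durationMs, intent }) {
    this.entries.push({
      inputChars: prompt.length,    // input size in characters
      outputChars: output.length,   // output size in characters
      model,
      requestedTags: tags,
      matchedTags,
      durationMs,
      intent,
    });
  }
  getCallLog() {
    return this.entries;
  }
}

const log = new CallLog();
log.record({
  prompt: 'Summarize the report',
  output: '{"summary": "..."}',
  model: 'gpt-large',
  tags: ['plan'],
  matchedTags: ['plan'],
  durationMs: 420,
  intent: 'summarize-report',
});
console.log(log.getCallLog().length); // 1
```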