Code Generation Skills Tutorial
Descriptors, sandbox entrypoints, and LLM-assisted execution patterns for dependable code skills.
Repository Layout
Code skills follow the standard skill repository conventions: each skill pairs a markdown descriptor with an executable entrypoint. The code skills test suite ships with:
tests/cgSkills/.AchillesSkills/
└── test1/
    └── mathEval/
        ├── cgskill.md
        └── mathEval.js
The descriptor drives prompt construction, while the JavaScript entrypoint enforces sandbox rules and integrates with the LLMAgent.
Descriptor: cgskill.md
# Math Expression Evaluator
Interpret natural-language requests that describe mathematical operations and produce precise numeric answers...
## Prompt
You are a careful mathematician who writes concise JavaScript to compute results. Analyse the user request and extract any numbers, series names, or operations that should be performed...
## LLM Mode
deep
Sections such as Prompt and LLM Mode guide the code skill so that the generated snippet matches domain expectations. Additional sections can define safety policies or output schemas.
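For instance, a descriptor might append sections like the following. The section names and rules here are illustrative, not mandated by the framework; the entrypoint decides how (or whether) to consume them:

## Safety
- Reject instructions that would require file-system, network, or process access.
- Generated snippets must be pure computation over values named in the instruction.

## Output Schema
Return a JSON object with "code" (string) and "summary" (string).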
Entrypoint: mathEval.js
The companion file supplies the execution harness. It calls the configured LLMAgent to synthesise code, unwraps fences, and evaluates the snippet inside a controlled environment.
// tests/cgSkills/.AchillesSkills/test1/mathEval/mathEval.js
export async function action(instruction, context = {}) {
  if (typeof instruction !== 'string' || !instruction.trim()) {
    throw new Error('mathEval skill requires a non-empty instruction string.');
  }
  const { llmAgent, prompt = '', skillName = 'math-expression-evaluator-code', llmMode = 'fast' } = context;
  if (!llmAgent || typeof llmAgent.executePrompt !== 'function') {
    throw new Error('mathEval skill requires an LLMAgent with an "executePrompt" method.');
  }
  // Build the synthesis prompt from the descriptor guidance plus the live instruction.
  const guidance = [
    '# Code Synthesis for Math Evaluation',
    prompt || 'Create JavaScript that fulfils the mathematical instruction.',
    '',
    '## Instruction',
    instruction,
    '',
    '## Response Format',
    '- Return a JSON object with keys "code" and "summary".',
    '- "code" must contain JavaScript statements that end with `return "";`.',
    '- The JavaScript should compute the requested result and embed it in the final string.',
    '- Use only standard JavaScript (no external modules).',
  ].join('\n');
  const llmResponse = await llmAgent.executePrompt(guidance, {
    mode: llmMode,
    context: { intent: 'code-synthesis', skillName },
    responseShape: 'json-code',
  });
  // Strip any markdown fences before handing the snippet to the sandbox.
  const payload = { ...llmResponse, code: unwrapCodeFence(llmResponse.code) };
  const result = await executeSnippetWithFallback(payload.code, skillName, console.info);
  return typeof result === 'string' ? result : JSON.stringify(result);
}
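For an instruction such as "add the first three integers", the model might respond with a payload shaped like this (illustrative output; real responses vary by model and mode):

{
  "code": "const total = [1, 2, 3].reduce((a, b) => a + b, 0); return `The total is ${total}`;",
  "summary": "Sums the integers 1 through 3 and embeds the total in the returned string."
}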
The full file includes helper functions that unwrap code fences, fall back to calling the last declared function, and update session memory. Use it as a template when adding new sandboxed capabilities.
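Those helpers are omitted from the excerpt above. A minimal sketch of what they might look like, reusing the names from the excerpt (these are illustrative, not the canonical implementations shipped with the test suite):

// Illustrative helper sketches, not the shipped implementations.
function unwrapCodeFence(code) {
  if (typeof code !== 'string') return '';
  // Strip a leading ```lang and trailing ``` if the LLM wrapped its answer.
  const match = code.match(/^```[\w-]*\n([\s\S]*?)\n?```\s*$/);
  return (match ? match[1] : code).trim();
}

async function executeSnippetWithFallback(code, skillName, log = console.info) {
  log(`[${skillName}] evaluating generated snippet`);
  try {
    // Wrap the statements so the trailing `return "...";` yields the result.
    const fn = new Function(`"use strict";\n${code}`);
    return await fn();
  } catch (error) {
    // Fallback: the model sometimes emits a function declaration instead of
    // bare statements; find the last declared function and invoke it.
    const names = [...code.matchAll(/function\s+([A-Za-z_$][\w$]*)/g)].map((m) => m[1]);
    const last = names[names.length - 1];
    if (!last) throw error;
    const fn = new Function(`"use strict";\n${code}\nreturn ${last}();`);
    return await fn();
  }
}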
Execution Checklist
- Define a clear prompt in cgskill.md describing sandbox constraints and output expectations.
- Implement an action export that enforces safety (timeouts, read-only modes, sanitised results).
- Optionally persist conversation snippets to sessionMemory for audit trails.
- Register the repository and call RecursiveSkilledAgent.executePrompt with user instructions to trigger the sandbox (see the sketch below).
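A minimal invocation sketch. The import path and constructor options below are assumptions for illustration; only the executePrompt call is confirmed by this tutorial:

// Illustrative wiring only; the module path and options are assumed, not documented API.
import { RecursiveSkilledAgent } from './src/RecursiveSkilledAgent.mjs'; // hypothetical path

const agent = new RecursiveSkilledAgent({
  skillsDir: 'tests/cgSkills/.AchillesSkills', // the repository shown above (assumed option name)
  llmAgent,                                    // any object exposing executePrompt()
});

const answer = await agent.executePrompt('What is the sum of the first five squares?');
console.log(answer); // e.g. "The sum of the first five squares is 55."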
See tests/cgSkills/cgSkills.test.mjs for a complete usage example that executes the skill with different prompts and validates the computed results.