Mirror Code Generator

The mirror code generator is the shared internal utility that transforms specs/ markdown files into executable modules for code skills. It regenerates a file only when its spec is newer than the target output, and it validates each result through test generation, test execution or LLM review, and targeted repairs.

Regeneration Preconditions

The generator runs only when a skill directory contains a specs/ folder with at least one .md or .mds file. Each spec file maps directly to a target output path by removing the specs/ prefix and the .md / .mds extension. A file is regenerated when the spec's modification time is newer than that of the corresponding output file, or when the output file is missing.

specs/index.mjs.mds  ->  index.mjs
specs/utils/foo.mjs.md  ->  utils/foo.mjs
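
A minimal sketch of the mapping and staleness rules, assuming Node's built-in fs and path modules; targetForSpec and isStale are illustrative names, not the generator's actual API.

  import { statSync, existsSync } from 'node:fs';
  import { join, relative } from 'node:path';

  // Map a spec file to its target output path by stripping the specs/
  // prefix and the trailing .md / .mds extension.
  function targetForSpec(skillDir, specPath) {
    const rel = relative(join(skillDir, 'specs'), specPath);
    return join(skillDir, rel.replace(/\.mds?$/, ''));
  }

  // A file needs regeneration when its output is missing or older than the spec.
  function isStale(specPath, outputPath) {
    if (!existsSync(outputPath)) return true;
    return statSync(specPath).mtimeMs > statSync(outputPath).mtimeMs;
  }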

Per-File Generation Workflow

For each spec file that requires regeneration, the generator performs a single-file LLM generation step, then validates the resulting code. Validation behavior depends on whether the spec includes a #Validation or #Testing section.

LLM Inputs for Code Generation

The code generation prompt is built from three sources:

  • The current spec content.
  • The backup spec content from specs/.backup (when present).
  • The existing generated code on disk (when present).
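
A minimal sketch of how these inputs might be combined into a single prompt; the buildCodePrompt name, the section labels, and the inclusion of the target path are illustrative assumptions, not the generator's actual prompt format.

  // Illustrative only: combine the three inputs (plus the target path) into
  // one prompt string. Missing inputs are simply omitted.
  function buildCodePrompt({ specText, backupSpecText, existingCode, targetPath }) {
    const parts = [
      `Generate the module ${targetPath} from the spec below.`,
      `## Current spec\n${specText}`,
    ];
    if (backupSpecText) parts.push(`## Previous spec (backup)\n${backupSpecText}`);
    if (existingCode) parts.push(`## Existing generated code\n${existingCode}`);
    return parts.join('\n\n');
  }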

Case A: Spec contains #Validation or #Testing

The generator derives positive test cases strictly from the testing section, then executes those tests against the generated code in a temporary runtime under /tmp. Failures trigger a targeted repair pass. The repaired code is re-tested once; any remaining failures emit warnings only and do not block writing the output.

  1. Generate code with the LLM (inputs listed above).
  2. Generate positive tests using:
    • Testing section content.
    • The newly generated code.
  3. Execute tests in /tmp with the generated code.
  4. If tests fail, repair using:
    • Current spec content.
    • Backup spec content (when present).
    • Generated code (not the on-disk code).
    • Failure details (promptText, expectedOutput, actual).
  5. Re-run tests once. Log warnings if failures remain.
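
A condensed sketch of this loop, continuing the illustrative helpers above; generateTests, runTestsInTmp, repairCode, and callLLM are placeholders for whatever the generator actually invokes, and the failure shape mirrors the fields listed in step 4.

  async function validateWithTestingSection(ctx) {
    let code = await callLLM(buildCodePrompt(ctx));
    // Positive tests come strictly from the #Validation / #Testing section.
    const tests = await generateTests(ctx.testingSection, code);
    let failures = await runTestsInTmp(code, tests);          // executes under /tmp
    if (failures.length > 0) {
      code = await repairCode({
        specText: ctx.specText,
        backupSpecText: ctx.backupSpecText,                   // when present
        code,                                                 // generated code, not on-disk code
        failures,                                             // [{ promptText, expectedOutput, actual }]
      });
      failures = await runTestsInTmp(code, tests);            // single re-run
      for (const f of failures) console.warn('still failing:', f.promptText);
    }
    return code;                                              // written even if warnings remain
  }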

Case B: Spec does not contain #Validation or #Testing

The generator asks the LLM to review whether the generated code would pass positive tests derived from the full spec and the generated code. A failing review triggers a single repair pass, after which the code is written to disk.

  1. Generate code with the LLM (inputs listed above).
  2. Generate positive tests using:
    • The full spec content.
    • The newly generated code.
  3. Ask the LLM to review pass/fail using:
    • Generated code.
    • Generated tests.
    • Target file path.
  4. If the review fails, repair using:
    • Current spec content.
    • Backup spec content (when present).
    • Generated code (not the on-disk code).
    • Failure details (promptText, expectedOutput, reason).
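
A comparable sketch for this case, under the same assumptions; here no tests are executed, and reviewGeneratedCode and its { pass, failures } result shape are illustrative.

  async function validateWithoutTestingSection(ctx) {
    let code = await callLLM(buildCodePrompt(ctx));
    // Positive tests are derived from the full spec rather than a testing section.
    const tests = await generateTests(ctx.specText, code);
    const review = await reviewGeneratedCode({ code, tests, targetPath: ctx.targetPath });
    if (!review.pass) {
      code = await repairCode({
        specText: ctx.specText,
        backupSpecText: ctx.backupSpecText,                   // when present
        code,                                                 // generated code, not on-disk code
        failures: review.failures,                            // [{ promptText, expectedOutput, reason }]
      });
    }
    return code;                                              // written to disk after the repair pass
  }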

Output and Backup Behavior

Generated output files are written into the skill directory, mirroring the specs/ tree. After all files are processed, the generator replaces specs/.backup with the current specs/ directory contents.
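
A sketch of the write and backup steps, again assuming Node's built-in fs and path modules; because specs/.backup lives inside specs/, this version copies the tracked spec files one by one rather than copying the directory onto itself.

  import { mkdirSync, writeFileSync, rmSync, copyFileSync } from 'node:fs';
  import { dirname, join } from 'node:path';

  // Write one generated module, mirroring the specs/ tree inside the skill directory.
  function writeOutput(skillDir, relOutputPath, code) {
    const outPath = join(skillDir, relOutputPath);
    mkdirSync(dirname(outPath), { recursive: true });
    writeFileSync(outPath, code);
  }

  // Replace specs/.backup with the current spec files after all outputs are written.
  // specFiles holds paths relative to specs/, e.g. 'utils/foo.mjs.md'.
  function refreshBackup(skillDir, specFiles) {
    const backupDir = join(skillDir, 'specs', '.backup');
    rmSync(backupDir, { recursive: true, force: true });
    for (const rel of specFiles) {
      const dest = join(backupDir, rel);
      mkdirSync(dirname(dest), { recursive: true });
      copyFileSync(join(skillDir, 'specs', rel), dest);
    }
  }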

Flow Diagram (ASCII)

Start
  |
  v
Specs folder exists?
  |-- No --> Stop
  |
  v
Specs newer than outputs?
  |-- No --> Stop
  |
  v
For each stale spec
  |
  v
Generate code (spec + backup spec + existing code)
  |
  v
Has #Validation/#Testing?
  |-- Yes --> Generate positive tests (section + generated code)
  |           Run tests in /tmp
  |           Failures? -> Repair (spec + backup + generated code + failures)
  |                     -> Rerun -> Warn if still failing
  |
  |-- No --> Generate positive tests (spec + generated code)
  |          Review with LLM (tests + generated code + path)
  |          Fail? -> Repair (spec + backup + generated code + failures)
  |
  v
Write generated code to skill directory
  |
  v
Replace specs/.backup
  |
  v
Done
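
Finally, a top-level driver tying the diagram together could look like the sketch below. It reuses the illustrative helpers from the earlier sketches; listSpecFiles, loadGenerationContext, and specHasTestingSection are additional hypothetical helpers.

  // Illustrative end-to-end driver for one skill directory.
  async function regenerateSkill(skillDir) {
    const specFiles = listSpecFiles(skillDir);                // paths relative to specs/
    if (specFiles.length === 0) return;                       // no specs: nothing to do

    for (const rel of specFiles) {
      const specPath = join(skillDir, 'specs', rel);
      const outputPath = targetForSpec(skillDir, specPath);
      if (!isStale(specPath, outputPath)) continue;           // output is up to date

      const ctx = loadGenerationContext(skillDir, rel);       // spec, backup spec, existing code
      const code = specHasTestingSection(ctx.specText)
        ? await validateWithTestingSection(ctx)               // Case A: run tests in /tmp
        : await validateWithoutTestingSection(ctx);           // Case B: LLM review only
      writeOutput(skillDir, relative(skillDir, outputPath), code);
    }

    refreshBackup(skillDir, specFiles);                       // replace specs/.backup last
  }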