VSAVM

Large Language Model (LLM)

This wiki entry defines a term used across VSAVM and explains why it matters in the architecture.

An accompanying diagram highlights the operational meaning of the term inside VSAVM.

Related wiki pages: VM, event stream, VSA, bounded closure, consistency contract.

Definition

A large language model is typically a neural network trained to predict the next token (or next segment) of text. In VSAVM, “LLM-like” describes the interface (interactive continuation), not the source of truth.
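As a minimal illustration of next-token prediction (a toy sketch, not VSAVM code; the bigram table and function name are hypothetical), the model's job reduces to choosing the most probable continuation of the context:

```python
# Hypothetical toy model: a bigram probability table standing in for a
# trained network. Real LLMs produce a distribution over a large
# vocabulary; here the "vocabulary" is just a few words.
BIGRAM_PROBS = {
    "the": {"cat": 0.6, "dog": 0.3, "vm": 0.1},
    "cat": {"sat": 0.7, "ran": 0.3},
}

def predict_next(context_word: str) -> str:
    """Return the most probable next token under the toy distribution."""
    dist = BIGRAM_PROBS[context_word]
    return max(dist, key=dist.get)
```

Greedy argmax is shown for simplicity; real decoders typically sample from the distribution instead.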

Role in VSAVM

VSAVM uses continuation prediction as a proposal mechanism, but correctness is owned by the VM and bounded closure.
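This division of labor can be sketched as a propose-then-verify loop (illustrative only; `accept_first_verified` and the `verify` callback are hypothetical names, not the repository's API):

```python
from typing import Callable, Iterable, Optional

def accept_first_verified(
    proposals: Iterable[str],
    verify: Callable[[str], bool],
) -> Optional[str]:
    """The proposer (LLM-like path) suggests continuations; the VM /
    bounded-closure check owns correctness. A fluent but unverifiable
    proposal is rejected rather than trusted."""
    for candidate in proposals:
        if verify(candidate):
            return candidate
    return None  # no verified continuation within budget
```

For example, with proposals `["2+2=5", "2+2=4"]` and a verifier that checks the arithmetic, only the second candidate is accepted even though both are equally fluent.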

Mechanics and implications

This repository contains two “LLM-like” paths.

The important implication is that fluency is never treated as truth. Continuation quality is measured with language-model metrics (perplexity, reference match, repetition), while the correctness of claims is measured via the VM and bounded closure.
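Perplexity, one of the language-model metrics mentioned above, follows the standard formula (this is a generic sketch, not repository code): the exponential of the average negative log-probability assigned to the observed tokens.

```python
import math
from typing import Sequence

def perplexity(token_probs: Sequence[float]) -> float:
    """exp of the mean negative log-probability the model assigned to
    each observed token; lower means the text surprised the model less."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)
```

A model that assigns uniform probability 1/V to every token over a vocabulary of size V scores a perplexity of exactly V, which is why perplexity is often read as an "effective branching factor".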

Practical evaluation (eval_tinyLLM)

The eval_tinyLLM suite exists to make “more realistic” comparisons reproducible while keeping the codebase dependency-light:

  1. Prepare a dataset split under a deterministic datasetId (size-based, keyed by maxBytes and split settings).
  2. Train VSAVM macro-units (streaming) and optionally persist facts.
  3. Train the TensorFlow baseline on the same dataset.
  4. Compare both engines under identical budgets per prompt and write a timestamped HTML report to eval_tinyLLM/results/.
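A deterministic datasetId keyed by maxBytes and the split settings (step 1 above) might be derived like this (a sketch under assumptions; the repository's actual keying scheme and field names may differ):

```python
import hashlib
import json

def dataset_id(max_bytes: int, split: dict) -> str:
    """Hash the size cap and split settings so the same configuration
    always maps to the same cached dataset, regardless of dict order."""
    key = json.dumps({"maxBytes": max_bytes, "split": split}, sort_keys=True)
    return hashlib.sha256(key.encode("utf-8")).hexdigest()[:16]
```

Because the key is serialized with sorted keys before hashing, two runs with the same configuration reuse the same cache entry, while changing maxBytes or the split produces a fresh id.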

Artifacts are cached under eval_tinyLLM/cache/datasets/ and eval_tinyLLM/cache/models/ so multiple dataset sizes and multiple trained models can coexist.

Further reading

LLMs are a fast-moving field. VSAVM’s design goal is to combine LLM-like interaction with an executable substrate and explicit boundary behavior.

Diagram: VSAVM keeps LLM-like interaction but conditions continuations on executable state and closure checks.

References

Large language model (Wikipedia)
Language model (Wikipedia)
Natural language generation (Wikipedia)