Practical Tuning Guide

So, the model is hallucinating? Or maybe it's repeating "and the and the" forever? Here's how to fix it.

BSP is highly configurable. Unlike training a Transformer, where you mostly tweak the learning rate, here you tweak structural parameters of the graph and memory.

Symptom Checker

🤒 Symptom: "Word Salad" / Incoherent Output

Diagnosis: The system is activating too many irrelevant groups, mixing concepts that shouldn't mix.

Fix:

  • Increase learner.activationThreshold (default: 0.2 → try 0.3). This forces the system to activate only groups that strongly match the input.
  • Check learner.mergeThreshold: if groups are too broad (merging "Dog" and "Car"), increase it (e.g., to 0.85) to keep concepts distinct.
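The effect of raising activationThreshold can be sketched in a few lines. This is a hypothetical illustration of the gating logic, not BSP's actual internals; the function name and score dictionary are made up for the example.

```python
# Hypothetical sketch: activationThreshold gates which groups fire.
def active_groups(similarities: dict[str, float],
                  activation_threshold: float) -> list[str]:
    """Keep only groups whose match score clears the threshold."""
    return [g for g, score in similarities.items()
            if score >= activation_threshold]

scores = {"Dog": 0.45, "Car": 0.22, "Animal": 0.31}
print(active_groups(scores, 0.2))  # loose: all three groups fire
print(active_groups(scores, 0.3))  # stricter: ['Dog', 'Animal']
```

At 0.2, weakly related groups like "Car" leak into the activation set and blend into the output; at 0.3, only strong matches survive.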

🤒 Symptom: Repetitive Loops ("and then and then")

Diagnosis: The SequenceModel is stuck in a local probability maximum.

Fix:

  • Increase sequence.repeatPenalty: This penalizes reusing tokens in the same sentence.
  • Decrease deduction.decayFactor: The system might be holding onto the previous context too strongly, re-predicting the same concept.
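A minimal sketch of what a repeat penalty does, assuming the common convention that repeatPenalty > 1 divides the scores of already-emitted tokens (the function and score values here are illustrative, not BSP's actual code):

```python
# Hypothetical sketch: scale down scores of tokens already emitted
# in the current sentence, so an alternative token can win.
def apply_repeat_penalty(scores: dict[str, float],
                         emitted: set[str],
                         repeat_penalty: float) -> dict[str, float]:
    return {tok: (s / repeat_penalty if tok in emitted else s)
            for tok, s in scores.items()}

scores = {"and": 0.5, "then": 0.4, "home": 0.3}
penalized = apply_repeat_penalty(scores, {"and", "then"}, 2.0)
# "and" drops to 0.25 and "then" to 0.2, so "home" (0.3) now wins
```

With no penalty, "and" stays on top forever and the loop continues; raising the penalty widens the gap in favor of fresh tokens.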

🤒 Symptom: Amnesia (Forgets context immediately)

Diagnosis: The ConversationContext or DeductionGraph decays too fast.

Fix:

  • Increase deduction.decayFactor (default: 0.9 → try 0.99). This keeps predicted groups active longer.
  • Check context.maxHistory: ensure you are actually keeping enough tokens in short-term memory.

The "Safe" vs. "Experimental" Configs

✅ The Safe Baseline (Stability)

Use this for demos and standard chat.

{
  "learner": {
    "activationThreshold": 0.25,
    "newGroupThreshold": 0.6
  },
  "deduction": {
    "beamWidth": 5,
    "maxDepth": 2
  }
}

🧪 The Creative/Loose Mode (Brainstorming)

Use this if you want the model to make wild associations (the equivalent of a high sampling temperature).

{
  "learner": {
    "activationThreshold": 0.1,  // Activate everything remotely related
    "newGroupThreshold": 0.4     // Create groups easily
  },
  "deduction": {
    "beamWidth": 20,             // Explore many paths
    "maxDepth": 4                // Deep reasoning chains
  }
}

Debugging Tools

Use the /debug command in the chat interface to see what is happening under the hood: