Practical Tuning Guide
So, the model is hallucinating? Or maybe it's repeating "and the and the" forever? Here's how to fix it.
BSP is highly configurable. Unlike training a Transformer, where you mostly tweak the learning rate, here you tune structural parameters of the graph and memory.
Symptom Checker
🤒 Symptom: "Word Salad" / Incoherent Output
Diagnosis: The system is activating too many irrelevant groups, mixing concepts that shouldn't mix.
Fix:
- Increase `learner.activationThreshold` (Default: 0.2 → Try 0.3). This forces the system to activate only groups that strongly match the input (see the sketch below).
- Check `learner.mergeThreshold`: if groups are too broad (merging "Dog" and "Car"), increase it (e.g., 0.85) to keep concepts distinct.
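To make the gating concrete, here is a minimal sketch in TypeScript. The `ConceptGroup` interface and `matchScore` hook are assumptions for illustration, not the actual BSP API:

```typescript
// Illustrative sketch only -- these names are assumptions, not the BSP API.
// Assumes each group can score its match against the input in [0, 1].
interface ConceptGroup {
  id: string;
  matchScore(input: string): number;
}

function selectActiveGroups(
  groups: ConceptGroup[],
  input: string,
  activationThreshold: number, // learner.activationThreshold
): ConceptGroup[] {
  // Raising the threshold from 0.2 to 0.3 prunes weakly matching groups,
  // so fewer unrelated concepts get mixed into the same response.
  return groups.filter((g) => g.matchScore(input) >= activationThreshold);
}
```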
🤒 Symptom: Repetitive Loops ("and then and then")
Diagnosis: The `SequenceModel` is stuck in a local probability maximum.
Fix:
- Increase `sequence.repeatPenalty`: this penalizes reusing tokens within the same sentence (see the sketch below).
- Decrease `deduction.decayFactor`: the system might be holding onto the previous context too strongly and re-predicting the same concept.
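Here is a rough sketch of what a repeat penalty does to candidate scores. The function shape and the penalty scale are assumptions, not BSP's real implementation:

```typescript
// Illustrative sketch -- assumed scoring hook, not the actual BSP API.
// Applies sequence.repeatPenalty to candidate tokens already emitted
// in the current sentence.
function penalizeRepeats(
  scores: Map<string, number>, // candidate token -> raw score
  emitted: Set<string>,        // tokens already used in this sentence
  repeatPenalty: number,       // sequence.repeatPenalty, e.g. 1.5 (assumed scale)
): Map<string, number> {
  const adjusted = new Map(scores);
  for (const token of emitted) {
    const score = adjusted.get(token);
    // Dividing by the penalty makes repeats less attractive, which is
    // what breaks "and then and then" loops.
    if (score !== undefined) adjusted.set(token, score / repeatPenalty);
  }
  return adjusted;
}
```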
🤒 Symptom: Amnesia (Forgets context immediately)
Diagnosis: The `ConversationContext` or `DeductionGraph` decays too fast.
Fix:
- Increase `deduction.decayFactor` (Default: 0.9 → Try 0.99). This keeps predicted groups active longer (see the sketch below).
- Check `context.maxHistory`: make sure you are actually keeping enough tokens in short-term memory.
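A sketch of the decay step, with assumed names and an assumed cutoff, to show why the jump from 0.9 to 0.99 is so dramatic:

```typescript
// Illustrative sketch -- assumed decay step, not the actual BSP API.
// Each generation step, every active group's energy is multiplied by
// deduction.decayFactor; groups below a cutoff drop out of context.
function decayActivations(
  activations: Map<string, number>, // group id -> activation energy
  decayFactor: number,              // deduction.decayFactor: 0.9 vs 0.99
  floor = 0.01,                     // assumed cutoff for "forgotten"
): Map<string, number> {
  const next = new Map<string, number>();
  for (const [id, energy] of activations) {
    const decayed = energy * decayFactor;
    if (decayed >= floor) next.set(id, decayed);
  }
  return next;
}
```

At `decayFactor = 0.9`, a fully active group falls below a 0.01 floor in about 44 steps; at 0.99 it survives roughly 458 steps, i.e. a ~10× longer effective memory.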
The "Safe" vs. "Experimental" Configs
✅ The Safe Baseline (Stability)
Use this for demos and standard chat.
{
"learner": {
"activationThreshold": 0.25,
"newGroupThreshold": 0.6
},
"deduction": {
"beamWidth": 5,
"maxDepth": 2
}
}
🧪 The Creative/Loose Mode (Brainstorming)
Use this if you want the model to make wild associations (high temperature equivalent).
{
"learner": {
"activationThreshold": 0.1, // Activate everything remotely related
"newGroupThreshold": 0.4 // Create groups easily
},
"deduction": {
"beamWidth": 20, // Explore many paths
"maxDepth": 4 // Deep reasoning chains
}
}
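If you keep these presets as files, a small typed loader makes switching between them explicit. The sketch below is an assumption (the `BspConfig` shape mirrors only the keys shown in this guide, and the file paths are hypothetical); note that strict `JSON.parse` rejects the `//` comments in the creative preset, so they are stripped first:

```typescript
import { readFileSync } from "node:fs";

// Assumed config shape, mirroring only the keys used in this guide.
interface BspConfig {
  learner: { activationThreshold: number; newGroupThreshold: number };
  deduction: { beamWidth: number; maxDepth: number };
}

function loadConfig(path: string): BspConfig {
  // Strip // comments so the creative preset parses as strict JSON.
  const raw = readFileSync(path, "utf8").replace(/\/\/.*$/gm, "");
  return JSON.parse(raw) as BspConfig;
}

// Hypothetical paths for the two presets above.
const safe = loadConfig("configs/safe.json");
const creative = loadConfig("configs/creative.json");
```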
Debugging Tools
Use the `/debug` command in the chat interface to see "Under the Hood":
- `activeGroups`: What concepts triggered?
- `surprise`: How unexpected was your input? (High surprise = learning moment.)
- `deductionPath`: Why did it predict X? (Trace the graph links.)
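The field names suggest a payload shape like the one below. This interface is inferred from this guide, not a documented schema:

```typescript
// Assumed shape of the /debug payload, inferred from the field names
// above; not a documented schema.
interface DebugInfo {
  activeGroups: string[];  // concept groups triggered by the input
  surprise: number;        // higher = more unexpected input
  deductionPath: string[]; // graph links that led to the prediction
}

// Example: flag high-surprise turns as learning moments.
function isLearningMoment(info: DebugInfo, threshold = 0.8): boolean {
  // Assumed: surprise is normalized to [0, 1]; the 0.8 cutoff is arbitrary.
  return info.surprise >= threshold;
}
```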