These guides explain key concepts and patterns for building trustworthy AI systems with AGISystem2. Each guide includes theory, examples, and best practices.

Available Guides

Geometric Reasoning

Understanding how knowledge is represented and queried in high-dimensional vector spaces.

  • Why high dimensions work
  • The binding equation
  • Query mechanics
  • Confidence interpretation

Theory Layering

Building modular, composable knowledge bases with stacked theories.

  • Theory architecture
  • Import and export
  • Namespace management
  • Versioning strategies

Explainability

Generating human-readable explanations with full provenance tracking.

  • Proof trace structure
  • Natural language generation
  • Audit trail management
  • Replay and verification

LLM Integration

Combining AGISystem2 with Large Language Models for enhanced capabilities.

  • Complementary strengths
  • Trust boundaries
  • Elaboration patterns
  • Hybrid architectures

HDC Strategies

Creating custom hyperdimensional computing implementations.

  • Strategy contract
  • Implementation guide
  • Validation and testing
  • Benchmarking

Guide: Geometric Reasoning

Why High Dimensions Work

In high-dimensional spaces (32,768+ dimensions), random vectors are almost orthogonal to each other. This "blessing of dimensionality" means unrelated concepts almost never collide by chance: the similarity between any two random vectors is tightly concentrated around the chance level, so even small deviations from it are meaningful signal.

[Figure: Similarity Distribution in High Dimensions — pairwise similarity of random vectors has mean 0.5, with 99% of pairs falling within [0.492, 0.508].]
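This concentration is easy to demonstrate empirically. The sketch below (illustrative only, not the AGISystem2 implementation) samples pairs of random binary vectors at D = 32,768 and measures their normalized Hamming similarity:

```javascript
// Sketch: similarity of random binary hypervectors concentrates around 0.5.
const D = 32768;

function randomVector(d) {
  const v = new Uint8Array(d);
  for (let i = 0; i < d; i++) v[i] = Math.random() < 0.5 ? 1 : 0;
  return v;
}

// Normalized Hamming similarity: fraction of matching bits.
function similarity(a, b) {
  let matches = 0;
  for (let i = 0; i < a.length; i++) if (a[i] === b[i]) matches++;
  return matches / a.length;
}

const sims = [];
for (let t = 0; t < 100; t++) {
  sims.push(similarity(randomVector(D), randomVector(D)));
}
const mean = sims.reduce((s, x) => s + x, 0) / sims.length;
console.log(mean.toFixed(3)); // ≈ 0.500
```

At this dimensionality the standard deviation of a single pair's similarity is only about 0.003, which is why the distribution in the figure is so narrow.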

The Binding Equation

All queries reduce to the fundamental equation:

Answer = KB XOR Partial_Query

where Partial_Query = everything known in the query
      Answer = the unknown bindings
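The equation can be exercised end to end with plain XOR on binary vectors. In the sketch below, a fact is stored as the XOR of role-bound parts and the unknown slot is recovered exactly; the helpers (`rand`, `xor`, `part`) and role markers are illustrative, not the AGISystem2 API:

```javascript
// Sketch of the binding equation: Answer = KB XOR Partial_Query.
const D = 32768;
const rand = d => Uint8Array.from({ length: d }, () => (Math.random() < 0.5 ? 1 : 0));
const xor = (a, b) => a.map((x, i) => x ^ b[i]);
const similarity = (a, b) => a.reduce((s, x, i) => s + (x === b[i] ? 1 : 0), 0) / a.length;

// Atoms and position markers
const sold = rand(D), alice = rand(D), car = rand(D);
const role0 = rand(D), role1 = rand(D), role2 = rand(D);

// Fact "(sold alice car)": XOR-superposition of role-bound parts
const part = (role, atom) => xor(role, atom);
const kb = xor(xor(part(role0, sold), part(role1, alice)), part(role2, car));

// Query "sold alice ?x": Partial_Query binds everything known, skips the hole
const partial = xor(part(role0, sold), part(role1, alice));

// Answer = KB XOR Partial_Query  →  role2 ⊕ car
const answer = xor(kb, partial);
const candidate = xor(answer, role2); // strip the position marker

console.log(similarity(candidate, car)); // 1 — exact recovery for a single fact
```

With a single stored fact the recovery is exact; when many facts are superposed, the candidate becomes noisy and the vocabulary-matching step below cleans it up.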

Query Mechanics Step by Step

  1. Parse: Identify holes (?variables) in the query
  2. Build Partial: Bind operator with known arguments (skip holes)
  3. Unbind: XOR the partial from the KB
  4. Extract: Remove position markers from candidates
  5. Match: Find most similar atoms in vocabulary
  6. Score: Calculate confidence from similarities
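Step 5 (Match) deserves a concrete sketch: a noisy candidate vector is compared against every named atom in the vocabulary and the best-scoring name wins. Helper names here are assumptions for illustration, not the real implementation:

```javascript
// Sketch of the Match step: cleanup of a noisy candidate against the vocabulary.
const D = 32768;
const rand = d => Uint8Array.from({ length: d }, () => (Math.random() < 0.5 ? 1 : 0));
const similarity = (a, b) => a.reduce((s, x, i) => s + (x === b[i] ? 1 : 0), 0) / a.length;

const vocabulary = { alice: rand(D), bob: rand(D), car: rand(D) };

// Simulate a decoded candidate: "car" with ~10% of its bits flipped by noise
const candidate = vocabulary.car.map((x, i) => (i % 10 === 0 ? 1 - x : x));

function bestMatch(cand, vocab) {
  let best = { name: null, score: -Infinity };
  for (const [name, vec] of Object.entries(vocab)) {
    const score = similarity(cand, vec);
    if (score > best.score) best = { name, score };
  }
  return best;
}

console.log(bestMatch(candidate, vocabulary)); // name: 'car', score ≈ 0.9
```

The winning score (here ≈ 0.9) is exactly what feeds step 6 and the confidence table in the next section.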

Confidence Interpretation

Score        Meaning       Recommended Action
> 0.80       Strong match  Trust the result
0.65 - 0.80  Good match    Probably correct; verify if critical
0.55 - 0.65  Weak match    Multiple interpretations possible
< 0.55       No match      Query failed; don't use the result
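In application code, the bands above translate directly into a small dispatch function. The thresholds are the ones from the table; the function name and action labels are illustrative:

```javascript
// Sketch: map a query confidence score to the recommended handling.
function interpretConfidence(score) {
  if (score > 0.80) return 'trust';               // strong match
  if (score >= 0.65) return 'verify-if-critical'; // good match
  if (score >= 0.55) return 'ambiguous';          // weak match
  return 'reject';                                // no match
}

console.log(interpretConfidence(0.92)); // 'trust'
console.log(interpretConfidence(0.48)); // 'reject'
```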

Guide: Theory Layering

Theory Architecture

Theories are self-contained knowledge modules that can be stacked:

Theory Stack (top to bottom):

  Domain Theory        Commerce, Physics...
  Intermediate Theory  Types, Roles...
  Core Theory          Atoms, Logic...
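The stacking idea can be sketched as theories that inherit the facts of the layer beneath them. The class and method names below are assumptions for illustration, not the AGISystem2 API:

```javascript
// Sketch: a theory is a named bundle of facts layered on a base theory.
class Theory {
  constructor(name, facts = [], base = null) {
    this.name = name;
    this.facts = facts;
    this.base = base; // the theory this one is stacked on, if any
  }
  // Facts visible from this layer: everything inherited, then our own.
  allFacts() {
    const inherited = this.base ? this.base.allFacts() : [];
    return [...inherited, ...this.facts];
  }
}

const core = new Theory('core', ['atom Socrates', 'atom Human']);
const types = new Theory('types', ['isA Human Type'], core);
const commerce = new Theory('commerce', ['role Seller'], types);

console.log(commerce.allFacts().length); // 4 — the whole stack is visible
```

A domain theory like `commerce` stays small because everything generic lives in the layers below it.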


Guide: Explainability

Proof Trace Structure

Every proof generates a complete trace that can be replayed:

ProofTrace {
  goal: "isA Socrates Mortal",
  method: "rule",
  rule: "humans_are_mortal",
  premises: [
    {
      goal: "isA Socrates Human",
      method: "direct",
      kbMatch: true,
      confidence: 0.95
    }
  ],
  timestamp: "2024-01-15T10:30:00Z",
  duration_ms: 12
}
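Replaying a trace means walking it recursively: direct steps are rechecked against the KB, and rule steps hold only if every premise replays. The sketch below follows the field names of the trace above but uses a plain `Set` as a stand-in for the KB lookup:

```javascript
// Sketch: replay a proof trace and recompute its support.
function replay(trace, kb) {
  if (trace.method === 'direct') {
    // Direct steps must still be present in the KB.
    return kb.has(trace.goal) ? trace.confidence : 0;
  }
  // Rule steps hold only if every premise replays successfully;
  // the weakest premise bounds the overall confidence.
  const scores = trace.premises.map(p => replay(p, kb));
  return scores.every(s => s > 0) ? Math.min(...scores) : 0;
}

const kb = new Set(['isA Socrates Human']);
const trace = {
  goal: 'isA Socrates Mortal', method: 'rule', rule: 'humans_are_mortal',
  premises: [{ goal: 'isA Socrates Human', method: 'direct', confidence: 0.95 }],
};

console.log(replay(trace, kb)); // 0.95
```

If the KB has changed since the trace was recorded, replay returns 0 and the explanation is flagged as stale rather than silently trusted.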

Generating Explanations

  1. Decode: Extract structure from result vector
  2. Template: Look up phrasing template for operator
  3. Fill: Replace slots with decoded arguments
  4. Refine: Optionally improve fluency with LLM
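Steps 2 and 3 can be sketched as a template table keyed by operator, with numbered slots filled from the decoded arguments. The templates and operator names here are illustrative:

```javascript
// Sketch of Template + Fill: phrasing templates with positional slots.
const templates = {
  isA: '{0} is a {1}',
  sold: '{0} sold {2} to {1}',
};

function explain(operator, args) {
  const template = templates[operator];
  // Replace each {n} slot with the n-th decoded argument.
  return template.replace(/\{(\d+)\}/g, (_, i) => args[Number(i)]);
}

console.log(explain('isA', ['Socrates', 'Mortal']));        // "Socrates is a Mortal"
console.log(explain('sold', ['Alice', 'Bob', 'the car']));  // "Alice sold the car to Bob"
```

Only step 4 (Refine) ever touches an LLM, and only after the factual sentence already exists.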

Guide: LLM Integration

Complementary Strengths

Capability         LLM             AGISystem2
Natural Language   Excellent       Template-based
Determinism        None            100%
Explainability     Limited         Full traces
Factual Accuracy   Hallucinations  Verified KB
Creativity         Excellent       Rule-based

Integration Pattern

// Use LLM for natural language input
const userQuery = "Who sold the car to Bob?";
const dsl = await llm.translate(userQuery);  // LLM generates DSL

// Use AGISystem2 for reasoning
const result = session.query(dsl);  // Deterministic, verifiable

// Use LLM for fluent output
const explanation = session.elaborate(result, { useLLM: true });
// LLM improves style only, cannot change facts

Trust Boundaries

The LLM is for STYLE only, never CONTENT: every fact in the final output must originate from the verified KB, and the LLM's rewrite may rephrase those facts but never add, drop, or alter them.
