# SynaptoClaw (Cognitive Memory Plugin for OpenClaw)
> **"Code without memory is just a calculator. Code with SynaptoClaw is an entity."**
## 🧠 SynaptoClaw: The Cognitive Architecture Parallel
SynaptoClaw is more than a vector database; it is a technical attempt to replicate the core neurological functions of the human brain within an AI agent.
### 1. The Hippocampus (Working Memory Buffer)
Humans do not store every trivial "okay" or "thanks". Our brains filter noise. **SynaptoClaw's Working Memory Buffer** (`buffer.ts`) mimics this by staging facts and only "promoting" them to long-term storage (LanceDB) if they cross importance thresholds or are reinforced by repetition.
### 2. Associative Thinking (AMHR & Knowledge Graph)
Human memory is associative, not just semantic. When you think of "Coffee", you might recall "that cafe in Kyiv". **SynaptoClaw's Associative Multi-Hop Retrieval** (`index.ts`) traverses the Knowledge Graph to surface connected facts even when mathematical vector similarity is low.
### 3. Synaptic Reinforcement (7-Channel Scoring)
The more you think about something, the stronger the neural pathway becomes. **SynaptoClaw's 7-Channel Scoring** (`recall.ts`) directly applies this. Facts that are frequently recalled (**Reinforcement**), emotionally charged (**Emotional Tone**), or recent (**Recency**) naturally rise to the surface of the agent's consciousness.
### 4. Continuous Reflection (User Profiling)
Just as humans build a self-identity from accumulated experiences, SynaptoClaw's **Reflection Engine** (`reflection.ts`) analyzes the entire memory pool to generate a psychological persona and deep behavioral patterns.
---
## Why SynaptoClaw?
Standard RAG (Retrieval-Augmented Generation) systems use simple vector similarity. They miss context, forget old facts, and don't understand _relationships_. SynaptoClaw solves this while remaining a drop-in replacement for `memory-lancedb`.
## Features
| Feature                          | memory-lancedb | **SynaptoClaw**      |
| -------------------------------- | -------------- | -------------------- |
| Vector search (LanceDB)          | ✅             | ✅                   |
| Google Gemini (free!)            | ❌             | ✅                   |
| OpenAI support                   | ✅             | ✅                   |
| Knowledge Graph                  | ❌             | ✅                   |
| AMHR (Associative Retrieval)     | ❌             | ✅                   |
| Smart Capture (LLM)              | ❌             | ✅                   |
| Hybrid Scoring (7-channel)       | ❌             | ✅                   |
| Conversation Stack (Compression) | ❌             | ✅                   |
| Memory Reflection / User Profile | ❌             | ✅                   |
| Memory Consolidation             | ❌             | ✅                   |
| Contradiction Resolution         | ❌             | ✅ (PHOENIX Logic)   |
| JSON-Mode API Optimization       | ❌             | ✅ (Gemma 3 Fix)     |
| Working Memory Buffer            | ❌             | ✅                   |
| JSONL Observability Tracer       | ❌             | ✅ (Deep Monitoring) |
| Prompt injection protection      | ✅             | ✅                   |
| GDPR-compliant forget            | ✅             | ✅                   |
| **Background Orchestration**     | ❌             | ✅ (429 Optimizer)   |
| **Batch Summarization**          | ❌             | ✅ (RPM Optimizer)   |
### 🧠 7-Channel Hybrid Recall Scoring
Instead of ranking by vector similarity alone, memories are ranked by a weighted combination of seven channels. This loosely mirrors how human cognition prioritizes thoughts:
```javascript
Score =
0.42 * VectorSimilarity +
0.16 * Importance +
0.1 * Temporal +
0.1 * Recency +
0.08 * Reinforcement +
0.08 * Graph +
0.06 * Emotional;
```
1. **Vector (0.42)** – Pure semantic and contextual similarity.
2. **Importance (0.16)** – Facts with high emotional or practical weight (assigned during Smart Capture) rise to the top.
3. **Recency (0.10)** – Uses exponential decay (`Math.exp(-decay * days)`). Old memories naturally fade unless reinforced.
4. **Temporal (0.10)** – Aligns "today" with memories matching the current date/context.
5. **Graph (0.08)** – Multi-hop connections in the Knowledge Graph. Highly connected nodes (like your name or core skills) naturally trigger associated memories.
6. **Reinforcement (0.08)** – Boosts frequently recalled facts: the more a memory is accessed, the stronger its pathway.
7. **Emotional (0.06)** – Matches the emotional tone of the current conversation to the original memory's tone.
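Under the weights listed above, the blend can be sketched as a small function. This is illustrative only: the interface and function names here are assumptions, and the real logic lives in `recall.ts`.

```typescript
// Hypothetical shape of one memory's per-channel scores, each in 0..1.
interface ChannelScores {
  vector: number;        // cosine similarity to the query
  importance: number;    // assigned at capture time
  temporal: number;      // match with the current date/context
  recency: number;       // exponential decay by age
  reinforcement: number; // normalized access count
  graph: number;         // knowledge-graph connectivity
  emotional: number;     // tone match with the conversation
}

// Weighted 7-channel blend; the weights sum to 1.0.
function hybridScore(c: ChannelScores): number {
  return (
    0.42 * c.vector +
    0.16 * c.importance +
    0.1 * c.temporal +
    0.1 * c.recency +
    0.08 * c.reinforcement +
    0.08 * c.graph +
    0.06 * c.emotional
  );
}

// Recency via exponential decay, as described above.
function recencyScore(ageDays: number, decay = 0.05): number {
  return Math.exp(-decay * ageDays);
}
```

Because the weights sum to 1, a memory that is perfect on every channel scores exactly 1.0, which keeps scores comparable across queries.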
### 🕸️ Knowledge Graph & AMHR
When you store a memory, SynaptoClaw uses an LLM to extract entities and relationships:
```
Memory: "I use Python for my web projects at Acme Corp"
→ Nodes: [User (Person), Python (Language), Acme Corp (Company)]
→ Edges: [User --uses--> Python, User --works_at--> Acme Corp]
```
**Associative Multi-Hop Retrieval (AMHR)** allows the system to traverse this graph when recalling. Asking about "Python" surfaces "Acme Corp" purely through associative graph links, even if the vector similarity is low.
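The traversal can be sketched as a bounded breadth-first walk over the extracted edges. The types and function below are illustrative assumptions, not SynaptoClaw's actual internals from `index.ts`:

```typescript
// Minimal edge shape for the extracted knowledge graph (an assumption).
type Edge = { from: string; rel: string; to: string };

// Collect every node reachable from `start` within `maxHops` hops,
// following edges in both directions (associations treated as symmetric).
function associatedNodes(edges: Edge[], start: string, maxHops = 2): Set<string> {
  const found = new Set<string>([start]);
  let frontier = [start];
  for (let hop = 0; hop < maxHops; hop++) {
    const next: string[] = [];
    for (const e of edges) {
      for (const [a, b] of [[e.from, e.to], [e.to, e.from]]) {
        if (frontier.includes(a) && !found.has(b)) {
          found.add(b);
          next.push(b);
        }
      }
    }
    frontier = next;
  }
  return found;
}

// Using the example memory above, "Python" reaches "Acme Corp"
// in two hops via the shared User node.
const exampleEdges: Edge[] = [
  { from: "User", rel: "uses", to: "Python" },
  { from: "User", rel: "works_at", to: "Acme Corp" },
];
```

A query about "Python" first reaches `User` (hop 1), then `Acme Corp` (hop 2), even though the two strings share no vector similarity.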
### 📚 Conversation Stack
To understand full context without blowing up the 15k context window (e.g. Gemma 3 limits), SynaptoClaw utilizes a `ConversationStack`.
It compresses each user/assistant turn into a ~30-word summary, accumulating them into a session-scoped stack.
**Result:** ~17x token compression with full context retention.
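The mechanism can be sketched as a stack of per-turn summaries. The class name matches the one mentioned above, but its shape here is an assumption; `summarize` stands in for the LLM call that condenses a turn to roughly 30 words:

```typescript
// Session-scoped stack of compressed conversation turns (a sketch).
class ConversationStack {
  private summaries: string[] = [];

  // Compress one user/assistant exchange and push it onto the stack.
  push(
    userTurn: string,
    assistantTurn: string,
    summarize: (text: string) => string // stands in for the LLM summarizer
  ): void {
    this.summaries.push(summarize(`${userTurn}\n${assistantTurn}`));
  }

  // The full session context as a compact, ordered digest.
  context(): string {
    return this.summaries.map((s, i) => `${i + 1}. ${s}`).join("\n");
  }
}
```

Each turn costs a fixed ~30 words regardless of its original length, which is where the large compression ratio comes from on long sessions.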
### 💡 Smart Capture & Working Memory Buffer
Traditional regex capture only catches obvious patterns like "I prefer X".
Smart Capture routes messages through an LLM to extract facts, placing them in a **Working Memory Buffer**. The buffer requires a fact to cross a threshold (e.g. `importance >= 0.7`, or mentioned more than 3 times) before promoting it to the permanent LanceDB store, loosely mirroring how the human hippocampus filters what enters long-term memory.
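The promotion check reduces to a small predicate over the quoted thresholds. The field names below are illustrative; the real logic in `buffer.ts` may differ:

```typescript
// Hypothetical shape of a fact held in the working-memory buffer.
interface BufferedFact {
  text: string;
  importance: number; // 0..1, assigned by Smart Capture
  mentions: number;   // how many times the fact has reappeared
}

// Promote to long-term storage when a fact is important enough
// on its own, or has been reinforced by repetition.
function shouldPromote(fact: BufferedFact): boolean {
  return fact.importance >= 0.7 || fact.mentions > 3;
}
```

Either condition alone is sufficient: a single highly important fact is stored immediately, while a trivial one must recur before it is kept.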
### 🪞 Memory Reflection
Generates a high-level "user profile" from all stored memories using LLM analysis. Instead of searching raw facts, it summarizes patterns.
```
Summary: "User is a Ukrainian developer who is self-teaching programming
through AI tools, focusing on practical projects like Telegram bots."
Patterns:
- Prefers hands-on learning over theory
- Focuses on Python ecosystem
```
### 🧹 Memory Consolidation
Merges duplicate or similar memories into stronger consolidated facts via the `openclaw ltm consolidate` CLI command.
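A deduplicating merge pass can be sketched as follows; `similarity` stands in for an embedding-distance comparison, and nothing here is the plugin's actual consolidation code:

```typescript
// Keep one representative per cluster of near-duplicate memories (a sketch).
function consolidate(
  memories: string[],
  similarity: (a: string, b: string) => number, // e.g. cosine of embeddings
  threshold = 0.9
): string[] {
  const kept: string[] = [];
  for (const m of memories) {
    // Drop a memory if it is a near-duplicate of one already kept.
    if (!kept.some((k) => similarity(k, m) >= threshold)) kept.push(m);
  }
  return kept;
}
```

A real pass would also merge metadata (importance, mention counts) into the surviving fact rather than simply discarding duplicates.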
## Installation
As an open-source `OpenClaw` plugin, installation is simple:
1. **Clone into `extensions/`**:
Navigate to your OpenClaw root directory and clone SynaptoClaw:
```bash
git clone https://github.com/HollyLight28/SynaptoClaw.git extensions/memory-hybrid
```
2. **Install dependencies**:
```bash
pnpm install
```
3. **Configure Settings**:
Add the following to your `~/.openclaw/config.json`.
### Configuration (Google Gemini Free Tier)
```json
{
"plugins": {
"slots": { "memory": "memory-hybrid" },
"entries": {
"memory-hybrid": {
"enabled": true,
"config": {
"embedding": {
"apiKey": "${GEMINI_API_KEY}",
"model": "gemini-embedding-002"
},
"autoRecall": true,
"autoCapture": true,
"smartCapture": true
}
}
}
}
}
```
### Configuration (OpenAI)
```json
{
"embedding": {
"apiKey": "${OPENAI_API_KEY}",
"model": "text-embedding-3-small"
},
"autoCapture": true,
"autoRecall": true
}
```
### All Config Options
| Option | Default | Description |
| ------------------ | ---------------------------- | ---------------------------------------------------------------------- |
| `embedding.apiKey` | _required_ | API key (OpenAI or Google) |
| `embedding.model` | `text-embedding-004` | Latest Google embedding model (768 dims) |
| `chatModel` | auto | LLM for graph/capture (auto: `gemini-3.1-flash-lite` or `gpt-4o-mini`) |
| `dbPath` | `~/.openclaw/memory/lancedb` | Database path |
| `autoCapture` | `true` | Auto-capture from conversations |
| `autoRecall` | `true` | Auto-inject memories into context |
| `smartCapture` | `true` | Use LLM for intelligent fact extraction |
| `captureMaxChars` | `500` | Max message length for capture |
## Tools
The plugin registers four tools for the AI agent:
| Tool | Description |
| ---------------- | ----------------------------------------------------- |
| `memory_recall` | Search memories (hybrid scoring + graph enrichment) |
| `memory_store` | Store memory (graph extraction + contradiction check) |
| `memory_forget` | Delete memory (GDPR-compliant) |
| `memory_reflect` | Generate user profile from all memories |
## CLI Commands
```bash
openclaw ltm list # Show memory count
openclaw ltm search <query> # Search memories with hybrid scoring
openclaw ltm graph # Show knowledge graph stats
openclaw ltm stats # Show overall statistics
openclaw ltm consolidate # Merge similar memories
openclaw ltm reflect # Generate user profile
# NEW: Real-time Observability Dashboard
bun extensions/memory-hybrid/scripts/monitor.ts
```
## 🛠️ Observability & Monitoring
SynaptoClaw provides deep, non-blocking observability into every thought and recall.
### 1. The Trace Log
All critica
... (truncated)