Supermemory OpenClaw
🧠 Graph-based memory implementation for OpenClaw 🦞
Install
openclaw plugins install openclaw-memory-supermemory
README
# 🧠 Supermemory OpenClaw Plugin
Local graph-based memory plugin for [OpenClaw](https://github.com/nichochar/openclaw) — inspired by [Supermemory](https://supermemory.ai). Runs entirely on your machine with no cloud dependencies.
> **Disclaimer:** This is an independent project. It is not affiliated with, endorsed by, or maintained by the Supermemory team. The name reflects architectural inspiration, not a partnership.
## Features
- **LLM Fact Extraction** — Extracts discrete, entity-centric facts from each conversation turn via an LLM subagent, matching Supermemory's cloud approach locally.
- **Graph Memory** — Automatic entity extraction, relationship tracking (Updates / Extends / Derives), memory versioning with `parent_memory_id` chains.
- **User Profiles** — Static long-term facts + dynamic recent context, automatically maintained and injected into system prompt. Static memories (`is_static`) are protected from decay.
- **Automatic Forgetting** — Temporal expiration for time-bound facts (including absolute dates like "January 15"), decay for low-importance unused memories, contradiction resolution.
- **Hybrid Search** — BM25 keyword (FTS5) + graph-augmented multi-hop retrieval with MMR diversity re-ranking. Superseded memories are filtered at the query level. Vector similarity (sqlite-vec) used when available.
- **Auto-Recall** — Injects relevant memories + user profile before every AI turn via the `before_prompt_build` hook.
- **OpenClaw Runtime Integration** — Registers memory tools, a built-in memory search manager, and a pre-compaction memory flush plan when the host API supports them.
## How It Works
```mermaid
flowchart LR
    subgraph input ["💬 Conversation"]
        A[User message] --> B[AI response]
    end
    subgraph extract ["🧠 Memory Engine"]
        C[Extract discrete facts via LLM]
        C --> D[Deduplicate]
        D --> E[Classify & embed]
    end
    subgraph graph ["🔗 Knowledge Graph"]
        F["Link entities\n(people, projects)"]
        F --> G{Relationship detection}
        G --> H["🔄 Updates — new fact\nsupersedes old"]
        G --> I["➕ Extends — enriches\nexisting fact"]
        G --> J["🔮 Derives — inferred\nconnection"]
    end
    subgraph recall ["🔎 Recall"]
        K["User Profile\n(static + dynamic facts)"]
        L["Hybrid Search\n(vector + keyword + graph)"]
        K --> M[Inject into next AI turn]
        L --> M
    end
    B --> C
    E --> F
    J --> K
    H --> K
    I --> K
```
1. **You talk to your AI normally.** Share preferences, mention projects, discuss problems.
2. **Auto-capture** uses your configured LLM to extract discrete facts from the last conversation turn (both user and assistant messages).
3. **Graph engine** links each extracted fact to entities and detects relationships:
- **Updates** — "Iván moved to Copenhagen" supersedes "Iván lives in Madrid"
- **Extends** — "Iván leads a research team of 4" enriches "Iván is an AI Scientist at Santander"
- **Derives** — Inferred connections from shared entities
4. **Auto-recall** injects your user profile + relevant memories before each AI turn.
5. **Automatic forgetting** cleans up expired time-bound facts and decays unused low-importance memories.
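Both forgetting mechanisms in step 5 are tunable via the plugin config. A sketch using the documented options (see the Configuration Reference for defaults):

```json5
config: {
  forgetExpiredIntervalMinutes: 60, // run the expiration/decay sweep hourly
  temporalDecayDays: 90             // decay unused low-importance memories after ~3 months
}
```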
## Quick Start
### Step 1: Install the plugin
```bash
openclaw plugins install openclaw-memory-supermemory
```
### Step 2: Configure OpenClaw
Edit `~/.openclaw/openclaw.json` and add **both** the memory slot and the plugin entry:
```json5
{
  plugins: {
    // REQUIRED: Assign this plugin to the memory slot
    slots: {
      memory: "openclaw-memory-supermemory"
    },
    // RECOMMENDED: Suppress the auto-load security warning
    allow: ["openclaw-memory-supermemory"],
    // Plugin configuration
    entries: {
      "openclaw-memory-supermemory": {
        enabled: true,
        config: {
          embedding: {
            provider: "openai",
            model: "text-embedding-3-small",
            apiKey: "${OPENAI_API_KEY}" // reads from env var
          },
          autoRecall: true,
          autoCapture: true
        }
      }
    }
  }
}
```
> **Important:** The `slots.memory` line is required — without it, OpenClaw won't use the plugin even if it's installed.
### Step 3: Restart OpenClaw
Restart the OpenClaw gateway for the plugin to load.
### Step 4: Verify it works
```bash
openclaw supermemory stats
```
You should see output like:
```
Total memories: 0
Active memories: 0
Superseded memories: 0
Entities: 0
Relationships: 0
Vector search: unavailable
```
Zero counts are normal on first run. `Vector search: unavailable` is expected — see [Vector Search](#vector-search) below.
## Embedding Providers
You need an embedding provider for semantic search. Choose one:
### OpenAI (recommended for simplicity)
```json5
embedding: {
  provider: "openai",
  model: "text-embedding-3-small",
  apiKey: "${OPENAI_API_KEY}"
}
```
Set the environment variable before starting OpenClaw:
```bash
export OPENAI_API_KEY="sk-..."
```
### Ollama (fully local, no API key)
Install [Ollama](https://ollama.ai) and pull a model:
```bash
ollama pull nomic-embed-text
```
```json5
embedding: {
  provider: "ollama",
  model: "nomic-embed-text"
}
```
### Other OpenAI-compatible providers
Any provider with an OpenAI-compatible `/v1/embeddings` endpoint works:
```json5
embedding: {
  provider: "openai",
  model: "your-model-name",
  apiKey: "${YOUR_API_KEY}",
  baseUrl: "https://your-provider.com/v1"
}
```
### Supported models (auto-detected dimensions)
| Model | Provider | Dimensions |
|-------|----------|-----------|
| `nomic-embed-text` | Ollama | 768 |
| `text-embedding-3-small` | OpenAI | 1536 |
| `text-embedding-3-large` | OpenAI | 3072 |
| `mxbai-embed-large` | Ollama | 1024 |
| `all-minilm` | Ollama | 384 |
| `snowflake-arctic-embed` | Ollama | 1024 |
For other models, set `embedding.dimensions` explicitly.
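For example, to use a model the plugin does not recognize, declare its vector size directly (the model name below is a placeholder):

```json5
embedding: {
  provider: "ollama",
  model: "my-custom-embed-model", // placeholder — substitute your model's name
  dimensions: 512                 // must match the model's output vector size
}
```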
## AI Tools
The AI uses these tools autonomously:
| Tool | Description |
|------|-------------|
| `memory_search` | Hybrid search across all memories (vector + keyword + graph) |
| `memory_store` | Save information with automatic entity extraction, relationship detection, and optional `isStatic` flag for permanent facts |
| `memory_forget` | Delete memories by ID or search query |
| `memory_profile` | View/rebuild the automatically maintained user profile |
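As an illustration, a `memory_store` invocation by the model might look like the following. The exact argument schema is not documented here; every field name other than `isStatic` is an assumption:

```json5
{
  tool: "memory_store",
  arguments: {
    content: "Iván is allergic to shellfish", // hypothetical fact to persist
    isStatic: true                            // mark as permanent, protected from decay
  }
}
```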
## CLI Commands
```bash
openclaw supermemory stats # Show memory statistics
openclaw supermemory search <query> # Search memories
openclaw supermemory search "rust" --limit 5
openclaw supermemory profile # View user profile
openclaw supermemory profile --rebuild # Force rebuild profile
openclaw supermemory wipe --confirm # Delete all memories
```
## Verifying Memories
After chatting with the AI, you can verify memories are being captured:
```bash
# Check memory counts increased
openclaw supermemory stats
# Search for something you mentioned
openclaw supermemory search "your topic"
# View your auto-built profile
openclaw supermemory profile
```
## Vector Search
The plugin uses FTS5 keyword search + graph traversal by default. Vector similarity search requires `sqlite-vec`, which is bundled with OpenClaw's built-in memory system but not automatically available to external plugins.
If your OpenClaw build includes `sqlite-vec`, the plugin will detect and use it automatically.
## Troubleshooting
### "plugins.allow is empty" warning
Suppress it by adding:
```json5
plugins: {
  allow: ["openclaw-memory-supermemory"]
}
```
## Configuration Reference
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `embedding.provider` | string | `"ollama"` | Embedding provider (`ollama`, `openai`, etc.) |
| `embedding.model` | string | `"nomic-embed-text"` | Embedding model name |
| `embedding.apiKey` | string | — | API key (cloud providers only, supports `${ENV_VAR}` syntax) |
| `embedding.baseUrl` | string | — | Custom API base URL |
| `embedding.dimensions` | number | auto | Vector dimensions (auto-detected for known models) |
| `autoCapture` | boolean | `true` | Auto-capture memories from conversations |
| `captureMode` | string | `"extract"` | `"extract"` (LLM fact extraction) or `"off"` (disable auto-capture) |
| `autoRecall` | boolean | `true` | Auto-inject memories + profile into context |
| `profileFrequency` | number | `50` | Rebuild user profile every N interactions |
| `entityExtraction` | string | `"pattern"` | Current implementation is pattern-based. `"llm"` is reserved and currently behaves the same as `"pattern"`. |
| `forgetExpiredIntervalMinutes` | number | `60` | Minutes between forgetting cleanup runs |
| `temporalDecayDays` | number | `90` | Days before low-importance unused memories decay |
| `maxRecallResults` | number | `10` | Max memories injected per auto-recall |
| `vectorWeight` | number | `0.5` | Weight for vector similarity in hybrid search |
| `textWeight` | number | `0.3` | Weight for BM25 keyword search |
| `graphWeight` | number | `0.2` | Weight for graph-augmented retrieval |
| `dbPath` | string | `~/.openclaw/memory/supermemory.db` | SQLite database path |
| `captureMaxChars` | number | `2000` | Max message length for auto-capture |
| `debug` | boolean | `false` | Enable verbose logging |
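For example, on a build without `sqlite-vec` you might bias retrieval toward BM25 keyword matches and trim auto-recall. The values below are illustrative, not recommendations (the default weights happen to sum to 1.0, though the README does not state that they must):

```json5
config: {
  vectorWeight: 0.2,
  textWeight: 0.6,
  graphWeight: 0.2,
  maxRecallResults: 5 // inject fewer memories per auto-recall
}
```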
## Fact Extraction
By default, the plugin uses your configured LLM to extract discrete, entity-centric facts from each conversation turn.
**Input conversation:**
> "Caught up with Iván today. He's working at Santander as an AI Scientist now, doing research on knowledge graphs. He lives in Madrid and mentioned a deadline next Tuesday for a paper submission."
**Extracted memories:**
- Iván works at Santander as an AI Scientist
- Iván researches knowledge graphs
- Iván lives in Madrid
- Iván has a paper submission deadline next Tuesday
Each fact is stored as a separate memory with automatic entity linking, relationship detection (Updates/Extends/Derives),
... (truncated)