# @openclaw/memory-v2
A structured memory system for AI agents with semantic search, graph relations, and time-decay importance.
## Features
- **Typed Memories**: `learning`, `decision`, `interaction`, `event`, `insight`
- **Importance Scoring**: 1-10 scale with time-decay and access boosting
- **Graph Relations**: Link memories with `caused`, `related`, `supersedes`, `contradicts`, `elaborates`
- **Hybrid Search**: Combines keyword matching with semantic similarity
- **Local Embeddings**: Uses `@xenova/transformers` (all-MiniLM-L6-v2) — no API keys needed
- **Binary Storage**: Base64-encoded Float32 embeddings (~42% smaller than JSON arrays)
- **Auto-Linking**: New memories automatically link to similar existing ones
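The binary-storage saving is easy to see with a quick sketch (helper names here are illustrative, not the plugin's internal API): a 384-dimension Float32 vector packs into 1,536 bytes, which is exactly 2,048 base64 characters, while the equivalent JSON number array typically runs well past 3,000 characters.

```typescript
// Sketch: pack/unpack a Float32 embedding as base64 (illustrative helpers,
// not the plugin's internal API).
function encodeEmbedding(vec: number[]): string {
  return Buffer.from(new Float32Array(vec).buffer).toString("base64");
}

function decodeEmbedding(b64: string): number[] {
  const buf = Buffer.from(b64, "base64");
  return Array.from(
    new Float32Array(buf.buffer, buf.byteOffset, buf.byteLength / 4)
  );
}

// 384 dims × 4 bytes = 1536 bytes → exactly 2048 base64 characters.
const vec = Array.from({ length: 384 }, (_, i) => Math.sin(i));
const b64 = encodeEmbedding(vec);
console.log(b64.length);                              // 2048
console.log(JSON.stringify(vec).length > b64.length); // true
```

Note that the round trip goes through Float32, so decoded values match the originals only to single precision; for similarity search that loss is negligible.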
## Installation
```bash
npm install @openclaw/memory-v2
```
## Configuration
Add to your OpenClaw config:
```json
{
  "plugins": {
    "load": {
      "paths": ["./node_modules/@openclaw/memory-v2"]
    },
    "entries": {
      "memory-v2": {
        "enabled": true,
        "config": {
          "indexPath": "~/.openclaw/workspace/memory/index/memory-index.jsonl",
          "embedding": {
            "provider": "local",
            "modelName": "Xenova/all-MiniLM-L6-v2"
          }
        }
      }
    }
  }
}
```
## Tools
### `memory_v2_search`
Search memories with hybrid keyword + semantic matching.
```
memory_v2_search query="project decisions" maxResults=5
```
### `memory_v2_add`
Create a new memory entry (auto-embeds and auto-links).
```
memory_v2_add type="decision" content="Switched to TypeScript for type safety" importance=8 tags=["code","typescript"]
```
### `memory_v2_stats`
Show index statistics: total memories, embedding coverage, type distribution.
### `memory_v2_embed`
Generate embeddings for memories without them.
```
memory_v2_embed limit=100 force=false
```
### `memory_v2_link`
Manually create relations between memories.
```
memory_v2_link sourceId="mem_123" targetId="mem_456" relationType="related"
```
### `memory_v2_migrate`
Migrate the index from the v2 format (JSON number arrays) to v3 (base64-encoded binary). Reduces file size by ~42%.
### `memory_v2_get`
Read a memory file directly.
## Index Format (v3 JSONL)
```jsonl
{"_meta":true,"version":"3.0","lastUpdated":"2026-02-20T...","embeddingFormat":"base64-f32","dimensions":384}
{"id":"mem_1234","type":"learning","importance":8,"content":"...","embedding":"base64...","relations":[{"id":"mem_5678","type":"related"}]}
```
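A minimal reader for this layout might look like the following sketch (field names follow the example lines above; the function and file handling are illustrative):

```typescript
import * as fs from "node:fs";

// Shape of one memory line, per the v3 example above.
interface MemoryRecord {
  id: string;
  type: string;
  importance: number;
  content: string;
  embedding?: string; // base64-encoded Float32 values
  relations?: { id: string; type: string }[];
}

// Sketch: read the whole index, skipping the leading _meta line.
function loadIndex(path: string): MemoryRecord[] {
  return fs
    .readFileSync(path, "utf8")
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line))
    .filter((rec) => !rec._meta); // first line is metadata, not a memory
}
```

Because each record is one line, appends are cheap and a corrupted line can be skipped without losing the rest of the index.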
## Embedding Models
| Model | Size | Quality | Speed |
|-------|------|---------|-------|
| `Xenova/all-MiniLM-L6-v2` | ~22MB | Good | Fast (~2.4s load) |
| `Xenova/all-mpnet-base-v2` | ~110MB | Better | Slower |
## Effective Importance Formula
```
effectiveImportance = baseImportance × decayFactor × accessBoost
decayFactor = max(0.3, 1 - daysSinceCreation/365)
accessBoost = min(2.0, 1 + accessCount × 0.1)
```
Recent memories rank higher. Frequently accessed memories get boosted.
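The formula transcribes directly into code (a sketch; parameter names are illustrative):

```typescript
// Direct transcription of the formula above (illustrative sketch).
function effectiveImportance(
  baseImportance: number,    // 1-10 scale
  daysSinceCreation: number,
  accessCount: number
): number {
  const decayFactor = Math.max(0.3, 1 - daysSinceCreation / 365);
  const accessBoost = Math.min(2.0, 1 + accessCount * 0.1);
  return baseImportance * decayFactor * accessBoost;
}

console.log(effectiveImportance(8, 0, 0));   // 8 (fresh, unread: no decay, no boost)
console.log(effectiveImportance(8, 365, 5)); // ≈ 3.6 (year old, read 5 times)
```

Decay bottoms out at 0.3, so even very old memories keep 30% of their base score, and the access boost caps at 2×.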
## Hybrid Search Scoring
```
score = (semanticScore × 0.6) + (keywordScore × 0.4) + (effectiveImportance/10 × 0.1)
```
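As code, with cosine similarity shown as one reasonable source for `semanticScore` (a sketch; the plugin's exact normalization may differ):

```typescript
// The scoring rule above as a function (illustrative sketch).
function hybridScore(
  semanticScore: number,       // e.g. cosine similarity, in [0, 1]
  keywordScore: number,        // keyword-match score, in [0, 1]
  effectiveImportance: number  // decayed importance, in [0, 10]
): number {
  return semanticScore * 0.6 + keywordScore * 0.4 + (effectiveImportance / 10) * 0.1;
}

// Cosine similarity: a common way to produce semanticScore from embeddings.
function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Since the two similarity weights already sum to 1.0, the importance term acts as a bonus of at most 0.1: it nudges important memories upward without letting them outrank clearly more relevant ones.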
## License
MIT