# Memory LanceDB Context Engine
[npm package](https://www.npmjs.com/package/memory-lancedb-context) · [License: MIT](https://opensource.org/licenses/MIT)
A smart ContextEngine plugin for [OpenClaw](https://github.com/openclaw/openclaw) that integrates with [memory-lancedb-pro](https://github.com/openclaw/memory-lancedb-pro) for retrieval-augmented context management, auto-capture, intelligent memory injection, batch import, and memory-aware compaction.
## Features
### 🧠 Intelligent Memory Injection
- Automatically retrieves relevant memories during context assembly
- Injects historical context into system prompt with relevance scores
- Supports hybrid search (vector + BM25 + reranking)
### 🔍 Auto-Capture
- Detects important user messages using pattern matching
- Supports Chinese, English, and Czech trigger phrases
- Categories: preference, decision, fact, entity, other
### 📦 Batch Import (`ingestBatch`)
- Import multiple messages at once
- Configurable batch size (default: 100)
- Automatic duplicate detection
- Progress tracking with detailed results
### 📜 Historical Session Bootstrap
- Import historical sessions into memory on startup
- Archive file support (.archive, .bak, history-*)
- Preserves context across sessions
### 🗜️ Custom Compaction Strategies
| Strategy | Description |
|----------|-------------|
| **aggressive** | Maximum compression, minimal preservation |
| **balanced** | Default, good balance between compression and context |
| **conservative** | Minimal compression, maximum preservation |
| **custom** | User-defined instructions |
### ⚡ Smart Summarization
- Auto-generates summaries from recent messages
- Extracts decisions, facts, entities, preferences
- Stores summaries for future context recovery
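As an illustration of what "extracts decisions, facts, entities, preferences" could look like in practice, a stored summary record might carry each facet alongside the summary text. This is a hypothetical record shape (`TurnSummary` is not the plugin's documented schema):

```typescript
// Hypothetical shape of a stored turn summary; the plugin's actual
// schema is not documented here.
interface TurnSummary {
  text: string;        // generated summary of recent messages
  decisions: string[]; // extracted decisions
  facts: string[];     // extracted facts
  entities: string[];  // extracted entities
  preferences: string[];
  createdAt: string;   // ISO timestamp, for future context recovery
}

const example: TurnSummary = {
  text: "User set up the Phoenix project and chose LanceDB for storage.",
  decisions: ["Use LanceDB for vector storage"],
  facts: ["Project codename is Phoenix"],
  entities: ["Phoenix", "LanceDB"],
  preferences: [],
  createdAt: new Date().toISOString(),
};
```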
## Installation
```bash
# Install via npm
npm install memory-lancedb-context
# Or clone to your OpenClaw plugins directory
git clone https://github.com/2951461586/memory-lancedb-context.git
cd memory-lancedb-context
npm install
```
## Configuration
Add to your `openclaw.json`:
```json
{
"plugins": {
"slots": {
"memory": "memory-lancedb-pro",
"contextEngine": "memory-lancedb-context"
},
"entries": {
"memory-lancedb-pro": {
"enabled": true,
"config": {
"embedding": {
"apiKey": "your-api-key",
"model": "text-embedding-3-small"
}
}
},
"memory-lancedb-context": {
"enabled": true,
"config": {
"autoCapture": true,
"enableMemoryInjection": true,
"maxMemoriesToInject": 5,
"minInjectionScore": 0.4,
"compactionStrategy": "balanced",
"enableDuplicateDetection": true,
"duplicateThreshold": 0.92,
"enableSmartSummarization": true,
"maxBatchSize": 100
}
}
}
}
}
```
## Configuration Options
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `autoCapture` | boolean | `true` | Auto-capture user messages that match memory triggers |
| `enableMemoryInjection` | boolean | `true` | Inject relevant memories into context during assemble |
| `maxMemoriesToInject` | integer | `5` | Maximum number of memories to inject (1-10) |
| `minInjectionScore` | number | `0.4` | Minimum relevance score for memory injection (0-1) |
| `defaultAgentId` | string | `"main"` | Default agent ID for scope resolution |
| `enableMemoryCompaction` | boolean | `true` | Enable memory-based intelligent compaction |
| `preserveImportantTurns` | boolean | `true` | Store important turns to memory during compaction |
| `maxPreservedTurns` | integer | `20` | Maximum recent turns to analyze (5-50) |
| `compactionStrategy` | string | `"balanced"` | Compaction strategy (aggressive/balanced/conservative/custom) |
| `customCompactionInstructions` | string | `""` | Custom instructions for custom strategy |
| `enableMemoryDecay` | boolean | `false` | Enable memory decay (future feature) |
| `memoryDecayDays` | integer | `30` | Memory decay threshold in days |
| `enableDuplicateDetection` | boolean | `true` | Enable duplicate detection |
| `duplicateThreshold` | number | `0.92` | Similarity threshold for duplicates (0.8-1) |
| `enableSmartSummarization` | boolean | `true` | Enable smart summarization |
| `maxBatchSize` | integer | `100` | Maximum batch size for ingestBatch (10-500) |
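For TypeScript consumers, the table above can be mirrored as a config type populated with its documented defaults. This is an illustrative sketch; `MemoryContextConfig` and `defaultConfig` are hypothetical names, not exports of the plugin:

```typescript
// Hypothetical config type; field names, ranges, and defaults mirror
// the options table above, not the plugin's actual exported types.
interface MemoryContextConfig {
  autoCapture: boolean;
  enableMemoryInjection: boolean;
  maxMemoriesToInject: number; // 1-10
  minInjectionScore: number;   // 0-1
  defaultAgentId: string;
  enableMemoryCompaction: boolean;
  preserveImportantTurns: boolean;
  maxPreservedTurns: number;   // 5-50
  compactionStrategy: "aggressive" | "balanced" | "conservative" | "custom";
  customCompactionInstructions: string;
  enableMemoryDecay: boolean;
  memoryDecayDays: number;
  enableDuplicateDetection: boolean;
  duplicateThreshold: number;  // 0.8-1
  enableSmartSummarization: boolean;
  maxBatchSize: number;        // 10-500
}

const defaultConfig: MemoryContextConfig = {
  autoCapture: true,
  enableMemoryInjection: true,
  maxMemoriesToInject: 5,
  minInjectionScore: 0.4,
  defaultAgentId: "main",
  enableMemoryCompaction: true,
  preserveImportantTurns: true,
  maxPreservedTurns: 20,
  compactionStrategy: "balanced",
  customCompactionInstructions: "",
  enableMemoryDecay: false,
  memoryDecayDays: 30,
  enableDuplicateDetection: true,
  duplicateThreshold: 0.92,
  enableSmartSummarization: true,
  maxBatchSize: 100,
};
```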
## API
### `ingestBatch(params)`
Batch import messages into memory.
```typescript
const result = await contextEngine.ingestBatch({
messages: [
{ role: "user", content: "I prefer American coffee without sugar" },
{ role: "user", content: "Project codename is Phoenix" },
],
scope: "agent:main", // optional
skipDuplicates: true, // default: true
});
// Result:
{
total: 2,
ingested: 2,
skipped: 0,
failed: 0,
details: [
{ text: "I prefer American coffee...", status: "ingested" },
{ text: "Project codename is Phoenix", status: "ingested" },
]
}
```
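With `skipDuplicates` enabled, each incoming message is presumably embedded and compared against stored vectors, and anything scoring above `duplicateThreshold` (default 0.92) is skipped. A minimal sketch of such a check, assuming cosine similarity over embedding vectors (`cosineSimilarity` and `isDuplicate` are hypothetical helpers, not the plugin's internals):

```typescript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// A candidate counts as a duplicate when any stored vector is at least
// `threshold` similar (mirrors the configured duplicateThreshold).
function isDuplicate(
  candidate: number[],
  stored: number[][],
  threshold = 0.92,
): boolean {
  return stored.some((v) => cosineSimilarity(candidate, v) >= threshold);
}
```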
### `bootstrap(params)`
Imports a historical session into memory on startup.
```typescript
const result = await contextEngine.bootstrap({
sessionId: "session-123",
sessionFile: "/path/to/session.json",
});
// Result:
{
bootstrapped: true,
reason: "Imported 15 messages from session, 5 from archives",
data: {
totalMessages: 50,
ingested: 15,
skipped: 35,
failed: 0,
importedArchives: 5
}
}
```
### Compaction Strategies
```jsonc
// Aggressive - maximum compression
{
"compactionStrategy": "aggressive"
}
// Balanced - default
{
"compactionStrategy": "balanced"
}
// Conservative - minimal compression
{
"compactionStrategy": "conservative"
}
// Custom - user-defined
{
"compactionStrategy": "custom",
"customCompactionInstructions": "Focus on technical decisions. Preserve all code snippets."
}
```
## How It Works
```
┌────────────────────────────────────────────────────────────┐
│ User sends message                                         │
└────────────────────────────────────────────────────────────┘
                              ↓
┌────────────────────────────────────────────────────────────┐
│ ingest() → Detect triggers → Auto-store important info     │
└────────────────────────────────────────────────────────────┘
                              ↓
┌────────────────────────────────────────────────────────────┐
│ assemble() → Retrieve memories → Inject into system prompt │
│   <relevant-memories>                                      │
│   [HISTORICAL CONTEXT]                                     │
│   - [preference:agent:violet] User likes Americano...      │
│   - [decision:agent:violet] Project codename Phoenix...    │
│   </relevant-memories>                                     │
└────────────────────────────────────────────────────────────┘
                              ↓
┌────────────────────────────────────────────────────────────┐
│ Model generates response                                   │
└────────────────────────────────────────────────────────────┘
                              ↓
┌────────────────────────────────────────────────────────────┐
│ afterTurn() → Smart summary → Store for future context     │
└────────────────────────────────────────────────────────────┘
                              ↓
┌────────────────────────────────────────────────────────────┐
│ compact() → Strategy-based compression → Memory preserved  │
└────────────────────────────────────────────────────────────┘
```
## Memory Triggers
The plugin automatically captures messages containing:
### English
- "remember", "prefer", "I like/hate/want"
- "we decided", "switch to", "migrate to"
- "important", "always", "never"
- Phone numbers, email addresses
### Chinese
- "่ฎฐไฝ", "่ฎฐไธไธ", "ๅซๅฟไบ"
- "ๅๅฅฝ", "ๅๆฌข", "่ฎจๅ"
- "ๅณๅฎ", "ๆน็จ", "ไปฅๅ็จ"
- "ๆ็...ๆฏ", "้่ฆ", "ๅ
ณ้ฎ"
### Czech
- "zapamatuj si", "preferuji", "radลกi"
- "rozhodli jsme", "budeme pouลพรญvat"
## Comparison with Legacy ContextEngine
| Feature | Legacy | Memory-LanceDB-Context |
|---------|--------|------------------------|
| `ingest` | ❌ no-op | ✅ Auto-capture important messages |
| `ingestBatch` | ❌ N/A | ✅ Batch import with duplicate detection |
| `assemble` | ❌ pass-through | ✅ Retrieve & inject memories |
| `bootstrap` | ❌ no-op | ✅ Historical session import |
| `compact` | ✅ LLM summarization | ✅ Strategy-based + memory preservation |
| `afterTurn` | ❌ no-op | ✅ Smart summarization + storage |
## Dependencies
- [OpenClaw](https://github.com/openclaw/openclaw) >= 2026.3.7
- [memory-lancedb-pro](https://github.com/openclaw/memory-lancedb-pro) - Required for memory storage
## Development
```bash
# Clone the repository
git clone https://github.com/2951461586/memory-lancedb-context.git
cd memory-lancedb-context
# Install dependencies
npm install
# Test
npm test
```
## License
MIT License - see [LICENSE](LICENSE) for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'feat: Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## Acknowledgments
- [OpenClaw](https://github.com/openclaw/openclaw) - The AI agent framework this plugin is designed for
- [LanceDB](https://lancedb.github.io/lancedb/) - Serverless vector database
- [memory-lancedb-pro](https://github.com/openclaw/memory-lancedb-pro) - The memory storage backend