Engram
Local-first memory plugin for OpenClaw AI agents. LLM-powered extraction, plain markdown storage, hybrid search via QMD. Gives agents persistent long-term memory across conversations.
Install
npm install -g @joshuaswarren/openclaw-engram
README
# Engram
**Persistent, private memory for AI agents.** Your agents forget everything between sessions — Engram fixes that.
Engram gives AI agents long-term memory that survives across conversations. Decisions, preferences, project context, personal details, past mistakes — everything your agent learns persists and resurfaces exactly when it's needed. All data stays on your machine as plain markdown files. No cloud services, no subscriptions, no sharing your data with third parties.
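Because memories are stored as plain markdown, you can read, grep, and version-control them directly. As a purely illustrative sketch (the actual file layout and frontmatter fields are assumptions, not Engram's documented schema), a stored memory might look something like:

```markdown
---
type: preference
created: 2025-06-01
entities: [joshua, openclaw-engram]
---
Joshua prefers pnpm over npm for monorepo work.
```

Since it is just a file on disk, deleting a memory is as simple as deleting the file.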
[npm](https://www.npmjs.com/package/@joshuaswarren/openclaw-engram) · [License](LICENSE) · [GitHub Sponsors](https://github.com/sponsors/joshuaswarren)
> **Engram is now a monorepo.** For standalone (non-OpenClaw) use, install the scoped packages:
> [`@engram/core`](https://www.npmjs.com/package/@engram/core),
> [`@engram/server`](https://www.npmjs.com/package/@engram/server),
> [`@engram/cli`](https://www.npmjs.com/package/@engram/cli).
> The `openclaw-engram` and `@joshuaswarren/openclaw-engram` packages remain the OpenClaw plugin entry point.
> Python users: [`engram-hermes`](https://pypi.org/project/engram-hermes/) on PyPI.
## Support Engram
Every bit of support is genuinely appreciated and helps keep this project alive and free for everyone.
If you're able to, [sponsoring on GitHub](https://github.com/sponsors/joshuaswarren) or sending a Lightning donation to `[email protected]` directly funds continued development, new integrations, and keeping Engram open source.
If financial support isn't an option, you can still make a big difference — [star the repo on GitHub](https://github.com/joshuaswarren/openclaw-engram), share it on social media, or recommend it to a friend or colleague. Word of mouth is how most people find Engram, and it means the world.
## The Problem
Every AI agent session starts from zero. Your agent doesn't know your name, your projects, the decisions you've already made, or the bugs you already debugged. Whether it's a personal assistant, a coding agent, a research agent, or a multi-agent team — they all forget everything between conversations. You re-explain the same context over and over, and your agents still make the same mistakes.
OpenClaw's built-in memory works for simple cases, but it doesn't scale. It lacks semantic search, lifecycle management, entity tracking, and governance. Third-party memory services exist, but they cost money and require sending your private data to someone else's servers.
## The Solution
Engram is an open-source, local-first memory system that replaces OpenClaw's default memory with something much more capable — while keeping everything on your machine. It watches your agent conversations, extracts durable knowledge, and injects the right memories back at the start of every session. Use OpenAI or a **local LLM** (Ollama, LM Studio, etc.) for extraction — your choice.
Engram is the **universal memory layer for AI agents**. It works natively with **[OpenClaw](https://github.com/openclaw/openclaw)**, **[Claude Code](https://docs.anthropic.com/en/docs/claude-code)**, **[Codex CLI](https://github.com/openai/codex)**, **[Hermes Agent](https://github.com/hermes-agent/hermes)**, and any **MCP-compatible client** (Replit, Cursor, etc.). When you tell any agent a preference, every agent knows it — they share one memory store.
| Without Engram | With Engram |
|---|---|
| Re-explain who you are and what you're working on | Agent recalls your identity, projects, and preferences automatically |
| Repeat context for every task | Entity knowledge surfaces people, projects, tools, and relationships on demand |
| Lose debugging and research context between sessions | Past root causes, dead ends, and findings are recalled — no repeated work |
| Manually restate preferences every session | Preferences persist across sessions, agents, and projects |
| Context-switching tax when resuming work | Session-start recall brings you back up to speed instantly |
| Default OpenClaw memory doesn't scale | Hybrid search, lifecycle management, namespaces, and governance |
| Third-party memory services cost money and share your data | Everything stays local — your filesystem, your rules |
## Installation
### Option 1: Install from the CLI
```bash
openclaw plugins install @joshuaswarren/openclaw-engram --pin
```
### Option 2: Ask your OpenClaw agent to install it
Tell any OpenClaw agent:
> Install the openclaw-engram plugin and configure it as my memory system.
Your agent will run the install command, update `openclaw.json`, and restart the gateway for you.
### Option 3: Developer install from source
```bash
git clone https://github.com/joshuaswarren/openclaw-engram.git \
~/.openclaw/extensions/openclaw-engram
cd ~/.openclaw/extensions/openclaw-engram
npm ci && npm run build
```
### Option 4: Standalone (no OpenClaw)
**From npm (recommended):**
```bash
npm install -g @engram/cli # Installs the `engram` binary
engram init # Create engram.config.json
export OPENAI_API_KEY=sk-...
export ENGRAM_AUTH_TOKEN=$(openssl rand -hex 32)
engram daemon start # Start background server
engram status # Verify it's running
engram query "hello" --explain # Test query with tier breakdown
```
**From source** (requires [Node.js](https://nodejs.org/) 22.12+ and [pnpm](https://pnpm.io/)):
```bash
git clone https://github.com/joshuaswarren/openclaw-engram.git
cd openclaw-engram
pnpm install && pnpm run build
cd packages/engram-cli && pnpm link --global # Makes `engram` available on PATH
cd ../..
engram init
```
> **Note:** The `engram` binary (`packages/cli/bin/engram.cjs`) is a CJS wrapper that auto-locates `tsx` from `node_modules` (falling back to a global `tsx`). Running `npm link` from `packages/cli/` (not the repo root) makes the CLI globally available — the root package only exposes `engram-access`. Alternatively, invoke directly: `npx tsx packages/cli/src/index.ts <command>`.
The standalone CLI provides 15+ commands for memory management, project onboarding, curation, diff-aware sync, dedup, connectors, spaces, and benchmarks, all without requiring OpenClaw. See the [Platform Migration Guide](docs/guides/platform-migration.md) for the full command reference.
### Option 5: Connect Other AI Agents
Once the Engram daemon is running, connect any supported agent:
```bash
engram connectors install claude-code # Claude Code (hooks + MCP)
engram connectors install codex-cli # Codex CLI (hooks + MCP)
engram connectors install replit # Replit (MCP only)
pip install engram-hermes # Hermes Agent (Python MemoryProvider)
```
Each connector generates a unique auth token, installs the appropriate plugin/hooks, and verifies the connection. All agents share the same memory store — tell one agent your preference, and every agent remembers it.
| Platform | Integration | Auto-recall | Auto-observe |
|----------|------------|-------------|--------------|
| **OpenClaw** | Memory slot plugin | Every session | Every response |
| **Claude Code** | Native hooks + MCP | Every prompt | Every tool use |
| **Codex CLI** | Native hooks + MCP | Every prompt | Every tool use |
| **Hermes** | Python MemoryProvider | Every LLM call | Every turn |
| **Replit** | MCP only | On demand | On demand |
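For MCP-only clients, the connector writes the client's MCP configuration for you. Conceptually, the resulting entry looks something like the sketch below — the `serve --mcp` invocation and the env variable placement are illustrative assumptions, so run `engram connectors install <platform>` rather than writing this by hand:

```jsonc
{
  "mcpServers": {
    "engram": {
      // Hypothetical invocation — the connector installs the real one.
      "command": "engram",
      "args": ["serve", "--mcp"],
      "env": { "ENGRAM_AUTH_TOKEN": "<token generated by the connector>" }
    }
  }
}
```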
### Configure
After installation, add Engram to your `openclaw.json`:
```jsonc
{
"plugins": {
"allow": ["openclaw-engram"],
"slots": { "memory": "openclaw-engram" },
"entries": {
"openclaw-engram": {
"enabled": true,
"config": {
// Option 1: Use OpenAI for extraction:
"openaiApiKey": "${OPENAI_API_KEY}"
// Option 2: Use Engram's local LLM path (plugin mode only; no API key needed):
// "localLlmEnabled": true,
// "localLlmUrl": "http://localhost:1234/v1",
// "localLlmModel": "qwen2.5-32b-instruct"
// Option 3: Use the gateway model chain (primary path in gateway mode):
// "modelSource": "gateway",
// "gatewayAgentId": "engram-llm",
// "fastGatewayAgentId": "engram-llm-fast"
}
}
}
}
}
```
> **Gateway model source:** When `modelSource` is `"gateway"`, Engram routes all LLM calls (extraction, consolidation, reranking) through an OpenClaw agent persona's model chain instead of its own config. Extraction starts on the `gatewayAgentId` chain directly in this mode; `localLlm*` settings do not control primary extraction order. Define agent personas in `openclaw.json → agents.list[]` with a `primary` model and `fallbacks[]` array — Engram tries each in order until one succeeds. This lets you build multi-provider fallback chains like Fireworks → local LLM → cloud OpenAI. See the [Gateway Model Source](docs/config-reference.md#gateway-model-source) guide for full setup.
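Under those assumptions, a persona definition for the gateway chain might look like the following sketch. The `primary` and `fallbacks[]` fields follow the note above; the `id` field name and the model IDs are illustrative placeholders, so check the config reference for the exact `agents.list[]` schema:

```jsonc
{
  "agents": {
    "list": [
      {
        "id": "engram-llm",                                 // matches gatewayAgentId above
        "primary": "fireworks/llama-v3p1-70b-instruct",     // placeholder model ID
        "fallbacks": ["lmstudio/qwen2.5-32b-instruct", "openai/gpt-4o-mini"]
      },
      {
        "id": "engram-llm-fast",                            // matches fastGatewayAgentId above
        "primary": "openai/gpt-4o-mini",
        "fallbacks": []
      }
    ]
  }
}
```

Engram tries `primary` first, then each entry in `fallbacks[]` in order until one succeeds.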
Restart the gateway:
```bash
launchctl kickstart -k gui/$(id -u)/ai.openclaw.gateway # macOS
# or: systemctl restart openclaw-gateway # Linux
```
Start a conversation — Engram begins learning immediately.
> **Note:** This shows only the minimal config. Engram has 60+ configuration options for search backends, capture modes, memory OS features, and more. See the [full config reference](docs/config-reference.md) for every setting.
### Verify installation
```bash
openclaw engram setup --json # Validates config, scaffolds directories
openclaw engram doctor --json # Health diagnostics with remediation hints
openclaw engram config-review --json # Opinionated config tuning recommendations
```
## Using Engram with Codex CLI
Start the Engram HTTP server:
```bash
# Generate a token
export OPENCLAW_ENGR
... (truncated)