
Memory Memu

By murasame-desu-ai

MemU plugin implementation for OpenClaw - Long-term memory management with semantic search, embedding, and categorization

GitHub

Install

npm install && npm run build


README

# openclaw-memory-memu

OpenClaw memory plugin using the [memU](https://github.com/murasame-desu-ai/memU) framework (fork with Anthropic/Gemini multi-provider support).

Provides long-term memory for OpenClaw agents: it auto-captures conversations, recalls relevant context, and lets the agent manage memories through dedicated tools.

## Prerequisites

- **Python 3.13+** (`python3 --version` to check)
- **Node.js 18+** with npm (`node --version`)
- **OpenClaw** installed and running
- **Gemini API key** — free from [Google AI Studio](https://aistudio.google.com/apikey)

## Dependencies

### Python (for memU backend)

You must install the **forked memU** — the original does not support Anthropic/Gemini providers.

```bash
git clone https://github.com/murasame-desu-ai/memU.git
cd memU
pip install -e .

# Verify:
python3 -c "from memu.app import MemoryService; print('OK')"
```

Key Python packages (installed by memU):
- `httpx` — API client for Anthropic/Gemini
- `pendulum` — datetime handling
- `numpy` — vector operations
- `aiosqlite` — async SQLite

### Node.js

- `typescript` (dev only, for building)
- No runtime npm dependencies — the plugin uses OpenClaw's SDK and Node.js built-ins

## Installation

### Option A: Install script (recommended)

```bash
curl -fsSL https://raw.githubusercontent.com/murasame-desu-ai/openclaw-memory-memu/main/install.sh | bash
```

This downloads the latest release tarball and installs to `~/.openclaw/extensions/memory-memu/`.

### Option B: From release tarball

Download a `.tar.gz` from [Releases](https://github.com/murasame-desu-ai/openclaw-memory-memu/releases), then:

```bash
mkdir -p ~/.openclaw/extensions/memory-memu
tar xzf memory-memu-*.tar.gz -C ~/.openclaw/extensions/memory-memu --strip-components=1
cd ~/.openclaw/extensions/memory-memu
npm install && npm run build
```

### Option C: Git clone (development)

```bash
mkdir -p ~/.openclaw/extensions
cd ~/.openclaw/extensions/
git clone https://github.com/murasame-desu-ai/openclaw-memory-memu.git memory-memu
cd memory-memu
npm install
npm run build
```

## Configuration

Open your OpenClaw config file (typically `~/.openclaw/openclaw.json` or run `openclaw config path` to find it).

Add the plugin configuration. **Do not replace your existing config** — merge these sections:

```jsonc
{
  "plugins": {
    // ⚠️ IMPORTANT: This line activates the plugin as the memory backend!
    "slots": {
      "memory": "memory-memu"
    },

    "entries": {
      // ... your existing plugins stay here ...

      "memory-memu": {
        "enabled": true,
        "config": {
          "geminiApiKey": "YOUR_GEMINI_API_KEY_HERE"
          // That's it! Anthropic token is auto-resolved from OpenClaw's own auth.
          // See the Authentication section below for details.
        }
      }
    }
  }
}
```

> **⚠️ Don't forget `plugins.slots.memory`!** Without this line, the plugin will be installed but not used as the memory backend.

**Minimum required config is just `geminiApiKey`.** All other options have sensible defaults. See the [full Config reference](#config) below for advanced options.

### Restart OpenClaw

```bash
openclaw gateway restart
```

Done! The plugin will now:
- **Auto-recall**: Search relevant memories before each agent turn and inject them as context
- **Auto-capture**: Summarize and store important information after each agent turn
- **Periodic cleanup**: Remove old unreinforced memories automatically

### Verify it works

Option A — Check OpenClaw logs for:
```
memory-memu: initialized (Anthropic LLM + Gemini embeddings)
```

Option B — Test the Python wrapper directly:
```bash
cd ~/.openclaw/extensions/memory-memu
ANTHROPIC_TOKEN="your-token" GEMINI_API_KEY="your-key" \
  python3 memu_wrapper.py list
# Should output: {"success": true, "count": 0, "total": 0, "items": []}
```

Option C — Chat with your agent and ask: "What do you remember about me?"
After a few conversations, the agent should start recalling past context automatically.

## Example Usage

Here's what the plugin does behind the scenes:

**Day 1 - Initial conversation:**
```
You: I prefer working late at night, around 2-3 AM.
Agent: Got it! [Plugin auto-captures: "User prefers working late at night, 2-3 AM"]
```

**Day 3 - Agent recalls automatically:**
```
You: Should I start that new project now?
Agent: [Plugin auto-recalls: "User prefers working late at night, 2-3 AM"]
       Given your late-night work preference, you might want to wait until 
       later tonight when you're most productive.
```

**Using tools explicitly:**
```
You: Remember this: my dog's name is Moka, she's a Shiba Inu.
Agent: I'll memorize that for you.
       [Uses memory_memorize tool → stores with context]

You: What do you remember about my pets?
Agent: [Uses memory_list tool]
       I remember Moka, your Shiba Inu!
```

**Image memorization:**
```
You: [Sends a photo of a lakeside sunset]
     Remember this place, it's where I go hiking.
Agent: [Plugin uses Claude Vision to describe the image]
       [Stores: "Lakeside sunset location where user goes hiking"]
       
Later...
You: Where was that hiking spot I showed you?
Agent: [Retrieves: "Lakeside sunset location..."]
       The lakeside with the beautiful sunset view!
```

## How It Works

```
User message → [Auto-Recall] search memories → inject relevant context
                                                    ↓
Agent processes message with memory context → generates response
                                                    ↓
Agent turn ends → [Auto-Capture] summarize conversation → store memory
```

### Auto-Recall (`before_agent_start`)

Before each agent turn, the plugin searches for memories related to the user's prompt and injects them as `<relevant-memories>` context. This gives the agent access to past conversations and facts without manual lookup.
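As a sketch, the injection step could look like this (a hypothetical helper; the `<relevant-memories>` tag name comes from the plugin, while the function name and item shape are illustrative):

```typescript
// Hypothetical rendering helper, not the plugin's actual code.
interface MemoryItem {
  category: string;
  content: string;
}

function renderRecallContext(items: MemoryItem[]): string {
  if (items.length === 0) return ""; // nothing relevant found: inject nothing
  const lines = items.map((m) => `- [${m.category}] ${m.content}`);
  return `<relevant-memories>\n${lines.join("\n")}\n</relevant-memories>`;
}
```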

### Auto-Capture (`agent_end`)

After each successful agent turn, the plugin extracts the current conversation turn (last user + assistant messages, with 2 messages of prior context), summarizes it via LLM, and stores it as a memory item.
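The extraction window described above can be sketched as follows (a hypothetical helper, not the plugin's actual code):

```typescript
// Last user/assistant pair plus up to two prior messages of context,
// per the description above. Names are illustrative.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

function extractTurn(history: ChatMessage[], priorContext = 2): ChatMessage[] {
  // The final two messages form the current turn; keep a little context before it.
  const start = Math.max(0, history.length - 2 - priorContext);
  return history.slice(start);
}
```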

### Periodic Cleanup

On each `agent_end`, the plugin checks if enough time has passed since the last cleanup. If so, it removes old unreinforced memories automatically.
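The time gate amounts to a simple threshold check, along these lines (names are illustrative):

```typescript
// Minimal sketch of the cleanup time gate: run only if at least
// intervalMs has elapsed since the previous cleanup.
function shouldCleanup(lastCleanupMs: number, nowMs: number, intervalMs: number): boolean {
  return nowMs - lastCleanupMs >= intervalMs;
}
```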

## Architecture

```
index.ts (OpenClaw plugin)
    ↓ subprocess
memu_wrapper.py (Python bridge)
    ↓ imports
memU MemoryService (Python library)
    ↓
SQLite database (~/.openclaw/memory/memu.sqlite)
```

The TypeScript plugin communicates with the Python memU library via a subprocess wrapper (`memu_wrapper.py`). Each tool call or lifecycle hook spawns a Python process with the appropriate command and environment variables.
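A minimal sketch of that bridge pattern, assuming the wrapper prints one JSON document per invocation (helper names are illustrative, not the plugin's exact interface):

```typescript
// Hypothetical bridge helpers, not the plugin's actual code.
import { spawnSync } from "node:child_process";

interface WrapperResult {
  success: boolean;
  [key: string]: unknown;
}

// The wrapper's stdout is expected to be a single JSON document.
function parseWrapperOutput(stdout: string): WrapperResult {
  return JSON.parse(stdout) as WrapperResult;
}

function runWrapper(command: string, env: Record<string, string>): WrapperResult {
  const proc = spawnSync("python3", ["memu_wrapper.py", command], {
    env: { ...process.env, ...env }, // e.g. ANTHROPIC_TOKEN, GEMINI_API_KEY
    encoding: "utf8",
  });
  if (proc.status !== 0) {
    return { success: false, error: (proc.stderr ?? "").trim() };
  }
  return parseWrapperOutput(proc.stdout);
}
```

`spawnSync` keeps the sketch short; an async spawn would work just as well for the same pattern.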

## Authentication

### Anthropic Token Resolution

The plugin automatically resolves the Anthropic API token in this order:

1. **OpenClaw auth profiles** (recommended): Reads `~/.openclaw/agents/main/agent/auth-profiles.json` → uses the `lastGood.anthropic` profile's token
2. **Any Anthropic profile**: Falls back to any profile starting with `anthropic:` in auth-profiles.json
3. **Static config**: Uses the `anthropicToken` value from plugin config as final fallback

This means if OpenClaw's built-in authentication is active, **the plugin picks up the token automatically** — no manual configuration needed.
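The resolution order can be sketched as follows, assuming a simplified auth-profiles.json shape (the `lastGood`/`profiles` field names here are an assumption, not the file's exact schema):

```typescript
// Simplified, assumed shape of auth-profiles.json for illustration.
interface AuthProfiles {
  lastGood?: Record<string, { token?: string }>;
  profiles?: Record<string, { token?: string }>;
}

function resolveAnthropicToken(auth: AuthProfiles, configToken?: string): string | undefined {
  // 1. The last-known-good Anthropic profile
  const lastGood = auth.lastGood?.anthropic?.token;
  if (lastGood) return lastGood;
  // 2. Any profile keyed "anthropic:*"
  for (const [name, profile] of Object.entries(auth.profiles ?? {})) {
    if (name.startsWith("anthropic:") && profile.token) return profile.token;
  }
  // 3. Static config fallback
  return configToken;
}
```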

### Gemini API Key

The `geminiApiKey` must be set explicitly in the plugin config. Get one from [Google AI Studio](https://aistudio.google.com/apikey).

## Tools

| Tool | Description |
|------|-------------|
| `memory_memorize` | Ingest a resource (file/URL/image) through the full memU pipeline: ingest → extract → embed → store |
| `memory_list` | List recent memories sorted by creation date (newest first) |
| `memory_delete` | Delete a specific memory by UUID |
| `memory_categories` | List all memory categories with descriptions and summaries |
| `memory_cleanup` | Remove old unreinforced memories older than N days |

## Memory Categories

The plugin creates 4 default categories:

| Category | Description |
|----------|-------------|
| User Profile | User information and identity |
| Preferences | User preferences and settings |
| Facts | Important facts and knowledge |
| Events | Notable events and occurrences |

Category summaries are generated automatically by memU's LLM as memories accumulate in each category.

## Config

```jsonc
// openclaw.json → plugins.entries.memory-memu.config
{
  // --- Authentication ---
  "anthropicToken": "sk-ant-...",   // Auto-resolved from OpenClaw auth if omitted
  "geminiApiKey": "AIza...",        // Required: Gemini API key for embeddings

  // --- Feature Toggles ---
  "autoCapture": true,              // Auto-capture conversations (default: true)
  "autoRecall": true,               // Auto-inject relevant memories (default: true)

  // --- LLM Provider ---
  "llmProvider": "anthropic",       // "anthropic" | "openai" | "gemini" (default: "anthropic")
  "llmBaseUrl": "",                 // Custom API base URL (uses provider default if empty)
  "llmModel": "",                   // Chat model (default: claude-haiku-4-5 for anthropic)

  // --- Embedding Provider ---
  "embedProvider": "gemini",        // "gemini" | "openai" (default: auto based on llmProvider)
  "embedBaseUrl": "",               // Custom embedding API URL
  "embedModel": "",                 // Embedding model (default: gemini-embedding-001)

  // --- Retrieval Settings ---
  "routeIntention": true,           // LLM judges if retrieval is needed & rewrites query (default: true)
  "sufficiencyCheck": true,         // LLM checks if results are sufficient (default: true)
  "recallTopK": 3,                  // Number of memories to retrieve per recall (default: 3)
  "rankingStrategy": "salience",    // "similarity" | "salience" (default: "salience")
  "recencyDecayDays": 30,           // Half-life for recency scoring in salience ranking (default: 30)

  // --- Capture Settings ---
  "captureDetail": "medium",        // "low" | "medium" | "high" — how agg…

  // … (remainder of the config reference truncated)
}
```
