Omem
Shared Memory That Never Forgets – persistent memory for AI agents with Space-based sharing across agents and teams. Plugins for OpenCode, Claude Code, OpenClaw, MCP Server.
Install
```bash
openclaw plugins install @ourmem/openclaw
```
Configuration Example
```json
{
  "mcpServers": {
    "ourmem": {
      "command": "npx",
      "args": ["@ourmem/mcp"],
      "env": {
        "OMEM_API_URL": "https://api.ourmem.ai",
        "OMEM_API_KEY": "your-api-key"
      }
    }
  }
}
```
README
<p align="center">
<strong>ourmem</strong><br/>
Shared Memory That Never Forgets
</p>
<p align="center">
<a href="https://github.com/ourmem/omem/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-Apache--2.0-blue.svg" alt="License"></a>
<a href="https://ourmem.ai"><img src="https://img.shields.io/badge/hosted-api.ourmem.ai-green.svg" alt="Hosted"></a>
<a href="https://github.com/ourmem/omem"><img src="https://img.shields.io/github/stars/ourmem/omem?style=social" alt="Stars"></a>
</p>
<p align="center">
<strong>English</strong> | <a href="README_CN.md">简体中文</a>
</p>
---
## The Problem
Your AI agents have amnesia – and they work alone.
- **Amnesia** – every session starts from zero. Preferences, decisions, context – all gone.
- **Silos** – your Coder agent can't access what your Writer agent learned.
- **Local lock-in** – memory tied to one machine. Switch devices, lose everything.
- **No sharing** – team agents can't share what they know. Every agent re-discovers the same things.
- **Dumb recall** – keyword match only. No semantic understanding, no relevance ranking.
- **No collective intelligence** – even when agents work on the same team, there's no shared knowledge layer.
**ourmem fixes all of this.**
## What is ourmem
ourmem gives AI agents shared persistent memory – across sessions, devices, agents, and teams. One API key reconnects everything.
**Website:** [ourmem.ai](https://ourmem.ai)
<table>
<tr>
<td width="50%" valign="top">
### I use AI coding tools
Install the plugin for your platform. Memory works automatically – your agent recalls past context on session start and captures key info on session end.
**→ Jump to [Quick Start](#quick-start)**
</td>
<td width="50%" valign="top">
### I'm building AI products
REST API with 35 endpoints. Docker one-liner for self-deploy. Embed persistent memory into your own agents and workflows.
**→ Jump to [Self-Deploy](#self-deploy)**
</td>
</tr>
</table>
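For the product-builder path, everything needed to start is two of the documented REST calls (they appear verbatim in the Verify step below): `POST /v1/memories` and `GET /v1/memories/search`, both authenticated with an `X-API-Key` header. A minimal TypeScript wrapper sketched from just those two endpoints; the rest of the 35-endpoint API is not covered here:

```typescript
// Minimal ourmem client over the two endpoints documented in this README.
// Requires Node 18+ (global fetch).
class OmemClient {
  constructor(private apiUrl: string, private apiKey: string) {}

  async store(content: string, tags: string[] = []): Promise<unknown> {
    const res = await fetch(`${this.apiUrl}/v1/memories`, {
      method: "POST",
      headers: { "X-API-Key": this.apiKey, "Content-Type": "application/json" },
      body: JSON.stringify({ content, tags }),
    });
    return res.json();
  }

  async search(query: string): Promise<unknown> {
    const url = `${this.apiUrl}/v1/memories/search?q=${encodeURIComponent(query)}`;
    const res = await fetch(url, { headers: { "X-API-Key": this.apiKey } });
    return res.json();
  }
}

// Usage (hosted endpoint from this README):
// const client = new OmemClient("https://api.ourmem.ai", process.env.OMEM_API_KEY!);
// await client.store("I prefer dark mode", ["preference"]);
// await client.search("dark mode");
```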
## Core Capabilities
<table>
<tr>
<td width="25%" align="center">
<h4>Shared Across Boundaries</h4>
Three-tier Spaces – Personal, Team, Organization – let knowledge flow across agents and teams with full provenance tracking.
</td>
<td width="25%" align="center">
<h4>Never Forget</h4>
Weibull decay model manages the memory lifecycle – core memories persist, peripheral ones gracefully fade. No manual cleanup.
</td>
<td width="25%" align="center">
<h4>Deep Understanding</h4>
11-stage hybrid retrieval: vector search, BM25, RRF fusion, cross-encoder reranking, and MMR diversity for precise recall.
</td>
<td width="25%" align="center">
<h4>Smart Evolution</h4>
7-decision reconciliation – CREATE, MERGE, SUPERSEDE, SUPPORT, CONTEXTUALIZE, CONTRADICT, or SKIP – makes memories smarter over time.
</td>
</tr>
</table>
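The retrieval pipeline above fuses vector and BM25 rankings with Reciprocal Rank Fusion (RRF). The standard RRF formula scores each document as score(d) = Σᵢ 1/(k + rankᵢ(d)) over all input rankings. A sketch of that formula follows; the constant k = 60 is the common default from the literature, not necessarily the value ourmem uses:

```typescript
// Reciprocal Rank Fusion: score(d) = sum over rankings of 1 / (k + rank(d)).
// k = 60 is the widely used default; ourmem's actual constant is not documented here.
function rrfFuse(rankings: string[][], k = 60): [string, number][] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((docId, index) => {
      const rank = index + 1; // ranks are 1-based
      scores.set(docId, (scores.get(docId) ?? 0) + 1 / (k + rank));
    });
  }
  // Highest fused score first.
  return [...scores.entries()].sort((a, b) => b[1] - a[1]);
}

// A document ranked well by both retrievers beats one ranked well by only one.
const vectorRanking = ["m2", "m1", "m3"];
const bm25Ranking = ["m2", "m4", "m1"];
console.log(rrfFuse([vectorRanking, bm25Ranking]).map(([id]) => id));
```

Note how `m1`, ranked second and third, still outscores `m4`, which only one retriever surfaced; this is why RRF is a popular fusion step before reranking.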
## Feature Overview
| Category | Feature | Details |
|----------|---------|---------|
| **Platforms** | 4 platforms | OpenCode, Claude Code, OpenClaw, MCP Server |
| **Sharing** | Space-based sharing | Personal / Team / Organization with provenance |
| | Provenance tracking | Every shared memory carries full lineage |
| | Quality-gated auto-sharing | Rules filter by importance, category, tags |
| | Cross-space search | Search across all accessible spaces at once |
| **Ingestion** | Smart dedup | 7 decisions: CREATE, MERGE, SKIP, SUPERSEDE, SUPPORT, CONTEXTUALIZE, CONTRADICT |
| | Noise filter | Regex + vector prototypes + feedback learning |
| | Admission control | 5-dimension scoring gate (utility, confidence, novelty, recency, type prior) |
| | Dual-stream write | Sync fast path (<50ms) + async LLM extraction |
| | Privacy protection | `<private>` tag redaction before storage |
| **Retrieval** | 11-stage pipeline | Vector + BM25 → RRF → reranker → decay → importance → MMR diversity |
| | User Profile | Static facts + dynamic context, <100ms |
| | Retrieval trace | Per-stage explainability (input/output/score/duration) |
| **Lifecycle** | Weibull decay | Tier-specific β (Core=0.8, Working=1.0, Peripheral=1.3) |
| | Three-tier promotion | Peripheral → Working → Core with access-based promotion |
| | Auto-forgetting | TTL detection for time-sensitive info ("tomorrow", "next week") |
| **Multi-modal** | File processing | PDF, image OCR, video transcription, code AST chunking |
| | GitHub connector | Real-time webhook sync for code, issues, PRs |
| **Deploy** | Open source | Apache-2.0 (plugins + docs) |
| | Self-hostable | Single binary, Docker one-liner, ~$5/month |
| | musl static build | Zero-dependency binary for any Linux x86_64 |
| | Hosted option | api.ourmem.ai – nothing to deploy |
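The Weibull decay row above gives tier-specific shape parameters β. As an illustration of how such a curve behaves, here is the standard Weibull retention function R(t) = exp(−(t/λ)^β); the characteristic life λ = 30 days is an invented value, and ourmem's exact formula may differ:

```typescript
// Weibull retention: R(t) = exp(-(t / lambda)^beta).
// The betas come from the table above; lambda (characteristic life, in days)
// is a made-up value for illustration only.
const BETA = { core: 0.8, working: 1.0, peripheral: 1.3 } as const;

function retention(days: number, tier: keyof typeof BETA, lambda = 30): number {
  return Math.exp(-Math.pow(days / lambda, BETA[tier]));
}

// beta < 1: decay slows over time, so Core memories persist;
// beta > 1: decay accelerates, so Peripheral memories fade fastest.
console.log(retention(60, "core"), retention(60, "peripheral"));
```

At t = 60 days under these assumed parameters, a Core memory retains noticeably more strength than a Peripheral one, which is the qualitative behavior the table describes.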
## From Isolated Agents to Collective Intelligence
Most AI memory systems trap knowledge in silos. ourmem's three-tier Space architecture enables knowledge flow across agents and teams – with provenance tracking and quality-gated sharing.
> *Research shows collaborative memory reduces redundant work by up to 61% – agents stop re-discovering what their teammates already know.*
> – Collaborative Memory, ICLR 2026
| | Personal | Team | Organization |
|---|----------|------|--------------|
| **Scope** | One user, multiple agents | Multiple users | Company-wide |
| **Example** | Coder + Writer share preferences | Backend team shares arch decisions | Tech standards, security policies |
| **Access** | Owner's agents only | Team members | All org members (read-only) |
**Provenance-tracked sharing** – every shared memory carries its lineage: who shared it, when, and where it came from.
**Quality-gated auto-sharing** – rules filter by importance, category, and tags. Only high-value insights cross space boundaries.
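As a concrete illustration of what a quality-gated sharing rule could look like, consider the JSON below. This README does not document the actual rule schema, so every field name here is a hypothetical assumption, not ourmem's real format:

```json
{
  "space": "team",
  "share_if": {
    "min_importance": 0.7,
    "categories": ["decision", "architecture"],
    "tags_any": ["backend"]
  }
}
```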
## How It Works
```
Your AI Agent (OpenCode / Claude Code / OpenClaw / Cursor)
        ↓ auto-recall + auto-capture
ourmem Plugin (thin HTTP client)
        ↓ REST API (X-API-Key auth)
ourmem Server
        ↓
   ├── Smart Ingest ─── LLM extraction → noise filter → admission → 7-decision reconciliation
   ├── Hybrid Search ── vector + BM25 → RRF fusion → reranker → decay boost → MMR (11 stages)
   ├── User Profile ─── static facts + dynamic context, <100ms
   ├── Space Sharing ── Personal / Team / Organization with provenance tracking
   └── Lifecycle ────── Weibull decay, 3-tier promotion (Core/Working/Peripheral), auto-forgetting
```
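The admission step in the ingest path above gates candidates on the five dimensions listed in the Feature Overview (utility, confidence, novelty, recency, type prior). A weighted-sum sketch of such a gate; the weights and threshold below are illustrative assumptions, not ourmem's real values:

```typescript
// 5-dimension admission gate; weights and threshold are invented for illustration.
interface Candidate {
  utility: number;    // how useful this memory is likely to be
  confidence: number; // how sure the extractor is
  novelty: number;    // how different it is from what is already stored
  recency: number;    // how fresh the information is
  typePrior: number;  // prior for this memory category
}

const WEIGHTS: Candidate = {
  utility: 0.3, confidence: 0.25, novelty: 0.2, recency: 0.15, typePrior: 0.1,
};

function admissionScore(c: Candidate): number {
  return (Object.keys(WEIGHTS) as (keyof Candidate)[])
    .reduce((sum, dim) => sum + WEIGHTS[dim] * c[dim], 0);
}

function admit(c: Candidate, threshold = 0.5): boolean {
  return admissionScore(c) >= threshold;
}
```

The point of a multi-dimension gate is that no single signal decides admission: a highly novel but low-confidence extraction can still be rejected, and vice versa.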
## Quick Start
### Agent Install (recommended)
One message to your AI agent. It handles everything – API key, plugin install, config, verification.
**Hosted (api.ourmem.ai – nothing to deploy):**
| Platform | Copy this to your agent |
|----------|------------------------|
| **OpenClaw** | `Read https://ourmem.ai/SKILL.md and follow the instructions to install and configure ourmem for OpenClaw` |
| **Claude Code** | `Read https://ourmem.ai/SKILL.md and follow the instructions to install and configure ourmem for Claude Code` |
| **OpenCode** | `Read https://ourmem.ai/SKILL.md and follow the instructions to install and configure ourmem for OpenCode` |
| **Cursor / VS Code** | `Read https://ourmem.ai/SKILL.md and follow the instructions to install and configure ourmem as MCP Server` |
**Self-hosted (your own server):**
| Platform | How to install |
|----------|---------------|
| **OpenClaw** | Run `openclaw skills install ourmem`, then tell your agent: `setup ourmem in self-hosted mode` |
| **Claude Code** | `Read https://raw.githubusercontent.com/ourmem/omem/main/skills/ourmem/SKILL.md and install ourmem for Claude Code, self-hosted mode` |
| **OpenCode** | `Read https://raw.githubusercontent.com/ourmem/omem/main/skills/ourmem/SKILL.md and install ourmem for OpenCode, self-hosted mode` |
That's it. Your agent handles the rest.
---
<details>
<summary><b>Manual Install</b> (without agent assistance)</summary>
### 1. Get an API Key
**Hosted:**
```bash
curl -sX POST https://api.ourmem.ai/v1/tenants \
-H "Content-Type: application/json" \
-d '{"name": "my-workspace"}' | jq .
# → {"id": "xxx", "api_key": "xxx", "status": "active"}
```
**Self-deploy:**
```bash
docker run -d -p 8080:8080 -e OMEM_EMBED_PROVIDER=bedrock ourmem:latest
curl -sX POST http://localhost:8080/v1/tenants \
-H "Content-Type: application/json" \
-d '{"name": "my-workspace"}' | jq .
```
Save the returned `api_key` – this reconnects you to the same memory from any machine.
### 2. Install Plugin
**OpenCode:** Add `"plugin": ["@ourmem/opencode"]` to `opencode.json` + set `OMEM_API_URL` and `OMEM_API_KEY` env vars.
**Claude Code:** `/plugin marketplace add ourmem/omem` + set env vars in `~/.claude/settings.json`.
**OpenClaw:** `openclaw plugins install @ourmem/openclaw` + configure `openclaw.json` with apiUrl and apiKey.
**MCP (Cursor / VS Code / Claude Desktop):**
```json
{
  "mcpServers": {
    "ourmem": {
      "command": "npx",
      "args": ["@ourmem/mcp"],
      "env": {
        "OMEM_API_URL": "https://api.ourmem.ai",
        "OMEM_API_KEY": "your-api-key"
      }
    }
  }
}
```
### 3. Verify
```bash
curl -sX POST "$OMEM_API_URL/v1/memories" \
-H "X-API-Key: $OMEM_API_KEY" -H "Content-Type: application/json" \
-d '{"content": "I prefer dark mode", "tags": ["preference"]}'
curl -s "$OMEM_API_URL/v1/memories/search?q=dark+mode" -H "X-API-Key: $OMEM_API_KEY"
```
</details>
## What Your Agent Gets
| Tool | Purpose |
|------|---------|
| `memory_store` | Save facts, decisions, preferences |
| `memory_search` | Semantic + keyword hybrid search |
| `memory_get` | Retrieve by ID |
| `memory_update` | Modify existing memory |
| `memory_delete` | Remove a memory |
| Hook | Trigger | Effect |
|------|---------|--------|
| SessionStart | New session | Recent memories auto-injected into context |
| SessionEnd | Session ends | Key information auto-captured |
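Conceptually, the two hooks in the table above reduce to the documented REST operations: SessionStart is a search whose results get prepended to the agent's context, and SessionEnd is a store over the finished transcript. The hook signatures below are invented for illustration; each platform's real plugin API differs:

```typescript
// Hypothetical hook shapes; the real plugin APIs differ per platform.
interface AgentSession {
  contextPrefix: string[]; // text injected ahead of the conversation
  transcript: string[];    // what happened during the session
}

// SessionStart: auto-recall relevant memories into context.
async function onSessionStart(
  session: AgentSession,
  recall: (query: string) => Promise<string[]>,
): Promise<void> {
  const memories = await recall("recent project context");
  session.contextPrefix.push(...memories);
}

// SessionEnd: auto-capture key information (a naive "DECISION:" filter
// stands in for the server's LLM extraction here).
async function onSessionEnd(
  session: AgentSession,
  store: (content: string) => Promise<void>,
): Promise<void> {
  for (const line of session.transcript) {
    if (line.startsWith("DECISION:")) await store(line);
  }
}
```

In the real plugins the capture side is far smarter (the server's extraction, noise filter, and admission pipeline do the work); the sketch only shows where the hooks sit.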
## Memory Space
Browse, search, and manage your agent's memories visually at **[ourmem.ai/space](https://ourmem.ai/space)** – see how memories connect, evolve, and decay over time.
## Security & Privacy
| | |
|---|---|
|
... (truncated)