<p align="center">
<img src="docs/logo.png" alt="mem7" width="120">
</p>
<h1 align="center">mem7</h1>
<p align="center">LLM-powered long-term memory engine: Rust core with multi-language bindings.</p>
Deeply inspired by [Mem0](https://mem0.ai/), mem7 reimplements the core memory pipeline in Rust and adds an **Ebbinghaus forgetting curve**: stale memories naturally decay over time, while frequently recalled facts grow stronger, just like human memory.
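The decay mechanic follows the classic Ebbinghaus retention formula, R = e^(−t/S). The exact scoring function mem7 uses is not shown in this README, so the sketch below only illustrates the idea, with hypothetical parameter names:

```python
import math

def retention(elapsed_hours: float, strength: float) -> float:
    """Ebbinghaus retention: R = exp(-t / S).

    `strength` (S) is a hypothetical stability score that would grow
    each time a memory is recalled; mem7's actual fields may differ.
    """
    return math.exp(-elapsed_hours / strength)

# A rarely recalled memory fades faster than a frequently recalled one.
weak = retention(elapsed_hours=48, strength=12.0)    # low strength
strong = retention(elapsed_hours=48, strength=120.0) # boosted by recalls
assert weak < strong
```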
mem7 extracts factual statements from conversations, deduplicates them against existing memories, and stores the results in vector + graph databases with full audit history.
## Install
```bash
pip install mem7 # Python
npm install @mem7ai/mem7 # Node.js / TypeScript
cargo add mem7 # Rust
```
## Architecture
```
Python / TypeScript / Rust API
        │  PyO3 (sync + async) / napi-rs / native
        ▼
Rust Core (tokio async runtime)
 ├── mem7-llm        – OpenAI-compatible LLM client
 ├── mem7-embedding  – Embedding client (OpenAI-compatible / FastEmbed)
 ├── mem7-vector     – Vector index (FlatIndex / Upstash)
 ├── mem7-graph      – Graph store (FlatGraph / Kuzu / Neo4j)
 ├── mem7-history    – SQLite audit trail
 ├── mem7-dedup      – LLM-driven memory deduplication
 ├── mem7-reranker   – Search reranking (Cohere / LLM-based)
 ├── mem7-telemetry  – OpenTelemetry tracing (OTLP export)
 └── mem7-store      – Pipeline orchestrator (MemoryEngine)
```
## Quick Start (Python – Sync)
```python
from mem7 import Memory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig
config = MemoryConfig(
    llm=LlmConfig(
        base_url="http://localhost:11434/v1",
        api_key="ollama",
        model="qwen2.5:7b",
    ),
    embedding=EmbeddingConfig(
        base_url="http://localhost:11434/v1",
        api_key="ollama",
        model="mxbai-embed-large",
        dims=1024,
    ),
)

m = Memory(config=config)
m.add("I love playing tennis and my coach is Sarah.", user_id="alice")
results = m.search("What sports does Alice play?", user_id="alice")
```
## Quick Start (Python – Async)
```python
import asyncio
from mem7 import AsyncMemory
from mem7.config import MemoryConfig, LlmConfig, EmbeddingConfig
async def main():
    config = MemoryConfig(
        llm=LlmConfig(
            base_url="http://localhost:11434/v1",
            api_key="ollama",
            model="qwen2.5:7b",
        ),
        embedding=EmbeddingConfig(
            base_url="http://localhost:11434/v1",
            api_key="ollama",
            model="mxbai-embed-large",
            dims=1024,
        ),
    )

    m = await AsyncMemory.create(config=config)
    await m.add("I love playing tennis and my coach is Sarah.", user_id="alice")
    results = await m.search("What sports does Alice play?", user_id="alice")

asyncio.run(main())
```
## Quick Start (TypeScript)
```typescript
import { MemoryEngine } from "@mem7ai/mem7";
const engine = await MemoryEngine.create(JSON.stringify({
llm: { base_url: "http://localhost:11434/v1", api_key: "ollama", model: "qwen2.5:7b" },
embedding: { base_url: "http://localhost:11434/v1", api_key: "ollama", model: "mxbai-embed-large", dims: 1024 },
}));
await engine.add([{ role: "user", content: "I love playing tennis and my coach is Sarah." }], "alice");
const results = await engine.search("What sports does Alice play?", "alice");
```
## Supported Providers
mem7 uses a single **OpenAI-compatible client** for both LLM and Embedding, which covers any service that exposes the OpenAI API format. This includes most major providers out of the box.
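Because the client only needs a base URL, switching providers is a configuration change. The sketch below shows two common cases; the endpoint URLs are the providers' documented OpenAI-compatible endpoints, and the model names are illustrative:

```python
from mem7.config import LlmConfig

# Groq's hosted OpenAI-compatible endpoint (model name illustrative)
groq = LlmConfig(
    base_url="https://api.groq.com/openai/v1",
    api_key="your-groq-key",
    model="llama-3.1-8b-instant",
)

# A local vLLM server started with `vllm serve <model>` (default port 8000)
local_vllm = LlmConfig(
    base_url="http://localhost:8000/v1",
    api_key="unused",
    model="Qwen/Qwen2.5-7B-Instruct",
)
```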
### LLMs
| Provider | Status | Notes |
| ------------ | ------------------ | ------------------------- |
| OpenAI | :white_check_mark: | Native support |
| Ollama | :white_check_mark: | Via OpenAI-compatible API |
| vLLM | :white_check_mark: | Via OpenAI-compatible API |
| Groq | :white_check_mark: | Via OpenAI-compatible API |
| Together | :white_check_mark: | Via OpenAI-compatible API |
| DeepSeek | :white_check_mark: | Via OpenAI-compatible API |
| xAI (Grok) | :white_check_mark: | Via OpenAI-compatible API |
| LM Studio | :white_check_mark: | Via OpenAI-compatible API |
| Azure OpenAI | :white_check_mark: | Via OpenAI-compatible API |
| Anthropic | :x: | Requires native SDK |
| Gemini | :x: | Requires native SDK |
| Vertex AI | :x: | Requires native SDK |
| AWS Bedrock | :x: | Requires native SDK |
| LiteLLM | :x: | Python proxy |
| Sarvam | :x: | Requires native SDK |
| LangChain | :x: | Python framework |
### Embeddings
| Provider | Status | Notes |
| ------------ | ------------------ | ----------------------------------------------- |
| OpenAI | :white_check_mark: | Native support |
| Ollama | :white_check_mark: | Via OpenAI-compatible API |
| Together | :white_check_mark: | Via OpenAI-compatible API |
| LM Studio | :white_check_mark: | Via OpenAI-compatible API |
| Azure OpenAI | :white_check_mark: | Via OpenAI-compatible API |
| FastEmbed | :white_check_mark: | Local ONNX inference (feature flag `fastembed`) |
| Hugging Face | :x: | Requires native SDK |
| Gemini | :x: | Requires native SDK |
| Vertex AI | :x: | Requires native SDK |
| AWS Bedrock | :x: | Requires native SDK |
| LangChain | :x: | Python framework |
### Vector Stores
| Provider | Status | Notes |
| ----------------------- | ------------------ | ---------------------- |
| In-memory (FlatIndex) | :white_check_mark: | Built-in, good for dev |
| Upstash Vector | :white_check_mark: | REST API, serverless |
| Qdrant | :x: | |
| Chroma | :x: | |
| pgvector | :x: | |
| Milvus | :x: | |
| Pinecone | :x: | |
| Redis | :x: | |
| Weaviate | :x: | |
| Elasticsearch | :x: | |
| OpenSearch | :x: | |
| FAISS | :x: | |
| MongoDB | :x: | |
| Supabase | :x: | |
| Azure AI Search | :x: | |
| Vertex AI Vector Search | :x: | |
| Databricks | :x: | |
| Cassandra | :x: | |
| S3 Vectors | :x: | |
| Baidu | :x: | |
| Neptune | :x: | |
| Valkey | :x: | |
| LangChain | :x: | |
### Rerankers
| Provider | Status | Notes |
| ------------- | ------------------ | ------------------------- |
| Cohere | :white_check_mark: | Cohere v2 rerank API |
| LLM-based | :white_check_mark: | Any OpenAI-compatible LLM |
| Jina AI | :x: | Planned |
| Cross-encoder | :x: | Planned |
### Graph Stores
| Provider | Status | Notes |
| --------------------- | ------------------ | ---------------------------------------------------- |
| In-memory (FlatGraph) | :white_check_mark: | Built-in, good for dev/testing |
| Kuzu (embedded) | :white_check_mark: | Cypher-based, no server needed (feature flag `kuzu`) |
| Neo4j | :white_check_mark: | Production-grade, Bolt protocol |
| Memgraph | :x: | Planned |
| Amazon Neptune | :x: | Planned |
### Language Bindings
| Language | Status |
| --------------------- | -------------------------------------------------- |
| Python (sync + async) | :white_check_mark: PyPI: `pip install mem7` |
| TypeScript / Node.js | :white_check_mark: npm: `npm install @mem7ai/mem7` |
| Rust | :white_check_mark: crates.io: `cargo add mem7` |
| Go | Planned |
## Vector Store Backends
**Built-in FlatIndex** (default) – in-memory brute-force search, good for development:
```python
from mem7.config import VectorConfig
VectorConfig(provider="flat", dims=1024)
```
**Upstash Vector** – managed cloud vector database:
```python
VectorConfig(
    provider="upstash",
    collection_name="my-namespace",
    dims=1024,
    upstash_url="https://your-index.upstash.io",
    upstash_token="your-token",
)
```
## Local Embedding (FastEmbed)
mem7 supports fully local embedding via FastEmbed, which runs ONNX models in-process so no external embedding API is required.
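For Rust users, the crate metadata shows FastEmbed being enabled through a Cargo feature flag:

```toml
# Cargo.toml
[dependencies]
mem7 = { version = "0.2", features = ["fastembed"] }
```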
... (truncated)