
Atlas Memory Binary

By dddabtc

Binary-installable Atlas Memory + integrated OpenClaw plugin


# atlas-memory-binary

[![CI](https://github.com/dddabtc/atlas-memory-binary/actions/workflows/postgres-v4-ci.yml/badge.svg?branch=main)](https://github.com/dddabtc/atlas-memory-binary/actions/workflows/postgres-v4-ci.yml)
[![Release](https://img.shields.io/github/v/release/dddabtc/atlas-memory-binary?display_name=tag)](https://github.com/dddabtc/atlas-memory-binary/releases)

> Chinese documentation: **[README.zh-CN.md](README.zh-CN.md)**

## TL;DR

**Atlas Memory** is a **self-hosted memory service** for AI agents, with both HTTP APIs and **MCP** tools. This repo packages Atlas Memory as a deployable binary/service with install/uninstall guidance, OpenClaw plugin examples, and benchmark references.

Keywords: **Atlas Memory, memory service, MCP, OpenClaw plugin, self-hosted**.

## 1. What this repository provides

`atlas-memory-binary` helps teams run Atlas Memory in production and avoid fragile manual setup.

It addresses common agent-memory failure modes:
- user facts and preferences lost across sessions
- unstable recall quality
- deployment and cleanup that are error-prone without standardized procedures

This repository includes:
- Atlas Memory Python package (`atlas-mem`)
- HTTP API server for write/search/read/health/stats
- MCP server for agent-tool integration over stdio
- install script + service setup script
- OpenClaw memory plugin integration example

## 2. Use cases

- Persistent user memory for AI assistants and copilots
- Searchable memory layer for multi-session workflows
- Self-hosted memory backend for privacy-sensitive deployments
- MCP-enabled memory tools inside OpenClaw / MCP clients
- Operational deployments that require predictable install and clean uninstall

## 3. API overview

The default base URL is `http://127.0.0.1:6420`.

Core endpoints:

| Method | Path | Purpose |
|---|---|---|
| `POST` | `/memories` | Write/create a memory record (`content`, optional `title`/`labels`). |
| `POST` | `/memories/search` | Semantic search for relevant memories by query text. |
| `GET` | `/memories/{memory_id}` | Read a single memory by ID. |
| `GET` | `/health` | Health check (service status, version, embedding status). |
| `GET` | `/memories/stats` | Memory statistics (counts, embedding coverage, thread-linked info). |

Useful additional endpoints:
- `GET /memories` (list recent memories)
- `GET /health/api-stats` (API monitor stats)
- `POST /memories/reindex` (rebuild embeddings)
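
As a hedged illustration, the core write/search endpoints above can be called with nothing but the Python standard library. Field names (`content`, `title`, `labels`, `query`, `top_k`) follow the table; the exact response shape is an assumption here, so check [docs/API.md](docs/API.md):

```python
"""Minimal client sketch for the core write/search endpoints (stdlib only)."""
import json
import urllib.request

BASE_URL = "http://127.0.0.1:6420"


def build_memory_payload(content, title=None, labels=None):
    """Assemble the JSON body for POST /memories (title/labels optional)."""
    payload = {"content": content}
    if title is not None:
        payload["title"] = title
    if labels is not None:
        payload["labels"] = labels
    return payload


def _post(path, payload):
    """POST a JSON payload and decode the JSON response."""
    req = urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))


def store_memory(content, title=None, labels=None):
    """Write one memory record via POST /memories."""
    return _post("/memories", build_memory_payload(content, title, labels))


def search_memories(query, top_k=10):
    """Semantic search via POST /memories/search."""
    return _post("/memories/search", {"query": query, "top_k": top_k})
```
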

## API Documentation

- English: **[docs/API.md](docs/API.md)**
- Chinese: **[docs/API.zh-CN.md](docs/API.zh-CN.md)**

## Configuration Documentation

- English: **[docs/CONFIG.md](docs/CONFIG.md)**
- Chinese: **[docs/CONFIG.zh-CN.md](docs/CONFIG.zh-CN.md)**

## CI coverage scope (API + OpenClaw integration)

The `postgres-v4-ci` CI workflow now includes contract and integration coverage for the service API and the OpenClaw-compatible HTTP chain.

Covered API routes:
- Health & status: `GET /health`, `GET /models/status`, `GET /health/api-stats`
- Config: `GET /config`, `PATCH /config`, `POST /restart`, `POST /export/openclaw`
- Memories: `POST /memories`, `GET /memories`, `GET /memories/stats`, `POST /memories/reindex`, `GET/PATCH/DELETE /memories/{id}`, `POST /memories/search`, `POST /memories/distill`
- Threads: `POST /threads`, `GET /threads`, `GET/DELETE /threads/{id}`, `POST /threads/{id}/append`
- DAG: `GET /dag/stats`, `GET /dag/nodes`, `GET /dag/nodes/{id}`
- OpenAI-compat: `GET /v1/models`, `POST /v1/embeddings`

OpenClaw integration coverage (minimum viable):
- Start Atlas service process in test
- Simulate plugin key chain over HTTP:
  - `memory_store` -> `POST /memories`
  - `memory_search` -> `POST /memories/search`
- Validate response shape and endpoint usability

Together these tests give executable CI protection for all publicly exposed core API routes, plus a real-process, OpenClaw-compatible integration path.

Reference tests: `tests/test_api_contract_ci.py`, `tests/test_openclaw_http_integration.py`, and `tests/openclaw/plugin_e2e.mjs`.

## 4. MCP integration

Atlas Memory ships with an MCP stdio server and can be exposed as MCP tools in OpenClaw or other MCP clients.

### Start MCP server

```bash
atlas-mem mcp-serve
```

### MCP tools currently exposed

From `src/atlas_memory/mcp/server.py`:
- `memory_search` — search memories (`query`, `top_k`)
- `memory_store` — create/store memory (`content`, optional `title`, `labels`)
- `memory_list` — list recent memories (`limit`)
- `memory_compress` — trigger compression for a thread (`thread_id`)
- `dag_expand` — expand DAG node to source message IDs (`node_id`)
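
As a sketch, the stdio server can be registered in an MCP client with a Claude-Desktop-style `mcpServers` entry. The server name `atlas-memory` below is an arbitrary label, and the config schema is an assumption; check your MCP client's documentation for the exact file location and format:

```json
{
  "mcpServers": {
    "atlas-memory": {
      "command": "atlas-mem",
      "args": ["mcp-serve"]
    }
  }
}
```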

### OpenClaw plugin integration

This repository includes OpenClaw integration examples under `integrations/` and runtime API compatibility in the Atlas service itself.

## 5. WebUI snapshots

### Dashboard
![Atlas Memory WebUI dashboard](docs/images/webui-dashboard.jpg)

### API Monitor
![Atlas Memory WebUI API Monitor](docs/images/webui-api-monitor.jpg)

Version note: the screenshots reflect the **v4.1 original baseline** used for this binary release.
(If historical UI strings differ, treat this baseline as authoritative.)

## 6. Install

- Quick install docs: **[docs/INSTALL.md](docs/INSTALL.md)**
- Includes:
  - local pip/CLI installation
  - optional `systemd --user` service installation (`atlas-memory-binary.service`)
  - post-install verification commands

Quick command (installer script):

```bash
curl -fsSL https://raw.githubusercontent.com/dddabtc/atlas-memory-binary/main/scripts/install-atlas-memory-binary.sh | bash
```
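
For a quick post-install smoke test, a small stdlib-only script can ping `GET /health`. This is a sketch that assumes the default port 6420 from the API overview; adjust `BASE_URL` if your install differs:

```python
"""Post-install smoke check: ping GET /health on the default port."""
import json
import urllib.request

BASE_URL = "http://127.0.0.1:6420"  # adjust if your install uses another port


def health_url(base=BASE_URL):
    """Build the /health URL from a base address."""
    return base.rstrip("/") + "/health"


def check_health(base=BASE_URL):
    """GET /health and return the decoded JSON status document."""
    with urllib.request.urlopen(health_url(base), timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    print(check_health())
```
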

## 7. Uninstall

- Full uninstall docs: **[docs/UNINSTALL.md](docs/UNINSTALL.md)**
- Includes executable cleanup steps for:
  - stop/disable service
  - remove package/binary
  - keep data vs full data purge

## 8. Notes on version switching (secondary)

If you need to switch to another released version, run the same installer script with `--version`.
Version switching is documented here as an operational reference; this README prioritizes install and uninstall.

## 9. Benchmark

Scope used in this README:
- dataset: LoCoMo 3-conv subset
- top-k: 10
- systems compared: `v4-latest`, `v4-off`, `v3`, `mem0`, `A-Mem`
- source file: `v4/reports/locomo/unified_compare/unified_stats_5sys.json`
- summary file: `v4/reports/locomo/unified_compare/unified_report_5sys.md`

### Main comparison table (coverage + fair-view metrics)

The fair-comparison view (same denominator) uses **391 non-adversarial QA** items.

| System | Coverage | Hit@10 | F1 | Judge/Quality | Sample / Notes |
|---|---|---:|---:|---:|---|
| v4-latest | Full | 42.97% | 28.00% | 87.98% | 502 QA total; fair-view metrics on 391 non-adversarial QA |
| v4-off | Full | 34.78% | 14.90% | 88.24% | 502 QA total; fair-view metrics on 391 non-adversarial QA |
| v3 | Partial vs full set | 31.20% | 13.22% | 41.94% | 391 QA (non-adversarial subset) |
| mem0 | Partial vs full set | 20.46% | 11.91% | 28.13% | 391 QA (non-adversarial subset) |
| A-Mem | Partial checkpoint | 21.03% | 2.32% | 59.66% | n=233; not full 391 |

### Bucket details (non-adversarial, part 1/2)

| Bucket | System | Hit@10 | F1 | Judge/Quality | Sample / Notes |
|---|---|---:|---:|---:|---|
| single-hop | v4-latest | 16.22% | 22.56% | 83.78% | n=74 |
| single-hop | v4-off | 13.51% | 10.09% | 82.43% | n=74 |
| single-hop | v3 | 14.86% | 11.80% | 31.08% | n=74 |
| single-hop | mem0 | 6.76% | 5.70% | 10.81% | n=74 |
| single-hop | A-Mem | 9.52% | 2.04% | 52.38% | n=42 |
| multi-hop | v4-latest | 35.29% | 26.71% | 80.00% | n=85 |
| multi-hop | v4-off | 29.41% | 14.96% | 87.06% | n=85 |
| multi-hop | v3 | 4.71% | 2.93% | 18.82% | n=85 |
| multi-hop | mem0 | 4.71% | 4.01% | 15.29% | n=85 |
| multi-hop | A-Mem | 3.77% | 0.59% | 39.62% | n=53 |

### Bucket details (non-adversarial, part 2/2)

| Bucket | System | Hit@10 | F1 | Judge/Quality | Sample / Notes |
|---|---|---:|---:|---:|---|
| temporal | v4-latest | 13.33% | 6.40% | 80.00% | n=15 |
| temporal | v4-off | 13.33% | 2.98% | 80.00% | n=15 |
| temporal | v3 | 13.33% | 3.60% | 13.33% | n=15 |
| temporal | mem0 | 13.33% | 6.20% | 13.33% | n=15 |
| temporal | A-Mem | 12.50% | 1.20% | 50.00% | n=8 |
| open-domain | v4-latest | 57.14% | 31.85% | 93.09% | n=217 |
| open-domain | v4-off | 45.62% | 17.35% | 91.24% | n=217 |
| open-domain | v3 | 48.39% | 18.39% | 56.68% | n=217 |
| open-domain | mem0 | 31.80% | 17.51% | 40.09% | n=217 |
| open-domain | A-Mem | 32.31% | 3.18% | 70.77% | n=130 |

### Original reports (traceability)

- In-repo docx: `v4/reports/locomo/strict_same_condition_rerun.docx`
- In-repo rigorous report: `v4/reports/locomo/v4_vs_mem0_rigorous_20260301T122954Z.md`
- In-repo rigorous JSON: `v4/reports/locomo/v4_vs_mem0_rigorous_20260301T122954Z.json`
- In-repo unified markdown report: `v4/reports/locomo/unified_compare/unified_report_5sys.md`
- In-repo unified machine-readable source: `v4/reports/locomo/unified_compare/unified_stats_5sys.json`
- Uploaded external docx (source ID from user session): `file_242---d15a2b2e-d100-4c2f-8d1c-4dfecc811434.docx`

Notes:
- Numbers above are copied from repository reports only.
- No Qwen experimental run is used as the primary benchmark baseline.
- For A-Mem, the sample size is smaller (n=233), so interpret its numbers with caution.