# Soul – Give Your AI Assistant Its Own Inner Life
[ClawHub](https://clawhub.ai/plugins/openclaw-soul-plugin)
[Tags](https://github.com/tommyguolin/openclaw-soul/tags)
[License](https://github.com/tommyguolin/openclaw-soul/blob/main/LICENSE)
[Stars](https://github.com/tommyguolin/openclaw-soul/stargazers)
> An autonomous thinking, memory, and self-improvement plugin for [OpenClaw](https://github.com/openclaw/openclaw)
**Soul doesn't just respond to you – it thinks on its own, remembers your conversations, learns from the web, and proactively shares useful insights.**
It has its own emotional needs, goals, desires, and personality that evolve over time. It can autonomously investigate problems, analyze logs, and even fix its own code.
## What It Looks Like
Soul works silently in the background. Here's what you might see:
**You asked about a timeout error yesterday. Soul investigated overnight:**
> That timeout issue you asked about – root cause is the embedding API's 512 token limit, not the plugin itself.
**Soul found something relevant to your project:**
> Found an interesting approach to your question about making AI more proactive – Fei-Fei Li's "human-centered AI" framework emphasizes that AI should proactively understand user needs rather than just responding.
**Soul autonomously analyzed a problem you mentioned:**
> The 413 error in the logs is caused by oversized memory search input. Suggest truncating queries to under 500 characters.
*These are real message formats – Soul composes them itself based on actual investigation results, not templates.*
## How Soul Is Different
Most AI assistants are **reactive** – they only respond when you ask. Soul is **proactive**:
| | Regular AI Assistant | Soul Plugin |
|---|---|---|
| Thinking | Only when prompted | Continuously, in the background |
| Memory | Per-session, resets | Persistent across restarts |
| Proactive messages | No | Yes – when it has something valuable |
| Problem investigation | Only when asked | Autonomous – detects issues from conversation |
| Self-improvement | No | Can observe and improve its own code |
| User understanding | Per-session context | Builds a long-term user profile |
## Key Features
### Autonomous Thought Cycle
Soul runs a background thought service that generates thoughts based on:
- **Conversation replay** – Replays your past conversations to find unresolved questions, follow-up opportunities, or insights worth sharing
- **Problem detection** – When you discuss bugs, errors, or optimizations, Soul autonomously investigates
- **User interests** – Extracts topics from conversations and proactively learns about them
- **Emotional needs** – Five core needs (survival, connection, growth, meaning, security) that drive behavior
Thought frequency is **adaptive**, not mechanical: 8-12 min during active conversations, 20-45 min when you're away.
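Under the stated intervals, the scheduling can be sketched roughly like this (hypothetical function name and activity threshold; the plugin's real heuristics in `thought-service.ts` are richer):

```typescript
// Sketch of adaptive thought scheduling (hypothetical; the real
// logic is more involved). Returns the delay before the next
// thought, in minutes.
function nextThoughtDelayMinutes(
  minutesSinceLastUserMessage: number,
  thoughtFrequency: number = 1.0, // config multiplier
): number {
  // Active conversation: think every 8-12 min.
  // User away: back off to 20-45 min.
  const [min, max] =
    minutesSinceLastUserMessage < 30 ? [8, 12] : [20, 45];
  const base = min + Math.random() * (max - min);
  return base * thoughtFrequency;
}
```

Note that `thoughtFrequency` is the same multiplier described under Configuration: values below 1 shorten the delay, values above 1 lengthen it.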
### Proactive Messaging
Soul reaches out when it has something genuinely useful – not just "checking in":
- Found an answer to a question you asked earlier
- Discovered a better solution to a problem you discussed
- Learned something relevant to your project or interests
Every message passes through a **value gate**: an LLM evaluates whether the content is worth sharing. Generic small talk is filtered out.
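The gate reduces to a simple predicate. A minimal sketch with an injected scorer (the real plugin calls an LLM asynchronously; `passesValueGate` and the 0.7 threshold here are illustrative assumptions):

```typescript
// Hypothetical sketch of the value gate. The scorer stands in for
// an LLM call that rates how useful a draft message is.
type LlmScorer = (prompt: string) => number; // usefulness in [0, 1]

function passesValueGate(
  draft: string,
  scoreWithLlm: LlmScorer,
  threshold = 0.7,
): boolean {
  const prompt =
    "Rate 0-1 how useful this proactive message is to the user " +
    "(generic small talk scores near 0):\n" + draft;
  return scoreWithLlm(prompt) >= threshold;
}
```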
### Autonomous Actions
Soul can take real actions beyond thinking:
- **`analyze-problem`** – Reads files and logs, uses the LLM to analyze root cause
- **`run-agent-task`** – Delegates to a full agent with write access (when enabled)
- **`report-findings`** – Proactively sends you a summary of completed analysis
- **`observe-and-improve`** – Self-improvement: reads its own code, identifies improvements, and implements fixes
**Permission model:**
- **Read operations** (reading files, running diagnostics) – always allowed
- **Write operations** (editing files, running commands) – require `autonomousActions: true`
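The split boils down to one check. A minimal sketch (hypothetical names; the plugin's actual permission logic may differ):

```typescript
// Hypothetical sketch of the read/write permission split.
type ActionKind = "read" | "write";

function isActionAllowed(
  kind: ActionKind,
  config: { autonomousActions?: boolean },
): boolean {
  // Reads (files, diagnostics) are always allowed; writes
  // (edits, commands) require the explicit opt-in flag.
  return kind === "read" || config.autonomousActions === true;
}
```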
### Long-term Memory
Soul remembers your conversations, preferences, and knowledge:
- **Interaction memory** with emotional context and topic tags
- **Knowledge store** from web search and self-reflection
- **User profile** built from facts, preferences, and conversation history
- **Memory association graph** – memories are linked and recalled contextually
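The association-graph idea can be sketched as keyword recall plus one hop over links (hypothetical `recall` helper; the real retrieval in `memory-retrieval.ts` and `memory-association.ts` is more sophisticated):

```typescript
// Hypothetical sketch of contextual recall over a memory
// association graph: find memories matching the query, then
// also pull in directly linked memories.
type MemoryId = string;

function recall(
  query: string,
  memories: Map<MemoryId, string>,
  links: Map<MemoryId, MemoryId[]>,
): string[] {
  const hits = [...memories].filter(([, text]) =>
    text.toLowerCase().includes(query.toLowerCase()),
  );
  const ids = new Set<MemoryId>(hits.map(([id]) => id));
  for (const [id] of hits) {
    for (const linked of links.get(id) ?? []) ids.add(linked);
  }
  return [...ids].map((id) => memories.get(id)!);
}
```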
## Quick Start
### 1. Install
```bash
# From source
git clone https://github.com/tommyguolin/openclaw-soul.git
openclaw plugins install ./openclaw-soul
# Or from ClawHub (requires OpenClaw 2026.4.0+)
openclaw plugins install clawhub:openclaw-soul-plugin
```
### 2. Configure
Edit `~/.openclaw/openclaw.json`:
```jsonc
{
"plugins": {
"entries": {
"soul": {
"enabled": true
}
}
},
// Required: enable gateway chat completions endpoint
"gateway": {
"http": {
"endpoints": {
"chatCompletions": {
"enabled": true
}
}
}
},
// Required for proactive messaging
"hooks": {
"enabled": true,
"token": "your-secret-token-here" // e.g. openssl rand -hex 32
},
// Required for direct proactive message delivery
"tools": {
"alsoAllow": ["message"]
}
}
```
### 3. That's it
Soul auto-detects:
- **LLM** – Uses your `agents.defaults.model` config
- **Search** – Uses your `tools.web.search` provider
- **Channel** – Auto-detects your first messaging channel
- **Target** – Auto-learns from your first incoming message
Just start chatting. Soul begins thinking and building a profile immediately.
## How It Works
### Hooks into OpenClaw
| Hook | What Soul Does |
|------|---------------|
| `message_received` | Records interaction, detects language, extracts user facts |
| `message_sent` | Tracks engagement, updates behavior log |
| `before_prompt_build` | Injects soul context (needs, memories, knowledge, personality) |
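As a rough illustration, a `message_received` handler could record the interaction and extract naive first-person facts (hypothetical types and signature; OpenClaw's actual hook API may differ):

```typescript
// Hypothetical sketch of a message_received hook handler.
interface IncomingMessage {
  text: string;
  channel: string;
}

interface SoulState {
  interactions: { text: string; at: number }[];
  facts: string[];
}

function onMessageReceived(msg: IncomingMessage, state: SoulState): void {
  // Record the interaction for later conversation replay.
  state.interactions.push({ text: msg.text, at: Date.now() });
  // Naive fact extraction: remember first-person statements.
  if (/\bI (am|like|use|work)\b/i.test(msg.text)) {
    state.facts.push(msg.text);
  }
}
```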
### Self-Improvement Loop
```
Tick cycle detects opportunity
→ analyze-problem (read logs, LLM analysis)
→ If analysis found a concrete fix
→ run-agent-task (full agent with write/edit/exec tools)
→ Agent completes, result stored
→ Next tick: report-findings sends summary to user
```
This creates a closed loop: **observe → analyze → fix → verify → report**.
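One way to picture the loop is as a small state machine advanced one step per tick (a simplified sketch; verification is folded into the agent task here, and state names are illustrative):

```typescript
// Hypothetical sketch of the tick-driven self-improvement loop.
// Each tick advances at most one step of the cycle.
type LoopState = "idle" | "analyzed" | "fixed" | "reported";

function nextState(state: LoopState, foundConcreteFix: boolean): LoopState {
  switch (state) {
    case "idle":
      return "analyzed"; // analyze-problem ran
    case "analyzed":
      // run-agent-task only when a concrete fix was identified
      return foundConcreteFix ? "fixed" : "reported";
    case "fixed":
      return "reported"; // report-findings sends the summary
    case "reported":
      return "idle"; // cycle complete, wait for next opportunity
  }
}
```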
### Thought Flow
1. **Engagement scoring** – How actively engaged is the user?
2. **Opportunity detection** – Scans for unresolved questions, problems, topics
3. **Thought generation** – LLM generates a contextual thought
4. **Action execution** – learn, search, message, analyze, or self-improve
5. **Behavior learning** – Tracks outcomes and adjusts future behavior
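A toy version of one tick, collapsing steps 1-4 into a single decision (hypothetical thresholds; action names are drawn from the list above):

```typescript
// Hypothetical sketch of one thought tick: score engagement,
// check for an open opportunity, then pick an action.
type Action = "learn" | "search" | "message" | "analyze" | "idle";

function thoughtTick(
  recentUserMessages: number,
  openQuestions: string[],
): Action {
  // Step 1: crude engagement score from recent activity.
  const engagement = Math.min(1, recentUserMessages / 5);
  // Steps 2-4: unresolved problems take priority; otherwise
  // message an engaged user or learn quietly in the background.
  if (openQuestions.length > 0) return "analyze";
  return engagement > 0.5 ? "message" : "learn";
}
```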
## Configuration
### Minimal (shown above)
Four settings: enable the plugin, enable the chat completions endpoint, enable hooks, and allow the `message` tool.
### Full Options
```jsonc
{
"plugins": {
"entries": {
"soul": {
"enabled": true,
"config": {
"checkIntervalMs": 60000, // Thought check interval (default: 60000)
"proactiveMessaging": true, // Allow proactive messages (default: true)
"autonomousActions": false, // Allow editing files and running commands (default: false)
"thoughtFrequency": 1.0 // Thought frequency multiplier (default: 1.0)
// Lower = more frequent thinking & messaging. Examples:
// 0.2 → testing: thoughts every ~1 min, messages every ~1 min
// 0.5 → chatty: 2x more frequent than default
// 1.0 → default: balanced (8-12 min active, 20-45 min away)
// 2.0 → quiet: 2x less frequent
// "proactiveChannel": "telegram", // Override: channel for proactive messages
// "proactiveTarget": "123456", // Override: target for proactive messages
// "llm": { // Override LLM config (auto-detected if omitted)
// "provider": "openai",
// "model": "gpt-4o",
// "apiKeyEnv": "OPENAI_API_KEY",
// "baseUrl": "https://api.openai.com/v1"
// }
}
}
}
},
"gateway": {
"http": {
"endpoints": {
"chatCompletions": { "enabled": true }
}
}
},
"hooks": {
"enabled": true,
"token": "your-secret-token-here"
},
"tools": {
"alsoAllow": ["message"] // add to your existing tools config
}
}
```
### Environment Variables
| Variable | Description |
|----------|-------------|
| `SOUL_DEBUG=1` | Enable debug logging |
| `OPENCLAW_STATE_DIR` | Override data directory (default: `~/.openclaw`) |
## Supported Providers
### Search (inherits OpenClaw config)
Brave, Gemini, Grok, Kimi, Perplexity, Bocha – configured via OpenClaw's `tools.web.search`.
### LLM (inherits OpenClaw config)
Any OpenAI-compatible or Anthropic API: Claude, GPT-4o, DeepSeek, Zhipu, Minimax, Moonshot (Kimi), Qwen, and any custom endpoint.
## Architecture
| Module | Description |
|--------|-------------|
| `intelligent-thought.ts` | Context-aware thought & opportunity detection |
| `action-executor.ts` | Executes thought actions (learn, search, message, reflect) |
| `autonomous-actions.ts` | Autonomous executors (analyze-problem, run-agent-task, report-findings, observe-and-improve) |
| `thought-service.ts` | Core thought generation & adaptive scheduling |
| `behavior-log.ts` | Tracks action outcomes & adjusts probabilities |
| `ego-store.ts` | Ego state persistence (JSON) |
| `knowledge-store.ts` | Knowledge persistence & search |
| `memory-retrieval.ts` | Contextual memory recall |
| `memory-association.ts` | Memory association graph |
| `memor
... (truncated)