Guardclaw Openclaw

By list3r

Privacy-aware plugin for OpenClaw with sensitivity detection, guard agent, and built-in privacy proxy

README

# GuardClaw — Privacy Plugin for OpenClaw

GuardClaw is a privacy-aware plugin for [OpenClaw](https://openclaw.ai) that classifies every message and tool call into three sensitivity levels and routes them accordingly:

| Level | Meaning | Action |
|-------|---------|--------|
| **S1** | Safe | Pass through to cloud provider as normal |
| **S2** | Sensitive (PII) | Strip PII via local proxy, then forward to cloud |
| **S3** | Private | Route entirely to a local model — nothing leaves your machine |
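The table above amounts to a three-way dispatch. A minimal sketch of that decision (the helper and type names here are illustrative, not GuardClaw's actual API):

```typescript
// Hypothetical sketch: map a detected sensitivity level to a routing action.
type Level = "S1" | "S2" | "S3";
type Action = "cloud" | "proxy-redact" | "local-only";

function routeFor(level: Level): Action {
  switch (level) {
    case "S1": return "cloud";        // safe: forward unchanged
    case "S2": return "proxy-redact"; // strip PII locally, then forward
    case "S3": return "local-only";   // never leaves the machine
  }
}
```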

## Features

- **Three-tier sensitivity detection** — rule-based keyword/regex/path matching + optional LLM classifier
- **Privacy proxy** — local HTTP proxy that strips PII markers before forwarding to cloud APIs
- **Guard agent** — dedicated local model session for handling S3 (fully private) content
- **Dual-track session history** — maintains both full and sanitized conversation histories
- **Memory isolation** — MEMORY.md (clean) / MEMORY-FULL.md (unredacted) sync
- **Router pipeline** — composable chain of routers (privacy, token-saver, custom)
- **Learning loop** — correction store with embedding-based few-shot injection
- **Dashboard** — web UI for monitoring detections, stats, and configuration
- **Hot-reload config** — edit `~/.openclaw/guardclaw.json` without restarting

## Prerequisites

- **Node.js 22+**
- **OpenClaw 2026.3.x+**
- A local inference backend for privacy detection (any of):
  - [Ollama](https://ollama.ai)
  - [LM Studio](https://lmstudio.ai)
  - [vLLM](https://vllm.ai)
  - [SGLang](https://github.com/sgl-project/sglang)
  - Any OpenAI-compatible endpoint

## Quick Install

```bash
git clone https://github.com/List3r/guardclaw-openclaw-plugin.git /opt/guardclaw
cd /opt/guardclaw
bash scripts/install.sh
```

The install script will:
1. Check prerequisites (Node.js 22+, npm, openclaw CLI)
2. Install dependencies and build
3. Register the plugin with OpenClaw
4. Generate a default `~/.openclaw/guardclaw.json` config
5. Restart the OpenClaw gateway

Options:
- `--install-dir /path` — install location (default: `/opt/guardclaw`)
- `--no-restart` — skip gateway restart
- `--repo URL` — override the git clone URL

## Manual Install

```bash
# Clone
git clone https://github.com/List3r/guardclaw-openclaw-plugin.git /opt/guardclaw
cd /opt/guardclaw

# Install dependencies
npm ci --include=dev

# Build
npm run build

# Register with OpenClaw
openclaw plugins install --link /opt/guardclaw

# Restart gateway
# macOS:
launchctl kickstart -k "gui/$(id -u)/ai.openclaw.gateway"
# Linux:
openclaw gateway restart
```

## Configuration

GuardClaw uses a standalone config file: **`~/.openclaw/guardclaw.json`**

See [`config.example.json`](config.example.json) for the full schema with examples for Ollama, LM Studio, vLLM, SGLang, and custom providers.

### Key settings

**Local model** (for LLM-based privacy detection):
```json
"localModel": {
  "enabled": true,
  "type": "openai-compatible",
  "provider": "ollama",
  "model": "llama3.2:3b",
  "endpoint": "http://localhost:11434"
}
```
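With `type: "openai-compatible"`, the plugin can talk to any backend that exposes the standard `/v1/chat/completions` route (Ollama, LM Studio, vLLM, and SGLang all do). A sketch of how such a config entry could translate into a request, assuming a hypothetical helper that is not part of the plugin's API:

```typescript
// Sketch: turn an "openai-compatible" localModel config into a
// chat-completions request. Field names mirror the config above.
interface LocalModelConfig {
  enabled: boolean;
  type: "openai-compatible";
  provider: string;
  model: string;
  endpoint: string; // e.g. http://localhost:11434
}

function buildChatRequest(cfg: LocalModelConfig, prompt: string) {
  return {
    url: `${cfg.endpoint.replace(/\/$/, "")}/v1/chat/completions`,
    body: {
      model: cfg.model,
      messages: [{ role: "user", content: prompt }],
      temperature: 0, // deterministic output suits classification
    },
  };
}
```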

**Guard agent** (handles S3 private data locally):
```json
"guardAgent": {
  "id": "guard",
  "workspace": "~/.openclaw/workspace-guard",
  "model": "ollama/llama3.2:3b"
}
```

**Detection rules** (S2 = redact PII, S3 = keep local):
```json
"rules": {
  "keywords": {
    "S2": ["password", "api_key", "secret"],
    "S3": ["ssh", "id_rsa", "private_key", ".pem"]
  },
  "patterns": {
    "S3": ["-----BEGIN (?:RSA )?PRIVATE KEY-----"]
  }
}
```
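The rule pass is cheap and runs before any LLM call. A sketch of how these keyword/pattern rules could be evaluated, returning the most restrictive matching level (the `classify` helper is hypothetical; the rule values mirror the config above):

```typescript
// Sketch of the keyword/regex rule pass: S3 wins over S2 wins over S1.
type Level = "S1" | "S2" | "S3";

const rules = {
  keywords: {
    S2: ["password", "api_key", "secret"],
    S3: ["ssh", "id_rsa", "private_key", ".pem"],
  },
  patterns: {
    S3: [/-----BEGIN (?:RSA )?PRIVATE KEY-----/],
  },
};

function classify(text: string): Level {
  const lower = text.toLowerCase();
  if (
    rules.keywords.S3.some((k) => lower.includes(k)) ||
    rules.patterns.S3.some((p) => p.test(text))
  ) {
    return "S3"; // private: keep local
  }
  if (rules.keywords.S2.some((k) => lower.includes(k))) {
    return "S2"; // sensitive: redact before cloud
  }
  return "S1"; // safe
}
```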

**S2 policy** — choose how sensitive-but-not-private data is handled:
- `"proxy"` (default) — strip PII via local proxy, then forward to cloud
- `"local"` — route S2 to local model entirely (more private, less capable)
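Under `"proxy"` mode, redaction replaces matched PII with stable placeholders before the request leaves the machine. A minimal sketch of the idea; the patterns and placeholder names here are illustrative, not GuardClaw's actual rule set:

```typescript
// Sketch: replace common PII patterns with placeholders before forwarding.
const PII_PATTERNS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "<EMAIL>"],          // email addresses
  [/\bsk-[A-Za-z0-9]{16,}\b/g, "<API_KEY>"],        // API-key-shaped tokens
  [/\b(?:\d[ -]?){13,16}\b/g, "<CARD_NUMBER>"],     // card-number-shaped digits
];

function redact(text: string): string {
  return PII_PATTERNS.reduce((t, [re, ph]) => t.replace(re, ph), text);
}
```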

### Recommended models

We tested several models for each role. These performed best:

| Role | Model | Why | VRAM |
|------|-------|-----|------|
| **Detection classifier** | LFM2-8B-A1B (MoE, ~1B active) | Disciplined JSON output, perfect S1 accuracy, concise (~30 tokens) | ~4.3 GB |
| **Embedding** (learning loop) | nomic-embed-text-v1.5 | Fast 768-dim embeddings for correction similarity search | ~0.3 GB |
| **Guard agent** (S3 processing) | Qwen3.5:35B | Strong reasoning for complex private tasks (financial, medical, legal) | ~20 GB |

**Detection classifier notes:** Phi-mini-MoE-instruct (2.4B active) scored 60% vs LFM2's 65% on hard cases and produced frequent JSON parse failures. Qwen3.5-9B was incompatible — reasoning models put output in `reasoning_content`, leaving `content` empty.

**Guard agent alternatives:** Any capable model works here. Smaller options like Llama 3.2:3B or Qwen2.5:7B are fine for simple queries but struggle with multi-step analysis on financial/legal content. Use the largest model your hardware can run.

All three models can run simultaneously in LM Studio or Ollama on a machine with 32 GB+ VRAM (~25 GB total).

### Supported local model providers

| Provider | `type` | Default endpoint |
|----------|--------|-----------------|
| Ollama | `openai-compatible` | `http://localhost:11434` |
| LM Studio | `openai-compatible` | `http://localhost:1234` |
| vLLM | `openai-compatible` | `http://localhost:8000` |
| SGLang | `openai-compatible` | `http://localhost:30000` |
| Ollama (native API) | `ollama-native` | `http://localhost:11434` |
| Custom | `custom` | (your endpoint) |
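Switching backends is just a matter of editing the `localModel` fields. For example, a hypothetical LM Studio setup (the `provider` string and model name here are guesses; see `config.example.json` for the authoritative values):

```json
"localModel": {
  "enabled": true,
  "type": "openai-compatible",
  "provider": "lmstudio",
  "model": "lfm2-8b-a1b",
  "endpoint": "http://localhost:1234"
}
```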

## Dashboard

After installation, the dashboard is available at:

```
http://127.0.0.1:18789/plugins/guardclaw/stats
```

The dashboard provides:
- Real-time detection event log
- Token usage statistics and cost estimates
- Router pipeline status
- Configuration editor
- Correction store management (for the learning loop)

## Architecture

```
index.ts                 → Plugin entry point (registers hooks, provider, proxy)
src/
  hooks.ts               → 13 OpenClaw hooks (model routing, tool guards, memory)
  privacy-proxy.ts       → HTTP proxy that strips PII before forwarding to cloud
  provider.ts            → Virtual "guardclaw-privacy" provider registration
  detector.ts            → Coordinates rule + LLM detection
  rules.ts               → Keyword/regex/tool-path rule engine
  local-model.ts         → LLM calls for detection (Ollama/vLLM/LM Studio/etc.)
  correction-store.ts    → Learning loop: correction storage + embedding similarity
  router-pipeline.ts     → Composable router chain (privacy, token-saver, custom)
  session-manager.ts     → Dual-track session history (full + clean)
  memory-isolation.ts    → MEMORY.md ↔ MEMORY-FULL.md sync
  token-stats.ts         → Usage tracking and cost accounting
  stats-dashboard.ts     → HTTP dashboard
  live-config.ts         → Hot-reload of guardclaw.json
  routers/
    privacy.ts           → Built-in privacy router (S1/S2/S3 classification)
    token-saver.ts       → Cost-aware model routing (optional)
    configurable.ts      → User-defined custom routers
prompts/
  detection-system.md    → Editable system prompt for LLM classification
  guard-agent-system.md  → System prompt for the guard agent
  token-saver-judge.md   → Prompt for cost-aware routing decisions
```

## Development

```bash
# Watch mode — rebuild on source changes
npm run dev

# Run tests
npm run test

# Clean build artifacts
npm run clean
```

After rebuilding, restart the gateway to pick up changes:
```bash
# macOS:
launchctl kickstart -k "gui/$(id -u)/ai.openclaw.gateway"
# Linux:
openclaw gateway restart
```

## Troubleshooting

**"Cannot find package 'tsx'"** — Run `npm run build` first. The plugin runs from compiled JS, not TS source.

**"No original provider target found" (502)** — The proxy can't determine the upstream provider. Ensure your OpenClaw config has providers with `baseUrl` set.

**"SyntaxError: Unexpected end of JSON input"** — Rebuild (`npm run build`) and restart the gateway.

**Gateway crash loop** — Set `"enabled": false` in `~/.openclaw/guardclaw.json` under `privacy`, restart the gateway, then check logs:
```bash
tail -f ~/.openclaw/logs/gateway.err.log | grep GuardClaw
```

## Uninstall

```bash
openclaw plugins uninstall guardclaw
rm -rf /opt/guardclaw
rm ~/.openclaw/guardclaw.json
# Restart gateway
openclaw gateway restart
```

## License

MIT — see [LICENSE](LICENSE).
