
Tool Compressor

By 5p00kyy

OpenClaw plugin: compress large tool results before session transcript write

GitHub


# openclaw-tool-compressor

OpenClaw plugin that compresses large tool results before they're written to the session transcript.

## Why

Every tool result you've ever received lives in your session JSONL file. On every subsequent turn, the entire context window — including all those old exec outputs, file reads, and web fetches — gets re-sent to the model. A 10KB exec result from 50 turns ago costs 10KB every single turn.

This plugin intercepts `tool_result_persist` (fired after the model sees the result, before JSONL write) and applies head+tail compression: keep the first N lines, keep the last N lines, replace the middle with `[... N lines omitted ...]`.

The model already saw the full output. What gets stored is a compact summary for future context.
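
The head+tail strategy itself fits in a few lines. A minimal sketch (the `headTailCompress` name is illustrative, not the plugin's actual internals):

```javascript
// Keep the first `headLines` and last `tailLines` lines of `text`,
// replacing the middle with an omission marker.
function headTailCompress(text, headLines, tailLines) {
  const lines = text.split("\n");
  const omitted = lines.length - headLines - tailLines;
  if (omitted <= 0) return text; // small enough: store verbatim
  return [
    ...lines.slice(0, headLines),
    `[... ${omitted} lines omitted (${lines.length} total) ...]`,
    ...lines.slice(lines.length - tailLines),
  ].join("\n");
}

// A 500-line log compresses to 40 head + 1 marker + 20 tail lines.
const log = Array.from({ length: 500 }, (_, i) => `line ${i + 1}`).join("\n");
const compact = headTailCompress(log, 40, 20);
```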

Inspired by Claude Code's MicroCompact pattern, which achieves 80-93% token reduction on tool outputs.

## Install

```bash
openclaw plugins install /path/to/openclaw-tool-compressor
# or via npm when published:
openclaw plugins install openclaw-tool-compressor
```

Then add to your `openclaw.json` plugins allowlist:

```json
{
  "plugins": {
    "allow": ["tool-compressor"]
  }
}
```

## Config

```json
{
  "plugins": {
    "entries": {
      "tool-compressor": {
        "enabled": true,
        "minSizeToCompress": 500,
        "verbose": false,
        "limits": {
          "exec": { "maxChars": 4000, "headLines": 40, "tailLines": 20 },
          "Read": { "maxChars": 8000, "headLines": 80, "tailLines": 30 },
          "web_fetch": { "maxChars": 6000, "headLines": 60, "tailLines": 20 },
          "browser": { "maxChars": 5000, "headLines": 50, "tailLines": 20 }
        }
      }
    }
  }
}
```
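
Resolving the effective limit for a given tool might look like the sketch below. The `DEFAULTS` table and `resolveLimit` helper are hypothetical illustrations; only the config keys above come from the plugin:

```javascript
// Hypothetical built-in defaults, mirroring the config example above.
const DEFAULTS = {
  exec: { maxChars: 4000, headLines: 40, tailLines: 20 },
  Read: { maxChars: 8000, headLines: 80, tailLines: 30 },
};

// Return the compression limit for a tool. An explicit `null` entry
// disables compression; unknown tools fall back to the defaults.
function resolveLimit(config, toolName) {
  const limits = config.limits ?? {};
  if (toolName in limits) return limits[toolName]; // may be null = disabled
  return DEFAULTS[toolName] ?? null; // no default: leave output untouched
}

const config = { minSizeToCompress: 500, limits: { Read: null } };
```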

## How it works

1. Tool completes, model receives full output
2. `tool_result_persist` fires synchronously before JSONL write
3. Plugin checks output size against per-tool limit
4. If over limit: keep first N lines + last N lines, replace middle
5. Compressed message returned → stored in transcript
6. Future turns send the compact version, not the full output
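
The size check in steps 3-4 reduces to a small predicate. A sketch, assuming a hypothetical `shouldCompress` helper (OpenClaw's actual hook internals may differ):

```javascript
// Decide at persist time whether a tool result should be compressed:
// skip if compression is disabled for the tool, if the result is below
// the global threshold, or if it already fits within the per-tool cap.
function shouldCompress(resultText, toolLimit, minSizeToCompress) {
  if (!toolLimit || toolLimit.maxChars == null) return false; // disabled
  if (resultText.length < minSizeToCompress) return false;    // too small
  return resultText.length > toolLimit.maxChars;              // over limit
}
```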

## Compression example

```
$ cat large-file.log   # 500 lines, 40KB

--- stored as: ---
line 1
line 2
...
line 40

[... 440 lines omitted (500 total) ...]

line 481
...
line 500
```

## What's safe to compress

- `exec` stdout/stderr — the model cares about exit status, errors, and final state. Middle lines of verbose build output rarely matter.
- `Read` file content — head+tail preserves structure for most files. For code files, increase `headLines`.  
- `web_fetch` — usually dominated by boilerplate; the interesting content is near the top.
- `browser` snapshots — large DOM trees are heavily redundant.

## What not to compress (set the tool's limit to `null`)

If you're reading a specific config file or a small file you need in full, the default 8000-character `Read` limit is usually sufficient. To disable compression for a tool entirely, set its limit to `null`:

```json
{
  "limits": {
    "Read": null
  }
}
```