
Claw Meta Footer

By lqqk7

OpenClaw plugin that appends a stats footer to every bot reply (model, tokens, context, cache hit rate)

GitHub

Install

openclaw plugins install clawhub:claw-meta-footer

Configuration Example

{
  "channels": {
    "telegram": {
      "streaming": "off"
    }
  }
}

README

# claw-meta-footer

An [OpenClaw](https://openclaw.ai) plugin that appends a stats footer to every bot reply, giving you at-a-glance visibility into model usage, token consumption, and session state — right inside your Telegram chat.

## What It Shows

Every bot reply gets a footer like this:

```
`───────────────`
🤖 Model: `claude-sonnet-4-6`
🧠 Think: high
🔢 In: 12.3k   Out: 0.8k
📊 Context: 13.1k / 200k (6.6%)
💾 Cache: 11.9k hit (88.4%)
🔁 Compact: 2
```

| Field | Description |
|---|---|
| **Model** | The model ID that generated the reply |
| **Think** | Thinking/reasoning level (`off`, `low`, `medium`, `high`, `xhigh`, `adaptive`) |
| **In / Out** | Input and output token counts for this turn |
| **Context** | Tokens currently in context vs. the model's context window limit |
| **Cache** | Cache-read token count and hit rate for this turn |
| **Compact** | Number of context compactions that have occurred in this session |

> **Cache** and **Compact** lines only appear when there is data to show.
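
The footer assembly can be pictured with a minimal sketch. The type and function names here (`TurnStats`, `formatTokens`, `buildFooter`) are illustrative assumptions, not the plugin's actual internals, and the hit rate is taken as a precomputed percentage rather than derived:

```typescript
// Hypothetical shape of one turn's stats; optional fields drive the
// conditional Cache/Compact lines described above.
interface TurnStats {
  model: string;
  think: string;
  input: number;
  output: number;
  contextUsed: number;
  contextLimit: number;
  cacheRead?: number;    // cache line is omitted when absent
  cacheHitRate?: number; // percentage, assumed precomputed by the plugin
  compactions?: number;  // compact line is omitted when zero/absent
}

// Simplified "12.3k"-style rendering of a raw token count.
function formatTokens(n: number): string {
  return `${(n / 1000).toFixed(1)}k`;
}

function buildFooter(s: TurnStats): string {
  const pct = ((s.contextUsed / s.contextLimit) * 100).toFixed(1);
  const lines = [
    "───────────────",
    `🤖 Model: ${s.model}`,
    `🧠 Think: ${s.think}`,
    `🔢 In: ${formatTokens(s.input)}   Out: ${formatTokens(s.output)}`,
    `📊 Context: ${formatTokens(s.contextUsed)} / ${formatTokens(s.contextLimit)} (${pct}%)`,
  ];
  if (s.cacheRead !== undefined && s.cacheHitRate !== undefined) {
    lines.push(`💾 Cache: ${formatTokens(s.cacheRead)} hit (${s.cacheHitRate.toFixed(1)}%)`);
  }
  if (s.compactions) {
    lines.push(`🔁 Compact: ${s.compactions}`);
  }
  return lines.join("\n");
}
```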

## Requirements

- OpenClaw `>= 1.0.0`
- Telegram channel with **streaming disabled** — the plugin hooks into `message_sending`, which is only triggered when streaming is off

## Installation

```bash
openclaw plugins install clawhub:claw-meta-footer
```

## Configuration

### 1. Disable streaming on Telegram

In your `openclaw.json`, add `"streaming": "off"` to your Telegram channel config:

```json
{
  "channels": {
    "telegram": {
      "streaming": "off"
    }
  }
}
```

Without this, the plugin won't fire — streamed messages bypass the `message_sending` hook entirely.

### 2. Plugin options (optional)

```json
{
  "plugins": {
    "claw-meta-footer": {
      "enabled": true,
      "skipSubagent": true
    }
  }
}
```

| Option | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | `true` | Toggle the footer on/off |
| `skipSubagent` | boolean | `true` | Hide footer on subagent replies (recommended — subagents can be noisy) |

## How It Works

1. **`llm_output` hook** — captures token usage (`input`, `output`, `cacheRead`, `cacheWrite`), model ID, and provider from each LLM response, keyed by channel + chat ID
2. **`message_sending` hook** — before the reply is sent, retrieves the cached stats, reads `thinkingLevel` and `compactionCount` from the session file, resolves the context window size, builds the footer, and appends it to the message content
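
The two-hook handoff can be sketched as follows. The function signatures and the `pending` map are assumptions for illustration; OpenClaw's actual plugin interface may differ:

```typescript
interface Usage { input: number; output: number; cacheRead: number; cacheWrite: number }
interface TurnMeta { usage: Usage; model: string; provider: string }

// Stats captured by the llm_output hook, keyed by "channel:chatId" so the
// later message_sending hook can find the matching turn.
const pending = new Map<string, TurnMeta>();

// Step 1: llm_output fires on each LLM response and stashes its metadata.
function onLlmOutput(channel: string, chatId: string, meta: TurnMeta): void {
  pending.set(`${channel}:${chatId}`, meta);
}

// Step 2: message_sending fires just before the reply goes out; it consumes
// the stashed stats and appends the footer to the message content.
function onMessageSending(channel: string, chatId: string, content: string): string {
  const key = `${channel}:${chatId}`;
  const meta = pending.get(key);
  if (!meta) return content; // nothing captured for this turn
  pending.delete(key);
  const footer = `───────────────\n🤖 Model: ${meta.model}`;
  return `${content}\n${footer}`;
}
```

This also shows why streaming must be off: if the reply never passes through `message_sending`, the stashed stats are never consumed and no footer is appended.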

Context window sizes are resolved via a priority chain:
1. User-configured `contextWindow` in `openclaw.json` models
2. Composite `provider/model` lookup (mirrors OpenClaw's internal overrides)
3. Plain model ID lookup
4. Prefix matching fallback
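
A minimal sketch of that four-step lookup, assuming simple in-memory tables (the table contents and names here are illustrative, not OpenClaw's real override data):

```typescript
const userConfig: Record<string, number> = {};   // "contextWindow" values from openclaw.json
const compositeTable: Record<string, number> = { // provider/model composites (assumed entries)
  "anthropic/claude-sonnet-4-6": 200_000,
};
const modelTable: Record<string, number> = {     // plain model IDs
  "claude-sonnet-4-6": 200_000,
};
const prefixTable: Array<[string, number]> = [   // last-resort prefix matches
  ["claude-", 200_000],
];

function resolveContextWindow(provider: string, model: string): number | undefined {
  // 1. user-configured override wins
  if (userConfig[model] !== undefined) return userConfig[model];
  // 2. composite provider/model lookup
  const composite = compositeTable[`${provider}/${model}`];
  if (composite !== undefined) return composite;
  // 3. plain model ID lookup
  if (modelTable[model] !== undefined) return modelTable[model];
  // 4. prefix matching fallback
  const hit = prefixTable.find(([prefix]) => model.startsWith(prefix));
  return hit?.[1];
}
```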

## License

MIT
