Localbot Ctl
An OpenClaw plugin for controlling secondary, locally running agents (LocalBots).
README
# localbot-ctl
OpenClaw plugin for controlling local LLM inference via `/lb*` chat commands.
## Commands
| Command | Description |
|---------|-------------|
| `/lbh` | Help – show available commands |
| `/lbs` | Status – active endpoint, model, session tokens |
| `/lbm` | Models – list available models with specs |
| `/lbm <alias>` | Switch model on llama-cpp server |
| `/lbn` | New session – reset LocalBot context |
| `/lbe` | Endpoints – show all inference backends |
| `/lbp` | Performance – benchmark active endpoint |
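For example, switching models mid-conversation might look like this (the `qwen-7b` alias is illustrative; real aliases come from `config/localbot-models.json`, and the `#` annotations are explanatory, not part of the command):

```
/lbe          # list configured inference backends
/lbm          # list models available on the active endpoint
/lbm qwen-7b  # switch the llama-cpp server to that model
/lbn          # reset the LocalBot context so the new model starts clean
```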
## Installation
1. Clone to your workspace plugins directory
2. Add to OpenClaw config:
```json
{
  "plugins": {
    "load": {
      "paths": ["/path/to/localbot-ctl"]
    },
    "entries": {
      "localbot-ctl": { "enabled": true }
    }
  }
}
```
3. Restart the gateway
## Configuration
The plugin reads from:
- `config/inference-endpoints.json` – Endpoint definitions
- `config/localbot-models.json` – Model metadata (speeds, context, aliases)
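Neither schema is documented in this README, so the sketches below are assumptions rather than the plugin's actual format (flagged in-band via a `_note` field, since JSON has no comments). An endpoint definition might look like:

```json
{
  "_note": "illustrative sketch; field names are assumptions",
  "endpoints": [
    {
      "name": "llama-cpp-local",
      "type": "llama-cpp",
      "url": "http://127.0.0.1:8080",
      "default": true
    }
  ]
}
```

and a model entry would carry the metadata that `/lbm` displays (alias, context, speed):

```json
{
  "_note": "illustrative sketch; field names are assumptions",
  "models": [
    {
      "alias": "qwen-7b",
      "path": "/models/qwen2-7b-instruct-q4_k_m.gguf",
      "context": 8192,
      "tokens_per_sec": 35
    }
  ]
}
```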
## Requirements
- llama-cpp server (or vLLM/Ollama) running
- LocalBot agents configured in OpenClaw
- Matrix rooms with LocalBot access
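If no backend is running yet, llama.cpp's `llama-server` is the simplest way to start one (exact flags vary by llama.cpp version; the model path and port below are placeholders that should match whatever `config/inference-endpoints.json` points at):

```
llama-server -m /models/your-model.gguf --host 127.0.0.1 --port 8080 -c 4096
```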
## License
MIT
## Changelog
See [CHANGELOG.md](./CHANGELOG.md)