# openclaw-plugin-proxy-bridge
OpenClaw plugin that routes LLM requests through a remote OpenAI-compatible
proxy server that authenticates via an `X-Proxy-Token` header.
## Problem
Many OpenAI-compatible proxy servers (self-hosted on VPS) use a custom
`X-Proxy-Token` header for authentication. OpenClaw's built-in model provider
sends credentials via the standard `Authorization: Bearer` header. This
mismatch causes `401 Unauthorized` errors.
## Solution
This plugin starts a lightweight local HTTP bridge (`127.0.0.1:18800`) that:
1. Accepts requests from OpenClaw with `Authorization: Bearer` header
2. Strips the Bearer header and adds `X-Proxy-Token` with the configured token
3. Forwards the request (including SSE streaming) to the remote proxy
4. Pipes the response back to OpenClaw
```
OpenClaw ──► localhost:18800 ──► your-vps:8089 ──► api.openai.com
             (bridge plugin)     (your proxy)
             Bearer → X-Proxy-Token
```
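Conceptually, the rewrite in steps 1–2 amounts to a small header transform. A sketch in Python (illustrative only, not the plugin's actual code; header names are taken from the description above):

```python
def rewrite_headers(incoming: dict, proxy_token: str) -> dict:
    """Replace OpenClaw's Bearer credential with the remote proxy's token."""
    # Copy everything except the Authorization header (case-insensitive).
    outgoing = {k: v for k, v in incoming.items() if k.lower() != "authorization"}
    # Add the credential the remote proxy actually checks.
    outgoing["X-Proxy-Token"] = proxy_token
    return outgoing
```

Everything else (body, method, streaming) passes through unchanged.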
## Quick start
### Install
```bash
openclaw plugins install /path/to/openclaw-proxy-bridge
openclaw plugins enable proxy-bridge
```
### Configure (interactive)
```bash
openclaw models auth login --provider proxy-bridge
```
The wizard will ask for:
- **Remote proxy URL** — e.g. `http://203.0.113.42:8089`
- **Proxy token** — your `X-Proxy-Token` value
- **Model IDs** — comma-separated list of models available through the proxy
### Configure (manual)
Edit `~/.openclaw/openclaw.json`:
```json5
{
  plugins: {
    entries: {
      "proxy-bridge": {
        enabled: true,
        config: {
          remoteUrl: "http://203.0.113.42:8089",
          proxyToken: "your-secret-token",
          bridgePort: 18800, // optional, default 18800
          models: [
            { "id": "gpt-4o", "name": "GPT-4o" },
            { "id": "gpt-4o-mini", "name": "GPT-4o Mini" }
          ]
        }
      }
    }
  },
  models: {
    mode: "merge",
    providers: {
      "proxy-bridge": {
        baseUrl: "http://127.0.0.1:18800/v1",
        apiKey: "bridge",
        api: "openai-completions",
        models: [
          { "id": "gpt-4o", "name": "GPT-4o" },
          { "id": "gpt-4o-mini", "name": "GPT-4o Mini" }
        ]
      }
    }
  },
  agents: {
    defaults: {
      model: { primary: "proxy-bridge/gpt-4o" }
    }
  }
}
```
### Set default model
```bash
openclaw models set proxy-bridge/gpt-4o
```
### Restart gateway
```bash
openclaw gateway restart
```
### Verify
```bash
openclaw proxy-bridge status
```
## Remote proxy server
This plugin includes a ready-to-deploy proxy server in the `server/` directory.
See [`server/README.md`](./server/README.md) for full deployment instructions.
Quick deploy:
```bash
cd server/
export OPENAI_API_KEY="sk-proj-..."
export OPENAI_PROXY_TOKEN="your-secret-token"
./deploy.sh root@your-vps 8089
```
The server can also run via Docker:
```bash
cd server/
cp .env.example .env # fill in your keys
docker compose up -d
```
### Remote proxy requirements
The remote proxy must implement:
- `GET /health` — health check (optional but recommended)
- `POST /v1/chat/completions` — OpenAI-compatible chat completions endpoint
- Authentication via `X-Proxy-Token` header (or `Authorization: Bearer`)
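On the server side, the dual-header authentication described above could look like this (a sketch assuming a plain header dict; the bundled server's actual implementation may differ):

```python
def authorized(headers: dict, expected_token: str) -> bool:
    """Accept either X-Proxy-Token or a standard Bearer credential."""
    lower = {k.lower(): v for k, v in headers.items()}
    if lower.get("x-proxy-token") == expected_token:
        return True
    return lower.get("authorization") == f"Bearer {expected_token}"
```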
## Configuration reference
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `remoteUrl` | string | yes | — | Remote proxy URL without `/v1` |
| `proxyToken` | string | yes | — | Token for `X-Proxy-Token` header |
| `bridgePort` | number | no | `18800` | Local bridge listen port |
| `models` | array | no | `[gpt-4o, gpt-4o-mini]` | Available model IDs |
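The required/optional split in the table can be expressed as a simple merge over defaults (a hypothetical validation helper, mirroring the columns above; not the plugin's actual loader):

```python
DEFAULTS = {
    "bridgePort": 18800,
    "models": [
        {"id": "gpt-4o", "name": "GPT-4o"},
        {"id": "gpt-4o-mini", "name": "GPT-4o Mini"},
    ],
}

def load_config(user: dict) -> dict:
    """Merge user config over defaults; remoteUrl and proxyToken are required."""
    cfg = {**DEFAULTS, **user}
    missing = [k for k in ("remoteUrl", "proxyToken") if k not in cfg]
    if missing:
        raise ValueError(f"missing required config: {', '.join(missing)}")
    return cfg
```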
## Environment variables
| Variable | Description |
|---|---|
| `PROXY_BRIDGE_URL` | Default remote proxy URL (used in interactive setup) |
| `PROXY_BRIDGE_TOKEN` | Default proxy token (used in interactive setup) |
## How it works
The plugin registers three components:
1. **Service** (`proxy-bridge`) — an HTTP server on `127.0.0.1:{bridgePort}`
that runs as a background service inside the Gateway process. No separate
process needed.
2. **Provider** (`proxy-bridge`) — registers in the `openclaw models auth`
flow so users can configure the proxy interactively via the CLI wizard.
3. **CLI command** (`openclaw proxy-bridge status`) — quick health check for
both the local bridge and the remote proxy.
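The status command's result can be thought of as combining two probes, one per endpoint (purely illustrative; the real command's output format is not specified here):

```python
def status_report(bridge_up: bool, remote_up: bool) -> dict:
    """Summarize the local-bridge and remote-proxy health checks into one report."""
    return {
        "bridge": "ok" if bridge_up else "down",
        "remote": "ok" if remote_up else "unreachable",
        "healthy": bridge_up and remote_up,
    }
```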
## Troubleshooting
**Bridge not responding:**
- Check that the plugin is enabled: `openclaw plugins list`
- Restart gateway: `openclaw gateway restart`
- Check logs: `openclaw logs --follow`
**401 from remote proxy:**
- Verify `proxyToken` matches the `OPENAI_PROXY_TOKEN` env var on the server
**Model not found (404):**
- The model ID must exist on OpenAI. Check available models on your proxy.
**Slow responses:**
- First request may take 5–15s due to cold start on the remote proxy.
- The bridge adds <1ms latency (localhost only).
## License
MIT