
SubStation

By childbornindigo

OpenClaw plugin — route LLM requests through Claude Max and ChatGPT Pro subscriptions at zero API cost

GitHub


README

# SubStation

OpenClaw plugin that routes LLM requests through your existing subscriptions — Claude Max and ChatGPT Pro/Plus — at zero API cost.

## What it does

SubStation sits between OpenClaw and the LLM providers, proxying requests through your subscription accounts instead of paid API keys:

- **Claude** (Opus, Sonnet, Haiku) — Routes through Claude Code's Agent SDK using your Claude Max OAuth tokens. Requests come from a real Claude Code process, indistinguishable from normal usage.
- **ChatGPT** (GPT-5.4, GPT-5.1 Codex) — Routes through the Codex API using your ChatGPT Pro/Plus OAuth tokens.
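The split above amounts to routing on the model id. A minimal sketch of that dispatch (function and backend names here are illustrative, not SubStation's actual internals):

```javascript
// Hypothetical sketch: map a requested model id to a backend.
// Claude ids (opus-*, sonnet-*, haiku-*) take the Agent SDK path;
// gpt-* ids take the Codex API path.
function pickBackend(modelId) {
  if (/^(opus|sonnet|haiku)-/.test(modelId)) return "agent-sdk";
  if (/^gpt-/.test(modelId)) return "codex";
  throw new Error(`unknown model: ${modelId}`);
}
```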

## Features

- **Multi-account credential pool** — Add multiple OAuth tokens per provider; SubStation rotates between them (LRU) to distribute usage and handle rate limits
- **Persistent sessions** — Claude sessions stay warm after first request, eliminating cold start overhead
- **Real-time streaming** — Token-by-token streaming from both providers piped directly to OpenClaw
- **Auto-rotation on rate limits** — 429s trigger automatic failover to the next token in the pool
- **Token refresh** — ChatGPT OAuth tokens auto-refresh when expired
- **Hardened** — Global error handlers, log rotation, port conflict auto-resolution, request size limits
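The LRU rotation and 429 failover described above can be sketched roughly like this (a simplified illustration, not SubStation's actual pool code):

```javascript
// Hypothetical sketch: least-recently-used token rotation with
// automatic failover to the next token when a request is rate limited.
class TokenPool {
  constructor(tokens) {
    this.pool = tokens.map((value) => ({ value, lastUsed: 0 }));
  }

  // Pick the token that has been idle longest (LRU).
  next() {
    const entry = this.pool.reduce((a, b) => (a.lastUsed <= b.lastUsed ? a : b));
    entry.lastUsed = Date.now();
    return entry.value;
  }

  // Try each token at most once; a 429 from `send` rotates to the next.
  async request(send) {
    let lastErr;
    for (let i = 0; i < this.pool.length; i++) {
      const token = this.next();
      try {
        return await send(token);
      } catch (err) {
        if (err.status !== 429) throw err; // only rate limits trigger failover
        lastErr = err;
      }
    }
    throw lastErr; // every token in the pool was rate limited
  }
}
```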

## Setup

### 1. Install the plugin

```bash
# Copy plugin to OpenClaw extensions
mkdir -p ~/.openclaw/extensions/substation/dist
cp src/index.js ~/.openclaw/extensions/substation/dist/index.js
cp openclaw.plugin.json ~/.openclaw/extensions/substation/
cp package.json ~/.openclaw/extensions/substation/

# Install Agent SDK dependency
cd ~/.openclaw/extensions/substation
npm install @anthropic-ai/claude-agent-sdk
```

### 2. Add Claude Max tokens

SubStation reads Anthropic OAuth tokens from OpenClaw's auth-profiles:

```
~/.openclaw/agents/main/agent/auth-profiles.json
```

Add one or more `anthropic:*` profiles with `"type": "token"` and your OAuth token. SubStation auto-detects and rotates between all available tokens.
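The exact schema of `auth-profiles.json` isn't documented here; based on the fields named above, a profile entry presumably looks something like this (the `profiles` wrapper key and profile name are assumptions, and the token value is a placeholder):

```json
{
  "profiles": {
    "anthropic:default": {
      "type": "token",
      "token": "<oauth-token>"
    }
  }
}
```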

### 3. Add ChatGPT tokens (optional)

Run the included auth script:

```bash
node scripts/substation-auth chatgpt
```

This opens your browser for the OAuth PKCE flow. Tokens are saved to `~/.substation/token-pool.json`. Run multiple times for multiple accounts.

Check token status:

```bash
node scripts/substation-auth status
```

### 4. Configure OpenClaw

Add SubStation as a provider in `~/.openclaw/openclaw.json`:

```json
{
  "models": {
    "providers": {
      "indigo": {
        "baseUrl": "http://127.0.0.1:8403/v1",
        "apiKey": "substation-local",
        "api": "openai-completions",
        "models": [
          { "id": "opus-4-6", "name": "Claude Opus 4.6 (SubStation)", "contextWindow": 200000, "maxTokens": 128000 },
          { "id": "sonnet-4-6", "name": "Claude Sonnet 4.6 (SubStation)", "contextWindow": 200000, "maxTokens": 64000 },
          { "id": "haiku-4-5", "name": "Claude Haiku 4.5 (SubStation)", "contextWindow": 200000, "maxTokens": 64000 },
          { "id": "gpt-5.4", "name": "GPT 5.4 (SubStation)", "contextWindow": 200000, "maxTokens": 128000 },
          { "id": "gpt-5.4-mini", "name": "GPT 5.4 Mini (SubStation)", "contextWindow": 200000, "maxTokens": 64000 }
        ]
      }
    }
  }
}
```

Add models to the allowlist under `agents.defaults.models`:

```json
{
  "indigo/opus-4-6": { "alias": "indigo-opus" },
  "indigo/sonnet-4-6": { "alias": "indigo-sonnet" },
  "indigo/haiku-4-5": { "alias": "indigo-haiku" },
  "indigo/gpt-5.4": { "alias": "indigo-gpt54" },
  "indigo/gpt-5.4-mini": { "alias": "indigo-gpt54-mini" }
}
```

### 5. Restart OpenClaw

The models appear in the `/model` picker. Select any `indigo/*` model to route through SubStation.

## Architecture

```
OpenClaw
  |
  v
SubStation Proxy (:8403)
  |
  +-- Claude models --> Agent SDK --> Anthropic API
  |                     (real CC binary, persistent sessions)
  |
  +-- GPT models -----> Codex API --> chatgpt.com
                         (direct HTTP, SSE streaming)
```

## Endpoints

| Endpoint | Description |
|----------|-------------|
| `POST /v1/chat/completions` | Main proxy — OpenAI-compatible chat completions |
| `GET /v1/models` | List available models |
| `GET /health` | Pool status, version, token counts |
| `GET /pool` | Detailed per-token status |
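Since `/v1/chat/completions` is OpenAI-compatible, streamed responses arrive as SSE `data:` lines carrying JSON chunks with content deltas, ending with `data: [DONE]`. A minimal client-side parser sketch (illustrative, not part of SubStation):

```javascript
// Extract the concatenated content deltas from an OpenAI-style SSE payload.
function parseSse(payload) {
  const tokens = [];
  for (const line of payload.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const data = line.slice("data: ".length).trim();
    if (data === "[DONE]") break; // end-of-stream sentinel
    const delta = JSON.parse(data).choices?.[0]?.delta?.content;
    if (delta) tokens.push(delta);
  }
  return tokens.join("");
}
```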

## Models

### Claude (via Agent SDK)
| Model ID | Name |
|----------|------|
| `opus-4-6` | Claude Opus 4.6 |
| `sonnet-4-6` | Claude Sonnet 4.6 |
| `haiku-4-5` | Claude Haiku 4.5 |

### ChatGPT (via Codex API)
| Model ID | Name |
|----------|------|
| `gpt-5.4` | GPT 5.4 |
| `gpt-5.4-mini` | GPT 5.4 Mini |
| `gpt-5.1-codex` | GPT 5.1 Codex |
| `gpt-5.1-codex-mini` | GPT 5.1 Codex Mini |
| `gpt-5.1-codex-max` | GPT 5.1 Codex Max |

## Token sources

SubStation loads tokens from multiple sources at startup (deduped by value):

1. **`~/.substation/token-pool.json`** — Explicit pool (ChatGPT tokens from auth script)
2. **`auth-profiles.json`** — All `anthropic:*` profiles with a `.token` field
3. **`SUBSTATION_OAUTH_TOKENS`** env var — Comma-separated tokens (assumed Anthropic)
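Merging the three sources with value-level dedup reduces to keeping the first occurrence of each distinct token. A sketch (function name is illustrative):

```javascript
// Merge token lists from several sources in priority order,
// keeping the first occurrence of each distinct token value.
function mergeTokenSources(...sources) {
  const seen = new Set();
  const merged = [];
  for (const tokens of sources) {
    for (const token of tokens) {
      if (seen.has(token)) continue; // later duplicates are dropped
      seen.add(token);
      merged.push(token);
    }
  }
  return merged;
}
```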

## Privacy

- SubStation **never stores or logs message content**
- Requests are proxied through and discarded
- Token IDs (not values) appear in logs for debugging
- No telemetry, no analytics, no external calls beyond the LLM APIs
- All data stays on your machine

## License

MIT
