
PoggioAI_MSc Openclaw

By PoggioAI

OpenClaw plugin for PoggioAI/MSc


Install

npm install

Configuration Example

{
  "plugins": {
    "pai-msc-openclaw": {
      "enabled": true,
      "config": {
        "consortiumPath": "",
        "condaEnvName": "poggioai-msc",
        "defaultPreset": "max-quality",
        "defaultMode": "local",
        "defaultModel": "claude-opus-4-6",
        "defaultBudgetUsd": 300,
        "progressPollIntervalMs": 15000,
        "steeringBasePort": 5001,
        "uploadTimeoutMs": 60000
      }
    }
  }
}
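The configuration above maps onto the `PluginConfig` interface and `DEFAULT_CONFIG` that the repository lists under `src/types/config.ts`. A minimal sketch of that shape, assuming the defaults simply mirror the example values (the field comments are interpretations, not documented semantics):

```typescript
// Sketch of the plugin config shape. Field names and values come from the
// example above; the comments and union hints are assumptions.
interface PluginConfig {
  consortiumPath: string;        // "" presumably means auto-install location
  condaEnvName: string;          // conda environment for the Python backend
  defaultPreset: string;         // e.g. "max-quality"
  defaultMode: string;           // e.g. "local"
  defaultModel: string;
  defaultBudgetUsd: number;      // hard budget cap per run
  progressPollIntervalMs: number;
  steeringBasePort: number;      // base port for the live steering API
  uploadTimeoutMs: number;
}

const DEFAULT_CONFIG: PluginConfig = {
  consortiumPath: "",
  condaEnvName: "poggioai-msc",
  defaultPreset: "max-quality",
  defaultMode: "local",
  defaultModel: "claude-opus-4-6",
  defaultBudgetUsd: 300,
  progressPollIntervalMs: 15000,
  steeringBasePort: 5001,
  uploadTimeoutMs: 60000,
};
```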

README

# pAI/MSc-openclaw

Native OpenClaw plugin for the [pAI/MSc](https://github.com/PoggioAI/PoggioAI_MSc) autonomous research pipeline. It turns a research hypothesis into a conference-grade manuscript with a single command: zero config, zero human steering required.

```
================================================================
  pAI/MSc-openclaw — Autonomous Research Pipeline
================================================================

  Thank you from the PoggioAI Team for using this tool!

  Contact us:
    Discord: https://discord.gg/Pz7spPPY
    Email:   [email protected]

  Please acknowledge PoggioAI in your papers and cite our
  technical report if you use this tool:
    https://poggioai.github.io/papers/poggioai-msc-v0.pdf

================================================================
```

---

## Quick Start

```
/pai-msc "Investigate whether batch normalization implicitly regularizes the spectral norm of weight matrices in shallow ReLU networks"
```

That's it. The plugin:
1. Auto-installs the pAI/MSc Python backend on first use
2. Passes your existing OpenClaw API keys (no separate `.env` needed)
3. Creates an isolated run workspace with all inputs in `initial_context/`
4. Prompts you for reference files (papers, datasets) via Telegram/interface
5. Injects 25 backtested quality prompts + a 647-line author style guide
6. Runs the full 22-agent pipeline with quality-maximizing defaults
7. Streams progress updates to your chat as stages complete
8. Delivers the finished paper (PDF or markdown) back to you
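The same flow is also exposed to agents through the tools listed under `src/tools/` (`pai-msc.runPipeline`, `pai-msc.getResults`, and friends). A hedged sketch of programmatic use; the parameter names, return shapes, and the stub bodies below are illustrative assumptions, not the plugin's actual signatures:

```typescript
// Hypothetical sketch of the agent-tool surface. The tool names come from
// the repository layout; parameters, return shapes, and the stub bodies
// are assumptions made so the sketch runs standalone. In OpenClaw these
// calls would be routed to the plugin's registered tools instead.
interface RunHandle {
  runId: string;
  workspace: string;
}

interface PipelineTools {
  runPipeline(opts: {
    hypothesis: string;
    preset?: string;
    budgetUsd?: number;
  }): Promise<RunHandle>;
  getResults(handle: RunHandle): Promise<{ stage: string; paperPath?: string }>;
}

const tools: PipelineTools = {
  async runPipeline(_opts) {
    // Stub: the real tool spawns the Python backend and returns a handle.
    return { runId: "run-001", workspace: "~/.openclaw/poggioai-msc/runs/run-001" };
  },
  async getResults(_handle) {
    // Stub: the real tool reads run_summary.json from the workspace.
    return { stage: "persona-debate" };
  },
};

async function main(): Promise<void> {
  const handle = await tools.runPipeline({
    hypothesis: "Does batch normalization implicitly bound spectral norms?",
    preset: "max-quality",
    budgetUsd: 300,
  });
  const { stage } = await tools.getResults(handle);
  console.log(`run ${handle.runId} is at stage: ${stage}`);
}

main();
```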

---

## What's In This Repository

```
├── openclaw.plugin.json              # Plugin manifest — name, version, config schema
├── package.json                      # TypeScript package (build with npm run build)
├── tsconfig.json                     # TypeScript compiler config
├── README.md                         # This file
│
├── src/                              # TypeScript source (3,000+ lines)
│   ├── index.ts                      # Entry point: definePluginEntry() — registers
│   │                                 #   4 commands, 4 tools, 1 background service,
│   │                                 #   and a shutdown hook
│   │
│   ├── commands/                     # User-facing slash commands
│   │   ├── research.ts               # /pai-msc "hypothesis" [flags] — the main entry
│   │   ├── pai-msc-status.ts         # /pai-msc-status — show current stage + budget
│   │   ├── pai-msc-stop.ts           # /pai-msc-stop — kill a running pipeline
│   │   └── pai-msc-list.ts           # /pai-msc-list — list all runs in this session
│   │
│   ├── tools/                        # Agent-callable tools (programmatic access)
│   │   ├── run-pipeline.ts           # pai-msc.runPipeline — start a run
│   │   ├── steer-pipeline.ts         # pai-msc.steerPipeline — inject instructions
│   │   ├── get-results.ts            # pai-msc.getResults — retrieve status + paper
│   │   └── approve-milestone.ts      # pai-msc.approveMilestone — gate responses
│   │
│   ├── services/                     # Core services
│   │   ├── workspace-manager.ts      # Creates per-run workspace with initial_context/,
│   │   │                             #   logs/, uploads/ directories. All run data is
│   │   │                             #   isolated under ~/.openclaw/poggioai-msc/runs/
│   │   ├── upload-handler.ts         # Prompts user for reference files via Telegram/
│   │   │                             #   interface with 3-strategy fallback. Saves to
│   │   │                             #   initial_context/uploads/
│   │   ├── installer.ts              # Auto-install: clone repo → conda env → pip →
│   │   │                             #   patch prompts → preflight check → sentinel
│   │   ├── process-manager.ts        # Spawn Python subprocess with workspace-aware
│   │   │                             #   env. Logs to logs/stdout.log + stderr.log
│   │   ├── progress-poller.ts        # Background service: polls every 15s for stage
│   │   │                             #   changes, budget thresholds, completion/failure.
│   │   │                             #   Also handles the narrative voice hook and
│   │   │                             #   review score escalation.
│   │   └── quality-injector.ts       # Copies backtested prompts + style guide into
│   │                                 #   initial_context/ and paper_workspace/
│   │
│   ├── bridge/                       # Integration layer between plugin and Python backend
│   │   ├── env-passthrough.ts        # Reads API keys from OpenClaw env → writes .env
│   │   ├── config-writer.ts          # Generates .llm_config.yaml from preset + flags
│   │   ├── steering-client.ts        # HTTP client for live steering API
│   │   │                             #   (POST /interrupt, /instruction, GET /status)
│   │   └── result-reader.ts          # Reads run_summary.json, budget_state.json,
│   │                                 #   review_verdict.json, finds paper file
│   │
│   ├── defaults/                     # Configuration defaults
│   │   ├── quality-presets.ts        # QUALITY_MAX and QUALITY_FAST presets with all
│   │   │                             #   CLI flag values pre-configured
│   │   └── stage-names.ts            # 24 pipeline stage constants + human-readable
│   │                                 #   display names for progress messages
│   │
│   └── types/                        # TypeScript type definitions
│       ├── openclaw-api.ts           # OpenClawApi interface + runtime guards
│       ├── pipeline.ts               # RunHandle, PipelineOptions, StageEvent, RunSummary
│       ├── config.ts                 # PluginConfig interface + DEFAULT_CONFIG
│       ├── budget.ts                 # BudgetState, BudgetEntry, BUDGET_THRESHOLDS
│       └── steering.ts               # SteeringInstruction, SteeringStatus, ReviewVerdict
│
├── assets/                           # Quality artifacts (ported from backtested Claude skill)
│   │
│   ├── author_style_guide_default.md # 647-line ML theory writing standard
│   │                                 # Contains: non-negotiable principles, anti-patterns
│   │                                 # (paper/section/sentence/epistemic), abstract rules
│   │                                 # (120-180 words, no theorem refs), related-work rules
│   │                                 # (organize by ideas not authors), concrete lints,
│   │                                 # epistemic lints, deletion pass, self-audit checklist,
│   │                                 # case studies with diagnosis + fix
│   │
│   ├── state_template.json           # Pipeline state machine template (32 fields)
│   │
│   └── prompts/                      # 25 backtested agent prompts
│       ├── 01-persona-practical.md   # Practical Compass persona
│       ├── 02-persona-rigor.md       # Rigor & Novelty persona
│       ├── 03-persona-narrative.md   # Narrative Architect persona
│       ├── 04-persona-synthesis.md   # Synthesis coordinator (min 3 debate rounds)
│       ├── 05-literature-review.md   # Adversarial novelty falsification
│       ├── 06-brainstorm.md          # 3-phase: divergent → convergent → dependency
│       ├── 07-formalize-goals.md     # Goal formalization + track decomposition
│       ├── 08-math-literature.md     # Theory-specific literature search
│       ├── 09-math-proposer.md       # Claim graph construction
│       ├── 10-math-prover.md         # Proof construction with technique library
│       ├── 11-math-verifier.md       # Adversarial proof auditor + numerical checks
│       ├── 12-experiment-design.md   # Experiment design with anti-hallucination
│       ├── 13-experimentation.md     # Experiment execution
│       ├── 14-experiment-verify.md   # Cross-seed stability, verdict annotation
│       ├── 15-formalize-results.md   # Conservative results synthesis
│       ├── 16-duality-check.md       # Dual-lens: actionability + soundness (>= 6/10)
│       ├── 17-resource-prep.md       # Figures, tables, bibliography
│       ├── 18-writeup.md             # 260-line writeup: 12 passes, 2 full edit cycles
│       ├── 19-proofreading.md        # AI-voice detection checklist (9 categories)
│       ├── 20-reviewer.md            # Hard blockers B1-B5, AI voice risk assessment
│       ├── 21-research-plan-writeup.md
│       ├── 22-track-merge.md         # Theory-experiment unified summary
│       ├── 23-verify-completion.md   # 3-way routing: COMPLETE/INCOMPLETE/RETHINK
│       ├── 24-followup-lit-review.md # Gap-specific targeted follow-up
│       └── 25-narrative-voice.md     # Pre-writeup tone/voice guidance
│
├── scripts/
│   ├── install-consortium.sh         # Manual installer (normally auto-runs)
│   └── check-prereqs.sh              # Verify conda, python, pdflatex, API keys
│
└── examples/
    ├── quickstart-task.txt           # Example research hypothesis
    └── custom-style-guide-example.md # How to write a custom style guide
```
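The live-steering endpoints that `bridge/steering-client.ts` targets (`POST /interrupt`, `POST /instruction`, `GET /status`) can be sketched as a small HTTP client. The port-per-run scheme (`steeringBasePort + runIndex`) and the JSON payload field are assumptions here, not the plugin's documented wire format:

```typescript
// Minimal steering-client sketch. Endpoint paths come from the repository
// layout above; the port arithmetic and payload shape are assumptions.
function steeringUrl(basePort: number, runIndex: number, path: string): string {
  // Assumed scheme: each concurrent run listens on basePort + runIndex.
  return `http://127.0.0.1:${basePort + runIndex}${path}`;
}

// Inject a mid-run instruction (POST /instruction).
async function sendInstruction(
  basePort: number,
  runIndex: number,
  text: string,
): Promise<void> {
  await fetch(steeringUrl(basePort, runIndex, "/instruction"), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ instruction: text }),
  });
}

// Fetch the current pipeline status (GET /status).
async function getStatus(basePort: number, runIndex: number): Promise<unknown> {
  const res = await fetch(steeringUrl(basePort, runIndex, "/status"));
  return res.json();
}
```

With the default config, the first run's status endpoint would resolve to `http://127.0.0.1:5001/status` under this assumed scheme.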

---

## How It Works

### The Pipeline

When you run `/pai-msc "hypothesis"`, the plugin orchestrates a 22-agent pipeline:

```
/pai-msc "hypothesis"
  │
  │ ── Plugin Layer ──────────────────────────────────────────────
  │
  ├─ 1. Auto-install pAI/MSc Python backend (first time only)
  ├─ 2. Create isolated run workspace under ~/.openclaw/poggioai-msc/runs/
  ├─ 3. Copy prompts + style guide → initial_context/
  ├─ 4. Write task.txt + pipeline_options.json → initial_context/
  ├─ 5. Prompt user for reference files → initial_context/uploads/
  ├─ 6. Write .env + .llm_config.yaml (per-run, not shared)
  ├─ 7. Spawn: python launch_multiagent.py --resume {workspace} [flags]
  │
  │ ── pAI/MSc Pipeline (Python/LangGraph) ──────────────────────
  │
  ├─ Phase 1: Persona Debate (3-5 rounds)
  │     3 personas (Practical, Rigor, Narrative) debate in escalating
  │     rounds. Each round must be HARDER than the last. Minimum 3
  │     rounds; extends to 5 if any persona still rejects. Synthesis
  │     produce
```

... (truncated)