PoggioAI_MSc OpenClaw
Plugin for OpenClaw from PoggioAI/MSc
Install

```
npm install
```
Configuration Example

```json
{
  "plugins": {
    "pai-msc-openclaw": {
      "enabled": true,
      "config": {
        "consortiumPath": "",
        "condaEnvName": "poggioai-msc",
        "defaultPreset": "max-quality",
        "defaultMode": "local",
        "defaultModel": "claude-opus-4-6",
        "defaultBudgetUsd": 300,
        "progressPollIntervalMs": 15000,
        "steeringBasePort": 5001,
        "uploadTimeoutMs": 60000
      }
    }
  }
}
```
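For reference, the config block above can be sketched as a TypeScript interface. This is an inference from the example, not the plugin's actual code; the real `PluginConfig` and `DEFAULT_CONFIG` live in `src/types/config.ts` and may differ:

```typescript
// Hypothetical sketch of the plugin config shape, inferred from the
// example above. The authoritative definition is src/types/config.ts.
interface PluginConfig {
  consortiumPath: string;         // "" = auto-install the backend
  condaEnvName: string;           // conda environment for the Python backend
  defaultPreset: string;          // e.g. "max-quality"
  defaultMode: string;            // e.g. "local"
  defaultModel: string;
  defaultBudgetUsd: number;       // hard spend cap per run
  progressPollIntervalMs: number; // how often the poller checks for updates
  steeringBasePort: number;       // base port for the live steering API
  uploadTimeoutMs: number;        // how long to wait for reference uploads
}

const DEFAULT_CONFIG: PluginConfig = {
  consortiumPath: "",
  condaEnvName: "poggioai-msc",
  defaultPreset: "max-quality",
  defaultMode: "local",
  defaultModel: "claude-opus-4-6",
  defaultBudgetUsd: 300,
  progressPollIntervalMs: 15000,
  steeringBasePort: 5001,
  uploadTimeoutMs: 60000,
};
```

Any field you omit in `openclaw.plugin.json` presumably falls back to these defaults.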
README
# pAI/MSc-openclaw
Native OpenClaw plugin for the [pAI/MSc](https://github.com/PoggioAI/PoggioAI_MSc) autonomous research pipeline. Transforms a research hypothesis into a conference-grade manuscript with a single command: zero config, zero human steering required.
```
================================================================
pAI/MSc-openclaw – Autonomous Research Pipeline
================================================================
Thank you from the PoggioAI team for using this tool!
Contact us:
Discord: https://discord.gg/Pz7spPPY
Email: [email protected]
Please acknowledge PoggioAI in your papers and cite our
technical report if you use this tool:
https://poggioai.github.io/papers/poggioai-msc-v0.pdf
================================================================
```
---
## Quick Start
```
/pai-msc "Investigate whether batch normalization implicitly regularizes the spectral norm of weight matrices in shallow ReLU networks"
```
That's it. The plugin:
1. Auto-installs the pAI/MSc Python backend on first use
2. Passes your existing OpenClaw API keys (no separate `.env` needed)
3. Creates an isolated run workspace with all inputs in `initial_context/`
4. Prompts you for reference files (papers, datasets) via Telegram/interface
5. Injects 25 backtested quality prompts + a 647-line author style guide
6. Runs the full 22-agent pipeline with quality-maximizing defaults
7. Streams progress updates to your chat as stages complete
8. Delivers the finished paper (PDF or markdown) back to you
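Agents can also start a run programmatically through the `pai-msc.runPipeline` tool listed below. A minimal sketch of what such a call might look like, where the option names (`hypothesis`, `preset`, `budgetUsd`) and the stand-in `runPipeline` dispatcher are assumptions for illustration; the real signature lives in `src/tools/run-pipeline.ts` and `src/types/pipeline.ts`:

```typescript
// Hypothetical option shape; see src/types/pipeline.ts for the real one.
interface PipelineOptions {
  hypothesis: string;
  preset?: string;    // assumed to default to "max-quality"
  budgetUsd?: number; // hard spend cap, default 300
}

// Stand-in for OpenClaw's tool-call mechanism. The real plugin dispatches
// to the registered pai-msc.runPipeline tool; here we echo a deterministic
// handle purely for illustration.
async function runPipeline(opts: PipelineOptions): Promise<{ runId: string }> {
  return { runId: `run-${opts.preset ?? "max-quality"}` };
}

async function main() {
  const handle = await runPipeline({
    hypothesis:
      "Investigate whether batch normalization implicitly regularizes " +
      "the spectral norm of weight matrices in shallow ReLU networks",
    budgetUsd: 300,
  });
  console.log(handle.runId);
}

main();
```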
---
## What's In This Repository
```
├── openclaw.plugin.json           # Plugin manifest: name, version, config schema
├── package.json                   # TypeScript package (build with npm run build)
├── tsconfig.json                  # TypeScript compiler config
├── README.md                      # This file
│
├── src/                           # TypeScript source (3,000+ lines)
│   ├── index.ts                   # Entry point: definePluginEntry() registers
│   │                              #   4 commands, 4 tools, 1 background service,
│   │                              #   and a shutdown hook
│   │
│   ├── commands/                  # User-facing slash commands
│   │   ├── research.ts            # /pai-msc "hypothesis" [flags] – the main entry
│   │   ├── pai-msc-status.ts      # /pai-msc-status – show current stage + budget
│   │   ├── pai-msc-stop.ts        # /pai-msc-stop – kill a running pipeline
│   │   └── pai-msc-list.ts        # /pai-msc-list – list all runs in this session
│   │
│   ├── tools/                     # Agent-callable tools (programmatic access)
│   │   ├── run-pipeline.ts        # pai-msc.runPipeline – start a run
│   │   ├── steer-pipeline.ts      # pai-msc.steerPipeline – inject instructions
│   │   ├── get-results.ts         # pai-msc.getResults – retrieve status + paper
│   │   └── approve-milestone.ts   # pai-msc.approveMilestone – gate responses
│   │
│   ├── services/                  # Core services
│   │   ├── workspace-manager.ts   # Creates per-run workspace with initial_context/,
│   │   │                          #   logs/, uploads/ directories. All run data is
│   │   │                          #   isolated under ~/.openclaw/poggioai-msc/runs/
│   │   ├── upload-handler.ts      # Prompts user for reference files via Telegram/
│   │   │                          #   interface with 3-strategy fallback. Saves to
│   │   │                          #   initial_context/uploads/
│   │   ├── installer.ts           # Auto-install: clone repo → conda env → pip →
│   │   │                          #   patch prompts → preflight check → sentinel
│   │   ├── process-manager.ts     # Spawns Python subprocess with workspace-aware
│   │   │                          #   env. Logs to logs/stdout.log + stderr.log
│   │   ├── progress-poller.ts     # Background service: polls every 15 s for stage
│   │   │                          #   changes, budget thresholds, completion/failure.
│   │   │                          #   Also handles the narrative voice hook and
│   │   │                          #   review score escalation.
│   │   └── quality-injector.ts    # Copies backtested prompts + style guide into
│   │                              #   initial_context/ and paper_workspace/
│   │
│   ├── bridge/                    # Integration layer between plugin and Python backend
│   │   ├── env-passthrough.ts     # Reads API keys from OpenClaw env → writes .env
│   │   ├── config-writer.ts       # Generates .llm_config.yaml from preset + flags
│   │   ├── steering-client.ts     # HTTP client for the live steering API
│   │   │                          #   (POST /interrupt, /instruction, GET /status)
│   │   └── result-reader.ts       # Reads run_summary.json, budget_state.json,
│   │                              #   review_verdict.json; finds the paper file
│   │
│   ├── defaults/                  # Configuration defaults
│   │   ├── quality-presets.ts     # QUALITY_MAX and QUALITY_FAST presets with all
│   │   │                          #   CLI flag values pre-configured
│   │   └── stage-names.ts         # 24 pipeline stage constants + human-readable
│   │                              #   display names for progress messages
│   │
│   └── types/                     # TypeScript type definitions
│       ├── openclaw-api.ts        # OpenClawApi interface + runtime guards
│       ├── pipeline.ts            # RunHandle, PipelineOptions, StageEvent, RunSummary
│       ├── config.ts              # PluginConfig interface + DEFAULT_CONFIG
│       ├── budget.ts              # BudgetState, BudgetEntry, BUDGET_THRESHOLDS
│       └── steering.ts            # SteeringInstruction, SteeringStatus, ReviewVerdict
│
├── assets/                        # Quality artifacts (ported from backtested Claude skill)
│   │
│   ├── author_style_guide_default.md   # 647-line ML theory writing standard.
│   │                              #   Contains: non-negotiable principles, anti-patterns
│   │                              #   (paper/section/sentence/epistemic), abstract rules
│   │                              #   (120-180 words, no theorem refs), related-work rules
│   │                              #   (organize by ideas, not authors), concrete lints,
│   │                              #   epistemic lints, deletion pass, self-audit checklist,
│   │                              #   case studies with diagnosis + fix
│   │
│   ├── state_template.json        # Pipeline state machine template (32 fields)
│   │
│   └── prompts/                   # 25 backtested agent prompts
│       ├── 01-persona-practical.md      # Practical Compass persona
│       ├── 02-persona-rigor.md          # Rigor & Novelty persona
│       ├── 03-persona-narrative.md      # Narrative Architect persona
│       ├── 04-persona-synthesis.md      # Synthesis coordinator (min 3 debate rounds)
│       ├── 05-literature-review.md      # Adversarial novelty falsification
│       ├── 06-brainstorm.md             # 3-phase: divergent → convergent → dependency
│       ├── 07-formalize-goals.md        # Goal formalization + track decomposition
│       ├── 08-math-literature.md        # Theory-specific literature search
│       ├── 09-math-proposer.md          # Claim graph construction
│       ├── 10-math-prover.md            # Proof construction with technique library
│       ├── 11-math-verifier.md          # Adversarial proof auditor + numerical checks
│       ├── 12-experiment-design.md      # Experiment design with anti-hallucination
│       ├── 13-experimentation.md        # Experiment execution
│       ├── 14-experiment-verify.md      # Cross-seed stability, verdict annotation
│       ├── 15-formalize-results.md      # Conservative results synthesis
│       ├── 16-duality-check.md          # Dual-lens: actionability + soundness (>= 6/10)
│       ├── 17-resource-prep.md          # Figures, tables, bibliography
│       ├── 18-writeup.md                # 260-line writeup: 12 passes, 2 full edit cycles
│       ├── 19-proofreading.md           # AI-voice detection checklist (9 categories)
│       ├── 20-reviewer.md               # Hard blockers B1-B5, AI voice risk assessment
│       ├── 21-research-plan-writeup.md
│       ├── 22-track-merge.md            # Theory-experiment unified summary
│       ├── 23-verify-completion.md      # 3-way routing: COMPLETE/INCOMPLETE/RETHINK
│       ├── 24-followup-lit-review.md    # Gap-specific targeted follow-up
│       └── 25-narrative-voice.md        # Pre-writeup tone/voice guidance
│
├── scripts/
│   ├── install-consortium.sh      # Manual installer (normally auto-runs)
│   └── check-prereqs.sh           # Verify conda, python, pdflatex, API keys
│
└── examples/
    ├── quickstart-task.txt        # Example research hypothesis
    └── custom-style-guide-example.md   # How to write a custom style guide
```
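The steering bridge talks to the backend over the three HTTP endpoints named in the tree (`POST /interrupt`, `POST /instruction`, `GET /status`). A minimal client sketch follows; the endpoint paths and the default port 5001 come from this README, but the payload and response shapes are assumptions, so treat `src/bridge/steering-client.ts` as authoritative:

```typescript
// Minimal sketch of a steering client. Payload/response shapes are
// hypothetical; only the endpoint paths are documented above.
class SteeringClient {
  constructor(private baseUrl: string) {}

  // Pause the pipeline at the next safe checkpoint.
  async interrupt(): Promise<Response> {
    return fetch(`${this.baseUrl}/interrupt`, { method: "POST" });
  }

  // Inject a free-form steering instruction into the running pipeline.
  async instruct(text: string): Promise<Response> {
    return fetch(`${this.baseUrl}/instruction`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ instruction: text }), // assumed field name
    });
  }

  // Poll the current pipeline status.
  async status(): Promise<unknown> {
    const res = await fetch(`${this.baseUrl}/status`);
    return res.json();
  }
}

// steeringBasePort defaults to 5001 in the plugin config.
const client = new SteeringClient("http://127.0.0.1:5001");
```

In practice you would not call this yourself; the `/pai-msc-status` command and the `pai-msc.steerPipeline` tool wrap these requests for you.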
---
## How It Works
### The Pipeline
When you run `/pai-msc "hypothesis"`, the plugin orchestrates a 22-agent pipeline:
```
/pai-msc "hypothesis"
        │
┌── Plugin Layer ──────────────────────────────────────────────
│
├─ 1. Auto-install pAI/MSc Python backend (first time only)
├─ 2. Create isolated run workspace under ~/.openclaw/poggioai-msc/runs/
├─ 3. Copy prompts + style guide → initial_context/
├─ 4. Write task.txt + pipeline_options.json → initial_context/
├─ 5. Prompt user for reference files → initial_context/uploads/
├─ 6. Write .env + .llm_config.yaml (per-run, not shared)
└─ 7. Spawn: python launch_multiagent.py --resume {workspace} [flags]
        │
┌── pAI/MSc Pipeline (Python/LangGraph) ───────────────────────
│
├─ Phase 1: Persona Debate (3-5 rounds)
│     3 personas (Practical, Rigor, Narrative) debate in escalating
│     rounds. Each round must be HARDER than the last. Minimum 3
│     rounds; extends to 5 if any persona still rejects. Synthesis
│     produce
... (truncated)