# Unified Playbook CLI Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Replace the legacy sh/ps1/bat scripts with a single Python CLI driven by a TOML config, and update docs/tests/CI accordingly.

**Architecture:** `scripts/playbook.py` reads `playbook.toml`, validates the config, then executes actions in a fixed order based on section presence. Actions are implemented as pure-Python helpers to keep behavior cross-platform.

**Tech Stack:** Python 3.11 (`tomllib`), standard library only; Prettier via local install if available.

---

### Task 1: Create CLI test harness and basic argument handling

**Files:**

- Create: `tests/cli/test_playbook_cli.py`
- Modify: `tests/README.md`

**Step 1: Write failing tests for CLI usage/exit codes**

```python
import subprocess
import sys
from pathlib import Path

ROOT = Path(__file__).resolve().parents[2]
SCRIPT = ROOT / "scripts" / "playbook.py"


def run(*args):
    return subprocess.run(
        [sys.executable, str(SCRIPT), *args], capture_output=True, text=True
    )


def test_help_shows_usage():
    result = run("-h")
    assert result.returncode == 0
    assert "Usage:" in result.stderr or "Usage:" in result.stdout


def test_missing_config_is_error():
    result = run()
    assert result.returncode != 0
    assert "-config" in (result.stderr + result.stdout)
```

**Step 2: Run tests to verify failure**

Run: `python -m unittest tests/cli/test_playbook_cli.py -v`
Expected: FAIL (script missing)

**Step 3: Implement minimal CLI skeleton**

```python
# scripts/playbook.py
import sys


def usage():
    return (
        "Usage:\n"
        "  python scripts/playbook.py -config <path>\n"
        "  python scripts/playbook.py -h"
    )


def main(argv):
    if "-h" in argv or "-help" in argv:
        print(usage())
        return 0
    if "-config" not in argv:
        print("ERROR: -config is required.\n" + usage(), file=sys.stderr)
        return 2
    return 0


if __name__ == "__main__":
    raise SystemExit(main(sys.argv[1:]))
```

**Step 4: Run tests to verify pass**

Run: `python -m unittest tests/cli/test_playbook_cli.py -v`
Expected: PASS

**Step 5: Commit**

```bash
git add scripts/playbook.py tests/cli/test_playbook_cli.py tests/README.md
git commit -m ":white_check_mark: test(cli): add basic playbook cli tests"
```

---

### Task 2: Parse TOML config and enforce dispatch order

**Files:**

- Modify: `scripts/playbook.py`
- Create: `playbook.toml.example`
- Modify: `README.md`

**Step 1: Write failing test for TOML parsing + action order**

```python
def test_action_order(tmp_path):
    config = tmp_path / "playbook.toml"
    config.write_text("""
[playbook]
project_root = "."

[format_md]

[sync_standards]
langs = ["tsl"]
""")
    result = run("-config", str(config))
    assert result.returncode == 0
    # Expected order: vendor -> sync_templates -> sync_standards -> install_skills -> format_md
    # Only sections present should log as executed.
    assert "sync_standards" in result.stdout
    assert "format_md" in result.stdout
```

**Step 2: Run test to verify failure**

Run: `python -m unittest tests/cli/test_playbook_cli.py -v`
Expected: FAIL (no TOML parser/action logs)

**Step 3: Implement TOML parser + order dispatch**

```python
import tomllib
from pathlib import Path

ORDER = ["vendor", "sync_templates", "sync_standards", "install_skills", "format_md"]


def load_config(path: Path) -> dict:
    return tomllib.loads(path.read_text(encoding="utf-8"))


def main(argv):
    # Parse the -config value, then load the config.
    # For each section in ORDER that is present in the config, call its
    # action and print "[<action>] ..." for visibility.
    ...
```
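For reference, a `playbook.toml.example` consistent with the sections and keys exercised by this plan's tests might look like the sketch below; all values are placeholders, and the `gitattr_mode` value shown is an assumption (Task 4 only names the key):

```toml
[playbook]
project_root = "."

[vendor]
langs = ["tsl"]

[sync_templates]
project_name = "Demo"

[sync_standards]
langs = ["tsl"]
gitattr_mode = "append"  # assumption: key named in Task 4, value is a placeholder

[install_skills]
codex_home = "./codex"
mode = "list"
skills = ["brainstorming"]

[format_md]
```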
**Step 4: Run test to verify pass**

Run: `python -m unittest tests/cli/test_playbook_cli.py -v`
Expected: PASS

**Step 5: Commit**

```bash
git add scripts/playbook.py playbook.toml.example README.md
git commit -m ":sparkles: feat(cli): add toml config and dispatch order"
```

---

### Task 3: Implement `vendor` action (snapshot only)

**Files:**

- Modify: `scripts/playbook.py`
- Modify: `playbook.toml.example`
- Modify: `README.md`

**Step 1: Write failing test for vendoring output**

```python
def test_vendor_creates_snapshot(tmp_path):
    config = tmp_path / "playbook.toml"
    config.write_text("""
[playbook]
project_root = "{root}"

[vendor]
langs = ["tsl"]
""".format(root=tmp_path))
    result = run("-config", str(config))
    assert result.returncode == 0
    assert (tmp_path / "docs/standards/playbook/SOURCE.md").is_file()
```

**Step 2: Run test to verify failure**

Run: `python -m unittest tests/cli/test_playbook_cli.py -v`
Expected: FAIL (vendor not implemented)

**Step 3: Implement vendor snapshot copy**

```python
from shutil import copy2, copytree

# Copy: scripts/, codex/, rulesets/, docs/common, docs/, templates/ci, templates/
# Generate docs/index.md, README.md, SOURCE.md
# Back up any existing snapshot with a timestamp suffix
```

**Step 4: Run test to verify pass**

Run: `python -m unittest tests/cli/test_playbook_cli.py -v`
Expected: PASS

**Step 5: Commit**

```bash
git add scripts/playbook.py playbook.toml.example README.md tests/cli/test_playbook_cli.py
git commit -m ":sparkles: feat(vendor): add playbook snapshot generation"
```

---

### Task 4: Implement `sync_templates` and `sync_standards`

**Files:**

- Modify: `scripts/playbook.py`
- Modify: `playbook.toml.example`
- Modify: `templates/README.md`

**Step 1: Add failing tests for template sync + standards sync**

```python
def test_sync_templates_creates_memory_bank(tmp_path):
    config = tmp_path / "playbook.toml"
    config.write_text("""
[playbook]
project_root = "{root}"

[sync_templates]
project_name = "Demo"
""".format(root=tmp_path))
    result = run("-config", str(config))
    assert result.returncode == 0
    assert (tmp_path / "memory-bank" / "project-brief.md").is_file()


def test_sync_standards_creates_agents(tmp_path):
    config = tmp_path / "playbook.toml"
    config.write_text("""
[playbook]
project_root = "{root}"

[sync_standards]
langs = ["tsl"]
""".format(root=tmp_path))
    result = run("-config", str(config))
    assert result.returncode == 0
    assert (tmp_path / ".agents" / "tsl" / "index.md").is_file()
```

**Step 2: Run tests to verify failure**

Run: `python -m unittest tests/cli/test_playbook_cli.py -v`
Expected: FAIL

**Step 3: Implement `sync_templates`**

```python
# Copy templates/memory-bank -> <project_root>/memory-bank (rename *.template.md)
# Copy templates/prompts -> <project_root>/docs/prompts (rename *.template.md)
# Copy AGENTS.template.md -> AGENTS.md (merge/append section)
# Copy AGENT_RULES.template.md -> AGENT_RULES.md (backup optional)
```

**Step 4: Implement `sync_standards`**

```python
# Copy rulesets/ -> <project_root>/.agents/
# Update the AGENTS.md playbook block (robust to blank lines)
# Update .gitattributes according to gitattr_mode
```

**Step 5: Run tests to verify pass**

Run: `python -m unittest tests/cli/test_playbook_cli.py -v`
Expected: PASS

**Step 6: Commit**

```bash
git add scripts/playbook.py playbook.toml.example templates/README.md tests/cli/test_playbook_cli.py
git commit -m ":sparkles: feat(sync): add templates and standards actions"
```

---

### Task 5: Implement `install_skills` and `format_md`

**Files:**

- Modify: `scripts/playbook.py`
- Modify: `playbook.toml.example`
- Modify: `README.md`

**Step 1: Add failing tests for skills install + md format**

```python
def test_install_skills(tmp_path):
    config = tmp_path / "playbook.toml"
    target = tmp_path / "codex"
    config.write_text(f"""
[playbook]
project_root = "{tmp_path}"

[install_skills]
codex_home = "{target}"
mode = "list"
skills = ["brainstorming"]
""")
    result = run("-config", str(config))
    assert result.returncode == 0
    assert (target / "skills" / "brainstorming" / "SKILL.md").is_file()
```
"skills" / "brainstorming" / "SKILL.md").is_file() ``` **Step 2: Run tests to verify failure** Run: `python -m unittest tests/cli/test_playbook_cli.py -v` Expected: FAIL **Step 3: Implement `install_skills` and `format_md`** ```python # install_skills: copy codex/skills/ -> /skills/ # format_md: call `prettier` with configured globs if available ``` **Step 4: Run tests to verify pass** Run: `python -m unittest tests/cli/test_playbook_cli.py -v` Expected: PASS **Step 5: Commit** ```bash git add scripts/playbook.py playbook.toml.example tests/cli/test_playbook_cli.py README.md git commit -m ":sparkles: feat(actions): add install_skills and format_md" ``` --- ### Task 6: Remove legacy scripts and update CI/test docs **Files:** - Delete: `scripts/*.sh`, `scripts/*.ps1`, `scripts/*.bat` - Delete: `tests/scripts/*.bats` - Modify: `.gitea/workflows/test.yml` - Modify: `tests/README.md` - Modify: `README.md`, `templates/README.md` **Step 1: Remove legacy scripts and obsolete tests** ```bash rm scripts/*.sh scripts/*.ps1 scripts/*.bat rm tests/scripts/*.bats ``` **Step 2: Update CI to run new Python tests + existing template/link checks** ```yaml - name: Run CLI tests run: python -m unittest discover -s tests/cli -v - name: Validate templates run: | sh tests/templates/validate_python_templates.sh sh tests/templates/validate_cpp_templates.sh sh tests/templates/validate_ci_templates.sh sh tests/templates/validate_project_templates.sh - name: Check doc links run: sh tests/integration/check_doc_links.sh ``` **Step 3: Update docs to new CLI + TOML** - Replace old script usage with `python scripts/playbook.py -config playbook.toml`. - Document config sections and example file name. **Step 4: Run full test suite** Run: `python -m unittest discover -s tests/cli -v && sh tests/templates/validate_project_templates.sh && sh tests/integration/check_doc_links.sh` Expected: PASS **Step 5: Commit** ```bash git add . 
git commit -m ":wastebasket: remove(legacy): drop old scripts and tests" ``` --- ### Task 7: Final cleanup and formatting **Files:** - Modify: `README.md`, `templates/README.md`, `tests/README.md` **Step 1: Run Markdown format (exclude third-party skills)** Run: `npm run format:md -- --ignore-path .prettierignore` Expected: No unexpected diffs **Step 2: Commit formatting-only changes (if any)** ```bash git add README.md templates/README.md tests/README.md git commit -m ":art: style(docs): format markdown" ```
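Both Task 5's `format_md` action and the formatting step above depend on Prettier being locally installed, which the tech-stack note treats as optional. A hedged sketch of the guard (the glob default and function shape are assumptions, not the plan's final API):

```python
import shutil
import subprocess


def format_md(globs=("**/*.md",)) -> int:
    """Run Prettier on the configured globs if a local install is available."""
    # shutil.which also checks PATHEXT-resolved names on Windows.
    prettier = shutil.which("prettier")
    if prettier is None:
        # Missing Prettier is not an error: formatting is best-effort.
        print("[format_md] prettier not found; skipping")
        return 0
    return subprocess.run([prettier, "--write", *globs]).returncode
```

Treating a missing Prettier as a skip (exit 0) rather than a failure keeps the CLI usable on machines without a Node toolchain, consistent with the "if available" wording.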