[
  {
    "path": ".claude-plugin/marketplace.json",
    "content": "{\n  \"name\": \"team-attention-plugins\",\n  \"owner\": {\n    \"name\": \"Team Attention\",\n    \"url\": \"https://github.com/team-attention\"\n  },\n  \"description\": \"Claude Code plugins for power users by Team Attention\",\n  \"plugins\": [\n    {\n      \"name\": \"agent-council\",\n      \"description\": \"Collect and synthesize opinions from multiple AI agents (Gemini, GPT, Codex)\",\n      \"source\": \"./plugins/agent-council\"\n    },\n    {\n      \"name\": \"clarify\",\n      \"description\": \"Three lenses for clarity: vague requirements to specs, strategy blind spots via Known/Unknown quadrants, and content vs form leverage analysis\",\n      \"source\": \"./plugins/clarify\"\n    },\n    {\n      \"name\": \"interactive-review\",\n      \"description\": \"Interactive markdown review with web UI for visual plan/document approval\",\n      \"source\": \"./plugins/interactive-review\"\n    },\n    {\n      \"name\": \"say-summary\",\n      \"description\": \"Speaks a short summary of Claude's response using macOS TTS (Korean/English)\",\n      \"source\": \"./plugins/say-summary\"\n    },\n    {\n      \"name\": \"youtube-digest\",\n      \"description\": \"Summarize YouTube videos with transcript, insights, Korean translation, and quizzes\",\n      \"source\": \"./plugins/youtube-digest\"\n    },\n    {\n      \"name\": \"google-calendar\",\n      \"description\": \"Multi-account Google Calendar integration with parallel querying and conflict detection\",\n      \"source\": \"./plugins/google-calendar\"\n    },\n    {\n      \"name\": \"session-wrap\",\n      \"description\": \"Session wrap-up workflow with multi-agent analysis for documentation, automation, learning, and follow-up\",\n      \"source\": \"./plugins/session-wrap\"\n    },\n    {\n      \"name\": \"dev\",\n      \"description\": \"Developer workflow tools: community scanning, technical decision-making\",\n      \"source\": \"./plugins/dev\"\n    },\n    {\n      
\"name\": \"kakaotalk\",\n      \"description\": \"Send and read KakaoTalk messages on macOS using Accessibility API\",\n      \"source\": \"./plugins/kakaotalk\"\n    },\n    {\n      \"name\": \"doubt\",\n      \"description\": \"Force Claude to re-validate its response when you have doubts (!rv)\",\n      \"source\": \"./plugins/doubt\"\n    },\n    {\n      \"name\": \"gmail\",\n      \"description\": \"Multi-account Gmail integration - read, search, send, and manage emails with 5-step sending workflow\",\n      \"source\": \"./plugins/gmail\"\n    },\n    {\n      \"name\": \"team-assemble\",\n      \"description\": \"Dynamically assemble expert agent teams for complex tasks using Claude Code's agent teams feature\",\n      \"source\": \"./plugins/team-assemble\"\n    },\n    {\n      \"name\": \"podcast\",\n      \"description\": \"Generate Korean podcast episodes from any source (URLs, tweets, articles) with OpenAI TTS and YouTube auto-upload\",\n      \"source\": \"./plugins/podcast\"\n    },\n    {\n      \"name\": \"fetch-tweet\",\n      \"description\": \"Fetch full tweet text, author info, and engagement data from X/Twitter URLs without authentication (uses FxEmbed)\",\n      \"source\": \"./plugins/fetch-tweet\"\n    }\n  ]\n}\n"
  },
  {
    "path": ".claude-plugin/plugin.json",
    "content": "{\n  \"name\": \"plugins-for-claude-natives\",\n  \"version\": \"0.1.0\",\n  \"description\": \"A collection of utility plugins for Claude Code native users\",\n  \"author\": {\n    \"name\": \"Team Attention\",\n    \"url\": \"https://github.com/team-attention\"\n  },\n  \"repository\": \"https://github.com/team-attention/plugins-for-claude-natives\"\n}\n"
  },
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\n*.egg-info/\n*.egg\n\n# PyInstaller\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre\n.pyre/\n\n# pytype\n.pytype/\n\n# Cython\ncython_debug/\n\n# IDE\n.idea/\n.vscode/\n*.swp\n*.swo\n*~\n\n# OS\n.DS_Store\nThumbs.db\n\n# Videos\n*.mp4\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2025 Team Attention\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.ko.md",
    "content": "# Plugins for Claude Natives\n\nClaude Code의 기능을 확장하고 싶은 파워 유저를 위한 플러그인 모음입니다.\n\n## 목차\n\n- [빠른 시작](#빠른-시작)\n- [플러그인 목록](#플러그인-목록)\n- [상세 설명](#상세-설명)\n  - [agent-council](#agent-council) - 여러 AI 모델의 의견 종합\n  - [clarify](#clarify) - 모호한 요구사항을 명세로 변환\n  - [dev](#dev) - 커뮤니티 스캔 + 기술 의사결정\n  - [interactive-review](#interactive-review) - 웹 UI로 계획 검토\n  - [say-summary](#say-summary) - 응답을 음성으로 듣기\n  - [youtube-digest](#youtube-digest) - YouTube 영상 요약 및 퀴즈\n  - [google-calendar](#google-calendar) - 멀티 계정 캘린더 통합\n  - [kakaotalk](#kakaotalk) - macOS 카카오톡 메시지 발송/읽기\n  - [session-wrap](#session-wrap) - 세션 마무리 + 히스토리 분석\n  - [podcast](#podcast) - 소스→YouTube 한국어 팟캐스트 생성\n  - [fetch-tweet](#fetch-tweet) - 인증 없이 트윗 본문과 메타데이터 가져오기\n- [기여하기](#기여하기)\n- [라이선스](#라이선스)\n\n---\n\n## 빠른 시작\n\n```bash\n# 마켓플레이스 추가\n/plugin marketplace add team-attention/plugins-for-claude-natives\n\n# 플러그인 설치\n/plugin install <plugin-name>\n```\n\n---\n\n## 플러그인 목록\n\n| 플러그인 | 설명 | 소셜 |\n|---------|------|------|\n| [agent-council](./plugins/agent-council/) | 여러 AI 에이전트(Gemini, GPT, Codex)의 의견을 수집하고 종합 | [LinkedIn](https://www.linkedin.com/posts/gb-jeong_claude-code%EA%B0%80-codex-gemini-cli-%EA%B3%BC-%ED%9A%8C%EC%9D%98%ED%95%B4%EC%84%9C-%EA%B2%B0%EB%A1%A0%EC%9D%84-activity-7406083077258665984-L_fD) |\n| [clarify](./plugins/clarify/) | 반복적인 질문을 통해 모호한 요구사항을 정확한 명세로 변환 | [LinkedIn](https://www.linkedin.com/posts/gb-jeong_%ED%81%B4%EB%A1%9C%EB%93%9C%EC%BD%94%EB%93%9C%EA%B0%80-%EA%B0%9D%EA%B4%80%EC%8B%9D%EC%9C%BC%EB%A1%9C-%EC%A7%88%EB%AC%B8%ED%95%98%EA%B2%8C-%ED%95%98%EB%8A%94-skills%EB%A5%BC-%EC%82%AC%EC%9A%A9%ED%95%B4%EB%B3%B4%EC%84%B8%EC%9A%94-clarify-activity-7413349697022570496-qLts) |\n| [dev](./plugins/dev/) | 커뮤니티 의견 스캔 + 기술 의사결정 분석 | |\n| [interactive-review](./plugins/interactive-review/) | 웹 UI를 통한 인터랙티브 마크다운 리뷰 | 
[LinkedIn](https://www.linkedin.com/posts/hoyeonleekr_claude-code%EA%B0%80-%EC%9E%91%EC%84%B1%ED%95%9C-%EA%B3%84%ED%9A%8D%EC%9D%B4%EB%82%98-%EA%B8%B4-%EB%AC%B8%EC%84%9C%EC%97%90-%EB%8C%80%ED%95%9C-%EC%96%B4%EB%96%BB%EA%B2%8C-%ED%94%BC%EB%93%9C%EB%B0%B1-%EC%A3%BC%EC%84%B8%EC%9A%94-activity-7412613598516051968-ujHp) |\n| [say-summary](./plugins/say-summary/) | Claude 응답을 macOS TTS로 요약해서 읽어줌 (한국어/영어) | [LinkedIn](https://www.linkedin.com/posts/gb-jeong_claude-code%EC%9D%98-%EC%9D%91%EB%8B%B5%EC%9D%84-%EC%9A%94%EC%95%BD%ED%95%B4%EC%84%9C-%EC%9D%8C%EC%84%B1%EC%9C%BC%EB%A1%9C-%EB%93%A4%EC%9D%84-%EC%88%98-%EC%9E%88%EB%8A%94-hooks-activity-7412609821390249984-ekCd) |\n| [youtube-digest](./plugins/youtube-digest/) | YouTube 영상 요약, 인사이트, 한글 번역, 퀴즈 제공 | [LinkedIn](https://www.linkedin.com/posts/gb-jeong_84%EB%B6%84%EC%A7%9C%EB%A6%AC-%EC%98%81%EC%96%B4-%ED%8C%9F%EC%BA%90%EC%8A%A4%ED%8A%B8%EB%A5%BC-5%EB%B6%84-%EB%A7%8C%EC%97%90-%ED%95%B5%EC%8B%AC-%ED%8C%8C%EC%95%85%ED%95%98%EA%B3%A0-%ED%80%B4%EC%A6%88%EA%B9%8C%EC%A7%80-%ED%92%80%EA%B3%A0-%EC%A7%81%EC%A0%91-activity-7414055598754848768-c0oy) |\n| [google-calendar](./plugins/google-calendar/) | 멀티 계정 Google Calendar 통합, 병렬 조회 및 충돌 감지 | |\n| [kakaotalk](./plugins/kakaotalk/) | macOS 카카오톡 메시지 발송 및 읽기 (Accessibility API) | |\n| [session-wrap](./plugins/session-wrap/) | 세션 마무리, 히스토리 분석, 세션 검증 툴킷 | |\n| [podcast](./plugins/podcast/) | 소스(URL, 트윗, 아티클)를 분석하여 OpenAI TTS로 한국어 팟캐스트 생성 및 YouTube 업로드 | |\n| [fetch-tweet](./plugins/fetch-tweet/) | X/Twitter URL에서 인증 없이 트윗 본문, 작성자 정보, 인게이지먼트 데이터 조회 (FxEmbed 활용) | |\n\n---\n\n## 상세 설명\n\n### agent-council\n\n**여러 AI 모델을 소환해서 질문에 대한 합의를 도출합니다.**\n\n어려운 결정을 내려야 하거나 다양한 관점이 필요할 때, 이 플러그인은 여러 AI 에이전트(Gemini CLI, GPT, Codex)에 병렬로 질문하고 그 의견들을 하나의 균형 잡힌 답변으로 종합합니다.\n\n**트리거 문구:**\n- \"summon the council\"\n- \"다른 AI들한테 물어봐\"\n- \"여러 모델 의견 듣고 싶어\"\n\n**동작 방식:**\n1. 질문이 여러 AI 에이전트에 동시에 전송됨\n2. 각 에이전트가 자신의 관점을 제시\n3. 
Claude가 응답들을 종합하여 합의점과 이견을 정리\n\n```bash\n# 예시\nUser: \"summon the council - TypeScript vs JavaScript 뭘 써야 할까?\"\n```\n\n---\n\n### clarify\n\n**모호한 요구사항을 정확하고 실행 가능한 명세로 변환합니다.**\n\n불명확한 지시사항으로 코드를 작성하기 전에, 이 플러그인이 구조화된 인터뷰를 통해 정확히 무엇이 필요한지 파악합니다. 더 이상 추측도, 재작업도 필요 없습니다.\n\n**트리거 문구:**\n- \"/clarify\"\n- \"요구사항 명확히 해줘\"\n- \"내가 뭘 원하는 건지...\"\n\n**프로세스:**\n1. **캡처** - 원본 요구사항을 그대로 기록\n2. **질문** - 모호한 부분을 해결하기 위한 객관식 질문\n3. **비교** - Before/After로 변환 결과 제시\n4. **저장** - 선택적으로 명세를 파일로 저장\n\n**변환 예시:**\n\n| Before | After |\n|--------|-------|\n| \"로그인 기능 추가해줘\" | 목표: 사용자명/비밀번호 로그인과 자가 가입 추가. 범위: 로그인, 로그아웃, 가입, 비밀번호 재설정. 제약: 24시간 세션, bcrypt, 5회 시도 제한. |\n\n---\n\n### dev\n\n**개발자 워크플로우 도구: 커뮤니티 스캔과 기술 의사결정.**\n\n개발자 리서치와 의사결정을 위한 두 가지 강력한 스킬을 제공합니다.\n\n#### 스킬\n\n**`/dev-scan`** - 개발자 커뮤니티 의견 스캔\n- Reddit (Gemini CLI 통해), Hacker News, Dev.to, Lobsters를 병렬 검색\n- 공통 의견, 논쟁점, 주목할 시각을 종합\n- 도구 도입 전 커뮤니티 분위기 파악에 유용\n\n**`/tech-decision`** - 깊이 있는 기술 의사결정 분석\n- 4개의 전문 에이전트가 병렬로 실행되는 다단계 워크플로우\n- 코드베이스 분석, 문서 리서치, 커뮤니티 의견, AI 전문가 관점 종합\n- 두괄식(결론 먼저) 보고서와 점수화된 비교 제공\n\n**트리거 문구:**\n- \"개발자 반응...\", \"개발자들 뭐라고 해?\"\n- \"A vs B\", \"어떤 라이브러리 써야 해?\", \"기술 의사결정\"\n\n**tech-decision 동작 방식:**\n\n```\nPhase 1: 병렬 정보 수집\n┌─────────────────┬─────────────────┬─────────────────┬─────────────────┐\n│ codebase-       │ docs-           │ dev-scan        │ agent-council   │\n│ explorer        │ researcher      │ (커뮤니티)       │ (AI 전문가)      │\n└────────┬────────┴────────┬────────┴────────┬────────┴────────┬────────┘\n         └─────────────────┴─────────────────┴─────────────────┘\n                                    │\nPhase 2: 분석 및 종합               ▼\n┌─────────────────────────────────────────────────────────────────────────┐\n│                        tradeoff-analyzer                                 │\n└─────────────────────────────────────────────────────────────────────────┘\n                                    │\n                                    
▼\n┌─────────────────────────────────────────────────────────────────────────┐\n│                       decision-synthesizer                               │\n│                       (두괄식: 결론 먼저)                                  │\n└─────────────────────────────────────────────────────────────────────────┘\n```\n\n```bash\n# 예시\nUser: \"React vs Vue 뭐가 나을까?\"\nUser: \"상태관리 라이브러리 뭐 쓸지 고민이야\"\nUser: \"모놀리스 vs 마이크로서비스 어떻게 해야 할까?\"\n```\n\n---\n\n### interactive-review\n\n**웹 인터페이스를 통해 Claude의 계획과 문서를 검토합니다.**\n\n터미널에서 긴 마크다운을 읽는 대신, 이 플러그인은 브라우저 기반 UI를 열어 항목별로 체크/언체크하고, 코멘트를 추가하고, 구조화된 피드백을 제출할 수 있게 합니다.\n\n**트리거 문구:**\n- \"/review\"\n- \"이 계획 검토해줘\"\n- \"확인해볼게\"\n\n**플로우:**\n1. Claude가 계획이나 문서를 생성\n2. 브라우저에 웹 UI가 자동으로 열림\n3. 각 항목을 체크박스와 선택적 코멘트로 검토\n4. Submit 클릭하여 구조화된 피드백 전송\n5. Claude가 승인/거부된 항목에 따라 조정\n\n---\n\n### say-summary\n\n**Claude의 응답을 음성으로 들을 수 있습니다 (macOS 전용).**\n\n이 플러그인은 Stop hook을 사용하여 Claude의 응답을 짧은 헤드라인으로 요약하고 macOS 텍스트-투-스피치로 읽어줍니다. 코딩하면서 음성 피드백을 원할 때 딱입니다.\n\n**기능:**\n- Claude Haiku를 사용해 응답을 3-10단어로 요약\n- 한국어 vs 영어 자동 감지\n- 적절한 음성 사용 (한국어: Yuna, 영어: Samantha)\n- 백그라운드 실행, Claude Code 차단 없음\n\n**요구사항:**\n- macOS (`say` 명령어 사용)\n- Python 3.10+\n\n---\n\n### youtube-digest\n\n**YouTube 영상을 트랜스크립트, 번역, 이해도 퀴즈와 함께 요약합니다.**\n\nYouTube URL을 입력하면 완전한 분석을 받을 수 있습니다: 요약, 핵심 인사이트, 전체 트랜스크립트 한글 번역, 그리고 이해도를 테스트하는 3단계 퀴즈(총 9문제).\n\n**트리거 문구:**\n- \"이 유튜브 정리해줘\"\n- \"영상 요약해줘\"\n- YouTube URL\n\n**받을 수 있는 것:**\n1. **요약** - 핵심 포인트와 함께 3-5문장 개요\n2. **인사이트** - 실행 가능한 테이크어웨이와 아이디어\n3. **전체 트랜스크립트** - 한글 번역과 타임스탬프 포함\n4. **3단계 퀴즈** - 기본, 중급, 심화 문제\n5. 
**Deep Research** (선택) - 주제를 확장하는 웹 검색\n\n**저장 위치:** `research/readings/youtube/YYYY-MM-DD-title.md`\n\n---\n\n### google-calendar\n\n**Claude Code에서 여러 Google Calendar 계정을 관리합니다.**\n\n여러 Google 계정(회사, 개인 등)의 일정을 조회, 생성, 수정, 삭제할 수 있으며 자동 충돌 감지 기능을 제공합니다.\n\n**트리거 문구:**\n- \"일정 보여줘\"\n- \"캘린더 확인\"\n- \"미팅 만들어줘\"\n- \"충돌 확인해줘\"\n\n**기능:**\n- 여러 계정 병렬 조회\n- 계정 간 충돌 감지\n- 전체 CRUD 작업 (생성, 조회, 수정, 삭제)\n- refresh token으로 사전 인증 (반복 로그인 불필요)\n\n**설정 필요:**\n1. Calendar API가 활성화된 Google Cloud 프로젝트 생성\n2. 각 계정별 설정 스크립트 실행\n\n```bash\n# 계정별 최초 1회 설정\nuv run python scripts/setup_auth.py --account work\nuv run python scripts/setup_auth.py --account personal\n```\n\n---\n\n### kakaotalk\n\n![Demo](./assets/kakaotalk.gif)\n\n**macOS에서 카카오톡 메시지를 발송하고 읽습니다.**\n\nAccessibility API를 사용하여 카카오톡 앱을 제어합니다. 자연어로 메시지를 보내거나 대화 내역을 확인할 수 있습니다.\n\n**트리거 문구:**\n- \"카톡 보내줘\", \"카카오톡 메시지\"\n- \"~에게 메시지 보내줘\"\n- \"채팅 읽어줘\"\n\n**기능:**\n- 자연어 메시지 발송 (발송 전 확인)\n- 채팅방 대화 내역 조회\n- 채팅방 목록 확인\n- \"sent with claude code\" 서명 자동 추가\n\n**요구사항:**\n- macOS 전용\n- 카카오톡 앱 실행 중\n- Accessibility 권한 필요\n\n```\n# 예시 (자연어 트리거)\n\"구봉한테 밥 먹었어? 
보내줘\"\n\"구봉이랑 대화 내역 보여줘\"\n```\n\n---\n\n### session-wrap\n\n**종합 세션 마무리 및 분석 툴킷.**\n\n코딩 세션을 철저히 분석하고 마무리하며, 세션 히스토리에서 인사이트를 추출합니다.\n\n#### 스킬\n\n**`/wrap`** - 세션 마무리 워크플로우\n- 종합 세션 분석을 위한 2단계 멀티 에이전트 파이프라인\n- 문서화 필요사항, 자동화 기회, 배운 점, 후속 작업 캡처\n- `/wrap [커밋 메시지]`로 빠른 커밋\n\n**`/history-insight`** - 세션 히스토리 분석\n- Claude Code 세션 히스토리에서 패턴과 인사이트 분석\n- 현재 프로젝트 또는 전체 세션 검색\n- 주제, 결정사항, 반복 패턴 추출\n\n**`/session-analyzer`** - 사후 세션 검증\n- SKILL.md 명세 대비 세션 행동 검증\n- 에이전트, 훅, 도구가 올바르게 실행되었는지 확인\n- 상세한 준수 보고서 생성\n\n**/wrap 동작 방식 (2단계 파이프라인):**\n\n```\nPhase 1: 분석 (병렬)\n┌──────────────┬──────────────┬──────────────┬──────────────┐\n│ doc-updater  │ automation-  │ learning-    │ followup-    │\n│              │ scout        │ extractor    │ suggester    │\n└──────┬───────┴──────┬───────┴──────┬───────┴──────┬───────┘\n       └──────────────┴──────────────┴──────────────┘\n                            │\nPhase 2: 검증               ▼\n┌─────────────────────────────────────────────────────────────┐\n│                    duplicate-checker                         │\n└─────────────────────────────────────────────────────────────┘\n                            │\n                            ▼\n                    사용자 선택\n```\n\n**장점:**\n- 중요한 발견을 문서화하는 것을 잊지 않음\n- 자동화할 가치가 있는 패턴 식별\n- 미래 세션을 위한 명확한 인수인계점 생성\n- 과거 세션의 반복 패턴 분석\n- 스킬 구현이 명세대로 동작하는지 검증\n\n---\n\n### podcast\n\n**어떤 소스든 한국어 팟캐스트 에피소드로 변환하고 YouTube에 자동 업로드합니다.**\n\nURL, 트윗, 아티클, PDF를 제공하면 병렬 분석 → 융합 스크립트 작성 → OpenAI TTS 음성 생성 → YouTube 업로드까지 한 번에 처리합니다.\n\n**트리거 문구:**\n- \"팟캐스트 만들어\"\n- \"이 글을 팟캐스트로\"\n- \"에피소드 만들어줘\"\n- \"podcast\"\n\n**파이프라인:**\n\n```\n소스 → 병렬 분석 → 스크립트 작성 → TTS (OpenAI) → MP4 → YouTube 업로드\n```\n\n**결과물:**\n1. **스크립트** - 8-12분 한국어 팟캐스트 스크립트 (오프닝, 분석, 융합, 클로징)\n2. **오디오** - OpenAI gpt-4o-mini-tts의 자연스러운 한국어 음성 MP3\n3. **영상** - 다크 타이틀 카드 MP4 (1920x1080)\n4. 
**YouTube** - unlisted로 자동 업로드 + 메타데이터\n\n**부분 실행 지원:**\n- \"스크립트만 써줘\" → 스크립트만\n- \"이 스크립트로 TTS 만들어\" → 오디오만\n- \"YouTube에 올려줘\" → 기존 MP4 업로드\n\n**요구사항:**\n- ffmpeg (오디오 병합 및 MP4 변환)\n- OpenAI API 키 (`OPENAI_API_KEY` 환경변수)\n- Google OAuth 클라이언트 시크릿 (YouTube 업로드용)\n\n```bash\n# 예시\nUser: \"이 두 개의 아티클로 팟캐스트 만들어줘\"\n# → 두 아티클 병렬 분석\n# → 융합 스크립트 작성\n# → 음성 생성\n# → YouTube 업로드\n```\n\n---\n\n### fetch-tweet\n\n**Claude Code에서 어떤 공개 트윗이든 읽을 수 있습니다 — 인증 없이, API 키 없이, JavaScript 없이.**\n\nX/Twitter URL을 던지면 트윗 본문(URL 확장됨), 작성자 정보, 인게이지먼트 수치, 첨부 미디어, 인용 트윗까지 전부 가져옵니다. 오픈소스 [FxEmbed](https://github.com/FxEmbed/FxEmbed) 프로젝트의 백엔드를 활용합니다.\n\n**트리거 문구:**\n- \"트윗 가져와\", \"트윗 번역해줘\"\n- \"fetch this tweet\", \"translate this tweet\"\n- 모든 X/Twitter URL (`x.com`, `twitter.com`)\n\n**왜 필요한가:**\nX가 비인증 트윗 임베드를 막아서, 스크립트로 트윗을 읽으려면 보통 API 키나 브라우저 자동화가 필요합니다. 이 플러그인은 Discord/Telegram의 fxtwitter 링크 프리뷰를 구동하는 동일한 백엔드 `api.fxtwitter.com`을 경유해 그 문제를 우회합니다.\n\n**기능:**\n- 의존성 0 (Python 표준 라이브러리만 사용)\n- 인용 트윗과 미디어를 포함한 전체 트윗 데이터\n- 다른 스킬과 파이프라인 연동을 위한 `--json` 모드\n- 스크립트 실행이 어려울 때 WebFetch fallback\n\n```bash\n# 직접 스크립트 사용\npython scripts/fetch_tweet.py https://x.com/sama/status/...\npython scripts/fetch_tweet.py <url> --json | jq '.tweet.text'\n```\n\n**제약:** 비공개/삭제된 트윗은 조회 불가.\n\n---\n\n## 기여하기\n\n기여를 환영합니다! 이슈나 PR을 열어주세요.\n\n## 라이선스\n\nMIT\n"
  },
  {
    "path": "README.md",
    "content": "# Plugins for Claude Natives\n\nA collection of Claude Code plugins for power users who want to extend Claude Code's capabilities beyond the defaults.\n\n## Table of Contents\n\n- [Quick Start](#quick-start)\n- [Available Plugins](#available-plugins)\n- [Plugin Details](#plugin-details)\n  - [agent-council](#agent-council) - Get consensus from multiple AI models\n  - [clarify](#clarify) - Transform vague requirements into specs\n  - [dev](#dev) - Community scanning + technical decision-making\n  - [doubt](#doubt) - Force Claude to re-validate responses\n  - [interactive-review](#interactive-review) - Review plans with a web UI\n  - [say-summary](#say-summary) - Hear responses via text-to-speech\n  - [youtube-digest](#youtube-digest) - Summarize and quiz on YouTube videos\n  - [gmail](#gmail) - Multi-account Gmail integration\n  - [google-calendar](#google-calendar) - Multi-account calendar integration\n  - [kakaotalk](#kakaotalk) - Send/read KakaoTalk messages on macOS\n  - [session-wrap](#session-wrap) - Session wrap-up + history analysis toolkit\n  - [team-assemble](#team-assemble) - Dynamic agent team orchestration\n  - [podcast](#podcast) - Source-to-YouTube Korean podcast generator\n  - [fetch-tweet](#fetch-tweet) - Fetch tweet text and metadata without auth\n- [Contributing](#contributing)\n- [License](#license)\n\n---\n\n## Quick Start\n\n```bash\n# Add this marketplace to Claude Code\n/plugin marketplace add team-attention/plugins-for-claude-natives\n\n# Install any plugin\n/plugin install <plugin-name>\n```\n\n---\n\n## Available Plugins\n\n| Plugin | Description |\n|--------|-------------|\n| [agent-council](./plugins/agent-council/) | Collect and synthesize opinions from multiple AI agents (Gemini, GPT, Codex) |\n| [clarify](./plugins/clarify/) | Transform vague requirements into precise specifications through iterative questioning |\n| [dev](./plugins/dev/) | Developer workflow: community opinion scanning and technical decision analysis 
|\n| [doubt](./plugins/doubt/) | Force Claude to re-validate its response when `!rv` is in your prompt |\n| [interactive-review](./plugins/interactive-review/) | Interactive markdown review with web UI for visual plan/document approval |\n| [say-summary](./plugins/say-summary/) | Speaks a short summary of Claude's response using macOS TTS (Korean/English) |\n| [youtube-digest](./plugins/youtube-digest/) | Summarize YouTube videos with transcript, insights, Korean translation, and quizzes |\n| [gmail](./plugins/gmail/) | Multi-account Gmail integration with email reading, searching, sending, and management |\n| [google-calendar](./plugins/google-calendar/) | Multi-account Google Calendar integration with parallel querying and conflict detection |\n| [kakaotalk](./plugins/kakaotalk/) | Send and read KakaoTalk messages on macOS using Accessibility API |\n| [session-wrap](./plugins/session-wrap/) | Session wrap-up, history analysis, and session validation toolkit |\n| [team-assemble](./plugins/team-assemble/) | Dynamically assemble expert agent teams for complex tasks using Claude Code's agent teams feature |\n| [podcast](./plugins/podcast/) | Generate Korean podcast episodes from any source with OpenAI TTS and YouTube auto-upload |\n| [fetch-tweet](./plugins/fetch-tweet/) | Fetch full tweet text, author info, and engagement data from X/Twitter URLs without authentication |\n\n## Plugin Details\n\n### agent-council\n\n![Demo](./assets/agent-council.gif)\n\n**Summon multiple AI models to debate your question and reach a consensus.**\n\nWhen you're facing a tough decision or want diverse perspectives, this plugin queries multiple AI agents (Gemini CLI, GPT, Codex) in parallel and synthesizes their opinions into a single, balanced answer.\n\n**Trigger phrases:**\n- \"summon the council\"\n- \"ask other AIs\"\n- \"what do other models think?\"\n\n**How it works:**\n1. Your question is sent to multiple AI agents simultaneously\n2. Each agent provides its perspective\n3. 
Claude synthesizes the responses into a consensus view with noted disagreements\n\n```bash\n# Example\nUser: \"summon the council - should I use TypeScript or JavaScript for my new project?\"\n```\n\n---\n\n### clarify\n\n![Demo](./assets/clarify.gif)\n\n**Turn vague requirements into precise, actionable specifications.**\n\nBefore writing code based on ambiguous instructions, this plugin conducts a structured interview to extract exactly what you need. No more assumptions, no more rework.\n\n**Trigger phrases:**\n- \"/clarify\"\n- \"clarify requirements\"\n- \"what do I mean by...\"\n\n**The process:**\n1. **Capture** - Record the original requirement verbatim\n2. **Question** - Ask targeted multiple-choice questions to resolve ambiguities\n3. **Compare** - Present before/after showing the transformation\n4. **Save** - Optionally save the clarified spec to a file\n\n**Example transformation:**\n\n| Before | After |\n|--------|-------|\n| \"Add a login feature\" | Goal: Add username/password login with self-registration. Scope: Login, logout, registration, password reset. Constraints: 24h session, bcrypt, rate limit 5 attempts. 
|\n\n---\n\n### dev\n\n**Developer workflow tools: community scanning and technical decision-making.**\n\nThis plugin provides two powerful skills for developer research and decision-making.\n\n#### Skills\n\n**`/dev-scan`** - Scan developer communities for real opinions\n- Searches Reddit (via Gemini CLI), Hacker News, Dev.to, and Lobsters in parallel\n- Synthesizes consensus, controversies, and notable perspectives\n- Great for understanding community sentiment before adopting a tool\n\n**`/tech-decision`** - Deep technical decision analysis\n- Multi-phase workflow with 4 specialized agents running in parallel\n- Combines codebase analysis, docs research, community opinions, and AI perspectives\n- Produces executive-summary-first reports with scored comparisons\n\n**Trigger phrases:**\n- \"developer reactions to...\", \"what do devs think about...\"\n- \"A vs B\", \"which library should I use\", \"기술 의사결정\"\n\n**How tech-decision works:**\n\n```\nPhase 1: Parallel Information Gathering\n┌─────────────────┬─────────────────┬─────────────────┬─────────────────┐\n│ codebase-       │ docs-           │ dev-scan        │ agent-council   │\n│ explorer        │ researcher      │ (community)     │ (AI experts)    │\n└────────┬────────┴────────┬────────┴────────┬────────┴────────┬────────┘\n         └─────────────────┴─────────────────┴─────────────────┘\n                                    │\nPhase 2: Analysis & Synthesis       ▼\n┌─────────────────────────────────────────────────────────────────────────┐\n│                        tradeoff-analyzer                                 │\n└─────────────────────────────────────────────────────────────────────────┘\n                                    │\n                                    ▼\n┌─────────────────────────────────────────────────────────────────────────┐\n│                       decision-synthesizer                               │\n│                    (Executive Summary First)                             
│\n└─────────────────────────────────────────────────────────────────────────┘\n```\n\n```bash\n# Examples\nUser: \"React vs Vue for my new project?\"\nUser: \"Which state management library should I use?\"\nUser: \"Monolith vs microservices for our scale?\"\n```\n\n---\n\n### doubt\n\n**Force Claude to double-check its response before delivering.**\n\nWhen you include `!rv` anywhere in your prompt, Claude will pause before responding, re-validate its answer against potential errors, and only then deliver the response. Perfect for critical decisions or when you want extra confidence.\n\n**Trigger:**\n- Include `!rv` anywhere in your prompt\n\n**How it works:**\n1. `UserPromptSubmit` hook detects `!rv` keyword and sets a flag\n2. `Stop` hook intercepts Claude's response before delivery\n3. Claude re-validates the response for errors, hallucinations, or questionable claims\n4. Only after verification does Claude deliver the final answer\n\n**Why `!rv` instead of `!doubt`?**\nThe word \"doubt\" affects Claude's behavior - it starts doubting from the beginning. `!rv` (re-validate) is neutral.\n\n```bash\n# Example\nUser: \"What's the time complexity of binary search? !rv\"\n# Claude will verify its answer before responding\n```\n\n---\n\n### interactive-review\n\n**Review Claude's plans and documents through a visual web interface.**\n\nInstead of reading long markdown in the terminal, this plugin opens a browser-based UI where you can check/uncheck items, add comments, and submit structured feedback.\n\n**Trigger phrases:**\n- \"/review\"\n- \"review this plan\"\n- \"let me check this\"\n\n**The flow:**\n1. Claude generates a plan or document\n2. A web UI opens automatically in your browser\n3. Review each item with checkboxes and optional comments\n4. Click Submit to send structured feedback back to Claude\n5. 
Claude adjusts based on your approved/rejected items\n\n---\n\n### say-summary\n\n![Demo](./assets/say-summary.gif)\n\n**Hear Claude's responses spoken aloud (macOS only).**\n\nThis plugin uses a Stop hook to summarize Claude's response into a short headline and speak it using macOS text-to-speech. Perfect for when you're coding and want audio feedback.\n\n**Features:**\n- Summarizes responses to 3-10 words using Claude Haiku\n- Auto-detects Korean vs English\n- Uses appropriate voice (Yuna for Korean, Samantha for English)\n- Runs in background, doesn't block Claude Code\n\n**Requirements:**\n- macOS (uses the `say` command)\n- Python 3.10+\n\n---\n\n### youtube-digest\n\n![Demo](./assets/youtube-digest.jpeg)\n\n**Summarize YouTube videos with transcripts, translations, and comprehension quizzes.**\n\nDrop a YouTube URL and get a complete breakdown: summary, key insights, full Korean translation of the transcript, and a 3-stage quiz (9 questions total) to test your understanding.\n\n**Trigger phrases:**\n- \"summarize this YouTube\"\n- \"digest this video\"\n- YouTube URL\n\n**What you get:**\n1. **Summary** - 3-5 sentence overview with key points\n2. **Insights** - Actionable takeaways and ideas\n3. **Full transcript** - With Korean translation and timestamps\n4. **3-stage quiz** - Basic, intermediate, and advanced questions\n5. 
**Deep Research** (optional) - Web search to expand on the topic\n\n**Output location:** `research/readings/youtube/YYYY-MM-DD-title.md`\n\n---\n\n### gmail\n\n<img src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/7/7e/Gmail_icon_%282020%29.svg/1280px-Gmail_icon_%282020%29.svg.png\" width=\"120\" alt=\"Gmail\">\n\n**Manage multiple Gmail accounts from Claude Code.**\n\nRead, search, send, and manage emails across multiple Google accounts with full Gmail API integration.\n\n**Trigger phrases:**\n- \"check my email\"\n- \"send an email to...\"\n- \"search for emails from...\"\n- \"reply to this email\"\n- \"mark as read\"\n\n**Features:**\n- Multi-account support via `accounts.yaml` (work, personal, etc.)\n- Gmail search query syntax support\n- Email sending with attachments and HTML\n- Label and draft management\n- **5-step email sending workflow** with context gathering, draft review, and test delivery\n- Rate limiting and quota management\n- Batch processing and local caching\n\n**5-Step Email Workflow:**\n1. **Context gathering** - Parallel Explore agents search recipient info and related projects\n2. **Previous conversations** - Search recent emails to determine reply vs new thread\n3. **Draft composition** - Create draft with user feedback\n4. **Test send** - Send to your own email for verification\n5. **Actual send** - Deliver to recipient\n\n**Setup:**\n1. Create Google Cloud project with Gmail API enabled\n2. 
Run setup script for each account\n\n```bash\n# One-time setup per account\nuv run python scripts/setup_auth.py --account work\nuv run python scripts/setup_auth.py --account personal\n```\n\nAccount metadata is stored in `accounts.yaml` for easy management.\n\n---\n\n### google-calendar\n\n<img src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/a/a5/Google_Calendar_icon_%282020%29.svg/960px-Google_Calendar_icon_%282020%29.svg.png\" width=\"120\" alt=\"Google Calendar\">\n\n**Manage multiple Google Calendar accounts from Claude Code.**\n\nQuery, create, update, and delete events across multiple Google accounts (work, personal, etc.) with automatic conflict detection.\n\n**Trigger phrases:**\n- \"show my schedule\"\n- \"what's on my calendar\"\n- \"create a meeting\"\n- \"check for conflicts\"\n\n**Features:**\n- Parallel querying across multiple accounts\n- Conflict detection between accounts\n- Full CRUD operations (create, read, update, delete)\n- Pre-authenticated with refresh tokens (no repeated logins)\n\n**Setup required:**\n1. Create Google Cloud project with Calendar API\n2. Run setup script for each account\n\n```bash\n# One-time setup per account\nuv run python scripts/setup_auth.py --account work\nuv run python scripts/setup_auth.py --account personal\n```\n\n---\n\n### kakaotalk\n\n![Demo](./assets/kakaotalk.gif)\n\n**Send and read KakaoTalk messages from Claude Code on macOS.**\n\nUses macOS Accessibility API to control the KakaoTalk app. Send messages or read chat history using natural language.\n\n**Trigger phrases:**\n- \"카톡 보내줘\", \"카카오톡 메시지\"\n- \"~에게 메시지 보내줘\"\n- \"채팅 읽어줘\"\n- \"KakaoTalk message\"\n\n**Features:**\n- Natural language message sending (with confirmation before send)\n- Chat history retrieval\n- Chat room listing\n- Auto-signature \"sent with claude code\"\n\n**Requirements:**\n- macOS only\n- KakaoTalk app must be running\n- Accessibility permission required\n\n```\n# Examples (natural language)\n\"구봉한테 밥 먹었어? 
보내줘\"\n\"구봉이랑 대화 내역 보여줘\"\n```\n\n---\n\n### session-wrap\n\n**Comprehensive session wrap-up and analysis toolkit.**\n\nEnd your coding sessions with thorough analysis, and dive deep into session history for insights.\n\n#### Skills\n\n**`/wrap`** - Session wrap-up workflow\n- 2-phase multi-agent pipeline for comprehensive session analysis\n- Captures documentation needs, automation opportunities, learnings, and follow-ups\n- `/wrap [commit message]` for quick commits\n\n**`/history-insight`** - Session history analysis\n- Analyze Claude Code session history for patterns and insights\n- Search current project or all sessions\n- Extract themes, decisions, and recurring topics\n\n**`/session-analyzer`** - Post-hoc session validation\n- Validate session behavior against SKILL.md specifications\n- Check if agents, hooks, and tools executed correctly\n- Generate detailed compliance reports\n\n**How /wrap works (2-Phase Pipeline):**\n\n```\nPhase 1: Analysis (Parallel)\n┌──────────────┬──────────────┬──────────────┬──────────────┐\n│ doc-updater  │ automation-  │ learning-    │ followup-    │\n│              │ scout        │ extractor    │ suggester    │\n└──────┬───────┴──────┬───────┴──────┬───────┴──────┬───────┘\n       └──────────────┴──────────────┴──────────────┘\n                            │\nPhase 2: Validation         ▼\n┌─────────────────────────────────────────────────────────────┐\n│                    duplicate-checker                         │\n└─────────────────────────────────────────────────────────────┘\n                            │\n                            ▼\n                    User Selection\n```\n\n**Benefits:**\n- Never forget to document important discoveries\n- Identify patterns worth automating\n- Create clear handoff points for future sessions\n- Analyze past sessions for recurring patterns\n- Validate skill implementations against specifications\n\n---\n\n### team-assemble\n\n**Dynamically assemble expert agent teams for complex 
tasks.**\n\nInstead of manually designing agents, this plugin analyzes your task, scouts the codebase, and assembles an optimal team with the right roles, dependencies, and validation criteria — all using Claude Code's agent teams feature.\n\n> **Prerequisite:** Agent teams must be enabled. See [setup guide](./plugins/team-assemble/skills/team-assemble/references/enable-agent-teams.md).\n\n**Trigger phrases:**\n- \"assemble a team to...\"\n- \"team assemble\"\n- \"use a team for...\"\n\n**6-Phase Workflow:**\n\n```\nPhase 1 → Phase 2 → Phase 3 → Phase 4 → Phase 5 → Phase 6\nTask       Codebase   Integrate  Execute   Validate   Complete\nAnalysis   Scouts     & Confirm                       & Cleanup\n```\n\n**Key features:**\n- **Dynamic agent design** — scouts your codebase and proposes agents tailored to the task (no fixed catalog)\n- **Model 3-tier** — opus for strategy, sonnet for execution, haiku for research\n- **Parallel execution** — independent agents run simultaneously\n- **Acceptance criteria** — every team has measurable validation criteria\n- **Verify/fix loop** — QA validates, support fixes (max 3 rounds)\n- **Two approval gates** — confirm scope (Phase 1) and team composition (Phase 3)\n\n```bash\n# Examples\nUser: \"Assemble a team to refactor authentication from session-based to JWT\"\nUser: \"Use a team to evaluate Redis vs Memcached vs in-memory caching\"\nUser: \"Team assemble — extract shared utils from three microservices into a common lib\"\n```\n\n---\n\n### podcast\n\n**Turn any source into a Korean podcast episode, automatically uploaded to YouTube.**\n\nDrop URLs, tweets, articles, or PDFs and this plugin will analyze them, write a conversational Korean script, generate audio using OpenAI's `gpt-4o-mini-tts`, and upload the result to YouTube — all in one go.\n\n**Trigger phrases:**\n- \"make a podcast from this\"\n- \"팟캐스트 만들어\"\n- \"turn this into an episode\"\n- \"이 글을 팟캐스트로\"\n\n**The pipeline:**\n\n```\nSources → Parallel Analysis → 
Script Writing → TTS (OpenAI) → MP4 → YouTube Upload\n```\n\n**What you get:**\n1. **Script** - 8-12 min Korean podcast script with opening, analysis, fusion, and closing\n2. **Audio** - MP3 generated via OpenAI gpt-4o-mini-tts with natural Korean voice\n3. **Video** - MP4 with dark title card (1920x1080)\n4. **YouTube** - Auto-uploaded as unlisted with metadata\n\n**Partial execution supported:**\n- \"Just write the script\" → Script only\n- \"Generate TTS from this script\" → Audio only\n- \"Upload to YouTube\" → Upload existing MP4\n\n**Requirements:**\n- ffmpeg (for audio merging and MP4 conversion)\n- OpenAI API key (`OPENAI_API_KEY` env var)\n- Google OAuth client secret (for YouTube upload)\n\n```bash\n# Example\nUser: \"이 두 개의 아티클로 팟캐스트 만들어줘\"\n# → Analyzes both articles in parallel\n# → Writes fusion script\n# → Generates audio\n# → Uploads to YouTube\n```\n\n---\n\n### fetch-tweet\n\n**Read any public tweet from Claude Code — no auth, no API key, no JavaScript.**\n\nDrop an X/Twitter URL and get the full tweet text (URLs expanded), author info, engagement metrics, attached media, and quoted tweets. Powered by the open-source [FxEmbed](https://github.com/FxEmbed/FxEmbed) project.\n\n**Trigger phrases:**\n- \"트윗 가져와\", \"트윗 번역해줘\"\n- \"fetch this tweet\", \"translate this tweet\"\n- Any X/Twitter URL (`x.com`, `twitter.com`)\n\n**Why this exists:**\nX removed unauthenticated tweet embeds, so reading a tweet from a script normally requires an API key or browser automation. 
This plugin sidesteps that by routing through `api.fxtwitter.com` — the same backend that powers fxtwitter link previews on Discord/Telegram.\n\n**Features:**\n- Zero dependencies (Python stdlib only)\n- Full tweet data including quote tweets and media\n- `--json` mode for pipeline use with other skills\n- WebFetch fallback when script execution isn't available\n\n```bash\n# Direct script usage\npython scripts/fetch_tweet.py https://x.com/sama/status/...\npython scripts/fetch_tweet.py <url> --json | jq '.tweet.text'\n```\n\n**Limitations:** Private/deleted tweets cannot be fetched.\n\n---\n\n## Contributing\n\nContributions welcome! Please open an issue or PR.\n\n## License\n\nMIT\n"
  },
  {
    "path": "plugins/agent-council/.claude-plugin/plugin.json",
    "content": "{\n  \"name\": \"agent-council\",\n  \"version\": \"1.0.0\",\n  \"description\": \"Collect and synthesize opinions from multiple AI Agents for Claude Code\",\n  \"author\": {\n    \"name\": \"Team Attention\",\n    \"url\": \"https://github.com/team-attention\"\n  },\n  \"repository\": \"https://github.com/team-attention/plugins-for-claude-natives\",\n  \"license\": \"MIT\",\n  \"keywords\": [\"claude-code\", \"plugin\", \"llm\", \"multi-agent\", \"ai-council\"],\n  \"skills\": \"./skills/agent-council\"\n}\n"
  },
  {
    "path": "plugins/agent-council/AGENTS.md",
    "content": "# Project Instructions (Codex)\n\n## Plan / To-do UI (IMPORTANT)\n\n- When you decide to use any Skill, **always** call `update_plan` immediately (before any potentially long-running shell/tool calls) so Codex’s Plan/To-do panel appears. Include **exactly one** `in_progress` item in that first plan.\n- Keep the plan updated during execution (at most one `in_progress` step at a time).\n- For `agent-council`: after the first `council.sh wait` (the non-blocking one), feed `.ui.codex.update_plan.plan` into `update_plan` right away, then repeat `wait → update_plan` until done.\n"
  },
  {
    "path": "plugins/agent-council/CLAUDE.md",
    "content": "# Project Instructions (Claude Code)\n\n## Todo UI (IMPORTANT)\n\n- When you decide to use any Skill, **always** call `TodoWrite` immediately (before any potentially long-running shell/tool calls) so Claude’s Todo UI appears. Include **exactly one** `in_progress` item in that first todo list.\n- Keep the todo list updated during execution (at most one `in_progress` item at a time).\n- For `agent-council`: after the first `council.sh wait` (the non-blocking one), feed `.ui.claude.todo_write.todos` into `TodoWrite` right away, then repeat `wait → TodoWrite` until done.\n\n"
  },
  {
    "path": "plugins/agent-council/README.ko.md",
    "content": "# Agent Council\n\n**[English Version](./README.md)**\n\n> 여러 AI CLI(Codex, Gemini, ...)의 의견을 모으고, 설정 가능한 의장(Chairman)이 종합해 결론을 내리게 하는 스킬\n> [Karpathy의 LLM Council](https://github.com/karpathy/llm-council)에서 영감을 받음\n\n## LLM Council과의 차이점\n\n**추가 API 비용이 들지 않습니다!**\n\nKarpathy의 LLM Council은 각 LLM의 API를 직접 호출하여 비용이 발생하지만, Agent Council은 설치된 AI CLI(Claude Code, Codex CLI, Gemini CLI 등)를 활용합니다. 주로 하나의 호스트 CLI를 메인으로 쓰면서 다른 CLI들은 구독 플랜으로 필요할 때만 쓰는 분들에게 특히 유용합니다.\n\nMCP보다 Skill이 훨씬 간단하고 재현 가능해서 npx로 설치 후 직접 커스터마이징하여 사용하시는 것을 추천합니다.\n\n## 데모\n\nhttps://github.com/user-attachments/assets/c550c473-00d2-4def-b7ba-654cc7643e9b\n\n## 작동 방식\n\nAgent Council은 AI 합의를 수집하기 위한 3단계 프로세스를 구현합니다:\n\n**Stage 1: Initial Opinions (초기 의견 수집)**\n설정된 모든 AI 에이전트가 동시에 질문을 받고 독립적으로 응답합니다.\n\n**Stage 2: Response Collection (응답 수집)**\n각 에이전트의 응답을 수집하여 포맷된 형태로 표시합니다.\n\n**Stage 3: Chairman Synthesis (의장 종합)**\n기본값(`role: auto`)에서는 “현재 사용 중인 호스트 에이전트(Claude Code / Codex CLI 등)”가 의장 역할을 하며, 모든 의견을 종합해 최종 추천을 제시합니다. 원하면 `chairman.command`를 설정해 `council.sh` 안에서 Stage 3 종합을 CLI로 직접 실행할 수도 있습니다.\n\n## 설치\n\n### 방법 A: npx로 설치 (권장)\n\n```bash\nnpx github:team-attention/agent-council\n```\n\n현재 프로젝트 디렉토리에 스킬 파일들이 복사됩니다.\nAgent Council을 업그레이드한 뒤 `Missing runtime dependency: yaml` 같은 런타임 에러가 나면, 위 설치 커맨드를 한 번 더 실행해서 설치된 스킬 파일을 갱신하세요.\n\n기본값으로 설치 스크립트가 자동으로 Claude Code(`.claude/`) / Codex CLI(`.codex/`) 설치 여부를 감지해서 가능한 타깃에 설치합니다.\n\n설치 위치:\n- `.claude/skills/agent-council/` (Claude Code)\n- `.codex/skills/agent-council/` (Codex CLI)\n\n선택사항 (Codex용 레포 스킬로 설치):\n```bash\nnpx github:team-attention/agent-council --target codex\n```\n\n다른 타깃:\n```bash\nnpx github:team-attention/agent-council --target claude\nnpx github:team-attention/agent-council --target both\n```\n\n생성되는 `council.config.yaml`은 감지된 멤버 CLI(claude/codex/gemini 등)만 포함하며, 설치 타깃(호스트)은 members에 포함되지 않도록 처리합니다. 
이 필터링은 **초기 생성 시점에만** 적용되며, 이후 편집 내용은 자동으로 정리되지 않습니다.\n\n### 방법 B: Claude Code 플러그인으로 설치 (Claude Code 전용)\n\n```bash\n# 마켓플레이스 추가\n/plugin marketplace add team-attention/plugins-for-claude-natives\n\n# 플러그인 설치\n/plugin install agent-council@plugins-for-claude-natives\n```\n\n참고(플러그인 설치): **Agent Council은 Node.js가 필요**하며, Claude Code 플러그인은 Node를 번들/자동 설치할 수 없습니다. Node를 별도로 설치하세요(예: macOS `brew install node`).\n\n### 2. Agent CLI 설치\n\n`council.config.yaml`의 `council.members`에 적힌 CLI를 설치하세요(템플릿 기본 포함: `claude`, `codex`, `gemini`):\n\n```bash\n# Anthropic Claude Code\n# https://claude.ai/code\n\n# OpenAI Codex CLI\n# https://github.com/openai/codex\n\n# Google Gemini CLI\n# https://github.com/google-gemini/gemini-cli\n```\n\n설치 확인(멤버별):\n```bash\ncommand -v claude\ncommand -v codex\ncommand -v gemini\n```\n\n### 3. Council 멤버 설정 (선택사항)\n\n설치된 스킬 폴더의 설정 파일을 편집해서 council을 커스터마이즈:\n- `.claude/skills/agent-council/council.config.yaml`\n- `.codex/skills/agent-council/council.config.yaml`\n\n```yaml\ncouncil:\n  chairman:\n    role: \"auto\" # auto|claude|codex|gemini|...\n    # command: \"codex exec\" # 선택: council.sh에서 Stage 3 종합까지 실행\n\n  members:\n    - name: codex\n      command: \"codex exec\"\n      emoji: \"🤖\"\n      color: \"BLUE\"\n\n    - name: gemini\n      command: \"gemini\"\n      emoji: \"💎\"\n      color: \"GREEN\"\n\n    # 필요에 따라 에이전트 추가\n    # - name: grok\n    #   command: \"grok\"\n    #   emoji: \"🚀\"\n    #   color: \"MAGENTA\"\n```\n\n## 사용법\n\n### 호스트 에이전트를 통한 사용 (Claude Code / Codex CLI)\n\n호스트 에이전트에게 council 소집을 요청하면 됩니다:\n\n```\n\"다른 AI들 의견도 들어보자\"\n\"council 소집해줘\"\n\"여러 관점에서 검토해줘\"\n\"codex랑 gemini 의견 물어봐\"\n```\n\n### 스크립트 직접 실행\n\n```bash\nJOB_DIR=$(.codex/skills/agent-council/scripts/council.sh start \"질문 내용\")\n.codex/skills/agent-council/scripts/council.sh status --text \"$JOB_DIR\"\n.codex/skills/agent-council/scripts/council.sh results \"$JOB_DIR\"\n.codex/skills/agent-council/scripts/council.sh clean \"$JOB_DIR\"\n```\n\n팁: `status 
--text`에 `--verbose`를 추가하면 멤버별 상태 라인이 함께 출력됩니다.\n팁: `status --checklist`는 체크리스트 형태로 간단히 보여줍니다(Codex/Claude tool cell에 유용).\n팁: `wait`를 쓰면 “의미 있는 진행”이 있을 때만 반환해서 tool cell 스팸을 줄일 수 있습니다(JSON 출력, 커서는 자동으로 저장/갱신; 기본값은 멤버 수에 따라 대략 ~5~10번 수준으로 자동 배치, `--bucket 1`이면 매 완료마다 반환).\n\n원샷 실행(잡 시작 → 대기 → 결과 출력 → 정리):\n\n```bash\n.codex/skills/agent-council/scripts/council.sh \"질문 내용\"\n```\n\n참고: 호스트 에이전트 도구 UI(Codex CLI / Claude Code)에서는 원샷이 **블로킹하지 않습니다**. 네이티브 plan/todo UI를 갱신할 수 있도록 `wait` JSON을 한 번 반환하고 종료하며, 이후 `wait` → 네이티브 UI 갱신 → `results` → `clean` 순서로 진행하세요.\n\n#### 진행상황\n\n- 실제 터미널에서는 원샷이 멤버 완료에 맞춰 진행상황 라인을 주기적으로 출력합니다.\n- 호스트 에이전트 도구 UI에서는 원샷이 `wait` JSON을 반환합니다(네이티브 plan/todo UI 갱신 목적).\n- 스크립팅이 필요하면 job mode(`start` → `status` → `results` → `clean`)도 사용할 수 있습니다.\n\n## 예시\n\n```\nUser: \"새 대시보드 프로젝트에 React vs Vue 어떨까? council 소집해줘\"\n\n호스트 에이전트(Claude Code / Codex CLI):\n1. council.sh 실행하여 설정된 멤버(예: Codex, Gemini) 의견 수집\n2. 각 에이전트의 관점 표시\n3. 의장으로서 종합:\n   \"Council의 의견을 바탕으로, 대시보드의 데이터 시각화 요구사항과\n   팀의 숙련도를 고려할 때...\"\n```\n\n## 프로젝트 구조\n\n```\nagent-council/\n├── .claude-plugin/\n│   └── marketplace.json     # 마켓플레이스 설정 (Claude Code 전용)\n├── bin/\n│   └── install.js           # npx 설치 스크립트\n├── skills/\n│   └── agent-council/\n│       ├── SKILL.md         # 스킬 문서\n│       └── scripts/\n│           ├── council.sh           # 실행 스크립트\n│           ├── council-job.sh       # 백그라운드 Job runner (폴링 가능)\n│           ├── council-job.js       # Job runner 구현\n│           └── council-job-worker.js # 멤버별 워커\n├── council.config.yaml      # Council 멤버 설정\n├── README.md                # 영어 문서\n├── README.ko.md             # 이 문서\n└── LICENSE\n```\n\n## 주의사항\n\n- 응답 시간은 가장 느린 에이전트에 의존 (병렬 실행)\n- 민감한 정보는 council에 공유하지 않기\n- 에이전트는 기본적으로 병렬로 실행되어 빠른 응답 제공\n- 각 CLI 도구의 구독 플랜이 필요합니다 (API 비용 별도 발생 없음)\n\n## 기여하기\n\n기여를 환영합니다! 
다음과 같은 기여가 가능합니다:\n- 새로운 AI 에이전트 지원 추가\n- 종합 프로세스 개선\n- 설정 옵션 확장\n\n## 라이선스\n\nMIT 라이선스 - 자세한 내용은 [LICENSE](./LICENSE) 참조\n\n## 크레딧\n\n- [Karpathy의 LLM Council](https://github.com/karpathy/llm-council)에서 영감\n- [Claude Code](https://claude.ai/code) / [Codex CLI](https://github.com/openai/codex) 용으로 제작\n"
  },
  {
    "path": "plugins/agent-council/README.md",
    "content": "# Agent Council\n\n**[한국어 버전 (Korean)](./README.ko.md)**\n\n> A skill that gathers opinions from multiple AI CLIs (Codex, Gemini, ...) and lets a configurable Chairman synthesize a conclusion.\n> Inspired by [Karpathy's LLM Council](https://github.com/karpathy/llm-council)\n\n## Key Difference from LLM Council\n\n**No additional API costs!**\n\nUnlike Karpathy's LLM Council which directly calls each LLM's API (incurring costs), Agent Council uses your installed AI CLIs (Claude Code, Codex CLI, Gemini CLI, ...). This is especially useful if you mainly use one host CLI and occasionally consult others via subscriptions.\n\nSkills are much simpler and more reproducible than MCP. We recommend installing via npx and customizing it yourself!\n\n## Demo\n\nhttps://github.com/user-attachments/assets/c550c473-00d2-4def-b7ba-654cc7643e9b\n\n## How it Works\n\nAgent Council implements a 3-stage process for gathering AI consensus:\n\n**Stage 1: Initial Opinions**\nAll configured AI agents receive your question simultaneously and respond independently.\n\n**Stage 2: Response Collection**\nResponses from each agent are collected and displayed to you in a formatted view.\n\n**Stage 3: Chairman Synthesis**\nYour host agent (Claude Code / Codex CLI / etc.) acts as the Chairman by default (`role: auto`), synthesizing all opinions into a final recommendation. 
Optionally, you can configure a Chairman CLI command to run synthesis inside `council.sh`.\n\n## Setup\n\n### Option A: Install via npx (Recommended)\n\n```bash\nnpx github:team-attention/agent-council\n```\n\nThis copies the skill files to your current project directory.\nIf you upgrade Agent Council and hit a runtime error like `Missing runtime dependency: yaml`, re-run the installer command above to refresh your installed skill files.\n\nBy default, the installer auto-detects whether to install for Claude Code (`.claude/`) and/or Codex CLI (`.codex/`) based on what’s available on your machine and in the repo.\n\nInstalled paths:\n- `.claude/skills/agent-council/` (Claude Code)\n- `.codex/skills/agent-council/` (Codex CLI)\n\nOptional (Codex repo skill):\n```bash\nnpx github:team-attention/agent-council --target codex\n```\n\nOther targets:\n```bash\nnpx github:team-attention/agent-council --target claude\nnpx github:team-attention/agent-council --target both\n```\n\nThe generated `council.config.yaml` includes only detected member CLIs (e.g. `claude`, `codex`, `gemini`) and avoids adding the host target as a member. This filtering happens only at initial generation; later edits will not auto-remove missing CLIs.\n\n### Option B: Install via Claude Code Plugin (Claude Code only)\n\n```bash\n# Add the marketplace\n/plugin marketplace add team-attention/plugins-for-claude-natives\n\n# Install the plugin\n/plugin install agent-council@plugins-for-claude-natives\n```\n\nNote (Plugin installs): **Agent Council requires Node.js**, and Claude Code plugins can’t bundle or auto-install Node for you. Install Node separately (e.g. `brew install node` on macOS).\n\n### 2. 
Install Agent CLIs\n\nInstall the CLIs listed under `council.members` in your `council.config.yaml` (template includes `claude`, `codex`, `gemini`):\n\n```bash\n# Anthropic Claude Code\n# https://claude.ai/code\n\n# OpenAI Codex CLI\n# https://github.com/openai/codex\n\n# Google Gemini CLI\n# https://github.com/google-gemini/gemini-cli\n```\n\nVerify each member CLI:\n```bash\ncommand -v claude\ncommand -v codex\ncommand -v gemini\n```\n\n### 3. Configure Council Members (Optional)\n\nEdit the generated config in your installed skill directory:\n- `.claude/skills/agent-council/council.config.yaml`\n- `.codex/skills/agent-council/council.config.yaml`\n\n```yaml\ncouncil:\n  chairman:\n    role: \"auto\" # auto|claude|codex|gemini|...\n    # command: \"codex exec\" # optional: run Stage 3 inside council.sh\n\n  members:\n    - name: codex\n      command: \"codex exec\"\n      emoji: \"🤖\"\n      color: \"BLUE\"\n\n    - name: gemini\n      command: \"gemini\"\n      emoji: \"💎\"\n      color: \"GREEN\"\n\n    # Add more agents as needed\n    # - name: grok\n    #   command: \"grok\"\n    #   emoji: \"🚀\"\n    #   color: \"MAGENTA\"\n```\n\n## Usage\n\n### Via your host agent (Claude Code / Codex CLI)\n\nAsk your host agent to summon the council:\n\n```\n\"Let's hear opinions from other AIs\"\n\"Summon the council\"\n\"Review this from multiple perspectives\"\n\"Ask codex and gemini for their opinions\"\n```\n\n### Direct Script Execution\n\n```bash\nJOB_DIR=$(.codex/skills/agent-council/scripts/council.sh start \"Your question here\")\n.codex/skills/agent-council/scripts/council.sh status --text \"$JOB_DIR\"\n.codex/skills/agent-council/scripts/council.sh results \"$JOB_DIR\"\n.codex/skills/agent-council/scripts/council.sh clean \"$JOB_DIR\"\n```\n\nTip: add `--verbose` to `status --text` to include per-member lines.\nTip: use `status --checklist` for a compact checkbox view (handy in Codex/Claude tool cells).\nTip: use `wait` to block until meaningful progress 
without spamming tool cells (prints JSON, persists a cursor automatically; auto-batches to a small number of updates (typically ~5–10); `--bucket 1` for every completion).\n\nOne-shot (runs job → waits → prints results → cleans):\n\n```bash\n.codex/skills/agent-council/scripts/council.sh \"Your question here\"\n```\n\nNote: In host-agent tool UIs (Codex CLI / Claude Code), one-shot does **not** block. It returns a single `wait` JSON payload so the host agent can update native plan/todo UIs. Continue with `wait` → native UI update → `results` → `clean`.\n\n#### Progress\n\n- In a real terminal, one-shot prints periodic progress lines as members complete.\n- In host-agent tool UIs, one-shot returns `wait` JSON (so the host can update native plan/todo UIs).\n- Job mode is still available for scripting (`start` → `status` → `results` → `clean`).\n\n## Example\n\n```\nUser: \"React vs Vue for a new dashboard project - summon the council\"\n\nHost agent (Claude Code / Codex CLI):\n1. Executes council.sh to collect opinions from configured members (e.g., Codex, Gemini)\n2. Displays each agent's perspective\n3. 
Synthesizes as Chairman:\n   \"Based on the council's input, considering your dashboard's\n   data visualization needs and team's familiarity, I recommend...\"\n```\n\n## Project Structure\n\n```\nagent-council/\n├── .claude-plugin/\n│   └── marketplace.json     # Marketplace config (Claude Code only)\n├── bin/\n│   └── install.js           # npx installer\n├── skills/\n│   └── agent-council/\n│       ├── SKILL.md         # Skill documentation\n│       └── scripts/\n│           ├── council.sh       # Execution script\n│           ├── council-job.sh   # Background job runner (pollable)\n│           ├── council-job.js   # Job runner implementation\n│           └── council-job-worker.js # Per-member worker\n├── council.config.yaml      # Council member configuration\n├── README.md                # This file\n├── README.ko.md             # Korean documentation\n└── LICENSE\n```\n\n## Notes\n\n- Response time depends on the slowest agent (parallel execution)\n- Do not share sensitive information with the council\n- Agents run in parallel by default for faster responses\n- Subscription plans for each CLI tool are required (no additional API costs)\n\n## Contributing\n\nContributions are welcome! Feel free to:\n- Add support for new AI agents\n- Improve the synthesis process\n- Enhance the configuration options\n\n## License\n\nMIT License - see [LICENSE](./LICENSE) for details.\n\n## Credits\n\n- Inspired by [Karpathy's LLM Council](https://github.com/karpathy/llm-council)\n- Built for [Claude Code](https://claude.ai/code) and [Codex CLI](https://github.com/openai/codex)\n"
  },
  {
    "path": "plugins/agent-council/SKILL.md",
    "content": "---\nname: agent-council\ndescription: Collect and synthesize opinions from multiple AI agents. Use when users say \"summon the council\", \"ask other AIs\", or want multiple AI perspectives on a question.\n---\n\n# Agent Council\n\nCollect multiple AI opinions and synthesize one answer.\n\n## Usage\n\nRun a job and collect results:\n\n```bash\nJOB_DIR=$(./skills/agent-council/scripts/council.sh start \"your question here\")\n./skills/agent-council/scripts/council.sh wait \"$JOB_DIR\"\n./skills/agent-council/scripts/council.sh results \"$JOB_DIR\"\n./skills/agent-council/scripts/council.sh clean \"$JOB_DIR\"\n```\n\nOne-shot:\n\n```bash\n./skills/agent-council/scripts/council.sh \"your question here\"\n```\n\n## References\n\n- `references/overview.md` — workflow and background.\n- `references/examples.md` — usage examples.\n- `references/config.md` — member configuration.\n- `references/requirements.md` — dependencies and CLI checks.\n- `references/host-ui.md` — host UI checklist guidance.\n- `references/safety.md` — safety notes.\n"
  },
  {
    "path": "plugins/agent-council/bin/install.js",
    "content": "#!/usr/bin/env node\n\nconst fs = require('fs');\nconst path = require('path');\nconst { execSync } = require('child_process');\nconst YAML = require('yaml');\n\nconst GREEN = '\\x1b[32m';\nconst YELLOW = '\\x1b[33m';\nconst CYAN = '\\x1b[36m';\nconst RED = '\\x1b[31m';\nconst NC = '\\x1b[0m';\n\nconst packageRoot = path.resolve(__dirname, '..');\nconst targetDir = process.cwd();\nconst claudeDir = path.join(targetDir, '.claude');\nconst codexDir = path.join(targetDir, '.codex');\nconst yamlModuleDir = path.dirname(require.resolve('yaml/package.json'));\n\nfunction parseArgs(argv) {\n  const args = argv.slice(2);\n  const flags = new Set(args);\n\n  const targetIndex = args.indexOf('--target');\n  let target = 'auto';\n  if (targetIndex !== -1 && args[targetIndex + 1]) {\n    target = args[targetIndex + 1];\n  } else if (flags.has('--both')) {\n    target = 'both';\n  } else if (flags.has('--codex')) {\n    target = 'codex';\n  } else if (flags.has('--claude')) {\n    target = 'claude';\n  }\n\n  if (!['auto', 'claude', 'codex', 'both'].includes(target)) {\n    throw new Error(`Invalid --target \"${target}\". 
Use auto|claude|codex|both.`);\n  }\n\n  return { target };\n}\n\nconsole.log(`${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}`);\nconsole.log(`${CYAN}  Agent Council - Installation${NC}`);\nconsole.log(`${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}`);\nconsole.log();\n\nfunction copyRecursive(src, dest) {\n  const stat = fs.statSync(src);\n\n  if (stat.isDirectory()) {\n    if (!fs.existsSync(dest)) {\n      fs.mkdirSync(dest, { recursive: true });\n    }\n    const files = fs.readdirSync(src);\n    for (const file of files) {\n      copyRecursive(path.join(src, file), path.join(dest, file));\n    }\n  } else {\n    fs.copyFileSync(src, dest);\n    // Preserve executable permission for .sh files\n    if (src.endsWith('.sh')) {\n      fs.chmodSync(dest, 0o755);\n    }\n  }\n}\n\nfunction commandExists(command) {\n  try {\n    const checkCmd = process.platform === 'win32' ? `where ${command}` : `command -v ${command}`;\n    execSync(checkCmd, { stdio: 'ignore' });\n    return true;\n  } catch {\n    return false;\n  }\n}\n\ntry {\n  const { target: requestedTarget } = parseArgs(process.argv);\n\n  const detected = {\n    claude: commandExists('claude'),\n    codex: commandExists('codex'),\n    gemini: commandExists('gemini'),\n  };\n\n  const hasClaudeDir = fs.existsSync(claudeDir);\n  const hasCodexDir = fs.existsSync(codexDir);\n\n  let target = requestedTarget;\n  if (requestedTarget === 'auto') {\n    const wantClaude = hasClaudeDir || detected.claude;\n    const wantCodex = hasCodexDir || detected.codex;\n\n    if (wantClaude && wantCodex) target = 'both';\n    else if (wantCodex) target = 'codex';\n    else if (wantClaude) target = 'claude';\n    else target = 'claude';\n\n    console.log(`${CYAN}Auto-detected target:${NC} ${target}`);\n    if (!wantClaude && !wantCodex) {\n      console.log(\n        `${YELLOW}  ⓘ Could not detect Claude Code or Codex CLI; defaulting to \"claude\". 
Use --target codex if needed.${NC}`\n      );\n    }\n    console.log();\n  }\n\n  const installs = [];\n  if (target === 'claude' || target === 'both') {\n    installs.push({\n      label: 'Claude Code',\n      rootDir: claudeDir,\n      skillsDest: path.join(claudeDir, 'skills', 'agent-council'),\n      displayPath: '.claude/skills/agent-council',\n      hostRole: 'claude',\n    });\n  }\n  if (target === 'codex' || target === 'both') {\n    installs.push({\n      label: 'Codex CLI',\n      rootDir: codexDir,\n      skillsDest: path.join(codexDir, 'skills', 'agent-council'),\n      displayPath: '.codex/skills/agent-council',\n      hostRole: 'codex',\n    });\n  }\n\n  // Copy skills folder to target(s)\n  const skillsSrc = path.join(packageRoot, 'skills', 'agent-council');\n  const templateConfigPath = path.join(packageRoot, 'council.config.yaml');\n  const templateConfigText = fs.existsSync(templateConfigPath) ? fs.readFileSync(templateConfigPath, 'utf8') : null;\n\n  for (const install of installs) {\n    if (!fs.existsSync(install.rootDir)) {\n      fs.mkdirSync(install.rootDir, { recursive: true });\n    }\n\n    if (fs.existsSync(skillsSrc)) {\n      console.log(`${YELLOW}Installing skills (${install.label})...${NC}`);\n      copyRecursive(skillsSrc, install.skillsDest);\n      console.log(`${GREEN}  ✓ ${install.displayPath}${NC}`);\n    }\n\n    // Ship runtime dependencies needed by the skill at execution time.\n    const runtimeModulesDir = path.join(install.skillsDest, 'node_modules');\n    if (!fs.existsSync(runtimeModulesDir)) fs.mkdirSync(runtimeModulesDir, { recursive: true });\n\n    console.log(`${YELLOW}Installing runtime deps (${install.label})...${NC}`);\n    copyRecursive(yamlModuleDir, path.join(runtimeModulesDir, 'yaml'));\n    console.log(`${GREEN}  ✓ ${install.displayPath}/node_modules/yaml${NC}`);\n\n    // Copy config file to skill folder if not exists\n    const configDest = path.join(install.skillsDest, 'council.config.yaml');\n    if 
(!fs.existsSync(configDest)) {\n      console.log(`${YELLOW}Installing config (${install.label})...${NC}`);\n      if (!templateConfigText) {\n        console.log(`${YELLOW}  ⓘ Template council.config.yaml not found; writing an empty config.${NC}`);\n        fs.writeFileSync(\n          configDest,\n          ['council:', '  members: []', '  chairman:', '    role: \"auto\"', '  settings:', '    parallel: true', ''].join(\n            '\\n'\n          ),\n          'utf8'\n        );\n        console.log(`${GREEN}  ✓ ${install.displayPath}/council.config.yaml${NC}`);\n        continue;\n      }\n\n      const doc = YAML.parseDocument(templateConfigText);\n      const membersNode = doc.getIn(['council', 'members']);\n\n      if (membersNode && YAML.isCollection(membersNode)) {\n        const enabledMembers = membersNode.items.filter((item) => {\n          const member = item.toJSON();\n          const nameLc = String(member.name || '').toLowerCase();\n          if (nameLc === install.hostRole) return false;\n\n          const baseCommand = String(member.command || '')\n            .trim()\n            .split(/\\s+/)[0];\n          if (!baseCommand) return false;\n\n          return commandExists(baseCommand);\n        });\n\n        membersNode.items = enabledMembers;\n\n        if (enabledMembers.length === 0) {\n          console.log(\n            `${YELLOW}  ⓘ No member CLIs detected from template. 
Writing members: []; edit council.config.yaml to add members.${NC}`\n          );\n        }\n      } else {\n        console.log(`${YELLOW}  ⓘ Template is missing council.members; writing template as-is.${NC}`);\n      }\n\n      fs.writeFileSync(configDest, String(doc), 'utf8');\n      console.log(`${GREEN}  ✓ ${install.displayPath}/council.config.yaml${NC}`);\n    } else if (fs.existsSync(configDest)) {\n      console.log(`${YELLOW}  ⓘ council.config.yaml already exists (${install.label}), skipping${NC}`);\n    }\n  }\n\n  console.log();\n  console.log(`${GREEN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}`);\n  console.log(`${GREEN}  Installation complete!${NC}`);\n  console.log(`${GREEN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}`);\n  console.log();\n  if (installs.some((i) => i.hostRole === 'claude')) {\n    console.log(`${CYAN}Usage in Claude:${NC}`);\n    console.log(`  \"Summon the council\"`);\n    console.log(`  \"Let's hear opinions from other AIs\"`);\n    console.log();\n  }\n  if (installs.some((i) => i.hostRole === 'codex')) {\n    console.log(`${CYAN}Usage in Codex:${NC}`);\n    console.log(`  \"Summon the council\"`);\n    console.log(`  \"Let's hear opinions from other AIs\"`);\n    console.log();\n  }\n  console.log();\n  console.log(`${CYAN}Direct execution:${NC}`);\n  if (installs.some((i) => i.hostRole === 'claude')) {\n    console.log(`  .claude/skills/agent-council/scripts/council.sh \"your question\"`);\n  }\n  if (installs.some((i) => i.hostRole === 'codex')) {\n    console.log(`  .codex/skills/agent-council/scripts/council.sh \"your question\"`);\n  }\n  console.log();\n  console.log(`${YELLOW}Note: Only detected CLIs are enabled as members in the generated config.${NC}`);\n  console.log(`${YELLOW}      Detected: claude=${detected.claude} codex=${detected.codex} gemini=${detected.gemini}${NC}`);\n\n} catch (error) {\n  console.error(`${RED}Error during installation: ${error.message}${NC}`);\n  
process.exit(1);\n}\n"
  },
  {
    "path": "plugins/agent-council/council.config.yaml",
    "content": "# Agent Council Configuration\n# Add or remove council members as needed.\n# Note: the installer filters members to detected CLIs only on initial generation.\n# After that, missing CLIs are not auto-removed and will report `missing_cli` at runtime.\n\ncouncil:\n  # Council members configuration\n  # Each member needs:\n  #   - name: identifier for the agent\n  #   - command: CLI command to execute (prompt will be appended)\n  #   - emoji: display emoji (optional)\n  #   - color: ANSI color (RED, GREEN, BLUE, YELLOW, CYAN)\n\n  members:\n    \n    - name: codex\n      command: \"codex exec\"\n      emoji: \"🤖\"\n      color: \"BLUE\"\n\n    - name: gemini\n      command: \"gemini\"\n      emoji: \"💎\"\n      color: \"GREEN\"\n\n  # Chairman configuration (Claude will act as chairman)\n  chairman:\n    # role: auto|claude|codex|gemini|...\n    # - auto: infer from host tool (Claude Code => claude, Codex CLI => codex)\n    role: \"auto\"\n    description: \"Synthesizes all opinions and provides final recommendation\"\n    # Optional: run synthesis inside council.sh via CLI (otherwise the host agent synthesizes)\n    # command: \"codex exec\"\n\n    # Execution settings\n  settings:\n    timeout: 120 # Timeout seconds per agent (0 to disable)\n    exclude_chairman_from_members: true # Avoid calling the chairman as a member by default\n    # synthesize: true       # Force Stage 3 synthesis inside council.sh (requires chairman.command or --chairman codex/gemini)\n"
  },
  {
    "path": "plugins/agent-council/package.json",
    "content": "{\n  \"name\": \"@team-attention/agent-council\",\n  \"version\": \"1.0.0\",\n  \"description\": \"Collect and synthesize opinions from multiple AI Agents for Claude Code\",\n  \"bin\": {\n    \"agent-council\": \"./bin/install.js\"\n  },\n  \"files\": [\n    \"bin/\",\n    \"skills/\",\n    \"council.config.yaml\"\n  ],\n  \"dependencies\": {\n    \"yaml\": \"^2.8.2\"\n  },\n  \"keywords\": [\n    \"claude-code\",\n    \"plugin\",\n    \"llm\",\n    \"multi-agent\",\n    \"ai-council\"\n  ],\n  \"author\": \"Team Attention\",\n  \"license\": \"MIT\",\n  \"repository\": {\n    \"type\": \"git\",\n    \"url\": \"https://github.com/team-attention/plugins-for-claude-natives.git\"\n  },\n  \"homepage\": \"https://github.com/team-attention/plugins-for-claude-natives\"\n}\n"
  },
  {
    "path": "plugins/agent-council/scripts/council-job-worker.js",
    "content": "#!/usr/bin/env node\n\nconst fs = require('fs');\nconst path = require('path');\nconst crypto = require('crypto');\nconst { spawn } = require('child_process');\n\nfunction exitWithError(message) {\n  process.stderr.write(`${message}\\n`);\n  process.exit(1);\n}\n\nfunction parseArgs(argv) {\n  const args = argv.slice(2);\n  const out = { _: [] };\n  for (let i = 0; i < args.length; i++) {\n    const a = args[i];\n    if (!a.startsWith('--')) {\n      out._.push(a);\n      continue;\n    }\n\n    const [key, rawValue] = a.split('=', 2);\n    if (rawValue != null) {\n      out[key.slice(2)] = rawValue;\n      continue;\n    }\n    const next = args[i + 1];\n    if (next == null || next.startsWith('--')) {\n      out[key.slice(2)] = true;\n      continue;\n    }\n    out[key.slice(2)] = next;\n    i++;\n  }\n  return out;\n}\n\nfunction splitCommand(command) {\n  const tokens = [];\n  let current = '';\n  let inSingle = false;\n  let inDouble = false;\n  let escapeNext = false;\n\n  for (const ch of String(command || '')) {\n    if (escapeNext) {\n      current += ch;\n      escapeNext = false;\n      continue;\n    }\n\n    if (!inSingle && ch === '\\\\') {\n      escapeNext = true;\n      continue;\n    }\n\n    if (!inDouble && ch === \"'\") {\n      inSingle = !inSingle;\n      continue;\n    }\n\n    if (!inSingle && ch === '\"') {\n      inDouble = !inDouble;\n      continue;\n    }\n\n    if (!inSingle && !inDouble && /\\s/.test(ch)) {\n      if (current) tokens.push(current);\n      current = '';\n      continue;\n    }\n\n    current += ch;\n  }\n\n  if (current) tokens.push(current);\n  if (inSingle || inDouble) return null;\n  return tokens;\n}\n\nfunction atomicWriteJson(filePath, payload) {\n  const tmpPath = `${filePath}.${process.pid}.${crypto.randomBytes(4).toString('hex')}.tmp`;\n  fs.writeFileSync(tmpPath, JSON.stringify(payload, null, 2), 'utf8');\n  fs.renameSync(tmpPath, filePath);\n}\n\nfunction main() {\n  const options = 
parseArgs(process.argv);\n  const jobDir = options['job-dir'];\n  const member = options.member;\n  const safeMember = options['safe-member'];\n  const command = options.command;\n  const timeoutSec = options.timeout ? Number(options.timeout) : 0;\n\n  if (!jobDir) exitWithError('worker: missing --job-dir');\n  if (!member) exitWithError('worker: missing --member');\n  if (!safeMember) exitWithError('worker: missing --safe-member');\n  if (!command) exitWithError('worker: missing --command');\n\n  const membersRoot = path.join(jobDir, 'members');\n  const memberDir = path.join(membersRoot, safeMember);\n  const statusPath = path.join(memberDir, 'status.json');\n  const outPath = path.join(memberDir, 'output.txt');\n  const errPath = path.join(memberDir, 'error.txt');\n\n  const promptPath = path.join(jobDir, 'prompt.txt');\n  const prompt = fs.existsSync(promptPath) ? fs.readFileSync(promptPath, 'utf8') : '';\n\n  const tokens = splitCommand(command);\n  if (!tokens || tokens.length === 0) {\n    atomicWriteJson(statusPath, {\n      member,\n      state: 'error',\n      message: 'Invalid command string',\n      finishedAt: new Date().toISOString(),\n      command,\n    });\n    process.exit(1);\n  }\n\n  const program = tokens[0];\n  const args = tokens.slice(1);\n\n  atomicWriteJson(statusPath, {\n    member,\n    state: 'running',\n    startedAt: new Date().toISOString(),\n    command,\n    pid: null,\n  });\n\n  const outStream = fs.createWriteStream(outPath, { flags: 'w' });\n  const errStream = fs.createWriteStream(errPath, { flags: 'w' });\n\n  let child;\n  try {\n    child = spawn(program, [...args, prompt], {\n      stdio: ['ignore', 'pipe', 'pipe'],\n      env: process.env,\n    });\n  } catch (error) {\n    atomicWriteJson(statusPath, {\n      member,\n      state: 'error',\n      message: error && error.message ? 
error.message : 'Failed to spawn command',\n      finishedAt: new Date().toISOString(),\n      command,\n    });\n    process.exit(1);\n  }\n\n  atomicWriteJson(statusPath, {\n    member,\n    state: 'running',\n    startedAt: new Date().toISOString(),\n    command,\n    pid: child.pid,\n  });\n\n  if (child.stdout) child.stdout.pipe(outStream);\n  if (child.stderr) child.stderr.pipe(errStream);\n\n  let timeoutHandle = null;\n  let timeoutTriggered = false;\n  if (Number.isFinite(timeoutSec) && timeoutSec > 0) {\n    timeoutHandle = setTimeout(() => {\n      timeoutTriggered = true;\n      try {\n        process.kill(child.pid, 'SIGTERM');\n      } catch {\n        // ignore\n      }\n    }, timeoutSec * 1000);\n    timeoutHandle.unref();\n  }\n\n  const finalize = (payload) => {\n    try {\n      outStream.end();\n      errStream.end();\n    } catch {\n      // ignore\n    }\n    atomicWriteJson(statusPath, payload);\n  };\n\n  child.on('error', (error) => {\n    const isMissing = error && error.code === 'ENOENT';\n    finalize({\n      member,\n      state: isMissing ? 'missing_cli' : 'error',\n      message: error && error.message ? error.message : 'Process error',\n      finishedAt: new Date().toISOString(),\n      command,\n      exitCode: null,\n      pid: child.pid,\n    });\n    process.exit(1);\n  });\n\n  child.on('exit', (code, signal) => {\n    if (timeoutHandle) clearTimeout(timeoutHandle);\n    const timedOut = Boolean(timeoutTriggered) && signal === 'SIGTERM';\n    const canceled = !timedOut && signal === 'SIGTERM';\n    finalize({\n      member,\n      state: timedOut ? 'timed_out' : canceled ? 'canceled' : code === 0 ? 'done' : 'error',\n      message: timedOut ? `Timed out after ${timeoutSec}s` : canceled ? 'Canceled' : null,\n      finishedAt: new Date().toISOString(),\n      command,\n      exitCode: typeof code === 'number' ? code : null,\n      signal: signal || null,\n      pid: child.pid,\n    });\n    process.exit(code === 0 ? 
0 : 1);\n  });\n}\n\nif (require.main === module) {\n  main();\n}\n"
  },
  {
    "path": "plugins/agent-council/scripts/council-job.js",
    "content": "#!/usr/bin/env node\n\nconst fs = require('fs');\nconst path = require('path');\nconst crypto = require('crypto');\nconst { spawn } = require('child_process');\n\nconst SCRIPT_DIR = __dirname;\nconst SKILL_DIR = path.resolve(SCRIPT_DIR, '..');\nconst WORKER_PATH = path.join(SCRIPT_DIR, 'council-job-worker.js');\n\nconst SKILL_CONFIG_FILE = path.join(SKILL_DIR, 'council.config.yaml');\nconst REPO_CONFIG_FILE = path.join(path.resolve(SKILL_DIR, '../..'), 'council.config.yaml');\n\nfunction exitWithError(message) {\n  process.stderr.write(`${message}\\n`);\n  process.exit(1);\n}\n\nfunction resolveDefaultConfigFile() {\n  if (fs.existsSync(SKILL_CONFIG_FILE)) return SKILL_CONFIG_FILE;\n  if (fs.existsSync(REPO_CONFIG_FILE)) return REPO_CONFIG_FILE;\n  return SKILL_CONFIG_FILE;\n}\n\nfunction detectHostRole() {\n  const normalized = SKILL_DIR.replace(/\\\\/g, '/');\n  if (normalized.includes('/.claude/skills/')) return 'claude';\n  if (normalized.includes('/.codex/skills/')) return 'codex';\n  return 'unknown';\n}\n\nfunction normalizeBool(value) {\n  if (value == null) return null;\n  const v = String(value).trim().toLowerCase();\n  if (['1', 'true', 'yes', 'y', 'on'].includes(v)) return true;\n  if (['0', 'false', 'no', 'n', 'off'].includes(v)) return false;\n  return null;\n}\n\nfunction resolveAutoRole(role, hostRole) {\n  const roleLc = String(role || '').trim().toLowerCase();\n  if (roleLc && roleLc !== 'auto') return roleLc;\n  if (hostRole === 'codex') return 'codex';\n  if (hostRole === 'claude') return 'claude';\n  return 'claude';\n}\n\nfunction parseCouncilConfig(configPath) {\n  const fallback = {\n    council: {\n      chairman: { role: 'auto' },\n      members: [\n        { name: 'claude', command: 'claude -p', emoji: '🧠', color: 'CYAN' },\n        { name: 'codex', command: 'codex exec', emoji: '🤖', color: 'BLUE' },\n        { name: 'gemini', command: 'gemini', emoji: '💎', color: 'GREEN' },\n      ],\n      settings: { 
exclude_chairman_from_members: true, timeout: 120 },\n    },\n  };\n\n  if (!fs.existsSync(configPath)) return fallback;\n\n  let YAML;\n  try {\n    YAML = require('yaml');\n  } catch {\n    exitWithError(\n      [\n        'Missing runtime dependency: yaml',\n        'Your Agent Council installation is out of date.',\n        'Reinstall from your project root:',\n        '  npx github:team-attention/agent-council --target auto',\n      ].join('\\n')\n    );\n  }\n\n  let parsed;\n  try {\n    parsed = YAML.parse(fs.readFileSync(configPath, 'utf8'));\n  } catch (error) {\n    const message = error && error.message ? error.message : String(error);\n    exitWithError(`Invalid YAML in ${configPath}: ${message}`);\n  }\n\n  if (!parsed || typeof parsed !== 'object' || Array.isArray(parsed)) {\n    exitWithError(`Invalid config in ${configPath}: expected a YAML mapping/object at the document root`);\n  }\n  if (!parsed.council) {\n    exitWithError(`Invalid config in ${configPath}: missing required top-level key 'council:'`);\n  }\n  if (typeof parsed.council !== 'object' || Array.isArray(parsed.council)) {\n    exitWithError(`Invalid config in ${configPath}: 'council' must be a mapping/object`);\n  }\n\n  const merged = {\n    council: {\n      chairman: { ...fallback.council.chairman },\n      members: Array.isArray(fallback.council.members) ? 
[...fallback.council.members] : [],\n      settings: { ...fallback.council.settings },\n    },\n  };\n\n  const council = parsed.council;\n\n  if (council.chairman != null) {\n    if (typeof council.chairman !== 'object' || Array.isArray(council.chairman)) {\n      exitWithError(`Invalid config in ${configPath}: 'council.chairman' must be a mapping/object`);\n    }\n    merged.council.chairman = { ...merged.council.chairman, ...council.chairman };\n  }\n\n  if (Object.prototype.hasOwnProperty.call(council, 'members')) {\n    if (!Array.isArray(council.members)) {\n      exitWithError(`Invalid config in ${configPath}: 'council.members' must be a list/array`);\n    }\n    merged.council.members = council.members;\n  }\n\n  if (council.settings != null) {\n    if (typeof council.settings !== 'object' || Array.isArray(council.settings)) {\n      exitWithError(`Invalid config in ${configPath}: 'council.settings' must be a mapping/object`);\n    }\n    merged.council.settings = { ...merged.council.settings, ...council.settings };\n  }\n\n  return merged;\n}\n\nfunction ensureDir(dirPath) {\n  fs.mkdirSync(dirPath, { recursive: true });\n}\n\nfunction safeFileName(name) {\n  const cleaned = String(name || '').trim().toLowerCase().replace(/[^a-z0-9_-]+/g, '-');\n  return cleaned || 'member';\n}\n\nfunction atomicWriteJson(filePath, payload) {\n  const tmpPath = `${filePath}.${process.pid}.${crypto.randomBytes(4).toString('hex')}.tmp`;\n  fs.writeFileSync(tmpPath, JSON.stringify(payload, null, 2), 'utf8');\n  fs.renameSync(tmpPath, filePath);\n}\n\nfunction readJsonIfExists(filePath) {\n  try {\n    if (!fs.existsSync(filePath)) return null;\n    return JSON.parse(fs.readFileSync(filePath, 'utf8'));\n  } catch {\n    return null;\n  }\n}\n\nfunction sleepMs(ms) {\n  const msNum = Number(ms);\n  if (!Number.isFinite(msNum) || msNum <= 0) return;\n  const sab = new SharedArrayBuffer(4);\n  const view = new Int32Array(sab);\n  Atomics.wait(view, 0, 0, 
Math.trunc(msNum));\n}\n\nfunction computeTerminalDoneCount(counts) {\n  const c = counts || {};\n  return (\n    Number(c.done || 0) +\n    Number(c.missing_cli || 0) +\n    Number(c.error || 0) +\n    Number(c.timed_out || 0) +\n    Number(c.canceled || 0)\n  );\n}\n\nfunction asCodexStepStatus(value) {\n  const v = String(value || '');\n  if (v === 'pending' || v === 'in_progress' || v === 'completed') return v;\n  return 'pending';\n}\n\nfunction buildCouncilUiPayload(statusPayload) {\n  const counts = statusPayload.counts || {};\n  const done = computeTerminalDoneCount(counts);\n  const total = Number(counts.total || 0);\n  const isDone = String(statusPayload.overallState || '') === 'done';\n\n  const queued = Number(counts.queued || 0);\n  const running = Number(counts.running || 0);\n\n  const members = Array.isArray(statusPayload.members) ? statusPayload.members : [];\n  const sortedMembers = members\n    .map((m) => ({\n      member: m && m.member != null ? String(m.member) : '',\n      state: m && m.state != null ? String(m.state) : 'unknown',\n      exitCode: m && m.exitCode != null ? m.exitCode : null,\n    }))\n    .filter((m) => m.member)\n    .sort((a, b) => a.member.localeCompare(b.member));\n\n  const terminalStates = new Set(['done', 'missing_cli', 'error', 'timed_out', 'canceled']);\n  // Keep the Plan UI visible by ensuring exactly one `in_progress` item while work remains.\n  const dispatchStatus = asCodexStepStatus(isDone ? 'completed' : queued > 0 ? 
'in_progress' : 'completed');\n  let hasInProgress = dispatchStatus === 'in_progress';\n\n  const memberSteps = sortedMembers.map((m) => {\n    const state = m.state || 'unknown';\n    const isTerminal = terminalStates.has(state);\n\n    let status;\n    if (isTerminal) {\n      status = 'completed';\n    } else if (!hasInProgress && running > 0 && state === 'running') {\n      status = 'in_progress';\n      hasInProgress = true;\n    } else {\n      status = 'pending';\n    }\n\n    const label = `[Council] Ask ${m.member}`;\n    return { label, status: asCodexStepStatus(status) };\n  });\n\n  // Once members are done, the host agent should synthesize and then mark this step completed.\n  const synthStatus = asCodexStepStatus(isDone ? (hasInProgress ? 'pending' : 'in_progress') : 'pending');\n\n  const codexPlan = [\n    { step: `[Council] Prompt dispatch`, status: dispatchStatus },\n    ...memberSteps.map((s) => ({ step: s.label, status: s.status })),\n    { step: `[Council] Synthesize`, status: synthStatus },\n  ];\n\n  const claudeTodos = [\n    {\n      content: `[Council] Prompt dispatch`,\n      status: dispatchStatus,\n      activeForm: dispatchStatus === 'completed' ? 'Dispatched council prompts' : 'Dispatching council prompts',\n    },\n    ...memberSteps.map((s) => ({\n      content: s.label,\n      status: s.status,\n      activeForm: s.status === 'completed' ? 'Finished' : 'Awaiting response',\n    })),\n    {\n      content: `[Council] Synthesize`,\n      status: synthStatus,\n      activeForm:\n        synthStatus === 'completed'\n          ? 'Council results ready'\n          : synthStatus === 'in_progress'\n            ? 
'Ready to synthesize'\n            : 'Waiting to synthesize',\n    },\n  ];\n\n  return {\n    progress: { done, total, overallState: String(statusPayload.overallState || '') },\n    codex: { update_plan: { plan: codexPlan } },\n    claude: { todo_write: { todos: claudeTodos } },\n  };\n}\n\nfunction computeStatusPayload(jobDir) {\n  const resolvedJobDir = path.resolve(jobDir);\n  if (!fs.existsSync(resolvedJobDir)) exitWithError(`jobDir not found: ${resolvedJobDir}`);\n\n  const jobMeta = readJsonIfExists(path.join(resolvedJobDir, 'job.json'));\n  if (!jobMeta) exitWithError(`job.json not found: ${path.join(resolvedJobDir, 'job.json')}`);\n\n  const membersRoot = path.join(resolvedJobDir, 'members');\n  if (!fs.existsSync(membersRoot)) exitWithError(`members folder not found: ${membersRoot}`);\n\n  const members = [];\n  for (const entry of fs.readdirSync(membersRoot)) {\n    const statusPath = path.join(membersRoot, entry, 'status.json');\n    const status = readJsonIfExists(statusPath);\n    if (status) members.push({ safeName: entry, ...status });\n  }\n\n  const totals = { queued: 0, running: 0, done: 0, error: 0, missing_cli: 0, timed_out: 0, canceled: 0 };\n  for (const m of members) {\n    const state = String(m.state || 'unknown');\n    if (Object.prototype.hasOwnProperty.call(totals, state)) totals[state]++;\n  }\n\n  const allDone = totals.running === 0 && totals.queued === 0;\n  const overallState = allDone ? 'done' : totals.running > 0 ? 'running' : 'queued';\n\n  return {\n    jobDir: resolvedJobDir,\n    id: jobMeta.id || null,\n    chairmanRole: jobMeta.chairmanRole || null,\n    overallState,\n    counts: { total: members.length, ...totals },\n    members: members\n      .map((m) => ({\n        member: m.member,\n        state: m.state,\n        startedAt: m.startedAt || null,\n        finishedAt: m.finishedAt || null,\n        exitCode: m.exitCode != null ? 
m.exitCode : null,\n        message: m.message || null,\n      }))\n      .sort((a, b) => String(a.member).localeCompare(String(b.member))),\n  };\n}\n\nfunction parseArgs(argv) {\n  const args = argv.slice(2);\n  const out = { _: [] };\n  const booleanFlags = new Set([\n    'json',\n    'text',\n    'checklist',\n    'help',\n    'h',\n    'verbose',\n    'include-chairman',\n    'exclude-chairman',\n  ]);\n  for (let i = 0; i < args.length; i++) {\n    const a = args[i];\n    if (a === '--') {\n      out._.push(...args.slice(i + 1));\n      break;\n    }\n    if (!a.startsWith('--')) {\n      out._.push(a);\n      continue;\n    }\n\n    const [key, rawValue] = a.split('=', 2);\n    if (rawValue != null) {\n      out[key.slice(2)] = rawValue;\n      continue;\n    }\n\n    const normalizedKey = key.slice(2);\n    if (booleanFlags.has(normalizedKey)) {\n      out[normalizedKey] = true;\n      continue;\n    }\n\n    const next = args[i + 1];\n    if (next == null || next.startsWith('--')) {\n      out[normalizedKey] = true;\n      continue;\n    }\n    out[normalizedKey] = next;\n    i++;\n  }\n  return out;\n}\n\nfunction printHelp() {\n  process.stdout.write(`Agent Council (job mode)\n\nUsage:\n  council-job.sh start [--config path] [--chairman auto|claude|codex|...] 
[--timeout sec] [--jobs-dir path] [--json] \"question\"\n  council-job.sh status [--json|--text|--checklist] [--verbose] <jobDir>\n  council-job.sh wait [--cursor CURSOR] [--bucket auto|N] [--interval-ms N] [--timeout-ms N] <jobDir>\n  council-job.sh results [--json] <jobDir>\n  council-job.sh stop <jobDir>\n  council-job.sh clean <jobDir>\n\nNotes:\n  - start returns immediately and runs members in parallel via detached Node workers\n  - poll status with repeated short calls to update TODO/plan UIs in host agents\n  - wait prints JSON by default and blocks until meaningful progress occurs, so you don't spam tool cells\n`);\n}\n\nfunction cmdStart(options, prompt) {\n  const configPath = options.config || process.env.COUNCIL_CONFIG || resolveDefaultConfigFile();\n  const jobsDir =\n    options['jobs-dir'] || process.env.COUNCIL_JOBS_DIR || path.join(SKILL_DIR, '.jobs');\n\n  ensureDir(jobsDir);\n\n  const hostRole = detectHostRole();\n  const config = parseCouncilConfig(configPath);\n  const chairmanRoleRaw = options.chairman || process.env.COUNCIL_CHAIRMAN || config.council.chairman.role || 'auto';\n  const chairmanRole = resolveAutoRole(chairmanRoleRaw, hostRole);\n\n  const includeChairman = Boolean(options['include-chairman']);\n  const excludeChairmanOverride =\n    options['exclude-chairman'] != null ? true : options['include-chairman'] != null ? false : null;\n\n  const excludeSetting = normalizeBool(config.council.settings.exclude_chairman_from_members);\n  const excludeChairmanFromMembers =\n    excludeChairmanOverride != null ? excludeChairmanOverride : excludeSetting != null ? excludeSetting : true;\n\n  const timeoutSetting = Number(config.council.settings.timeout || 0);\n  const timeoutOverride = options.timeout != null ? Number(options.timeout) : null;\n  // CLI --timeout overrides the config value; --timeout 0 disables the per-member timeout.\n  const timeoutSec = timeoutOverride != null && Number.isFinite(timeoutOverride) && timeoutOverride >= 0 ? timeoutOverride : timeoutSetting > 0 ? 
timeoutSetting : 0;\n\n  const requestedMembers = config.council.members || [];\n  const members = requestedMembers.filter((m) => {\n    if (!m || !m.name || !m.command) return false;\n    const nameLc = String(m.name).toLowerCase();\n    if (excludeChairmanFromMembers && !includeChairman && nameLc === chairmanRole) return false;\n    return true;\n  });\n\n  const jobId = `${new Date().toISOString().replace(/[:.]/g, '').replace('T', '-').slice(0, 15)}-${crypto\n    .randomBytes(3)\n    .toString('hex')}`;\n  const jobDir = path.join(jobsDir, `council-${jobId}`);\n  const membersDir = path.join(jobDir, 'members');\n  ensureDir(membersDir);\n\n  fs.writeFileSync(path.join(jobDir, 'prompt.txt'), String(prompt), 'utf8');\n\n  const jobMeta = {\n    id: `council-${jobId}`,\n    createdAt: new Date().toISOString(),\n    configPath,\n    hostRole,\n    chairmanRole,\n    settings: {\n      excludeChairmanFromMembers,\n      timeoutSec: timeoutSec || null,\n    },\n    members: members.map((m) => ({\n      name: String(m.name),\n      command: String(m.command),\n      emoji: m.emoji ? String(m.emoji) : null,\n      color: m.color ? 
String(m.color) : null,\n    })),\n  };\n  atomicWriteJson(path.join(jobDir, 'job.json'), jobMeta);\n\n  for (const member of members) {\n    const name = String(member.name);\n    const safeName = safeFileName(name);\n    const memberDir = path.join(membersDir, safeName);\n    ensureDir(memberDir);\n\n    atomicWriteJson(path.join(memberDir, 'status.json'), {\n      member: name,\n      state: 'queued',\n      queuedAt: new Date().toISOString(),\n      command: String(member.command),\n    });\n\n    const workerArgs = [\n      WORKER_PATH,\n      '--job-dir',\n      jobDir,\n      '--member',\n      name,\n      '--safe-member',\n      safeName,\n      '--command',\n      String(member.command),\n    ];\n    if (timeoutSec && Number.isFinite(timeoutSec) && timeoutSec > 0) {\n      workerArgs.push('--timeout', String(timeoutSec));\n    }\n\n    const child = spawn(process.execPath, workerArgs, {\n      detached: true,\n      stdio: 'ignore',\n      env: process.env,\n    });\n    child.unref();\n  }\n\n  if (options.json) {\n    process.stdout.write(`${JSON.stringify({ jobDir, ...jobMeta }, null, 2)}\\n`);\n  } else {\n    process.stdout.write(`${jobDir}\\n`);\n  }\n}\n\nfunction cmdStatus(options, jobDir) {\n  const payload = computeStatusPayload(jobDir);\n\n  const wantChecklist = Boolean(options.checklist) && !options.json;\n  if (wantChecklist) {\n    const done = computeTerminalDoneCount(payload.counts);\n    const headerId = payload.id ? ` (${payload.id})` : '';\n    process.stdout.write(`Agent Council${headerId}\\n`);\n    process.stdout.write(\n      `Progress: ${done}/${payload.counts.total} done  (running ${payload.counts.running}, queued ${payload.counts.queued})\\n`\n    );\n    for (const m of payload.members) {\n      const state = String(m.state || '');\n      const mark =\n        state === 'done'\n          ? '[x]'\n          : state === 'running' || state === 'queued'\n            ? '[ ]'\n            : state\n              ? 
'[!]'\n              : '[ ]';\n      const exitInfo = m.exitCode != null ? ` (exit ${m.exitCode})` : '';\n      process.stdout.write(`${mark} ${m.member} — ${state}${exitInfo}\\n`);\n    }\n    return;\n  }\n\n  const wantText = Boolean(options.text) && !options.json;\n  if (wantText) {\n    const done = computeTerminalDoneCount(payload.counts);\n    process.stdout.write(`members ${done}/${payload.counts.total} done; running=${payload.counts.running} queued=${payload.counts.queued}\\n`);\n    if (options.verbose) {\n      for (const m of payload.members) {\n        process.stdout.write(`- ${m.member}: ${m.state}${m.exitCode != null ? ` (exit ${m.exitCode})` : ''}\\n`);\n      }\n    }\n    return;\n  }\n\n  process.stdout.write(`${JSON.stringify(payload, null, 2)}\\n`);\n}\n\nfunction parseWaitCursor(value) {\n  const raw = String(value || '').trim();\n  if (!raw) return null;\n  const parts = raw.split(':');\n  const version = parts[0];\n  if (version === 'v1' && parts.length === 4) {\n    const bucketSize = Number(parts[1]);\n    const doneBucket = Number(parts[2]);\n    const isDone = parts[3] === '1';\n    if (!Number.isFinite(bucketSize) || bucketSize <= 0) return null;\n    if (!Number.isFinite(doneBucket) || doneBucket < 0) return null;\n    return { version, bucketSize, dispatchBucket: 0, doneBucket, isDone };\n  }\n  if (version === 'v2' && parts.length === 5) {\n    const bucketSize = Number(parts[1]);\n    const dispatchBucket = Number(parts[2]);\n    const doneBucket = Number(parts[3]);\n    const isDone = parts[4] === '1';\n    if (!Number.isFinite(bucketSize) || bucketSize <= 0) return null;\n    if (!Number.isFinite(dispatchBucket) || dispatchBucket < 0) return null;\n    if (!Number.isFinite(doneBucket) || doneBucket < 0) return null;\n    return { version, bucketSize, dispatchBucket, doneBucket, isDone };\n  }\n  return null;\n}\n\nfunction formatWaitCursor(bucketSize, dispatchBucket, doneBucket, isDone) {\n  return 
`v2:${bucketSize}:${dispatchBucket}:${doneBucket}:${isDone ? 1 : 0}`;\n}\n\nfunction asWaitPayload(statusPayload) {\n  const members = Array.isArray(statusPayload.members) ? statusPayload.members : [];\n  return {\n    jobDir: statusPayload.jobDir,\n    id: statusPayload.id,\n    chairmanRole: statusPayload.chairmanRole,\n    overallState: statusPayload.overallState,\n    counts: statusPayload.counts,\n    members: members.map((m) => ({\n      member: m.member,\n      state: m.state,\n      exitCode: m.exitCode != null ? m.exitCode : null,\n      message: m.message || null,\n    })),\n    ui: buildCouncilUiPayload(statusPayload),\n  };\n}\n\nfunction resolveBucketSize(options, total, prevCursor) {\n  const raw = options.bucket != null ? options.bucket : options['bucket-size'];\n\n  if (raw == null || raw === true) {\n    if (prevCursor && prevCursor.bucketSize) return prevCursor.bucketSize;\n  } else {\n    const asString = String(raw).trim().toLowerCase();\n    if (asString !== 'auto') {\n      const num = Number(asString);\n      if (!Number.isFinite(num) || num <= 0) exitWithError(`wait: invalid --bucket: ${raw}`);\n      return Math.trunc(num);\n    }\n  }\n\n  // Auto-bucket: target ~5 updates total.\n  const totalNum = Number(total || 0);\n  if (!Number.isFinite(totalNum) || totalNum <= 0) return 1;\n  return Math.max(1, Math.ceil(totalNum / 5));\n}\n\nfunction cmdWait(options, jobDir) {\n  const resolvedJobDir = path.resolve(jobDir);\n  const cursorFilePath = path.join(resolvedJobDir, '.wait_cursor');\n  const prevCursorRaw =\n    options.cursor != null\n      ? String(options.cursor)\n      : fs.existsSync(cursorFilePath)\n        ? String(fs.readFileSync(cursorFilePath, 'utf8')).trim()\n        : '';\n  const prevCursor = parseWaitCursor(prevCursorRaw);\n\n  const intervalMsRaw = options['interval-ms'] != null ? 
options['interval-ms'] : 250;\n  // Validate before clamping so invalid values error instead of silently becoming 50.\n  const intervalMsNum = Math.trunc(Number(intervalMsRaw));\n  if (!Number.isFinite(intervalMsNum) || intervalMsNum <= 0) exitWithError(`wait: invalid --interval-ms: ${intervalMsRaw}`);\n  const intervalMs = Math.max(50, intervalMsNum);\n\n  const timeoutMsRaw = options['timeout-ms'] != null ? options['timeout-ms'] : 0;\n  const timeoutMs = Math.trunc(Number(timeoutMsRaw));\n  if (!Number.isFinite(timeoutMs) || timeoutMs < 0) exitWithError(`wait: invalid --timeout-ms: ${timeoutMsRaw}`);\n\n  // Always read once to decide bucket sizing and (when no cursor is given) return immediately.\n  let payload = computeStatusPayload(jobDir);\n  const bucketSize = resolveBucketSize(options, payload.counts.total, prevCursor);\n\n  const doneCount = computeTerminalDoneCount(payload.counts);\n  const isDone = payload.overallState === 'done';\n  const total = Number(payload.counts.total || 0);\n  const queued = Number(payload.counts.queued || 0);\n  const dispatchBucket = queued === 0 && total > 0 ? 1 : 0;\n  const doneBucket = Math.floor(doneCount / bucketSize);\n  const cursor = formatWaitCursor(bucketSize, dispatchBucket, doneBucket, isDone);\n\n  if (!prevCursor) {\n    fs.writeFileSync(cursorFilePath, cursor, 'utf8');\n    process.stdout.write(`${JSON.stringify({ ...asWaitPayload(payload), cursor }, null, 2)}\\n`);\n    return;\n  }\n\n  const start = Date.now();\n  while (cursor === prevCursorRaw) {\n    if (timeoutMs > 0 && Date.now() - start >= timeoutMs) break;\n    sleepMs(intervalMs);\n    payload = computeStatusPayload(jobDir);\n    const d = computeTerminalDoneCount(payload.counts);\n    const doneFlag = payload.overallState === 'done';\n    const totalCount = Number(payload.counts.total || 0);\n    const queuedCount = Number(payload.counts.queued || 0);\n    const dispatchB = queuedCount === 0 && totalCount > 0 ? 
1 : 0;\n    const doneB = Math.floor(d / bucketSize);\n    const nextCursor = formatWaitCursor(bucketSize, dispatchB, doneB, doneFlag);\n    if (nextCursor !== prevCursorRaw) {\n      fs.writeFileSync(cursorFilePath, nextCursor, 'utf8');\n      process.stdout.write(`${JSON.stringify({ ...asWaitPayload(payload), cursor: nextCursor }, null, 2)}\\n`);\n      return;\n    }\n  }\n\n  // Timeout: return current state (cursor may be unchanged).\n  const finalPayload = computeStatusPayload(jobDir);\n  const finalDone = computeTerminalDoneCount(finalPayload.counts);\n  const finalDoneFlag = finalPayload.overallState === 'done';\n  const finalTotal = Number(finalPayload.counts.total || 0);\n  const finalQueued = Number(finalPayload.counts.queued || 0);\n  const finalDispatchBucket = finalQueued === 0 && finalTotal > 0 ? 1 : 0;\n  const finalDoneBucket = Math.floor(finalDone / bucketSize);\n  const finalCursor = formatWaitCursor(bucketSize, finalDispatchBucket, finalDoneBucket, finalDoneFlag);\n  fs.writeFileSync(cursorFilePath, finalCursor, 'utf8');\n  process.stdout.write(`${JSON.stringify({ ...asWaitPayload(finalPayload), cursor: finalCursor }, null, 2)}\\n`);\n}\n\nfunction cmdResults(options, jobDir) {\n  const resolvedJobDir = path.resolve(jobDir);\n  const jobMeta = readJsonIfExists(path.join(resolvedJobDir, 'job.json'));\n  const membersRoot = path.join(resolvedJobDir, 'members');\n\n  const members = [];\n  if (fs.existsSync(membersRoot)) {\n    for (const entry of fs.readdirSync(membersRoot)) {\n      const statusPath = path.join(membersRoot, entry, 'status.json');\n      const outputPath = path.join(membersRoot, entry, 'output.txt');\n      const errorPath = path.join(membersRoot, entry, 'error.txt');\n      const status = readJsonIfExists(statusPath);\n      if (!status) continue;\n      const output = fs.existsSync(outputPath) ? fs.readFileSync(outputPath, 'utf8') : '';\n      const stderr = fs.existsSync(errorPath) ? 
fs.readFileSync(errorPath, 'utf8') : '';\n      members.push({ safeName: entry, ...status, output, stderr });\n    }\n  }\n\n  if (options.json) {\n    process.stdout.write(\n      `${JSON.stringify(\n        {\n          jobDir: resolvedJobDir,\n          id: jobMeta ? jobMeta.id : null,\n          prompt: fs.existsSync(path.join(resolvedJobDir, 'prompt.txt'))\n            ? fs.readFileSync(path.join(resolvedJobDir, 'prompt.txt'), 'utf8')\n            : null,\n          members: members\n            .map((m) => ({\n              member: m.member,\n              state: m.state,\n              exitCode: m.exitCode != null ? m.exitCode : null,\n              message: m.message || null,\n              output: m.output,\n              stderr: m.stderr,\n            }))\n            .sort((a, b) => String(a.member).localeCompare(String(b.member))),\n        },\n        null,\n        2\n      )}\\n`\n    );\n    return;\n  }\n\n  for (const m of members.sort((a, b) => String(a.member).localeCompare(String(b.member)))) {\n    process.stdout.write(`\\n=== ${m.member} (${m.state}) ===\\n`);\n    if (m.message) process.stdout.write(`${m.message}\\n`);\n    process.stdout.write(m.output || '');\n    if (!m.output && m.stderr) {\n      process.stdout.write('\\n');\n      process.stdout.write(m.stderr);\n    }\n    process.stdout.write('\\n');\n  }\n}\n\nfunction cmdStop(_options, jobDir) {\n  const resolvedJobDir = path.resolve(jobDir);\n  const membersRoot = path.join(resolvedJobDir, 'members');\n  if (!fs.existsSync(membersRoot)) exitWithError(`No members folder found: ${membersRoot}`);\n\n  let stoppedAny = false;\n  for (const entry of fs.readdirSync(membersRoot)) {\n    const statusPath = path.join(membersRoot, entry, 'status.json');\n    const status = readJsonIfExists(statusPath);\n    if (!status) continue;\n    if (status.state !== 'running') continue;\n    if (!status.pid) continue;\n\n    try {\n      process.kill(Number(status.pid), 'SIGTERM');\n      stoppedAny = 
true;\n    } catch {\n      // ignore\n    }\n  }\n\n  process.stdout.write(stoppedAny ? 'stop: sent SIGTERM to running members\\n' : 'stop: no running members\\n');\n}\n\nfunction cmdClean(_options, jobDir) {\n  const resolvedJobDir = path.resolve(jobDir);\n  fs.rmSync(resolvedJobDir, { recursive: true, force: true });\n  process.stdout.write(`cleaned: ${resolvedJobDir}\\n`);\n}\n\nfunction main() {\n  const options = parseArgs(process.argv);\n  const [command, ...rest] = options._;\n\n  if (!command || options.help || options.h) {\n    printHelp();\n    return;\n  }\n\n  if (command === 'start') {\n    const prompt = rest.join(' ').trim();\n    if (!prompt) exitWithError('start: missing prompt');\n    cmdStart(options, prompt);\n    return;\n  }\n  if (command === 'status') {\n    const jobDir = rest[0];\n    if (!jobDir) exitWithError('status: missing jobDir');\n    cmdStatus(options, jobDir);\n    return;\n  }\n  if (command === 'wait') {\n    const jobDir = rest[0];\n    if (!jobDir) exitWithError('wait: missing jobDir');\n    cmdWait(options, jobDir);\n    return;\n  }\n  if (command === 'results') {\n    const jobDir = rest[0];\n    if (!jobDir) exitWithError('results: missing jobDir');\n    cmdResults(options, jobDir);\n    return;\n  }\n  if (command === 'stop') {\n    const jobDir = rest[0];\n    if (!jobDir) exitWithError('stop: missing jobDir');\n    cmdStop(options, jobDir);\n    return;\n  }\n  if (command === 'clean') {\n    const jobDir = rest[0];\n    if (!jobDir) exitWithError('clean: missing jobDir');\n    cmdClean(options, jobDir);\n    return;\n  }\n\n  exitWithError(`Unknown command: ${command}`);\n}\n\nif (require.main === module) {\n  main();\n}\n"
  },
  {
    "path": "plugins/agent-council/scripts/council-job.sh",
    "content": "#!/bin/bash\n\nset -e\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" && pwd)\"\n\nif ! command -v node >/dev/null 2>&1; then\n  echo \"Error: Node.js is required to run Agent Council job mode.\" >&2\n  echo \"Install Node.js and try again (plugin installs cannot bundle Node).\" >&2\n  echo \"\" >&2\n  echo \"macOS (Homebrew): brew install node\" >&2\n  echo \"Or download from: https://nodejs.org/\" >&2\n  exit 127\nfi\n\nexec node \"$SCRIPT_DIR/council-job.js\" \"$@\"\n"
  },
  {
    "path": "plugins/agent-council/scripts/council.sh",
    "content": "#!/bin/bash\n#\n# Agent Council (job mode default)\n#\n# Subcommands:\n#   council.sh start [options] \"question\"     # returns JOB_DIR immediately\n#   council.sh status [--json|--text|--checklist] JOB_DIR # poll progress\n#   council.sh wait [--cursor CURSOR] [--bucket auto|N] [--interval-ms N] [--timeout-ms N] JOB_DIR\n#   council.sh results [--json] JOB_DIR       # print collected outputs\n#   council.sh stop JOB_DIR                   # best-effort stop running members\n#   council.sh clean JOB_DIR                  # remove job directory\n#\n# One-shot:\n#   council.sh \"question\"\n#   (in a real terminal: starts a job, waits for completion, prints results, cleans up)\n#   (in host-agent tool UIs: returns a single `wait` JSON payload immediately; host drives progress + results)\n#\n\nset -e\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" && pwd)\"\nJOB_SCRIPT=\"$SCRIPT_DIR/council-job.sh\"\n\nusage() {\n  cat <<EOF\nAgent Council\n\nDefault mode is job-based parallel execution (pollable).\n\nUsage:\n  $(basename \"$0\") start [options] \"question\"\n  $(basename \"$0\") status [--json|--text|--checklist] <jobDir>\n  $(basename \"$0\") wait [--cursor CURSOR] [--bucket auto|N] [--interval-ms N] [--timeout-ms N] <jobDir>\n  $(basename \"$0\") results [--json] <jobDir>\n  $(basename \"$0\") stop <jobDir>\n  $(basename \"$0\") clean <jobDir>\n\nOne-shot:\n  $(basename \"$0\") \"question\"\nEOF\n}\n\nif [ $# -eq 0 ]; then\n  usage\n  exit 1\nfi\n\ncase \"$1\" in\n  -h|--help|help)\n    usage\n    exit 0\n    ;;\nesac\n\nif ! 
command -v node >/dev/null 2>&1; then\n  echo \"Error: Node.js is required to run Agent Council.\" >&2\n  echo \"Claude Code plugins cannot bundle or auto-install Node.\" >&2\n  echo \"\" >&2\n  echo \"macOS (Homebrew): brew install node\" >&2\n  echo \"Or download from: https://nodejs.org/\" >&2\n  exit 127\nfi\n\ncase \"$1\" in\n  start|status|wait|results|stop|clean)\n    exec \"$JOB_SCRIPT\" \"$@\"\n    ;;\nesac\n\nin_host_agent_context() {\n  if [ -n \"${CODEX_CACHE_FILE:-}\" ]; then\n    return 0\n  fi\n\n  case \"$SCRIPT_DIR\" in\n    */.codex/skills/*|*/.claude/skills/*)\n      # Tool-call environments typically do not provide a real TTY on stdout/stderr.\n      if [ ! -t 1 ] && [ ! -t 2 ]; then\n        return 0\n      fi\n      ;;\n  esac\n\n  return 1\n}\n\nJOB_DIR=\"$(\"$JOB_SCRIPT\" start \"$@\")\"\n\n# Host agents (Codex CLI / Claude Code) cannot update native TODO/plan UIs while a long-running\n# command is executing. If we're in a host agent context, return immediately with a single `wait`\n# JSON payload (includes `.ui.codex.update_plan.plan` / `.ui.claude.todo_write.todos`) and let the\n# host agent drive progress updates with repeated short `wait` calls + native UI updates.\nif in_host_agent_context; then\n  exec \"$JOB_SCRIPT\" wait \"$JOB_DIR\"\nfi\n\necho \"council: started ${JOB_DIR}\" >&2\n\ncleanup_on_signal() {\n  if [ -n \"${JOB_DIR:-}\" ] && [ -d \"$JOB_DIR\" ]; then\n    \"$JOB_SCRIPT\" stop \"$JOB_DIR\" >/dev/null 2>&1 || true\n    \"$JOB_SCRIPT\" clean \"$JOB_DIR\" >/dev/null 2>&1 || true\n  fi\n  exit 130\n}\n\ntrap cleanup_on_signal INT TERM\n\nwhile true; do\n  WAIT_JSON=\"$(\"$JOB_SCRIPT\" wait \"$JOB_DIR\")\"\n  OVERALL=\"$(printf '%s' \"$WAIT_JSON\" | node -e '\nconst fs=require(\"fs\");\nconst d=JSON.parse(fs.readFileSync(0,\"utf8\"));\nprocess.stdout.write(String(d.overallState||\"\"));\n')\"\n\n  \"$JOB_SCRIPT\" status --text \"$JOB_DIR\" >&2\n\n  if [ \"$OVERALL\" = \"done\" ]; then\n    break\n  fi\ndone\n\ntrap - INT 
TERM\n\n\"$JOB_SCRIPT\" results \"$JOB_DIR\"\n\"$JOB_SCRIPT\" clean \"$JOB_DIR\" >/dev/null\n"
  },
  {
    "path": "plugins/agent-council/skills/agent-council/SKILL.md",
    "content": "---\nname: agent-council\ndescription: Collect and synthesize opinions from multiple AI agents. Use when users say \"summon the council\", \"ask other AIs\", or want multiple AI perspectives on a question.\n---\n\n# Agent Council\n\nCollect multiple AI opinions and synthesize one answer.\n\n## Usage\n\nRun a job and collect results:\n\n```bash\nJOB_DIR=$(./skills/agent-council/scripts/council.sh start \"your question here\")\n./skills/agent-council/scripts/council.sh wait \"$JOB_DIR\"\n./skills/agent-council/scripts/council.sh results \"$JOB_DIR\"\n./skills/agent-council/scripts/council.sh clean \"$JOB_DIR\"\n```\n\nOne-shot:\n\n```bash\n./skills/agent-council/scripts/council.sh \"your question here\"\n```\n\n## References\n\n- `references/overview.md` — workflow and background.\n- `references/examples.md` — usage examples.\n- `references/config.md` — member configuration.\n- `references/requirements.md` — dependencies and CLI checks.\n- `references/host-ui.md` — host UI checklist guidance.\n- `references/safety.md` — safety notes.\n"
  },
  {
    "path": "plugins/agent-council/skills/agent-council/references/config.md",
    "content": "# Configure members\n\nEdit `council.config.yaml` to set chairman and members:\n\n```yaml\ncouncil:\n  chairman:\n    role: \"auto\"\n  members:\n    - name: claude\n      command: \"claude -p\"\n      emoji: \"🧠\"\n      color: \"CYAN\"\n    - name: codex\n      command: \"codex exec\"\n      emoji: \"🤖\"\n      color: \"BLUE\"\n    - name: gemini\n      command: \"gemini\"\n      emoji: \"💎\"\n      color: \"GREEN\"\n```\n\nAdd custom members by appending entries to `members`:\n\n- Use a stable `name` (lowercase, short).\n- Set `command` to a runnable CLI invocation.\n- Provide `emoji` and `color` for readability (optional but recommended).\n- Note that the installer filters members to detected CLIs only when it first generates `council.config.yaml`. After that, missing CLIs are not auto-removed and will report `missing_cli` at runtime; remove unavailable members or install the CLI before running.\n"
  },
  {
    "path": "plugins/agent-council/skills/agent-council/references/examples.md",
    "content": "# Examples\n\n## Technical decision\n\nPrompt:\n```\nReact vs Vue - which fits this project better? Summon the council\n```\n\nSteps:\n1. Run `council.sh` to collect opinions from configured members.\n2. Organize member perspectives.\n3. Recommend based on project context.\n\n## Architecture review\n\nPrompt:\n```\nLet's hear other AIs' opinions on this design\n```\n\nSteps:\n1. Summarize the design and query the council.\n2. Collect feedback from each member.\n3. Analyze commonalities and synthesize.\n"
  },
  {
    "path": "plugins/agent-council/skills/agent-council/references/host-ui.md",
    "content": "# Host UI Checklist Guidance\n\nUse these steps only when a host agent UI supports native checklist updates.\n\n## Checklist flow\n\n1. Run `council.sh wait` once to seed the cursor and get the JSON payload.\n2. Update the host's native checklist UI using the payload (if provided).\n3. Repeat `wait` until progress changes, then update the UI again.\n4. Finish with `results` and `clean`.\n\n## Behavior notes\n\n- Do not run a blocking wait before the first checklist update, or the Plan UI may not appear.\n- Keep exactly one in_progress item while work remains.\n- Preserve existing checklist items and append the [Council] section.\n- Avoid a long while loop in a single tool call; update after each wait return.\n- Use `--bucket 1` for per-member updates when needed.\n"
  },
  {
    "path": "plugins/agent-council/skills/agent-council/references/overview.md",
    "content": "# Overview\n\n- Gather responses from configured member CLIs.\n- Let the chairman synthesize the final response (default: `role: auto`, current agent).\n- Configure members in `council.config.yaml`; exclude the chairman from members by default.\n- Reference [Karpathy's LLM Council](https://github.com/karpathy/llm-council) for inspiration.\n\n## Workflow (3 stages)\n\n1. Send the same prompt to each member.\n2. Collect and surface member responses.\n3. Synthesize the final answer as chairman; optionally run the chairman inside `council.sh` via `chairman.command`.\n"
  },
  {
    "path": "plugins/agent-council/skills/agent-council/references/requirements.md",
    "content": "# Requirements\n\n- Install and authenticate the CLIs listed under `council.members` in `council.config.yaml`.\n- Note that the installer filters members to detected CLIs only on initial config generation; afterward, missing CLIs show as `missing_cli` in status output.\n- Install Node.js (plugins cannot bundle or auto-install it).\n- Verify each member’s base command exists (for example, `command -v <binary>` or `<binary> --version`).\n"
  },
  {
    "path": "plugins/agent-council/skills/agent-council/references/safety.md",
    "content": "# Safety\n\n- Do not share sensitive information with the council.\n"
  },
  {
    "path": "plugins/agent-council/skills/agent-council/scripts/council-job-worker.js",
    "content": "#!/usr/bin/env node\n\nconst fs = require('fs');\nconst path = require('path');\nconst crypto = require('crypto');\nconst { spawn } = require('child_process');\n\nfunction exitWithError(message) {\n  process.stderr.write(`${message}\\n`);\n  process.exit(1);\n}\n\nfunction parseArgs(argv) {\n  const args = argv.slice(2);\n  const out = { _: [] };\n  for (let i = 0; i < args.length; i++) {\n    const a = args[i];\n    if (!a.startsWith('--')) {\n      out._.push(a);\n      continue;\n    }\n\n    const [key, rawValue] = a.split('=', 2);\n    if (rawValue != null) {\n      out[key.slice(2)] = rawValue;\n      continue;\n    }\n    const next = args[i + 1];\n    if (next == null || next.startsWith('--')) {\n      out[key.slice(2)] = true;\n      continue;\n    }\n    out[key.slice(2)] = next;\n    i++;\n  }\n  return out;\n}\n\nfunction splitCommand(command) {\n  const tokens = [];\n  let current = '';\n  let inSingle = false;\n  let inDouble = false;\n  let escapeNext = false;\n\n  for (const ch of String(command || '')) {\n    if (escapeNext) {\n      current += ch;\n      escapeNext = false;\n      continue;\n    }\n\n    if (!inSingle && ch === '\\\\') {\n      escapeNext = true;\n      continue;\n    }\n\n    if (!inDouble && ch === \"'\") {\n      inSingle = !inSingle;\n      continue;\n    }\n\n    if (!inSingle && ch === '\"') {\n      inDouble = !inDouble;\n      continue;\n    }\n\n    if (!inSingle && !inDouble && /\\s/.test(ch)) {\n      if (current) tokens.push(current);\n      current = '';\n      continue;\n    }\n\n    current += ch;\n  }\n\n  if (current) tokens.push(current);\n  if (inSingle || inDouble) return null;\n  return tokens;\n}\n\nfunction atomicWriteJson(filePath, payload) {\n  const tmpPath = `${filePath}.${process.pid}.${crypto.randomBytes(4).toString('hex')}.tmp`;\n  fs.writeFileSync(tmpPath, JSON.stringify(payload, null, 2), 'utf8');\n  fs.renameSync(tmpPath, filePath);\n}\n\nfunction main() {\n  const options = 
parseArgs(process.argv);\n  const jobDir = options['job-dir'];\n  const member = options.member;\n  const safeMember = options['safe-member'];\n  const command = options.command;\n  const timeoutSec = options.timeout ? Number(options.timeout) : 0;\n\n  if (!jobDir) exitWithError('worker: missing --job-dir');\n  if (!member) exitWithError('worker: missing --member');\n  if (!safeMember) exitWithError('worker: missing --safe-member');\n  if (!command) exitWithError('worker: missing --command');\n\n  const membersRoot = path.join(jobDir, 'members');\n  const memberDir = path.join(membersRoot, safeMember);\n  const statusPath = path.join(memberDir, 'status.json');\n  const outPath = path.join(memberDir, 'output.txt');\n  const errPath = path.join(memberDir, 'error.txt');\n\n  const promptPath = path.join(jobDir, 'prompt.txt');\n  const prompt = fs.existsSync(promptPath) ? fs.readFileSync(promptPath, 'utf8') : '';\n\n  const tokens = splitCommand(command);\n  if (!tokens || tokens.length === 0) {\n    atomicWriteJson(statusPath, {\n      member,\n      state: 'error',\n      message: 'Invalid command string',\n      finishedAt: new Date().toISOString(),\n      command,\n    });\n    process.exit(1);\n  }\n\n  const program = tokens[0];\n  const args = tokens.slice(1);\n\n  atomicWriteJson(statusPath, {\n    member,\n    state: 'running',\n    startedAt: new Date().toISOString(),\n    command,\n    pid: null,\n  });\n\n  const outStream = fs.createWriteStream(outPath, { flags: 'w' });\n  const errStream = fs.createWriteStream(errPath, { flags: 'w' });\n\n  let child;\n  try {\n    child = spawn(program, [...args, prompt], {\n      stdio: ['ignore', 'pipe', 'pipe'],\n      env: process.env,\n    });\n  } catch (error) {\n    atomicWriteJson(statusPath, {\n      member,\n      state: 'error',\n      message: error && error.message ? 
error.message : 'Failed to spawn command',\n      finishedAt: new Date().toISOString(),\n      command,\n    });\n    process.exit(1);\n  }\n\n  atomicWriteJson(statusPath, {\n    member,\n    state: 'running',\n    startedAt: new Date().toISOString(),\n    command,\n    pid: child.pid,\n  });\n\n  if (child.stdout) child.stdout.pipe(outStream);\n  if (child.stderr) child.stderr.pipe(errStream);\n\n  let timeoutHandle = null;\n  let timeoutTriggered = false;\n  if (Number.isFinite(timeoutSec) && timeoutSec > 0) {\n    timeoutHandle = setTimeout(() => {\n      timeoutTriggered = true;\n      try {\n        process.kill(child.pid, 'SIGTERM');\n      } catch {\n        // ignore\n      }\n    }, timeoutSec * 1000);\n    timeoutHandle.unref();\n  }\n\n  const finalize = (payload) => {\n    try {\n      outStream.end();\n      errStream.end();\n    } catch {\n      // ignore\n    }\n    atomicWriteJson(statusPath, payload);\n  };\n\n  child.on('error', (error) => {\n    const isMissing = error && error.code === 'ENOENT';\n    finalize({\n      member,\n      state: isMissing ? 'missing_cli' : 'error',\n      message: error && error.message ? error.message : 'Process error',\n      finishedAt: new Date().toISOString(),\n      command,\n      exitCode: null,\n      pid: child.pid,\n    });\n    process.exit(1);\n  });\n\n  child.on('exit', (code, signal) => {\n    if (timeoutHandle) clearTimeout(timeoutHandle);\n    const timedOut = Boolean(timeoutTriggered) && signal === 'SIGTERM';\n    const canceled = !timedOut && signal === 'SIGTERM';\n    finalize({\n      member,\n      state: timedOut ? 'timed_out' : canceled ? 'canceled' : code === 0 ? 'done' : 'error',\n      message: timedOut ? `Timed out after ${timeoutSec}s` : canceled ? 'Canceled' : null,\n      finishedAt: new Date().toISOString(),\n      command,\n      exitCode: typeof code === 'number' ? code : null,\n      signal: signal || null,\n      pid: child.pid,\n    });\n    process.exit(code === 0 ? 
0 : 1);\n  });\n}\n\nif (require.main === module) {\n  main();\n}\n"
  },
  {
    "path": "plugins/agent-council/skills/agent-council/scripts/council-job.js",
    "content": "#!/usr/bin/env node\n\nconst fs = require('fs');\nconst path = require('path');\nconst crypto = require('crypto');\nconst { spawn } = require('child_process');\n\nconst SCRIPT_DIR = __dirname;\nconst SKILL_DIR = path.resolve(SCRIPT_DIR, '..');\nconst WORKER_PATH = path.join(SCRIPT_DIR, 'council-job-worker.js');\n\nconst SKILL_CONFIG_FILE = path.join(SKILL_DIR, 'council.config.yaml');\nconst REPO_CONFIG_FILE = path.join(path.resolve(SKILL_DIR, '../..'), 'council.config.yaml');\n\nfunction exitWithError(message) {\n  process.stderr.write(`${message}\\n`);\n  process.exit(1);\n}\n\nfunction resolveDefaultConfigFile() {\n  if (fs.existsSync(SKILL_CONFIG_FILE)) return SKILL_CONFIG_FILE;\n  if (fs.existsSync(REPO_CONFIG_FILE)) return REPO_CONFIG_FILE;\n  return SKILL_CONFIG_FILE;\n}\n\nfunction detectHostRole() {\n  const normalized = SKILL_DIR.replace(/\\\\/g, '/');\n  if (normalized.includes('/.claude/skills/')) return 'claude';\n  if (normalized.includes('/.codex/skills/')) return 'codex';\n  return 'unknown';\n}\n\nfunction normalizeBool(value) {\n  if (value == null) return null;\n  const v = String(value).trim().toLowerCase();\n  if (['1', 'true', 'yes', 'y', 'on'].includes(v)) return true;\n  if (['0', 'false', 'no', 'n', 'off'].includes(v)) return false;\n  return null;\n}\n\nfunction resolveAutoRole(role, hostRole) {\n  const roleLc = String(role || '').trim().toLowerCase();\n  if (roleLc && roleLc !== 'auto') return roleLc;\n  if (hostRole === 'codex') return 'codex';\n  if (hostRole === 'claude') return 'claude';\n  return 'claude';\n}\n\nfunction parseCouncilConfig(configPath) {\n  const fallback = {\n    council: {\n      chairman: { role: 'auto' },\n      members: [\n        { name: 'claude', command: 'claude -p', emoji: '🧠', color: 'CYAN' },\n        { name: 'codex', command: 'codex exec', emoji: '🤖', color: 'BLUE' },\n        { name: 'gemini', command: 'gemini', emoji: '💎', color: 'GREEN' },\n      ],\n      settings: { 
exclude_chairman_from_members: true, timeout: 120 },\n    },\n  };\n\n  if (!fs.existsSync(configPath)) return fallback;\n\n  let YAML;\n  try {\n    YAML = require('yaml');\n  } catch {\n    exitWithError(\n      [\n        'Missing runtime dependency: yaml',\n        'Your Agent Council installation is out of date.',\n        'Reinstall from your project root:',\n        '  npx github:team-attention/agent-council --target auto',\n      ].join('\\n')\n    );\n  }\n\n  let parsed;\n  try {\n    parsed = YAML.parse(fs.readFileSync(configPath, 'utf8'));\n  } catch (error) {\n    const message = error && error.message ? error.message : String(error);\n    exitWithError(`Invalid YAML in ${configPath}: ${message}`);\n  }\n\n  if (!parsed || typeof parsed !== 'object' || Array.isArray(parsed)) {\n    exitWithError(`Invalid config in ${configPath}: expected a YAML mapping/object at the document root`);\n  }\n  if (!parsed.council) {\n    exitWithError(`Invalid config in ${configPath}: missing required top-level key 'council:'`);\n  }\n  if (typeof parsed.council !== 'object' || Array.isArray(parsed.council)) {\n    exitWithError(`Invalid config in ${configPath}: 'council' must be a mapping/object`);\n  }\n\n  const merged = {\n    council: {\n      chairman: { ...fallback.council.chairman },\n      members: Array.isArray(fallback.council.members) ? 
[...fallback.council.members] : [],\n      settings: { ...fallback.council.settings },\n    },\n  };\n\n  const council = parsed.council;\n\n  if (council.chairman != null) {\n    if (typeof council.chairman !== 'object' || Array.isArray(council.chairman)) {\n      exitWithError(`Invalid config in ${configPath}: 'council.chairman' must be a mapping/object`);\n    }\n    merged.council.chairman = { ...merged.council.chairman, ...council.chairman };\n  }\n\n  if (Object.prototype.hasOwnProperty.call(council, 'members')) {\n    if (!Array.isArray(council.members)) {\n      exitWithError(`Invalid config in ${configPath}: 'council.members' must be a list/array`);\n    }\n    merged.council.members = council.members;\n  }\n\n  if (council.settings != null) {\n    if (typeof council.settings !== 'object' || Array.isArray(council.settings)) {\n      exitWithError(`Invalid config in ${configPath}: 'council.settings' must be a mapping/object`);\n    }\n    merged.council.settings = { ...merged.council.settings, ...council.settings };\n  }\n\n  return merged;\n}\n\nfunction ensureDir(dirPath) {\n  fs.mkdirSync(dirPath, { recursive: true });\n}\n\nfunction safeFileName(name) {\n  const cleaned = String(name || '').trim().toLowerCase().replace(/[^a-z0-9_-]+/g, '-');\n  return cleaned || 'member';\n}\n\nfunction atomicWriteJson(filePath, payload) {\n  const tmpPath = `${filePath}.${process.pid}.${crypto.randomBytes(4).toString('hex')}.tmp`;\n  fs.writeFileSync(tmpPath, JSON.stringify(payload, null, 2), 'utf8');\n  fs.renameSync(tmpPath, filePath);\n}\n\nfunction readJsonIfExists(filePath) {\n  try {\n    if (!fs.existsSync(filePath)) return null;\n    return JSON.parse(fs.readFileSync(filePath, 'utf8'));\n  } catch {\n    return null;\n  }\n}\n\nfunction sleepMs(ms) {\n  const msNum = Number(ms);\n  if (!Number.isFinite(msNum) || msNum <= 0) return;\n  const sab = new SharedArrayBuffer(4);\n  const view = new Int32Array(sab);\n  Atomics.wait(view, 0, 0, 
Math.trunc(msNum));\n}\n\nfunction computeTerminalDoneCount(counts) {\n  const c = counts || {};\n  return (\n    Number(c.done || 0) +\n    Number(c.missing_cli || 0) +\n    Number(c.error || 0) +\n    Number(c.timed_out || 0) +\n    Number(c.canceled || 0)\n  );\n}\n\nfunction asCodexStepStatus(value) {\n  const v = String(value || '');\n  if (v === 'pending' || v === 'in_progress' || v === 'completed') return v;\n  return 'pending';\n}\n\nfunction buildCouncilUiPayload(statusPayload) {\n  const counts = statusPayload.counts || {};\n  const done = computeTerminalDoneCount(counts);\n  const total = Number(counts.total || 0);\n  const isDone = String(statusPayload.overallState || '') === 'done';\n\n  const queued = Number(counts.queued || 0);\n  const running = Number(counts.running || 0);\n\n  const members = Array.isArray(statusPayload.members) ? statusPayload.members : [];\n  const sortedMembers = members\n    .map((m) => ({\n      member: m && m.member != null ? String(m.member) : '',\n      state: m && m.state != null ? String(m.state) : 'unknown',\n      exitCode: m && m.exitCode != null ? m.exitCode : null,\n    }))\n    .filter((m) => m.member)\n    .sort((a, b) => a.member.localeCompare(b.member));\n\n  const terminalStates = new Set(['done', 'missing_cli', 'error', 'timed_out', 'canceled']);\n  // Keep the Plan UI visible by ensuring exactly one `in_progress` item while work remains.\n  const dispatchStatus = asCodexStepStatus(isDone ? 'completed' : queued > 0 ? 
'in_progress' : 'completed');\n  let hasInProgress = dispatchStatus === 'in_progress';\n\n  const memberSteps = sortedMembers.map((m) => {\n    const state = m.state || 'unknown';\n    const isTerminal = terminalStates.has(state);\n\n    let status;\n    if (isTerminal) {\n      status = 'completed';\n    } else if (!hasInProgress && running > 0 && state === 'running') {\n      status = 'in_progress';\n      hasInProgress = true;\n    } else {\n      status = 'pending';\n    }\n\n    const label = `[Council] Ask ${m.member}`;\n    return { label, status: asCodexStepStatus(status) };\n  });\n\n  // Once members are done, the host agent should synthesize and then mark this step completed.\n  const synthStatus = asCodexStepStatus(isDone ? (hasInProgress ? 'pending' : 'in_progress') : 'pending');\n\n  const codexPlan = [\n    { step: `[Council] Prompt dispatch`, status: dispatchStatus },\n    ...memberSteps.map((s) => ({ step: s.label, status: s.status })),\n    { step: `[Council] Synthesize`, status: synthStatus },\n  ];\n\n  const claudeTodos = [\n    {\n      content: `[Council] Prompt dispatch`,\n      status: dispatchStatus,\n      activeForm: dispatchStatus === 'completed' ? 'Dispatched council prompts' : 'Dispatching council prompts',\n    },\n    ...memberSteps.map((s) => ({\n      content: s.label,\n      status: s.status,\n      activeForm: s.status === 'completed' ? 'Finished' : 'Awaiting response',\n    })),\n    {\n      content: `[Council] Synthesize`,\n      status: synthStatus,\n      activeForm:\n        synthStatus === 'completed'\n          ? 'Council results ready'\n          : synthStatus === 'in_progress'\n            ? 
'Ready to synthesize'\n            : 'Waiting to synthesize',\n    },\n  ];\n\n  return {\n    progress: { done, total, overallState: String(statusPayload.overallState || '') },\n    codex: { update_plan: { plan: codexPlan } },\n    claude: { todo_write: { todos: claudeTodos } },\n  };\n}\n\nfunction computeStatusPayload(jobDir) {\n  const resolvedJobDir = path.resolve(jobDir);\n  if (!fs.existsSync(resolvedJobDir)) exitWithError(`jobDir not found: ${resolvedJobDir}`);\n\n  const jobMeta = readJsonIfExists(path.join(resolvedJobDir, 'job.json'));\n  if (!jobMeta) exitWithError(`job.json not found: ${path.join(resolvedJobDir, 'job.json')}`);\n\n  const membersRoot = path.join(resolvedJobDir, 'members');\n  if (!fs.existsSync(membersRoot)) exitWithError(`members folder not found: ${membersRoot}`);\n\n  const members = [];\n  for (const entry of fs.readdirSync(membersRoot)) {\n    const statusPath = path.join(membersRoot, entry, 'status.json');\n    const status = readJsonIfExists(statusPath);\n    if (status) members.push({ safeName: entry, ...status });\n  }\n\n  const totals = { queued: 0, running: 0, done: 0, error: 0, missing_cli: 0, timed_out: 0, canceled: 0 };\n  for (const m of members) {\n    const state = String(m.state || 'unknown');\n    if (Object.prototype.hasOwnProperty.call(totals, state)) totals[state]++;\n  }\n\n  const allDone = totals.running === 0 && totals.queued === 0;\n  const overallState = allDone ? 'done' : totals.running > 0 ? 'running' : 'queued';\n\n  return {\n    jobDir: resolvedJobDir,\n    id: jobMeta.id || null,\n    chairmanRole: jobMeta.chairmanRole || null,\n    overallState,\n    counts: { total: members.length, ...totals },\n    members: members\n      .map((m) => ({\n        member: m.member,\n        state: m.state,\n        startedAt: m.startedAt || null,\n        finishedAt: m.finishedAt || null,\n        exitCode: m.exitCode != null ? 
m.exitCode : null,\n        message: m.message || null,\n      }))\n      .sort((a, b) => String(a.member).localeCompare(String(b.member))),\n  };\n}\n\nfunction parseArgs(argv) {\n  const args = argv.slice(2);\n  const out = { _: [] };\n  const booleanFlags = new Set([\n    'json',\n    'text',\n    'checklist',\n    'help',\n    'h',\n    'verbose',\n    'include-chairman',\n    'exclude-chairman',\n  ]);\n  for (let i = 0; i < args.length; i++) {\n    const a = args[i];\n    if (a === '--') {\n      out._.push(...args.slice(i + 1));\n      break;\n    }\n    if (!a.startsWith('--')) {\n      out._.push(a);\n      continue;\n    }\n\n    const [key, rawValue] = a.split('=', 2);\n    if (rawValue != null) {\n      out[key.slice(2)] = rawValue;\n      continue;\n    }\n\n    const normalizedKey = key.slice(2);\n    if (booleanFlags.has(normalizedKey)) {\n      out[normalizedKey] = true;\n      continue;\n    }\n\n    const next = args[i + 1];\n    if (next == null || next.startsWith('--')) {\n      out[normalizedKey] = true;\n      continue;\n    }\n    out[normalizedKey] = next;\n    i++;\n  }\n  return out;\n}\n\nfunction printHelp() {\n  process.stdout.write(`Agent Council (job mode)\n\nUsage:\n  council-job.sh start [--config path] [--chairman auto|claude|codex|...] 
[--jobs-dir path] [--json] \"question\"\n  council-job.sh status [--json|--text|--checklist] [--verbose] <jobDir>\n  council-job.sh wait [--cursor CURSOR] [--bucket auto|N] [--interval-ms N] [--timeout-ms N] <jobDir>\n  council-job.sh results [--json] <jobDir>\n  council-job.sh stop <jobDir>\n  council-job.sh clean <jobDir>\n\nNotes:\n  - start returns immediately and runs members in parallel via detached Node workers\n  - poll status with repeated short calls to update TODO/plan UIs in host agents\n  - wait prints JSON by default and blocks until meaningful progress occurs, so you don't spam tool cells\n`);\n}\n\nfunction cmdStart(options, prompt) {\n  const configPath = options.config || process.env.COUNCIL_CONFIG || resolveDefaultConfigFile();\n  const jobsDir =\n    options['jobs-dir'] || process.env.COUNCIL_JOBS_DIR || path.join(SKILL_DIR, '.jobs');\n\n  ensureDir(jobsDir);\n\n  const hostRole = detectHostRole();\n  const config = parseCouncilConfig(configPath);\n  const chairmanRoleRaw = options.chairman || process.env.COUNCIL_CHAIRMAN || config.council.chairman.role || 'auto';\n  const chairmanRole = resolveAutoRole(chairmanRoleRaw, hostRole);\n\n  const includeChairman = Boolean(options['include-chairman']);\n  const excludeChairmanOverride =\n    options['exclude-chairman'] != null ? true : options['include-chairman'] != null ? false : null;\n\n  const excludeSetting = normalizeBool(config.council.settings.exclude_chairman_from_members);\n  const excludeChairmanFromMembers =\n    excludeChairmanOverride != null ? excludeChairmanOverride : excludeSetting != null ? excludeSetting : true;\n\n  const timeoutSetting = Number(config.council.settings.timeout || 0);\n  const timeoutOverride = options.timeout != null ? Number(options.timeout) : null;\n  const timeoutSec = Number.isFinite(timeoutOverride) && timeoutOverride > 0 ? timeoutOverride : timeoutSetting > 0 ? 
timeoutSetting : 0;\n\n  const requestedMembers = config.council.members || [];\n  const members = requestedMembers.filter((m) => {\n    if (!m || !m.name || !m.command) return false;\n    const nameLc = String(m.name).toLowerCase();\n    if (excludeChairmanFromMembers && !includeChairman && nameLc === chairmanRole) return false;\n    return true;\n  });\n\n  const jobId = `${new Date().toISOString().replace(/[:.]/g, '').replace('T', '-').slice(0, 15)}-${crypto\n    .randomBytes(3)\n    .toString('hex')}`;\n  const jobDir = path.join(jobsDir, `council-${jobId}`);\n  const membersDir = path.join(jobDir, 'members');\n  ensureDir(membersDir);\n\n  fs.writeFileSync(path.join(jobDir, 'prompt.txt'), String(prompt), 'utf8');\n\n  const jobMeta = {\n    id: `council-${jobId}`,\n    createdAt: new Date().toISOString(),\n    configPath,\n    hostRole,\n    chairmanRole,\n    settings: {\n      excludeChairmanFromMembers,\n      timeoutSec: timeoutSec || null,\n    },\n    members: members.map((m) => ({\n      name: String(m.name),\n      command: String(m.command),\n      emoji: m.emoji ? String(m.emoji) : null,\n      color: m.color ? 
String(m.color) : null,\n    })),\n  };\n  atomicWriteJson(path.join(jobDir, 'job.json'), jobMeta);\n\n  for (const member of members) {\n    const name = String(member.name);\n    const safeName = safeFileName(name);\n    const memberDir = path.join(membersDir, safeName);\n    ensureDir(memberDir);\n\n    atomicWriteJson(path.join(memberDir, 'status.json'), {\n      member: name,\n      state: 'queued',\n      queuedAt: new Date().toISOString(),\n      command: String(member.command),\n    });\n\n    const workerArgs = [\n      WORKER_PATH,\n      '--job-dir',\n      jobDir,\n      '--member',\n      name,\n      '--safe-member',\n      safeName,\n      '--command',\n      String(member.command),\n    ];\n    if (timeoutSec && Number.isFinite(timeoutSec) && timeoutSec > 0) {\n      workerArgs.push('--timeout', String(timeoutSec));\n    }\n\n    const child = spawn(process.execPath, workerArgs, {\n      detached: true,\n      stdio: 'ignore',\n      env: process.env,\n    });\n    child.unref();\n  }\n\n  if (options.json) {\n    process.stdout.write(`${JSON.stringify({ jobDir, ...jobMeta }, null, 2)}\\n`);\n  } else {\n    process.stdout.write(`${jobDir}\\n`);\n  }\n}\n\nfunction cmdStatus(options, jobDir) {\n  const payload = computeStatusPayload(jobDir);\n\n  const wantChecklist = Boolean(options.checklist) && !options.json;\n  if (wantChecklist) {\n    const done = computeTerminalDoneCount(payload.counts);\n    const headerId = payload.id ? ` (${payload.id})` : '';\n    process.stdout.write(`Agent Council${headerId}\\n`);\n    process.stdout.write(\n      `Progress: ${done}/${payload.counts.total} done  (running ${payload.counts.running}, queued ${payload.counts.queued})\\n`\n    );\n    for (const m of payload.members) {\n      const state = String(m.state || '');\n      const mark =\n        state === 'done'\n          ? '[x]'\n          : state === 'running' || state === 'queued'\n            ? '[ ]'\n            : state\n              ? 
'[!]'\n              : '[ ]';\n      const exitInfo = m.exitCode != null ? ` (exit ${m.exitCode})` : '';\n      process.stdout.write(`${mark} ${m.member} — ${state}${exitInfo}\\n`);\n    }\n    return;\n  }\n\n  const wantText = Boolean(options.text) && !options.json;\n  if (wantText) {\n    const done = computeTerminalDoneCount(payload.counts);\n    process.stdout.write(`members ${done}/${payload.counts.total} done; running=${payload.counts.running} queued=${payload.counts.queued}\\n`);\n    if (options.verbose) {\n      for (const m of payload.members) {\n        process.stdout.write(`- ${m.member}: ${m.state}${m.exitCode != null ? ` (exit ${m.exitCode})` : ''}\\n`);\n      }\n    }\n    return;\n  }\n\n  process.stdout.write(`${JSON.stringify(payload, null, 2)}\\n`);\n}\n\nfunction parseWaitCursor(value) {\n  const raw = String(value || '').trim();\n  if (!raw) return null;\n  const parts = raw.split(':');\n  const version = parts[0];\n  if (version === 'v1' && parts.length === 4) {\n    const bucketSize = Number(parts[1]);\n    const doneBucket = Number(parts[2]);\n    const isDone = parts[3] === '1';\n    if (!Number.isFinite(bucketSize) || bucketSize <= 0) return null;\n    if (!Number.isFinite(doneBucket) || doneBucket < 0) return null;\n    return { version, bucketSize, dispatchBucket: 0, doneBucket, isDone };\n  }\n  if (version === 'v2' && parts.length === 5) {\n    const bucketSize = Number(parts[1]);\n    const dispatchBucket = Number(parts[2]);\n    const doneBucket = Number(parts[3]);\n    const isDone = parts[4] === '1';\n    if (!Number.isFinite(bucketSize) || bucketSize <= 0) return null;\n    if (!Number.isFinite(dispatchBucket) || dispatchBucket < 0) return null;\n    if (!Number.isFinite(doneBucket) || doneBucket < 0) return null;\n    return { version, bucketSize, dispatchBucket, doneBucket, isDone };\n  }\n  return null;\n}\n\nfunction formatWaitCursor(bucketSize, dispatchBucket, doneBucket, isDone) {\n  return 
`v2:${bucketSize}:${dispatchBucket}:${doneBucket}:${isDone ? 1 : 0}`;\n}\n\nfunction asWaitPayload(statusPayload) {\n  const members = Array.isArray(statusPayload.members) ? statusPayload.members : [];\n  return {\n    jobDir: statusPayload.jobDir,\n    id: statusPayload.id,\n    chairmanRole: statusPayload.chairmanRole,\n    overallState: statusPayload.overallState,\n    counts: statusPayload.counts,\n    members: members.map((m) => ({\n      member: m.member,\n      state: m.state,\n      exitCode: m.exitCode != null ? m.exitCode : null,\n      message: m.message || null,\n    })),\n    ui: buildCouncilUiPayload(statusPayload),\n  };\n}\n\nfunction resolveBucketSize(options, total, prevCursor) {\n  const raw = options.bucket != null ? options.bucket : options['bucket-size'];\n\n  if (raw == null || raw === true) {\n    if (prevCursor && prevCursor.bucketSize) return prevCursor.bucketSize;\n  } else {\n    const asString = String(raw).trim().toLowerCase();\n    if (asString !== 'auto') {\n      const num = Number(asString);\n      if (!Number.isFinite(num) || num <= 0) exitWithError(`wait: invalid --bucket: ${raw}`);\n      return Math.trunc(num);\n    }\n  }\n\n  // Auto-bucket: target ~5 updates total.\n  const totalNum = Number(total || 0);\n  if (!Number.isFinite(totalNum) || totalNum <= 0) return 1;\n  return Math.max(1, Math.ceil(totalNum / 5));\n}\n\nfunction cmdWait(options, jobDir) {\n  const resolvedJobDir = path.resolve(jobDir);\n  const cursorFilePath = path.join(resolvedJobDir, '.wait_cursor');\n  const prevCursorRaw =\n    options.cursor != null\n      ? String(options.cursor)\n      : fs.existsSync(cursorFilePath)\n        ? String(fs.readFileSync(cursorFilePath, 'utf8')).trim()\n        : '';\n  const prevCursor = parseWaitCursor(prevCursorRaw);\n\n  const intervalMsRaw = options['interval-ms'] != null ? 
options['interval-ms'] : 250;\n  const intervalMsNum = Math.trunc(Number(intervalMsRaw));\n  if (!Number.isFinite(intervalMsNum) || intervalMsNum <= 0) exitWithError(`wait: invalid --interval-ms: ${intervalMsRaw}`);\n  // Validate first, then clamp to a 50ms floor so polling never spins hot.\n  const intervalMs = Math.max(50, intervalMsNum);\n\n  const timeoutMsRaw = options['timeout-ms'] != null ? options['timeout-ms'] : 0;\n  const timeoutMs = Math.trunc(Number(timeoutMsRaw));\n  if (!Number.isFinite(timeoutMs) || timeoutMs < 0) exitWithError(`wait: invalid --timeout-ms: ${timeoutMsRaw}`);\n\n  // Always read once to decide bucket sizing and (when no cursor is given) return immediately.\n  let payload = computeStatusPayload(jobDir);\n  const bucketSize = resolveBucketSize(options, payload.counts.total, prevCursor);\n\n  const doneCount = computeTerminalDoneCount(payload.counts);\n  const isDone = payload.overallState === 'done';\n  const total = Number(payload.counts.total || 0);\n  const queued = Number(payload.counts.queued || 0);\n  const dispatchBucket = queued === 0 && total > 0 ? 1 : 0;\n  const doneBucket = Math.floor(doneCount / bucketSize);\n  const cursor = formatWaitCursor(bucketSize, dispatchBucket, doneBucket, isDone);\n\n  if (!prevCursor) {\n    fs.writeFileSync(cursorFilePath, cursor, 'utf8');\n    process.stdout.write(`${JSON.stringify({ ...asWaitPayload(payload), cursor }, null, 2)}\\n`);\n    return;\n  }\n\n  const start = Date.now();\n  while (cursor === prevCursorRaw) {\n    if (timeoutMs > 0 && Date.now() - start >= timeoutMs) break;\n    sleepMs(intervalMs);\n    payload = computeStatusPayload(jobDir);\n    const d = computeTerminalDoneCount(payload.counts);\n    const doneFlag = payload.overallState === 'done';\n    const totalCount = Number(payload.counts.total || 0);\n    const queuedCount = Number(payload.counts.queued || 0);\n    const dispatchB = queuedCount === 0 && totalCount > 0 ? 
1 : 0;\n    const doneB = Math.floor(d / bucketSize);\n    const nextCursor = formatWaitCursor(bucketSize, dispatchB, doneB, doneFlag);\n    if (nextCursor !== prevCursorRaw) {\n      fs.writeFileSync(cursorFilePath, nextCursor, 'utf8');\n      process.stdout.write(`${JSON.stringify({ ...asWaitPayload(payload), cursor: nextCursor }, null, 2)}\\n`);\n      return;\n    }\n  }\n\n  // Timeout: return current state (cursor may be unchanged).\n  const finalPayload = computeStatusPayload(jobDir);\n  const finalDone = computeTerminalDoneCount(finalPayload.counts);\n  const finalDoneFlag = finalPayload.overallState === 'done';\n  const finalTotal = Number(finalPayload.counts.total || 0);\n  const finalQueued = Number(finalPayload.counts.queued || 0);\n  const finalDispatchBucket = finalQueued === 0 && finalTotal > 0 ? 1 : 0;\n  const finalDoneBucket = Math.floor(finalDone / bucketSize);\n  const finalCursor = formatWaitCursor(bucketSize, finalDispatchBucket, finalDoneBucket, finalDoneFlag);\n  fs.writeFileSync(cursorFilePath, finalCursor, 'utf8');\n  process.stdout.write(`${JSON.stringify({ ...asWaitPayload(finalPayload), cursor: finalCursor }, null, 2)}\\n`);\n}\n\nfunction cmdResults(options, jobDir) {\n  const resolvedJobDir = path.resolve(jobDir);\n  const jobMeta = readJsonIfExists(path.join(resolvedJobDir, 'job.json'));\n  const membersRoot = path.join(resolvedJobDir, 'members');\n\n  const members = [];\n  if (fs.existsSync(membersRoot)) {\n    for (const entry of fs.readdirSync(membersRoot)) {\n      const statusPath = path.join(membersRoot, entry, 'status.json');\n      const outputPath = path.join(membersRoot, entry, 'output.txt');\n      const errorPath = path.join(membersRoot, entry, 'error.txt');\n      const status = readJsonIfExists(statusPath);\n      if (!status) continue;\n      const output = fs.existsSync(outputPath) ? fs.readFileSync(outputPath, 'utf8') : '';\n      const stderr = fs.existsSync(errorPath) ? 
fs.readFileSync(errorPath, 'utf8') : '';\n      members.push({ safeName: entry, ...status, output, stderr });\n    }\n  }\n\n  if (options.json) {\n    process.stdout.write(\n      `${JSON.stringify(\n        {\n          jobDir: resolvedJobDir,\n          id: jobMeta ? jobMeta.id : null,\n          prompt: fs.existsSync(path.join(resolvedJobDir, 'prompt.txt'))\n            ? fs.readFileSync(path.join(resolvedJobDir, 'prompt.txt'), 'utf8')\n            : null,\n          members: members\n            .map((m) => ({\n              member: m.member,\n              state: m.state,\n              exitCode: m.exitCode != null ? m.exitCode : null,\n              message: m.message || null,\n              output: m.output,\n              stderr: m.stderr,\n            }))\n            .sort((a, b) => String(a.member).localeCompare(String(b.member))),\n        },\n        null,\n        2\n      )}\\n`\n    );\n    return;\n  }\n\n  for (const m of members.sort((a, b) => String(a.member).localeCompare(String(b.member)))) {\n    process.stdout.write(`\\n=== ${m.member} (${m.state}) ===\\n`);\n    if (m.message) process.stdout.write(`${m.message}\\n`);\n    process.stdout.write(m.output || '');\n    if (!m.output && m.stderr) {\n      process.stdout.write('\\n');\n      process.stdout.write(m.stderr);\n    }\n    process.stdout.write('\\n');\n  }\n}\n\nfunction cmdStop(_options, jobDir) {\n  const resolvedJobDir = path.resolve(jobDir);\n  const membersRoot = path.join(resolvedJobDir, 'members');\n  if (!fs.existsSync(membersRoot)) exitWithError(`No members folder found: ${membersRoot}`);\n\n  let stoppedAny = false;\n  for (const entry of fs.readdirSync(membersRoot)) {\n    const statusPath = path.join(membersRoot, entry, 'status.json');\n    const status = readJsonIfExists(statusPath);\n    if (!status) continue;\n    if (status.state !== 'running') continue;\n    if (!status.pid) continue;\n\n    try {\n      process.kill(Number(status.pid), 'SIGTERM');\n      stoppedAny = 
true;\n    } catch {\n      // ignore\n    }\n  }\n\n  process.stdout.write(stoppedAny ? 'stop: sent SIGTERM to running members\\n' : 'stop: no running members\\n');\n}\n\nfunction cmdClean(_options, jobDir) {\n  const resolvedJobDir = path.resolve(jobDir);\n  fs.rmSync(resolvedJobDir, { recursive: true, force: true });\n  process.stdout.write(`cleaned: ${resolvedJobDir}\\n`);\n}\n\nfunction main() {\n  const options = parseArgs(process.argv);\n  const [command, ...rest] = options._;\n\n  if (!command || options.help || options.h) {\n    printHelp();\n    return;\n  }\n\n  if (command === 'start') {\n    const prompt = rest.join(' ').trim();\n    if (!prompt) exitWithError('start: missing prompt');\n    cmdStart(options, prompt);\n    return;\n  }\n  if (command === 'status') {\n    const jobDir = rest[0];\n    if (!jobDir) exitWithError('status: missing jobDir');\n    cmdStatus(options, jobDir);\n    return;\n  }\n  if (command === 'wait') {\n    const jobDir = rest[0];\n    if (!jobDir) exitWithError('wait: missing jobDir');\n    cmdWait(options, jobDir);\n    return;\n  }\n  if (command === 'results') {\n    const jobDir = rest[0];\n    if (!jobDir) exitWithError('results: missing jobDir');\n    cmdResults(options, jobDir);\n    return;\n  }\n  if (command === 'stop') {\n    const jobDir = rest[0];\n    if (!jobDir) exitWithError('stop: missing jobDir');\n    cmdStop(options, jobDir);\n    return;\n  }\n  if (command === 'clean') {\n    const jobDir = rest[0];\n    if (!jobDir) exitWithError('clean: missing jobDir');\n    cmdClean(options, jobDir);\n    return;\n  }\n\n  exitWithError(`Unknown command: ${command}`);\n}\n\nif (require.main === module) {\n  main();\n}\n"
  },
  {
    "path": "plugins/agent-council/skills/agent-council/scripts/council-job.sh",
    "content": "#!/bin/bash\n\nset -e\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" && pwd)\"\n\nif ! command -v node >/dev/null 2>&1; then\n  echo \"Error: Node.js is required to run Agent Council job mode.\" >&2\n  echo \"Install Node.js and try again (plugin installs cannot bundle Node).\" >&2\n  echo \"\" >&2\n  echo \"macOS (Homebrew): brew install node\" >&2\n  echo \"Or download from: https://nodejs.org/\" >&2\n  exit 127\nfi\n\nexec node \"$SCRIPT_DIR/council-job.js\" \"$@\"\n"
  },
  {
    "path": "plugins/agent-council/skills/agent-council/scripts/council.sh",
    "content": "#!/bin/bash\n#\n# Agent Council (job mode default)\n#\n# Subcommands:\n#   council.sh start [options] \"question\"     # returns JOB_DIR immediately\n#   council.sh status [--json|--text|--checklist] JOB_DIR # poll progress\n#   council.sh wait [--cursor CURSOR] [--bucket auto|N] [--interval-ms N] [--timeout-ms N] JOB_DIR\n#   council.sh results [--json] JOB_DIR       # print collected outputs\n#   council.sh stop JOB_DIR                   # best-effort stop running members\n#   council.sh clean JOB_DIR                  # remove job directory\n#\n# One-shot:\n#   council.sh \"question\"\n#   (in a real terminal: starts a job, waits for completion, prints results, cleans up)\n#   (in host-agent tool UIs: returns a single `wait` JSON payload immediately; host drives progress + results)\n#\n\nset -e\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" && pwd)\"\nJOB_SCRIPT=\"$SCRIPT_DIR/council-job.sh\"\n\nusage() {\n  cat <<EOF\nAgent Council\n\nDefault mode is job-based parallel execution (pollable).\n\nUsage:\n  $(basename \"$0\") start [options] \"question\"\n  $(basename \"$0\") status [--json|--text|--checklist] <jobDir>\n  $(basename \"$0\") wait [--cursor CURSOR] [--bucket auto|N] [--interval-ms N] [--timeout-ms N] <jobDir>\n  $(basename \"$0\") results [--json] <jobDir>\n  $(basename \"$0\") stop <jobDir>\n  $(basename \"$0\") clean <jobDir>\n\nOne-shot:\n  $(basename \"$0\") \"question\"\nEOF\n}\n\nif [ $# -eq 0 ]; then\n  usage\n  exit 1\nfi\n\ncase \"$1\" in\n  -h|--help|help)\n    usage\n    exit 0\n    ;;\nesac\n\nif ! 
command -v node >/dev/null 2>&1; then\n  echo \"Error: Node.js is required to run Agent Council.\" >&2\n  echo \"Claude Code plugins cannot bundle or auto-install Node.\" >&2\n  echo \"\" >&2\n  echo \"macOS (Homebrew): brew install node\" >&2\n  echo \"Or download from: https://nodejs.org/\" >&2\n  exit 127\nfi\n\ncase \"$1\" in\n  start|status|wait|results|stop|clean)\n    exec \"$JOB_SCRIPT\" \"$@\"\n    ;;\nesac\n\nin_host_agent_context() {\n  if [ -n \"${CODEX_CACHE_FILE:-}\" ]; then\n    return 0\n  fi\n\n  case \"$SCRIPT_DIR\" in\n    */.codex/skills/*|*/.claude/skills/*)\n      # Tool-call environments typically do not provide a real TTY on stdout/stderr.\n      if [ ! -t 1 ] && [ ! -t 2 ]; then\n        return 0\n      fi\n      ;;\n  esac\n\n  return 1\n}\n\nJOB_DIR=\"$(\"$JOB_SCRIPT\" start \"$@\")\"\n\n# Host agents (Codex CLI / Claude Code) cannot update native TODO/plan UIs while a long-running\n# command is executing. If we're in a host agent context, return immediately with a single `wait`\n# JSON payload (includes `.ui.codex.update_plan.plan` / `.ui.claude.todo_write.todos`) and let the\n# host agent drive progress updates with repeated short `wait` calls + native UI updates.\nif in_host_agent_context; then\n  exec \"$JOB_SCRIPT\" wait \"$JOB_DIR\"\nfi\n\necho \"council: started ${JOB_DIR}\" >&2\n\ncleanup_on_signal() {\n  if [ -n \"${JOB_DIR:-}\" ] && [ -d \"$JOB_DIR\" ]; then\n    \"$JOB_SCRIPT\" stop \"$JOB_DIR\" >/dev/null 2>&1 || true\n    \"$JOB_SCRIPT\" clean \"$JOB_DIR\" >/dev/null 2>&1 || true\n  fi\n  exit 130\n}\n\ntrap cleanup_on_signal INT TERM\n\nwhile true; do\n  WAIT_JSON=\"$(\"$JOB_SCRIPT\" wait \"$JOB_DIR\")\"\n  OVERALL=\"$(printf '%s' \"$WAIT_JSON\" | node -e '\nconst fs=require(\"fs\");\nconst d=JSON.parse(fs.readFileSync(0,\"utf8\"));\nprocess.stdout.write(String(d.overallState||\"\"));\n')\"\n\n  \"$JOB_SCRIPT\" status --text \"$JOB_DIR\" >&2\n\n  if [ \"$OVERALL\" = \"done\" ]; then\n    break\n  fi\ndone\n\ntrap - INT 
TERM\n\n\"$JOB_SCRIPT\" results \"$JOB_DIR\"\n\"$JOB_SCRIPT\" clean \"$JOB_DIR\" >/dev/null\n"
  },
  {
    "path": "plugins/clarify/.claude-plugin/plugin.json",
    "content": "{\n  \"name\": \"clarify\",\n  \"version\": \"2.0.0\",\n  \"description\": \"Three lenses for clarity: vague requirements → specs (vague), strategy blind spots → 4-quadrant playbook (unknown), content vs form → leverage shift (metamedium)\",\n  \"author\": {\n    \"name\": \"Team Attention\",\n    \"url\": \"https://github.com/team-attention\"\n  },\n  \"repository\": \"https://github.com/team-attention/plugins-for-claude-natives\",\n  \"license\": \"MIT\",\n  \"keywords\": [\"claude-code\", \"plugin\", \"requirements\", \"clarification\", \"known-unknown\", \"strategy\", \"metamedium\", \"content-form\"]\n}\n"
  },
  {
    "path": "plugins/clarify/skills/metamedium/SKILL.md",
    "content": "---\nname: metamedium\ndescription: This skill should be used when the user is building, planning, or strategizing and the key question is whether to optimize content (what) or change form (how/medium). Trigger on \"내용 vs 형식\", \"content vs form\", \"metamedium\", \"형식을 바꿔볼까\", \"새로운 포맷\", \"관점 전환\", \"perspective shift\", \"다른 방법 없을까\", \"같은 방식이 안 먹혀\", \"diminishing returns\". Applies Alan Kay's metamedium concept to surface form-level alternatives. For requirement clarification use vague; for strategy blind spots use unknown.\n---\n\n# Metamedium: Content vs Form Lens\n\nDistinguish **content** (what is being said/built) from **form** (the medium/structure it's delivered through) to surface whether the real leverage is in optimizing content or inventing a new form. Based on Alan Kay's metamedium concept.\n\n> \"A change of perspective is worth 80 IQ points.\" — Alan Kay\n\n## Core Concept\n\nMost people only change **content** — what they say, write, or build. The real leverage comes from changing **form** — the medium, format, or structure itself.\n\n| | Content (what) | Form (how/medium) |\n|--|----------------|-------------------|\n| Example | Writing a LinkedIn post | Building a tool that generates posts from client work |\n| Example | Writing unit tests manually | Building a test generator from type signatures |\n| Example | Giving a workshop | Inventing a format where attendees co-create artifacts |\n| Leverage | Linear — each piece is one output | Exponential — each new form enables infinite content |\n\n## When to Use\n\n- Planning a project and unsure whether to optimize the output or the process\n- Stuck optimizing content with diminishing returns\n- Building something and want to check if form-level change would yield more leverage\n- Evaluating whether \"more of the same\" or \"something structurally different\" is the right move\n\nFor requirement clarification, use the **vague** skill. 
For strategy blind spot analysis, use the **unknown** skill.\n\n## Protocol\n\n**ALWAYS use the AskUserQuestion tool** for the fork question in Phase 2 — never ask content/form choices in plain text.\n\n### Phase 1: Identify and Label\n\nRead the user's current work, plan, or task. Classify each component as content or form:\n\n```\n[CONTENT] Writing a blog post about AI consulting\n[FORM]    Building a pipeline that turns consulting retros into blog posts\n[CONTENT] Deploying a new API endpoint\n[FORM]    Building a codegen that auto-generates endpoints from schemas\n[CONTENT] Fixing a flaky test\n[FORM]    Building a test infrastructure that prevents flaky tests by design\n```\n\nPresent the labeling to the user as a brief diagnosis.\n\n### Phase 2: Surface the Fork\n\nUse AskUserQuestion to present the content/form choice:\n\n```\nquestions:\n  - question: \"This is currently [CONTENT/FORM]-level work. Where should effort go?\"\n    header: \"Level\"\n    options:\n      - label: \"Proceed with content\"\n        description: \"Optimize within the current form — faster, lower risk\"\n      - label: \"Explore form change\"\n        description: \"What if the medium/structure itself changed? Higher leverage\"\n      - label: \"Content now, note form\"\n        description: \"Do the content work, but flag the form opportunity for later\"\n    multiSelect: false\n```\n\n### Phase 3: Branch\n\n**If \"Proceed with content\"**: Acknowledge and proceed. Include a `Form Opportunity` note in the output for future reference.\n\n**If \"Explore form change\"**: Generate 2-3 form alternatives. For each alternative:\n- What the new form looks like concretely\n- What new properties it would have (automatic, repeatable, scalable, composable)\n- Minimum viable version to test the form\n\n**If \"Content now, note form\"**: Proceed with content work. 
Append the form opportunity to the output.\n\n### Output\n\nAppend to any deliverable or present standalone:\n\n```markdown\n## Content/Form Analysis\n\n**Current work**: [description]\n**Classification**: [CONTENT / FORM]\n\n### Form Opportunity\n| | Detail |\n|---|--------|\n| **Alternative form** | [what it would look like] |\n| **New properties** | [what it enables that current form doesn't] |\n| **Minimum test** | [smallest version to validate] |\n| **Status** | [exploring / noted for later / not applicable] |\n```\n\n## The Metamedium Question\n\nWhen stuck or when optimizing yields diminishing returns:\n\n> **\"What new form/medium could make this problem disappear?\"**\n\nExamples:\n- Stuck writing more posts? → A format that turns client work into posts automatically\n- Test coverage plateauing? → A tool that generates tests from type signatures\n- Onboarding too slow? → A self-guided format where the codebase teaches itself\n\n## Tetris Test\n\n> Change the blocks. Then you realize the original blocks were mathematically calculated.\n\nTo truly understand a form, try to change it. The constraints discovered ARE the form's intelligence. Perspective shifts happen not by thinking harder, but by touching the form itself.\n\n## Anti-Patterns\n\n- Treating all work as content optimization when form change is available\n- Building \"better content\" when the form is the bottleneck\n- Assuming the current medium/format is fixed and only content can vary\n- Confusing incremental content improvement with form invention\n\n## Rules\n\n1. **Always label**: Tag work as content or form\n2. **Content is fine**: Not everything needs form change — but always note the option\n3. **Form yields power**: New form = new medium = exponential leverage\n4. **Code is metamedium**: The ability to code means the ability to change form\n5. 
**Touch to understand**: Change the form to discover why it was designed that way\n\n## Additional Resources\n\nFor Alan Kay's original ideas and source quotes, see `references/alan-kay-quotes.md`.\n"
  },
  {
    "path": "plugins/clarify/skills/metamedium/references/alan-kay-quotes.md",
    "content": "# Alan Kay: Source Quotes and Context\n\nKey quotes and context that inform the metamedium lens.\n\n## \"A change of perspective is worth 80 IQ points\"\n\n**Origin**: July 1982, lecture to the Apple Macintosh team. Recorded by Andy Hertzfeld.\n\n**Context**: The Dynabook team at Xerox PARC sought truly innovative approaches to computing. Kay believed having a new way of seeing a problem is the most effective way of moving to a solution — more effective than raw intelligence applied to the same perspective.\n\n**Application**: When stuck, don't think harder. Change the frame. The content/form distinction IS such a frame change.\n\n## \"The best way to predict the future is to invent it\"\n\n**Origin**: 1971, Xerox PARC.\n\n**Context**: At PARC, the researchers' maxim encouraged a proactive approach — rather than predicting what might happen based on current trends, create new technologies that define future possibilities.\n\n**Application**: Don't optimize content within existing forms. Invent new forms.\n\n## \"The computer is the first metamedium\"\n\n**Origin**: 1977, \"Personal Dynamic Media\" paper by Alan Kay and Adele Goldberg.\n\n**Full quote**: \"The computer is the first metamedium, and as such it has degrees of freedom for representation and expression never before encountered and as yet barely investigated.\"\n\n**Context**: Kay realized computers could simulate ALL existing media (text, images, music, film) AND create entirely new media that had never existed before. This is what he called \"metamedium\" — a medium that contains and generates other media.\n\n**Application**: Coding doesn't just let you create content faster. It lets you invent new kinds of media. This is qualitatively different from any previous tool.\n\n## On Literacy\n\n**Quote**: \"The ability to 'read' a medium means you can access materials and tools generated by others. The ability to 'write' in a medium means you can generate materials and tools for others. 
You must have both to be literate.\"\n\n**Application**: True computer literacy is not using apps (reading). It's creating new forms of expression and tools for others (writing). Most \"AI literacy\" today is reading-level only (using ChatGPT). Form-level literacy means building new AI-powered formats.\n\n## On New Representations\n\n**Quote**: \"These new media use already existing representational formats as their building blocks, while adding many new previously nonexistent properties.\"\n\n**Application**: The most powerful innovations don't replace existing forms — they use them as building blocks to create something with properties that didn't exist before. Example: A tool that combines consulting retros + AI + LinkedIn into an automatic content pipeline isn't just \"better content.\" It's a new form with properties (automatic, repeatable, scalable) that the individual components didn't have.\n\n## Sources\n\n- [Quote Investigator: Point of View](https://quoteinvestigator.com/2018/05/29/pov/)\n- [Quote Investigator: Invent the Future](https://quoteinvestigator.com/2012/09/27/invent-the-future/)\n- [Personal Dynamic Media (1977 paper)](https://www.newmediareader.com/book_samples/nmr-26-kay.pdf)\n- [Alan Kay's Universal Media Machine - Lev Manovich](https://manovich.net/content/04-projects/055-alan-kay-s-universal-media-machine/51_article_2006.pdf)\n- [Alan Kay - ACM Turing Award](https://amturing.acm.org/award_winners/kay_3972189.cfm)\n- [Computer as Metamedium](https://blogs.commons.georgetown.edu/cctp-711-fall2017/2017/11/09/computer-as-metamedium/)\n"
  },
  {
    "path": "plugins/clarify/skills/unknown/SKILL.md",
    "content": "---\nname: unknown\ndescription: This skill should be used when the user provides a strategy, plan, or decision document and wants to surface hidden assumptions and blind spots using the Known/Unknown 4-quadrant framework. Trigger on \"known unknown\", \"4분면 분석\", \"blind spots\", \"뭘 놓치고 있지\", \"뭘 모르는지 모르겠어\", \"전략 점검\", \"전략 분석\", \"assumption check\", \"가정 점검\", \"quadrant analysis\", \"what am I missing\". Strategy-level blind spot analysis with hypothesis-driven questioning. For requirement clarification use vague; for content-vs-form reframing use metamedium.\n---\n\n# Unknown: Surface Blind Spots with Known/Unknown Quadrants\n\nSurface hidden assumptions and blind spots in any strategy, plan, or decision using the Known/Unknown quadrant framework and hypothesis-driven questioning.\n\n## When to Use\n\n- Strategy or planning documents that need scrutiny\n- Decisions with unclear direction or hidden assumptions\n- Any situation where \"what we don't know\" matters more than \"what we do know\"\n\nFor specific requirement clarification (feature requests, bug reports), use the **vague** skill. For content-vs-form reframing (optimizing within a form vs inventing a new form), use the **metamedium** skill.\n\n## Core Principle: Hypothesis-as-Options\n\n**ALWAYS use the AskUserQuestion tool** for every question in R1/R2/R3 — never ask questions in plain text. The structured format enforces hypothesis-as-options and limits choice fatigue.\n\nPresent hypotheses as options instead of open questions. The hypotheses ARE the analysis — by designing good options, 80% of the analytical work is done before the user even answers. 
The user's job is to confirm, correct, or surprise.\n\n```\nBAD:  \"Why can't you do video content?\"           ← open question, high load\nGOOD: \"Time / Skill gap / No guests / High bar\"   ← pick one or more\n```\n\n- Each option IS a testable hypothesis about the user's situation\n- Use multiSelect: true to catch compound causes\n- \"Other\" is always available for out-of-frame answers\n\n## 3-Round Depth Pattern\n\n| Round | Purpose | Questions | Key trait |\n|-------|---------|-----------|-----------|\n| R1 | Validate draft quadrant | 3-4 | Broad, covers all quadrants |\n| R2 | Drill into weak spots | 2-3 | Targeted, follows R1 answers |\n| R3 | Nail execution details | 2-3 | Specific, optional |\n\n**Critical**: Generate Round N questions from Round N-1 answers. Never use pre-prepared questions across rounds. Cap total at 7-10 questions.\n\n## Protocol\n\n### Phase 1: Intake\n\n**File provided**: Read and extract goals, components, implicit assumptions, missing elements.\n\n**Topic keyword only**: Start directly with R1 questions to establish scope. The draft in Phase 3 will be rougher but R1 corrects it.\n\n### Phase 2: Context\n\nGather related context to find Unknown Knowns — assets the user may not realize they have:\n\n- **Glob** for related files: CLAUDE.md, README, decision records, past analyses in the project\n- **Read** project context: recent goals, team structure, active initiatives\n- **Identify** underutilized assets: existing tools/skills not in use, past projects with reusable patterns, team expertise not leveraged\n\nItems discovered here become UK candidates and options in R1 questions.\n\n### Phase 3: Draft + R1 Questions\n\nGenerate an initial 4-quadrant classification. **The draft is intentionally rough** — R1 exists to correct it, not confirm it. Err on the side of classifying uncertain items as KU rather than KK.\n\nDesign R1 questions to test quadrant boundaries. 
**Batch all R1 questions into a single AskUserQuestion call** (max 4 questions):\n\n| Target | Pattern | Example |\n|--------|---------|---------|\n| KK | \"Is this really certain?\" | \"Primary revenue source?\" (options) |\n| KU | \"Where's the weakest link?\" | \"Which flywheel connection is weakest?\" |\n| UK | \"What exists but isn't used?\" | Based on context findings |\n| UU | \"What's the biggest fear?\" | Risk scenarios as options |\n\n### Phase 4: Deepen + R2 Questions\n\nAnalyze R1 answers. Find the most uncertain area and drill in.\n\n**R2 triggers**: compound answers (messy area), unexpected answers (draft wrong), \"Other\" selected (outside frame).\n\nFor detailed R2 question types, see `references/question-design.md`.\n\n### Phase 5: Execute + R3 Questions (Optional)\n\nAfter priorities are set, nail down execution details for top items. Skip if R2 already provides enough detail.\n\n### Phase 6: Playbook Output\n\nGenerate a structured 4-quadrant playbook file. For the complete output template, see `references/playbook-template.md`.\n\n**Output structure:**\n```\n# {Topic}: Known/Unknown Quadrant Analysis\n\n## Current State Diagnosis\n## Quadrant Matrix (ASCII with resource %)\n## 1. Known Knowns: Systematize (60%)\n## 2. Known Unknowns: Design Experiments (25%)\n   - Each KU: Diagnosis → Experiment → Success Criteria → Deadline → Promotion Condition\n## 3. Unknown Knowns: Leverage (10%)\n## 4. 
Unknown Unknowns: Set Up Antennas (5%)\n## Strategic Decision: What to Stop\n## Execution Roadmap (week-by-week)\n## Core Principles (3-5 decision criteria)\n```\n\n**Resource percentages (60/25/10/5) are defaults.** Adjust based on context — e.g., a startup exploring product-market fit may allocate 40% KU and 30% KK.\n\n## Anti-Patterns\n\n- Open questions (\"What would you like to do?\") — use hypothesis options\n- 5+ options per question — causes choice fatigue\n- Ignoring R1 answers when designing R2 — performative questioning\n- Equal depth on all quadrants — wastes time, loses focus\n- No \"stop doing\" section — adding without subtracting\n\n## Example\n\n**Input**: Growth strategy document\n\n**R1**: Revenue source? → Workshops. Weakest link? → Biz→Knowledge. Blocker? → Skill gap + high bar (multiSelect). Biggest fear? → Execution scattered.\n\n**R2** (driven by \"execution scattered\"): What to drop? → Product dev. Why no knowledge→content? → No process + no time + hard to abstract. Role clarity? → Unclear.\n\n**R3**: Video format? → Screen recording. Retro blocker? → Don't know what to capture. What content resonated? → Raw discoveries.\n\n**Key discovery**: Abstraction isn't needed — raw insights work better. Collapsed triple bottleneck into 15-minute pipeline.\n\n## Rules\n\n1. **Hypotheses, not questions**: Every option is a testable hypothesis\n2. **Answers drive depth**: R2 from R1, R3 from R2\n3. **7-10 questions max**: Beyond this is fatigue\n4. **Stop > Start**: Always include \"what to stop doing\"\n5. **Promote or kill**: Every KU gets a promotion condition and a kill condition\n6. **Raw > Perfect**: Encourage minimum viable experiments, not perfect plans\n7. 
**Draft is disposable**: The initial quadrant is meant to be corrected\n\n## Additional Resources\n\n### Reference Files\n\n- **`references/question-design.md`** — Detailed question types for each round, trigger conditions, and AskUserQuestion formatting guide\n- **`references/playbook-template.md`** — Complete output template with section-by-section guide\n"
  },
  {
    "path": "plugins/clarify/skills/unknown/references/playbook-template.md",
    "content": "# Playbook Output Template\n\nComplete template for the 4-quadrant playbook generated in Phase 6.\n\n## File Naming\n\nSave as: `{topic}-known-unknown.md` in a location appropriate to the project.\n\n## Template\n\n```markdown\n# {Topic}: Known/Unknown Quadrant Analysis\n\n> Based on {source document or conversation}.\n> Designed under the constraint that \"{key constraint from R1/R2}\".\n\n---\n\n## Current State Diagnosis\n\n- **{Finding 1}**: {confirmed fact from R1-R3}\n- **{Finding 2}**: {confirmed fact}\n- **What to stop doing**: {items user chose to cut}\n\n---\n\n## Quadrant Matrix\n\n```\n                    Known                          Unknown\n         +---------------------------+---------------------------+\n         |                           |                           |\n         |   KK: Systematize         |   KU: Design Experiments  |\n Known   |   Resources: 60%          |   Resources: 25%          |\n         |                           |                           |\n         +---------------------------+---------------------------+\n         |                           |                           |\n         |   UK: Leverage            |   UU: Set Up Antennas     |\n Unknown |   Resources: 10%          |   Resources: 5%           |\n         |                           |                           |\n         +---------------------------+---------------------------+\n```\n\n---\n\n## 1. Known Knowns: Systematize (60%)\n\n> Confirmed working items. Turn into repeatable systems.\n\n| # | Item | Evidence | Systemization Target |\n|---|------|----------|---------------------|\n| 1 | **{item}** | {how we know} | {what \"systemized\" looks like} |\n\n---\n\n## 2. Known Unknowns: Design Experiments (25%)\n\n> Questions with no answer yet. Each gets an experiment.\n\n### KU{N}. 
{Question}\n\n**Diagnosis**: Why is this unknown?\n- {root cause from R2}\n\n**Experiment**:\n| Item | Detail |\n|------|--------|\n| Format | {what to try} |\n| Success criteria | {measurable outcome} |\n| Deadline | {specific date} |\n| Effort | {time/resource estimate} |\n\n**Promotion condition**: {when this becomes a Known Known}\n**Kill condition**: {when to abandon this and try something else}\n\n*(Repeat for each prioritized KU)*\n\n---\n\n## 3. Unknown Knowns: Leverage (10%)\n\n> Assets already owned but not utilized. Fastest wins.\n\n| # | Hidden Asset | How to Use | Effort |\n|---|-------------|-----------|--------|\n| 1 | **{asset}** | {activation method} | Low/Med/High |\n\n---\n\n## 4. Unknown Unknowns: Set Up Antennas (5%)\n\n> Cannot predict. Manage with detection speed + response speed.\n\n| # | Risk/Opportunity | Detection Method | Response Principle |\n|---|-----------------|-----------------|-------------------|\n| 1 | **{scenario}** | {how to notice early} | {what to do} |\n\n---\n\n## Strategic Decision: What to Stop\n\n| Item | Reason | Restart Condition |\n|------|--------|------------------|\n| **{item}** | {why stop} | {what would make it worth resuming} |\n\n---\n\n## Execution Roadmap\n\n### Week 1-2\n- [ ] {action item}\n- [ ] {action item}\n\n### Week 3-4\n- [ ] {action item}\n\n### Month 2\n- [ ] {action item}\n- [ ] Review: promote KUs to KK or kill\n\n---\n\n## Core Principles\n\n1. **{Principle}**: {one-line explanation}\n2. **{Principle}**: {one-line explanation}\n3. **{Principle}**: {one-line explanation}\n```\n\n## Section Writing Guide\n\n### Current State Diagnosis\nSummarize only what was confirmed through R1-R3 questioning. Avoid restating the input document — focus on what the conversation revealed that wasn't obvious before.\n\n### Known Knowns\nOnly include items with clear evidence. If the user said \"I think so\" without data, it's a KU not a KK.\n\n### Known Unknowns\nThe most important section. 
Each KU must have:\n- A root cause (why unknown)\n- A minimum viable experiment (not a perfect plan)\n- A measurable success criterion\n- A promotion AND a kill condition\n\n### Unknown Knowns\nLook for these in:\n- Context files the user hasn't referenced\n- Tools/skills already built but not used\n- Past projects with reusable patterns\n- Team members' unused expertise\n\n### Unknown Unknowns\nKeep this section short. The point is awareness, not prevention.\nFocus on detection speed (how to notice early) and response capacity (having buffer time).\n\n### What to Stop\nThis section is non-negotiable. Every analysis must include at least one item to stop or pause. Adding without subtracting is the most common failure mode.\n\n### Core Principles\nDerive from the conversation, not generic advice. Each principle should be a decision rule that resolves a specific tension discovered during questioning.\n"
  },
  {
    "path": "plugins/clarify/skills/unknown/references/question-design.md",
    "content": "# Question Design Guide\n\nDetailed patterns for designing hypothesis-driven questions across the 3-round depth pattern.\n\n## AskUserQuestion Formatting\n\n```\nquestion: \"Clear, specific question ending with ?\"\nheader: \"Short label (max 12 chars)\"\noptions:\n  - label: \"Option A\"\n    description: \"Why this matters or what it implies\"\n  - label: \"Option B\"\n    description: \"Why this matters or what it implies\"\nmultiSelect: true  # when compound causes are likely\n```\n\n**Rules:**\n- 3-4 options per question (never 5+)\n- description explains implications, not just restates label\n- multiSelect for cause/blocker questions, single for priority/choice questions\n\n## R1 Questions: Validate the Draft\n\nDesign one question per quadrant boundary. Goal: confirm or correct the initial classification.\n\n| Quadrant | Question Pattern | Example |\n|----------|-----------------|---------|\n| **KK** | \"What's the confirmed reality?\" | \"Current revenue source?\" with options per hypothesis |\n| **KU** | \"Where's the weakest link?\" | \"Which connection in your process is weakest?\" |\n| **UK** | \"What assets exist but aren't used?\" | \"Which of these do you have but don't leverage?\" |\n| **UU** | \"What's the scariest scenario?\" | \"Most feared outcome?\" with risk scenarios |\n\n**Tip**: If context exploration reveals surprising assets, surface them in the UK question as options.\n\n## R2 Questions: Deepen the Weak Spots\n\nTriggered by R1 answers. Focus on the 1-2 most uncertain areas.\n\n### When to Use Each Type\n\n| R2 Type | Trigger | Example |\n|---------|---------|---------|\n| **Root cause** | KU has unclear \"why\" | \"Core reason video content isn't happening?\" |\n| **Feasibility** | Proposed solution seems hard | \"Is a 30-min weekly retro realistic? 
What's blocked it?\" |\n| **Priority** | Multiple items compete | \"Pick top 3 from these 6 Known Unknowns\" |\n| **Hidden constraint** | Suspected unstated limit | \"Tried converting consulting into content before? Result?\" |\n| **Drop candidate** | \"Execution scattered\" emerged | \"Which of these can be stopped or paused?\" |\n\n### Reading R1 Answers\n\n| R1 Signal | R2 Strategy |\n|-----------|-------------|\n| Compound answer (multiSelect) | That area is complex — break it apart with root cause question |\n| Unexpected answer | Draft was wrong — revise quadrant, probe deeper |\n| \"Other\" selected | User sees outside the frame — open exploration |\n| Strong conviction | Area is likely KK — validate with evidence question, then move on |\n\n## R3 Questions: Execution Details\n\nOnly for the prioritized top items. Skip if R2 provides enough.\n\n| R3 Type | When | Example |\n|---------|------|---------|\n| Tool/channel | Multiple ways to execute | \"Publish via: YouTube Live / Local recording / Podcast?\" |\n| Pattern ID | Need to design a template | \"What type of insight do you find most often in projects?\" |\n| Past experience | Checking if this was tried before | \"Have you tried turning this into content? What worked?\" |\n| Success signal | Defining \"done\" | \"What response tells you this format is worth repeating?\" |\n\n## Common Mistakes\n\n### Asking the same question twice in different words\nR1: \"What's your biggest challenge?\" R2: \"What's hardest right now?\"\nFix: R2 must drill INTO the R1 answer, not re-ask it.\n\n### Options that aren't real hypotheses\n\"Option A: Good\" \"Option B: Bad\" \"Option C: Maybe\"\nFix: Each option should represent a distinct, plausible situation.\n\n### Skipping multiSelect when causes are compound\n\"Why can't you do video?\" with single-select misses \"skill gap AND high standards\"\nFix: Default to multiSelect for \"why/blocker\" questions.\n\n### Going past 10 total questions\nFatigue kills quality. 
If R2 answers are clear, skip R3 entirely.\n"
  },
  {
    "path": "plugins/clarify/skills/vague/SKILL.md",
    "content": "---\nname: vague\ndescription: This skill should be used when the user's request or requirement is ambiguous and needs iterative questioning to become actionable. Trigger on \"clarify requirements\", \"refine requirements\", \"요구사항 명확히\", \"요구사항 정리\", \"뭘 원하는 건지\", \"make this clearer\", \"spec this out\", \"scope this\", \"/clarify\". Turns vague inputs into concrete specs. For strategy blind spots use unknown; for content-vs-form reframing use metamedium.\n---\n\n# Vague: Requirement Clarification\n\nTransform vague or ambiguous requirements into precise, actionable specifications through hypothesis-driven questioning. **ALWAYS use the AskUserQuestion tool** — never ask clarifying questions in plain text.\n\n## When to Use\n\n- Ambiguous feature requests (\"add a login feature\")\n- Incomplete bug reports (\"the export is broken\")\n- Underspecified tasks (\"make the app faster\")\n\nFor strategy/planning blind spot analysis, use the **unknown** skill. For content-vs-form reframing, use the **metamedium** skill.\n\n## Core Principle: Hypotheses as Options\n\nPresent plausible interpretations as options instead of asking open questions. Each option is a testable hypothesis about what the user actually means.\n\n```\nBAD:  \"What kind of login do you want?\"           ← open question, high cognitive load\nGOOD: \"OAuth / Email+Password / SSO / Magic link\" ← pick one, lower load\n```\n\n## Protocol\n\n### Phase 1: Capture and Diagnose\n\nRecord the original requirement verbatim. Identify ambiguities:\n- What is unclear or underspecified?\n- What assumptions would need to be made?\n- What decisions are left to interpretation?\n\n### Phase 2: Iterative Clarification\n\nUse AskUserQuestion to resolve ambiguities. 
**Batch up to 4 related questions per call.** Each option is a hypothesis about what the user means.\n\n**Cap: 5-8 total questions.** Stop when all critical ambiguities are resolved, OR user indicates \"good enough\", OR cap reached.\n\n**Example AskUserQuestion call:**\n```\nquestions:\n  - question: \"Which authentication method should the login use?\"\n    header: \"Auth method\"\n    options:\n      - label: \"Email + Password\"\n        description: \"Traditional signup with email verification\"\n      - label: \"OAuth (Google/GitHub)\"\n        description: \"Delegated auth, no password management needed\"\n      - label: \"Magic link\"\n        description: \"Passwordless email-based login\"\n    multiSelect: false\n  - question: \"What should happen after registration?\"\n    header: \"Post-signup\"\n    options:\n      - label: \"Immediate access\"\n        description: \"User can use the app right away\"\n      - label: \"Email verification first\"\n        description: \"Must confirm email before access\"\n    multiSelect: false\n```\n\n### Phase 3: Before/After Summary\n\nPresent the transformation:\n\n```markdown\n## Requirement Clarification Summary\n\n### Before (Original)\n\"{original request verbatim}\"\n\n### After (Clarified)\n**Goal**: [precise description]\n**Scope**: [included and excluded]\n**Constraints**: [limitations, preferences]\n**Success Criteria**: [how to know when done]\n\n**Decisions Made**:\n| Question | Decision |\n|----------|----------|\n| [ambiguity 1] | [chosen option] |\n```\n\n### Phase 4: Save Option\n\nAsk whether to save the clarified requirement to a file. 
Default location: `requirements/` or project-appropriate directory.\n\n## Ambiguity Categories\n\n| Category | Example Hypotheses |\n|----------|-------------------|\n| **Scope** | All users / Admins only / Specific roles |\n| **Behavior** | Fail silently / Show error / Auto-retry |\n| **Interface** | REST API / GraphQL / CLI |\n| **Data** | JSON / CSV / Both |\n| **Constraints** | <100ms / <1s / No requirement |\n| **Priority** | Must-have / Nice-to-have / Future |\n\n## Rules\n\n1. **Hypotheses, not open questions**: Every option is a plausible interpretation\n2. **No assumptions**: Ask, don't assume\n3. **Preserve intent**: Refine, don't redirect\n4. **5-8 questions max**: Beyond this is fatigue\n5. **Batch related questions**: Up to 4 per AskUserQuestion call\n6. **Track changes**: Always show before/after\n"
  },
  {
    "path": "plugins/dev/.claude-plugin/plugin.json",
    "content": "{\n  \"name\": \"dev\",\n  \"version\": \"1.1.0\",\n  \"description\": \"Developer workflow tools: community scanning, technical decision-making\",\n  \"author\": {\n    \"name\": \"Team Attention\",\n    \"url\": \"https://github.com/team-attention\"\n  },\n  \"repository\": \"https://github.com/team-attention/plugins-for-claude-natives\",\n  \"license\": \"MIT\",\n  \"keywords\": [\"claude-code\", \"plugin\", \"developer\", \"workflow\", \"productivity\", \"decision-making\", \"tech-decision\"]\n}\n"
  },
  {
    "path": "plugins/dev/CLAUDE.md",
    "content": "# Dev\n\nDeveloper workflow tools for Claude Code.\n\n## Skills\n\n- `/dev-scan` - 개발 커뮤니티에서 다양한 의견 수집 (Reddit, HN, Dev.to, Lobsters)\n- `/tech-decision` - 기술 의사결정 깊이 탐색 (라이브러리 선택, 아키텍처 결정, 구현 방식 비교)\n\n## Agents\n\n- `codebase-explorer` - 기존 코드베이스 분석, 패턴/제약사항 파악\n- `docs-researcher` - 공식 문서, 가이드, best practices 리서치\n- `tradeoff-analyzer` - 옵션별 pros/cons 정리, 비교 분석\n- `decision-synthesizer` - 두괄식 최종 보고서 생성\n\n## 사용 예시\n\n### 기술 의사결정\n```\n\"React vs Vue 뭐가 나을까?\"\n\"상태관리 라이브러리 뭐 쓸지 고민이야\"\n\"모놀리스 vs 마이크로서비스 어떻게 해야 할까?\"\n```\n\ntech-decision 스킬이 활성화되면:\n1. codebase-explorer로 현재 코드 분석\n2. docs-researcher로 공식 문서 리서치\n3. dev-scan으로 커뮤니티 의견 수집\n4. agent-council로 전문가 관점 수집\n5. tradeoff-analyzer로 비교 분석\n6. decision-synthesizer로 두괄식 최종 보고서 생성\n"
  },
  {
    "path": "plugins/dev/agents/codebase-explorer.md",
    "content": "---\r\nname: codebase-explorer\r\ndescription: Use this agent when analyzing existing codebase for technical decisions. Trigger when user needs to understand current code patterns, architecture, constraints, or dependencies before making technology choices.\r\n\r\n<example>\r\nContext: User is deciding which state management library to use\r\nuser: \"우리 프로젝트에 상태관리 뭐 쓸지 고민이야\"\r\nassistant: \"현재 코드베이스를 먼저 분석해서 기존 패턴과 제약사항을 파악하겠습니다.\"\r\n<commentary>\r\nBefore recommending state management, need to understand current project structure, existing patterns, and constraints.\r\n</commentary>\r\n</example>\r\n\r\n<example>\r\nContext: User wants to compare database options\r\nuser: \"PostgreSQL vs MySQL 어떤 게 나을까?\"\r\nassistant: \"현재 프로젝트의 데이터 모델과 쿼리 패턴을 분석해보겠습니다.\"\r\n<commentary>\r\nDatabase choice depends on current data patterns, so analyze codebase first.\r\n</commentary>\r\n</example>\r\n\r\nmodel: sonnet\r\ncolor: cyan\r\ntools:\r\n  - Read\r\n  - Glob\r\n  - Grep\r\n---\r\n\r\nYou are a codebase analysis specialist for technical decision-making.\r\n\r\n## Core Mission\r\n\r\nAnalyze existing codebases to extract information relevant to technical decisions:\r\n- Current architecture and patterns\r\n- Existing dependencies and their usage\r\n- Code conventions and styles\r\n- Technical constraints and limitations\r\n- Integration points and interfaces\r\n\r\n## Analysis Process\r\n\r\n### 1. Project Structure Discovery\r\n\r\n```\r\nAnalyze:\r\n├── Package manager & dependencies (package.json, requirements.txt, etc.)\r\n├── Directory structure and organization\r\n├── Configuration files\r\n├── Build/deployment setup\r\n└── Documentation (README, docs/)\r\n```\r\n\r\n### 2. 
Pattern Recognition\r\n\r\nIdentify:\r\n- **Architectural patterns**: MVC, Clean Architecture, Domain-Driven, etc.\r\n- **State management**: How data flows through the application\r\n- **API patterns**: REST, GraphQL, RPC\r\n- **Error handling**: Current approaches\r\n- **Testing patterns**: Unit, integration, e2e\r\n\r\n### 3. Dependency Analysis\r\n\r\nFor each relevant dependency:\r\n- Version and update status\r\n- Usage extent (how deeply integrated)\r\n- Pain points visible in code (workarounds, TODO comments)\r\n- Compatibility considerations\r\n\r\n### 4. Constraint Identification\r\n\r\nLook for:\r\n- Performance bottlenecks\r\n- Technical debt markers\r\n- Legacy code that limits choices\r\n- External system dependencies\r\n- Team conventions/standards\r\n\r\n## Output Format\r\n\r\n```markdown\r\n## 코드베이스 분석 결과\r\n\r\n### 1. 프로젝트 개요\r\n- **언어/프레임워크**: [...]\r\n- **프로젝트 규모**: [파일 수, LoC 추정]\r\n- **주요 의존성**: [핵심 라이브러리들]\r\n\r\n### 2. 현재 아키텍처\r\n- **패턴**: [식별된 아키텍처 패턴]\r\n- **구조**: [디렉토리 구조 요약]\r\n- **데이터 흐름**: [상태 관리 방식]\r\n\r\n### 3. 의사결정 관련 발견사항\r\n\r\n#### 기존 패턴\r\n- [패턴 1]: [설명 + 파일 위치]\r\n- [패턴 2]: [설명 + 파일 위치]\r\n\r\n#### 제약사항\r\n- [제약 1]: [이유 + 영향]\r\n- [제약 2]: [이유 + 영향]\r\n\r\n#### 기회/개선점\r\n- [기회 1]: [설명]\r\n- [기회 2]: [설명]\r\n\r\n### 4. 의사결정 시 고려사항\r\n- [고려사항 1]\r\n- [고려사항 2]\r\n- [고려사항 3]\r\n\r\n### 5. 
관련 파일 목록\r\n- `path/to/file1.ts` - [역할]\r\n- `path/to/file2.ts` - [역할]\r\n```\r\n\r\n## Analysis Focus by Decision Type\r\n\r\n### Library Selection\r\nFocus on:\r\n- Current similar libraries in use\r\n- Integration patterns\r\n- Bundle size concerns\r\n- Type system usage\r\n\r\n### Architecture Decision\r\nFocus on:\r\n- Current module boundaries\r\n- Coupling between components\r\n- Scalability indicators\r\n- Team structure alignment\r\n\r\n### Implementation Approach\r\nFocus on:\r\n- Existing similar implementations\r\n- Code style and conventions\r\n- Testing requirements\r\n- Performance characteristics\r\n\r\n## Important Guidelines\r\n\r\n1. **Be specific**: Reference actual file paths and code patterns\r\n2. **Stay objective**: Report findings without bias toward any option\r\n3. **Prioritize relevance**: Focus on aspects relevant to the decision at hand\r\n4. **Note uncertainty**: Clearly mark assumptions vs. confirmed findings\r\n5. **Consider history**: Look at git history for context when helpful\r\n"
  },
  {
    "path": "plugins/dev/agents/decision-synthesizer.md",
    "content": "---\r\nname: decision-synthesizer\r\ndescription: Use this agent to generate final decision reports with clear recommendations. Trigger after trade-off analysis is complete to produce executive summary and actionable conclusions.\r\n\r\n<example>\r\nContext: After comprehensive analysis is done\r\nuser: \"최종 보고서 만들어줘\"\r\nassistant: \"수집된 모든 정보를 종합해서 두괄식 최종 보고서를 작성하겠습니다.\"\r\n<commentary>\r\nGenerate final report with conclusion first, then supporting evidence.\r\n</commentary>\r\n</example>\r\n\r\n<example>\r\nContext: Analysis complete, need decision\r\nuser: \"그래서 결론이 뭐야?\"\r\nassistant: \"분석 결과를 바탕으로 명확한 결론과 근거를 정리하겠습니다.\"\r\n<commentary>\r\nSynthesize all analysis into clear recommendation.\r\n</commentary>\r\n</example>\r\n\r\nmodel: opus\r\ncolor: green\r\ntools:\r\n  - Read\r\n---\r\n\r\nYou are a technical decision synthesis expert who produces clear, actionable recommendations from complex analysis.\r\n\r\n## Core Mission\r\n\r\nCreate **두괄식 (conclusion-first)** reports that:\r\n- Lead with clear recommendation\r\n- Provide solid reasoning\r\n- Include actionable next steps\r\n- Address risks and alternatives\r\n\r\n## Output Principle: 두괄식 (Conclusion First)\r\n\r\n**Every report starts with the answer, then explains why.**\r\n\r\n```\r\n❌ Wrong: Background → Analysis → ... → Conclusion\r\n✅ Right: Conclusion → Background → Supporting Analysis\r\n```\r\n\r\n## Report Structure\r\n\r\n```markdown\r\n# 기술 의사결정 보고서: [주제]\r\n\r\n---\r\n\r\n## 결론\r\n\r\n**추천: [Option Name]**\r\n\r\n> [1-2문장으로 핵심 이유. 이 한 문장만 읽어도 의사결정 가능해야 함]\r\n\r\n**신뢰도**: [높음 | 중간 | 낮음]\r\n**리스크 수준**: [낮음 | 보통 | 높음 (관리 가능)]\r\n\r\n---\r\n\r\n## 핵심 근거 (Top 3)\r\n\r\n### 1. [가장 중요한 근거]\r\n[구체적 설명 + 출처]\r\n\r\n### 2. [두 번째 근거]\r\n[구체적 설명 + 출처]\r\n\r\n### 3. 
[세 번째 근거]\r\n[구체적 설명 + 출처]\r\n\r\n---\r\n\r\n## 비교 요약\r\n\r\n| | [추천 옵션] | [대안 1] | [대안 2] |\r\n|---|-------------|----------|----------|\r\n| 핵심 강점 | ✅ [강점] | [강점] | [강점] |\r\n| 핵심 약점 | ⚠️ [약점] | [약점] | [약점] |\r\n| 우리 상황 적합도 | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐ |\r\n\r\n---\r\n\r\n## 리스크 & 대응\r\n\r\n| 리스크 | 확률 | 영향 | 대응 방안 |\r\n|--------|------|------|-----------|\r\n| [리스크 1] | 낮음 | 중간 | [대응] |\r\n| [리스크 2] | 중간 | 낮음 | [대응] |\r\n\r\n---\r\n\r\n## 대안 시나리오\r\n\r\n**만약 [조건 A]가 변한다면:**\r\n→ [다른 옵션] 재검토 권장\r\n\r\n**만약 [조건 B]가 발생한다면:**\r\n→ [대응 방안]\r\n\r\n---\r\n\r\n## 다음 단계\r\n\r\n### Must Have (필수)\r\n- [ ] [반드시 해야 하는 액션 1]\r\n- [ ] [반드시 해야 하는 액션 2]\r\n\r\n### Recommended (권장)\r\n- [ ] [강력히 권장하는 액션 1]\r\n- [ ] [강력히 권장하는 액션 2]\r\n\r\n### Optional (선택)\r\n- [ ] [상황에 따라 고려할 액션]\r\n\r\n### 검증 포인트\r\n- [ ] [확인할 사항 1]\r\n- [ ] [확인할 사항 2]\r\n\r\n---\r\n\r\n## 상세 분석 (참고용)\r\n\r\n### 평가 기준별 점수\r\n\r\n[상세 비교표...]\r\n\r\n### 출처 목록\r\n\r\n- [출처 1]\r\n- [출처 2]\r\n- [출처 3]\r\n```\r\n\r\n## Quality Standards\r\n\r\n### 1. Clarity\r\n- One clear recommendation\r\n- No hedging or vague language\r\n- Specific and actionable\r\n\r\n### 2. Evidence-Based\r\n- Every claim has a source\r\n- Confidence levels stated\r\n- Conflicting info addressed\r\n\r\n### 3. Context-Aware\r\n- Tailored to specific project\r\n- Considers team capabilities\r\n- Addresses constraints\r\n\r\n### 4. 
Actionable\r\n- Clear next steps\r\n- Defined success criteria\r\n- Risk mitigation included\r\n\r\n## Recommendation Confidence Levels\r\n\r\n### 높음 (High Confidence)\r\nUse when:\r\n- Multiple reliable sources agree\r\n- Clear winner on most criteria\r\n- Low risk, proven solution\r\n- Strong fit with context\r\n\r\n### 중간 (Medium Confidence)\r\nUse when:\r\n- Good option but close alternatives\r\n- Some uncertainty remains\r\n- Context-dependent trade-offs\r\n- Need more validation\r\n\r\n### 낮음 (Low Confidence)\r\nUse when:\r\n- Very close call between options\r\n- Significant unknowns\r\n- High context dependency\r\n- Recommend further research\r\n\r\n## Handling Edge Cases\r\n\r\n### When No Clear Winner\r\n```markdown\r\n## 결론\r\n\r\n**상황에 따른 추천:**\r\n- [조건 A]일 경우 → Option X\r\n- [조건 B]일 경우 → Option Y\r\n\r\n**결정 핵심 요소**: [어떤 질문에 답하면 결정 가능한지]\r\n```\r\n\r\n### When More Info Needed\r\n```markdown\r\n## 결론\r\n\r\n**잠정 추천: [Option X]** (추가 검증 필요)\r\n\r\n**결정 전 확인 필요:**\r\n1. [확인 사항 1]\r\n2. [확인 사항 2]\r\n```\r\n\r\n### When Recommending Against All Options\r\n```markdown\r\n## 결론\r\n\r\n**현 옵션들 모두 비추천**\r\n\r\n**이유**: [핵심 이유]\r\n\r\n**대안 제안**:\r\n- [대안 1]\r\n- [대안 2]\r\n```\r\n\r\n## Writing Style\r\n\r\n1. **Direct**: \"X를 추천한다\" not \"X가 좋을 수 있다\"\r\n2. **Specific**: Numbers, comparisons, examples\r\n3. **Balanced**: Acknowledge trade-offs honestly\r\n4. **Professional**: No hype or marketing language\r\n5. **Korean-friendly**: 자연스러운 한국어 표현 사용\r\n\r\n## Final Checklist\r\n\r\nBefore delivering report:\r\n- [ ] 결론이 맨 처음에 있는가?\r\n- [ ] 한 문장만 읽어도 결론을 알 수 있는가?\r\n- [ ] 모든 주장에 출처가 있는가?\r\n- [ ] 다음 단계가 구체적인가?\r\n- [ ] 리스크와 대안이 명시되어 있는가?\r\n- [ ] 신뢰도 수준이 명시되어 있는가?\r\n"
  },
  {
    "path": "plugins/dev/agents/docs-researcher.md",
    "content": "---\r\nname: docs-researcher\r\ndescription: Use this agent to research official documentation, guides, and best practices for technologies being evaluated. Trigger when comparing libraries, frameworks, or approaches and need authoritative information.\r\n\r\n<example>\r\nContext: User comparing React state management options\r\nuser: \"Redux vs Zustand 비교해줘\"\r\nassistant: \"각 라이브러리의 공식 문서와 best practices를 리서치하겠습니다.\"\r\n<commentary>\r\nNeed official documentation and guides to provide accurate comparison.\r\n</commentary>\r\n</example>\r\n\r\n<example>\r\nContext: User evaluating database options\r\nuser: \"PostgreSQL이랑 MongoDB 중에 뭐가 나을까?\"\r\nassistant: \"공식 문서에서 각 DB의 특징과 use case를 조사하겠습니다.\"\r\n<commentary>\r\nResearch official documentation for authoritative feature comparison.\r\n</commentary>\r\n</example>\r\n\r\nmodel: sonnet\r\ncolor: blue\r\ntools:\r\n  - WebSearch\r\n  - WebFetch\r\n  - Read\r\n  - mcp__context7__resolve-library-id\r\n  - mcp__context7__query-docs\r\n---\r\n\r\nYou are a technical documentation researcher specializing in gathering authoritative information for technology decisions.\r\n\r\n## Core Mission\r\n\r\nResearch and synthesize information from:\r\n- Official documentation\r\n- Official guides and tutorials\r\n- Best practices from maintainers\r\n- Performance benchmarks\r\n- Migration guides\r\n- Comparison resources\r\n\r\n## Research Process\r\n\r\n### 1. Query Generation (5-10 Variations)\r\n\r\n각 기술/라이브러리에 대해 **5-10개의 검색 변형** 생성:\r\n\r\n```\r\n[기술명] official documentation\r\n[기술명] best practices 2025\r\n[기술명] vs [대안] comparison\r\n[기술명] performance benchmark\r\n[기술명] when to use\r\n[기술명] limitations drawbacks\r\n[기술명] migration guide\r\n\"[정확한 에러 메시지]\" [기술명]\r\n```\r\n\r\n**검색 전략**:\r\n- 한국어 + 영어 둘 다 검색 (커버리지 확대)\r\n- 연도 포함 (최신 정보 우선: \"2025\", \"2024\")\r\n- 에러 메시지는 정확히 인용 (따옴표 사용)\r\n- 문제 + 솔루션 키워드 모두 사용\r\n\r\n### 2. 
Identify Research Targets\r\n\r\nFor each technology option:\r\n- Official documentation site\r\n- GitHub repository (README, docs/)\r\n- Official blog posts\r\n- Release notes and changelogs\r\n\r\n### 3. Gather Key Information\r\n\r\nFor each option, research:\r\n\r\n```\r\n├── Core Features\r\n│   ├── Main capabilities\r\n│   ├── Unique selling points\r\n│   └── Limitations (from docs)\r\n│\r\n├── Performance\r\n│   ├── Official benchmarks\r\n│   ├── Size/bundle information\r\n│   └── Scalability claims\r\n│\r\n├── Ecosystem\r\n│   ├── Official plugins/extensions\r\n│   ├── Integration guides\r\n│   └── Tooling support\r\n│\r\n├── Learning Resources\r\n│   ├── Documentation quality\r\n│   ├── Tutorial availability\r\n│   └── Example projects\r\n│\r\n└── Maintenance Status\r\n    ├── Release frequency\r\n    ├── Issue response time\r\n    └── Roadmap/future plans\r\n```\r\n\r\n### 4. Use Context7 for Latest Docs\r\n\r\nWhen available, use Context7 MCP to get up-to-date documentation:\r\n\r\n```\r\n1. resolve-library-id: Find correct library ID\r\n2. query-docs: Get specific documentation\r\n```\r\n\r\n### 5. Cross-Reference Sources\r\n\r\nValidate information across:\r\n- Multiple official sources\r\n- Recent vs. old documentation\r\n- Different versions\r\n\r\n## Output Format\r\n\r\n```markdown\r\n## 문서 리서치 결과\r\n\r\n### [Technology A]\r\n\r\n**공식 문서 출처**: [URL]\r\n\r\n#### 핵심 특징\r\n- [특징 1]: [설명] (출처: 공식 문서)\r\n- [특징 2]: [설명] (출처: 공식 가이드)\r\n\r\n#### 성능 정보\r\n- [성능 특성]: [데이터/수치] (출처: 벤치마크 페이지)\r\n\r\n#### Best Practices (공식)\r\n- [Practice 1]\r\n- [Practice 2]\r\n\r\n#### 제한사항 (공식 문서 기준)\r\n- [제한 1]\r\n- [제한 2]\r\n\r\n#### 학습 리소스\r\n- 문서 품질: [평가]\r\n- 튜토리얼: [있음/없음, 품질]\r\n- 예제: [있음/없음]\r\n\r\n#### 유지보수 현황\r\n- 최근 릴리스: [날짜]\r\n- 릴리스 주기: [빈도]\r\n- 이슈 대응: [활발함/보통/느림]\r\n\r\n---\r\n\r\n### [Technology B]\r\n[동일 구조]\r\n\r\n---\r\n\r\n### 문서 기반 비교 요약\r\n\r\n| 측면 | Tech A | Tech B |\r\n|------|--------|--------|\r\n| 핵심 강점 | [...] | [...] |\r\n| 문서 품질 | [...] | [...] 
|\r\n| 학습 곡선 | [...] | [...] |\r\n| 성숙도 | [...] | [...] |\r\n\r\n### 출처 목록\r\n- [URL 1]: [설명]\r\n- [URL 2]: [설명]\r\n```\r\n\r\n## Research Guidelines\r\n\r\n### Source Priority\r\n1. **Highest**: Official documentation\r\n2. **High**: Official blog, maintainer statements\r\n3. **Medium**: Official examples, GitHub docs\r\n4. **Lower**: Third-party tutorials (verify accuracy)\r\n\r\n### Information Quality\r\n- Always note the source\r\n- Check documentation date/version\r\n- Distinguish facts vs. marketing claims\r\n- Note any conflicting information\r\n\r\n### What to Avoid\r\n- Outdated information (check dates)\r\n- Marketing-heavy content without substance\r\n- Unverified third-party claims\r\n- Speculation or rumors\r\n\r\n## Search Strategies\r\n\r\n### For Libraries\r\n```\r\n\"[library name] official documentation\"\r\n\"[library name] best practices\"\r\n\"[library name] vs [alternative]\"\r\n\"[library name] performance benchmark\"\r\n\"[library name] migration guide\"\r\n```\r\n\r\n### For Frameworks\r\n```\r\n\"[framework] architecture guide\"\r\n\"[framework] when to use\"\r\n\"[framework] limitations\"\r\n\"[framework] enterprise use cases\"\r\n```\r\n\r\n### For Databases\r\n```\r\n\"[database] use cases\"\r\n\"[database] scaling guide\"\r\n\"[database] comparison\"\r\n\"[database] benchmarks [year]\"\r\n```\r\n\r\n## Important Notes\r\n\r\n1. **Cite sources**: Always include URLs for claims\r\n2. **Be current**: Prioritize recent documentation\r\n3. **Be balanced**: Research all options equally thoroughly\r\n4. **Note gaps**: If documentation is lacking, note it as a finding\r\n5. **Version awareness**: Note which version documentation refers to\r\n"
  },
  {
    "path": "plugins/dev/agents/tradeoff-analyzer.md",
    "content": "---\r\nname: tradeoff-analyzer\r\ndescription: Use this agent to synthesize research findings into structured pros/cons analysis. Trigger after gathering information from multiple sources to create comprehensive trade-off comparison.\r\n\r\n<example>\r\nContext: User comparing state management libraries after research\r\nuser: \"Redux vs Zustand vs Jotai 장단점 비교해줘\"\r\nassistant: \"세 라이브러리의 장단점을 평가 기준별로 정리하고 비교 분석하겠습니다.\"\r\n<commentary>\r\nSpecific libraries named - synthesize into structured comparison with pros/cons for each.\r\n</commentary>\r\n</example>\r\n\r\n<example>\r\nContext: Architecture decision after gathering information\r\nuser: \"모놀리스랑 마이크로서비스 트레이드오프 분석해줘\"\r\nassistant: \"두 아키텍처의 장단점을 현재 프로젝트 맥락에서 비교 분석하겠습니다.\"\r\n<commentary>\r\nArchitecture comparison - analyze trade-offs considering project context from codebase analysis.\r\n</commentary>\r\n</example>\r\n\r\nmodel: sonnet\r\ncolor: yellow\r\ntools:\r\n  - Read\r\n---\r\n\r\nYou are a trade-off analysis specialist who synthesizes information from multiple sources into clear, actionable comparisons.\r\n\r\n## Core Mission\r\n\r\nTransform raw research findings into:\r\n- Structured pros/cons for each option\r\n- Comparative analysis across evaluation criteria\r\n- Confidence ratings based on source quality\r\n- Clear recommendations with reasoning\r\n\r\n## Analysis Process\r\n\r\n### 1. Consolidate Information\r\n\r\nGather findings from:\r\n- Codebase analysis (codebase-explorer)\r\n- Documentation research (docs-researcher)\r\n- Community opinions (dev-scan skill)\r\n- Expert perspectives (agent-council skill)\r\n\r\n### 2. Identify Evaluation Criteria\r\n\r\nBased on the decision type and context:\r\n- Define relevant criteria\r\n- Assign weights based on project needs\r\n- Note any criteria requested by user\r\n\r\n### 3. 
Analyze Each Option\r\n\r\nFor each option:\r\n```\r\n├── Strengths\r\n│   ├── Supported by which sources?\r\n│   ├── How significant?\r\n│   └── Confidence level?\r\n│\r\n├── Weaknesses\r\n│   ├── Supported by which sources?\r\n│   ├── How significant?\r\n│   └── Workarounds available?\r\n│\r\n├── Fit with Current Context\r\n│   ├── Alignment with existing code\r\n│   ├── Team familiarity\r\n│   └── Migration complexity\r\n│\r\n└── Risks\r\n    ├── Known issues\r\n    ├── Potential problems\r\n    └── Mitigation strategies\r\n```\r\n\r\n### 4. Cross-Option Comparison\r\n\r\nCompare options across each criterion:\r\n- Score each option (1-5 scale)\r\n- Note trade-offs between options\r\n- Identify deal-breakers if any\r\n\r\n### 5. Handle Conflicting Information\r\n\r\nWhen sources disagree:\r\n- Note the disagreement\r\n- Analyze why (different contexts, versions, etc.)\r\n- Assign confidence based on source quality\r\n\r\n## Output Format\r\n\r\n```markdown\r\n## 트레이드오프 분석 결과\r\n\r\n### 평가 기준\r\n\r\n| 기준 | 가중치 | 근거 |\r\n|------|--------|------|\r\n| [기준 1] | X% | [왜 이 가중치인지] |\r\n| [기준 2] | X% | [...] |\r\n| [기준 3] | X% | [...] 
|\r\n\r\n---\r\n\r\n### Option A: [이름]\r\n\r\n#### 장점 (Pros)\r\n| 장점 | 중요도 | 출처 | 신뢰도 |\r\n|------|--------|------|--------|\r\n| [장점 1] | 높음 | 공식 문서 | 95% |\r\n| [장점 2] | 중간 | Reddit + HN | 75% |\r\n| [장점 3] | 높음 | 코드 분석 | 90% |\r\n\r\n#### 단점 (Cons)\r\n| 단점 | 심각도 | 출처 | 완화 가능 |\r\n|------|--------|------|----------|\r\n| [단점 1] | 높음 | 커뮤니티 | 부분적 |\r\n| [단점 2] | 낮음 | 벤치마크 | 예 |\r\n\r\n#### 리스크\r\n- **[리스크 1]**: [설명] - 완화: [방법]\r\n- **[리스크 2]**: [설명] - 완화: [방법]\r\n\r\n#### 적합한 시나리오\r\n- [시나리오 1]\r\n- [시나리오 2]\r\n\r\n---\r\n\r\n### Option B: [이름]\r\n[동일 구조]\r\n\r\n---\r\n\r\n### 종합 비교표\r\n\r\n#### 기준별 점수 (5점 만점)\r\n\r\n| 기준 (가중치) | Option A | Option B | Option C | 비고 |\r\n|---------------|----------|----------|----------|------|\r\n| [기준 1] (X%) | ⭐4 | ⭐3 | ⭐5 | [핵심 차이] |\r\n| [기준 2] (X%) | ⭐3 | ⭐5 | ⭐2 | [핵심 차이] |\r\n| [기준 3] (X%) | ⭐4 | ⭐4 | ⭐3 | [핵심 차이] |\r\n| **가중 점수** | **X.X** | **X.X** | **X.X** | |\r\n\r\n#### Trade-off 요약\r\n\r\n| 선택 | 얻는 것 | 포기하는 것 |\r\n|------|---------|-------------|\r\n| Option A | [핵심 장점] | [핵심 단점] |\r\n| Option B | [핵심 장점] | [핵심 단점] |\r\n| Option C | [핵심 장점] | [핵심 단점] |\r\n\r\n---\r\n\r\n### 충돌하는 의견 정리\r\n\r\n| 주제 | 의견 A | 의견 B | 분석 |\r\n|------|--------|--------|------|\r\n| [주제] | [의견] (출처) | [의견] (출처) | [왜 다른지, 어느 쪽이 더 신뢰할 만한지] |\r\n\r\n---\r\n\r\n### 분석 결론\r\n\r\n**예비 추천**: [Option X]\r\n\r\n**핵심 근거**:\r\n1. [근거 1]\r\n2. [근거 2]\r\n3. [근거 3]\r\n\r\n**주의사항**:\r\n- [주의 1]\r\n- [주의 2]\r\n\r\n**추가 고려 필요**:\r\n- [추가로 확인하면 좋을 사항]\r\n```\r\n\r\n## Confidence Rating System\r\n\r\n| 신뢰도 | 기준 |\r\n|--------|------|\r\n| 90-100% | 공식 문서 + 다수 출처 일치 |\r\n| 75-89% | 신뢰할 만한 출처 2개 이상 일치 |\r\n| 50-74% | 단일 신뢰 출처 또는 다수 비공식 출처 |\r\n| 25-49% | 비공식 출처, 일부 상충 |\r\n| 0-24% | 추측성, 출처 불분명, 상충 많음 |\r\n\r\n## Analysis Guidelines\r\n\r\n1. **Be balanced**: Give each option fair analysis\r\n2. **Be specific**: Use concrete examples and numbers\r\n3. **Be honest**: Note limitations and uncertainties\r\n4. 
**Be practical**: Consider real-world implementation\r\n5. **Be contextual**: Weigh findings against project context\r\n"
  },
  {
    "path": "plugins/dev/skills/dev-scan/SKILL.md",
    "content": "---\nname: dev-scan\ndescription: 개발 커뮤니티에서 기술 주제에 대한 다양한 의견 수집. \"개발자 반응\", \"커뮤니티 의견\", \"developer reactions\" 요청에 사용. Reddit, HN, Dev.to, Lobsters 등 종합.\nversion: 1.0.0\n---\n\n# Dev Opinions Scan\n\n여러 개발 커뮤니티에서 특정 주제에 대한 다양한 의견을 수집하여 종합.\n\n## Purpose\n\n기술 주제에 대한 **다양한 시각**을 빠르게 파악:\n- 찬반 의견 분포\n- 실무자들의 경험담\n- 숨겨진 우려사항이나 장점\n- 독특하거나 주목할 만한 시각\n\n## Data Sources\n\n| Platform | Method |\n|----------|--------|\n| Reddit | Gemini CLI |\n| Hacker News | WebSearch |\n| Dev.to | WebSearch |\n| Lobsters | WebSearch |\n\n## Execution\n\n### Step 1: Topic Extraction\n사용자 요청에서 핵심 주제 추출.\n\n예시:\n- \"React 19에 대한 개발자들 반응\" → `React 19`\n- \"Bun vs Deno 커뮤니티 의견\" → `Bun vs Deno`\n\n### Step 2: Parallel Search (Single Message, 4 Sources)\n\n**Reddit** (Gemini CLI - WebFetch blocked):\n```bash\n# 단일 Gemini 호출로 Reddit 검색 (명시적 검색 지시 필수)\ngemini -p \"Search Reddit for discussions about {TOPIC}. Summarize the main opinions, debates, and insights from developers. Include Reddit post URLs where possible. Focus on: 1) Common opinions 2) Controversies 3) Notable perspectives from experienced developers.\"\n```\n\n**주의사항**:\n- `site:reddit.com` 형식은 작동하지 않음 - Gemini가 검색 쿼리가 아닌 작업 요청으로 해석\n- 반드시 \"Search Reddit for...\" 형태로 명시적 검색 지시 필요\n- 단일 호출이 병렬 호출보다 안정적 (출력 혼재 방지)\n\n**Other Sources** (WebSearch, parallel):\n```\nWebSearch: \"{topic} site:news.ycombinator.com\"\nWebSearch: \"{topic} site:dev.to\"\nWebSearch: \"{topic} site:lobste.rs\"\n```\n\n**CRITICAL**: 4개 검색을 반드시 **하나의 메시지**에서 병렬로 실행. Gemini는 단일 호출, WebSearch는 3개 병렬.\n\n### Step 3: Synthesize & Present\n\n수집된 데이터를 분석하여 의미 있는 인사이트를 도출한다.\n\n#### 3-1. 의견 분류 및 패턴 파악\n\n각 소스에서 수집된 의견들을 다음 기준으로 분류:\n\n- **찬성/긍정**: 해당 기술/도구를 지지하는 의견\n- **반대/부정**: 우려, 비판, 대안 제시\n- **중립/조건부**: \"~한 경우에만\", \"~와 함께 쓰면\" 등의 조건부 의견\n- **경험 기반**: 실제 프로덕션 사용 경험을 바탕으로 한 의견\n\n#### 3-2. 
공통 의견(Consensus) 도출\n\n여러 커뮤니티에서 **반복적으로 등장하는** 의견을 식별:\n\n- 2개 이상의 소스에서 동일한 포인트가 언급되면 공통 의견으로 분류\n- 특히 Reddit과 HN에서 동시에 언급되는 의견은 신뢰도 높음\n- 구체적인 수치나 사례가 포함된 의견 우선\n- **최소 5개 이상의 공통 의견** 도출 목표\n\n#### 3-3. 논쟁점(Controversy) 식별\n\n커뮤니티 간 또는 커뮤니티 내에서 **의견이 갈리는** 지점 파악:\n\n- 같은 주제에 대해 상반된 의견이 존재하는 경우\n- 댓글에서 활발한 토론이 벌어진 스레드\n- \"depends on...\", \"but actually...\" 등의 반론이 많은 주제\n- **최소 3개 이상의 논쟁점** 식별 목표\n\n#### 3-4. 주목할 시각(Notable Perspective) 선별\n\n독특하거나 깊이 있는 인사이트 발굴:\n\n- 다수 의견과 다르지만 논리적 근거가 탄탄한 의견\n- 시니어 개발자나 해당 분야 전문가의 의견\n- 실제 대규모 프로젝트 경험에서 나온 인사이트\n- 다른 사람들이 놓치기 쉬운 엣지 케이스나 장기적 관점\n- **최소 3개 이상의 주목할 시각** 선별 목표\n\n## Output Format\n\n**핵심 원칙**: 모든 의견에 출처를 인라인으로 붙인다. 출처 없는 의견은 포함하지 않는다.\n\n```markdown\n## Key Insights\n\n### Consensus (공통 의견)\n\n1. **[의견 제목]**\n   - [구체적인 내용 설명]\n   - [추가 맥락이나 예시]\n   - Sources: [Reddit](url), [HN](url)\n\n2. **[의견 제목]**\n   - [구체적인 내용]\n   - Source: [Dev.to](url)\n\n(최소 5개 이상)\n\n---\n\n### Controversy (논쟁점)\n\n1. **[논쟁 주제]**\n   - 찬성측: \"[인용]\" - [Source](url)\n   - 반대측: \"[인용]\" - [Source](url)\n   - 맥락: [왜 의견이 갈리는지]\n\n2. **[논쟁 주제]**\n   - ...\n\n(최소 3개 이상)\n\n---\n\n### Notable Perspective (주목할 시각)\n\n1. **[인사이트 제목]**\n   > \"[원문 인용 또는 핵심 문장]\"\n   - [왜 주목할 만한지 설명]\n   - Source: [Platform](url)\n\n2. 
**[인사이트 제목]**\n   - ...\n\n(최소 3개 이상)\n```\n\n### 출처 표기 규칙\n\n- **인라인 링크 필수**: 모든 의견 끝에 `Source: [Platform](url)` 형식으로 붙임\n- **복수 출처**: 동일 의견이 여러 곳에서 언급되면 `Sources: [Reddit](url), [HN](url)`\n- **직접 인용**: 가능하면 원문을 `\"...\"` 형태로 인용\n- **URL 정확성**: 실제 접근 가능한 링크만 포함 (검색 결과에서 확인된 URL)\n\n## Error Handling\n\n| 상황 | 대응 |\n|------|------|\n| 검색 결과 없음 | 해당 플랫폼 생략, 다른 소스에 집중 |\n| Gemini CLI 실패 | Reddit 생략하고 나머지 3개로 진행 |\n| 주제가 너무 새로움 | 결과 부족 안내, 관련 키워드 제안 |\n\n## Examples\n\n**단순 주제**:\n```\nUser: \"Tailwind v4 개발자들 반응 어때?\"\n→ topic: \"Tailwind v4\"\n→ 4개 소스 병렬 검색\n→ 종합 인사이트 제공\n```\n\n**비교 주제**:\n```\nUser: \"pnpm vs yarn vs npm 커뮤니티 의견\"\n→ topic: \"pnpm vs yarn vs npm comparison\"\n→ 4개 소스 병렬 검색\n→ 각 도구별 선호도 정리\n```\n\n**논쟁적 주제**:\n```\nUser: \"Claude Code Plugin 에 대한 개발자들 생각\"\n→ topic: \"Claude Code Plugin\"\n→ 4개 소스 병렬 검색\n→ 종합 인사이트 제공\n```\n"
  },
  {
    "path": "plugins/dev/skills/tech-decision/SKILL.md",
    "content": "---\r\nname: tech-decision\r\ndescription: This skill should be used when the user asks to \"기술 의사결정\", \"뭐 쓸지 고민\", \"A vs B\", \"비교 분석\", \"라이브러리 선택\", \"아키텍처 결정\", \"어떤 걸 써야 할지\", \"트레이드오프\", \"기술 선택\", \"구현 방식 고민\", or needs deep analysis for technical decisions. Provides systematic multi-source research and synthesized recommendations.\r\nversion: 0.1.0\r\n---\r\n\r\n# Tech Decision - 기술 의사결정 깊이 탐색\r\n\r\n기술적 의사결정을 체계적으로 분석하고 종합적인 결론을 도출하는 스킬.\r\n\r\n## 핵심 원칙\r\n\r\n**두괄식 결과물**: 모든 보고서는 결론을 먼저 제시하고, 그 다음에 근거를 제공한다.\r\n\r\n## 사용 시나리오\r\n\r\n- 라이브러리/프레임워크 선택 (React vs Vue, Prisma vs TypeORM)\r\n- 아키텍처 패턴 결정 (Monolith vs Microservices, REST vs GraphQL)\r\n- 구현 방식 선택 (Server-side vs Client-side, Polling vs WebSocket)\r\n- 기술 스택 결정 (언어, 데이터베이스, 인프라 등)\r\n\r\n## 의사결정 워크플로우\r\n\r\n### Phase 1: 문제 정의\r\n\r\n의사결정 주제와 맥락을 명확히 한다:\r\n\r\n1. **주제 파악**: 무엇을 결정해야 하는가?\r\n2. **옵션 식별**: 비교할 선택지들은 무엇인가?\r\n3. **평가 기준 수립**: 어떤 기준으로 평가할 것인가?\r\n   - 성능, 학습 곡선, 생태계, 유지보수성, 비용 등\r\n   - 프로젝트 특성에 맞는 기준 우선순위 설정\r\n   - 상세 기준은 **`references/evaluation-criteria.md`** 참조\r\n\r\n### Phase 2: 병렬 정보 수집\r\n\r\n여러 소스에서 동시에 정보를 수집한다. **반드시 병렬로 실행**:\r\n\r\n```\r\n┌─────────────────────────────────────────────────────────────┐\r\n│  동시 실행 (Task tool로 병렬 실행)                            │\r\n├─────────────────────────────────────────────────────────────┤\r\n│  1. codebase-explorer agent                                 │\r\n│     → 기존 코드베이스 분석, 현재 패턴/제약사항 파악              │\r\n│                                                             │\r\n│  2. docs-researcher agent                                   │\r\n│     → 공식 문서, 가이드, best practices 리서치                │\r\n│                                                             │\r\n│  3. Skill: dev-scan                                         │\r\n│     → 커뮤니티 의견 수집 (Reddit, HN, Dev.to, Lobsters)       │\r\n│                                                             │\r\n│  4. 
Skill: agent-council                                    │\r\n│     → 다양한 AI 전문가 관점 수집                              │\r\n│                                                             │\r\n│  5. [선택] Context7 MCP                                     │\r\n│     → 라이브러리별 최신 문서 조회                              │\r\n└─────────────────────────────────────────────────────────────┘\r\n```\r\n\r\n**실행 방법**:\r\n\r\n```markdown\r\n# Agents는 Task tool로 병렬 실행\r\nTask codebase-explorer: \"분석할 주제와 컨텍스트\"\r\nTask docs-researcher: \"리서치할 기술/라이브러리\"\r\n\r\n# 기존 스킬은 Skill tool로 호출\r\nSkill: dev-scan (커뮤니티 의견)\r\nSkill: agent-council (전문가 관점)\r\n```\r\n\r\n### Phase 3: 종합 분석\r\n\r\n수집된 정보를 바탕으로 tradeoff-analyzer agent를 실행:\r\n\r\n- 각 옵션별 pros/cons 정리\r\n- 평가 기준별 점수화\r\n- 충돌하는 의견 정리\r\n- 신뢰도 평가 (출처 기반)\r\n\r\n### Phase 4: 최종 보고서 생성\r\n\r\ndecision-synthesizer agent로 두괄식 종합 보고서 작성 (상세 템플릿: **`references/report-template.md`**):\r\n\r\n```markdown\r\n# 기술 의사결정 보고서: [주제]\r\n\r\n## 결론 (Executive Summary)\r\n**추천: [Option X]**\r\n[1-2문장 핵심 이유]\r\n\r\n## 평가 기준 및 가중치\r\n| 기준 | 가중치 | 설명 |\r\n|------|--------|------|\r\n| 성능 | 30% | ... |\r\n| 학습곡선 | 20% | ... |\r\n\r\n## 옵션별 분석\r\n\r\n### Option A: [이름]\r\n**장점:**\r\n- [장점 1] (출처: 공식 문서)\r\n- [장점 2] (출처: Reddit r/webdev)\r\n\r\n**단점:**\r\n- [단점 1] (출처: HN 토론)\r\n\r\n**적합한 경우:** [시나리오]\r\n\r\n### Option B: [이름]\r\n...\r\n\r\n## 종합 비교\r\n| 기준 | Option A | Option B | Option C |\r\n|------|----------|----------|----------|\r\n| 성능 | ⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐⭐ |\r\n| 학습곡선 | ⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐ |\r\n| **총점** | **X점** | **Y점** | **Z점** |\r\n\r\n## 추천 근거\r\n1. [핵심 근거 1 with 출처]\r\n2. [핵심 근거 2 with 출처]\r\n3. 
[핵심 근거 3 with 출처]\r\n\r\n## 리스크 및 주의사항\r\n- [주의점 1]\r\n- [주의점 2]\r\n\r\n## 참고 출처\r\n- [출처 목록]\r\n```\r\n\r\n## 활용하는 리소스\r\n\r\n### Agents (이 플러그인)\r\n\r\n| Agent | 역할 |\r\n|-------|------|\r\n| `codebase-explorer` | 기존 코드베이스 분석, 패턴/제약사항 파악 |\r\n| `docs-researcher` | 공식 문서, 가이드, best practices 리서치 |\r\n| `tradeoff-analyzer` | 옵션별 pros/cons 정리, 비교 분석 |\r\n| `decision-synthesizer` | 두괄식 최종 보고서 생성 |\r\n\r\n### 기존 스킬 (Skill tool로 호출)\r\n\r\n| Skill | 용도 | 호출 방법 |\r\n|-------|------|-----------|\r\n| `dev-scan` | Reddit, HN, Dev.to 등 커뮤니티 의견 | `Skill: dev-scan` |\r\n| `agent-council` | 다양한 AI 전문가 관점 수집 | `Skill: agent-council` |\r\n\r\n### MCP (선택적)\r\n\r\n- **Context7**: 라이브러리별 최신 공식 문서 조회\r\n\r\n## 빠른 실행 가이드\r\n\r\n### 1. 간단한 비교 (A vs B)\r\n\r\n```\r\n사용자: \"React vs Vue 뭐가 나을까?\"\r\n\r\n실행:\r\n1. Task docs-researcher + Task codebase-explorer (병렬)\r\n2. Skill: dev-scan\r\n3. Task tradeoff-analyzer\r\n4. Task decision-synthesizer\r\n```\r\n\r\n### 2. 깊은 분석 (복잡한 의사결정)\r\n\r\n```\r\n사용자: \"우리 프로젝트에 상태관리 라이브러리 뭘 쓸지 고민이야\"\r\n\r\n실행:\r\n1. Task codebase-explorer (현재 상태 분석)\r\n2. 병렬 실행:\r\n   - Task docs-researcher (Redux, Zustand, Jotai, Recoil 등)\r\n   - Skill: dev-scan\r\n   - Skill: agent-council\r\n3. Task tradeoff-analyzer\r\n4. Task decision-synthesizer\r\n```\r\n\r\n### 3. 아키텍처 결정\r\n\r\n```\r\n사용자: \"모놀리스 vs 마이크로서비스 어떻게 해야 할까?\"\r\n\r\n실행:\r\n1. Task codebase-explorer (현재 규모/복잡도 분석)\r\n2. 병렬 실행:\r\n   - Task docs-researcher (각 아키텍처 best practices)\r\n   - Skill: agent-council (아키텍트 관점)\r\n3. Task tradeoff-analyzer (팀 규모, 배포 복잡도 등 고려)\r\n4. Task decision-synthesizer\r\n```\r\n\r\n## 주의사항\r\n\r\n1. **컨텍스트 제공**: 프로젝트 특성, 팀 규모, 기존 기술 스택 등 맥락 정보가 많을수록 정확한 분석 가능\r\n2. **평가 기준 확인**: 사용자에게 중요한 기준이 무엇인지 먼저 확인\r\n3. **신뢰도 표시**: 출처가 불분명하거나 오래된 정보는 명시\r\n4. **결론 먼저**: 항상 두괄식으로 결론부터 제시\r\n\r\n## 추가 리소스\r\n\r\n### 참고 파일\r\n- **`references/report-template.md`** - 상세 보고서 템플릿\r\n- **`references/evaluation-criteria.md`** - 평가 기준 가이드\r\n"
  },
  {
    "path": "plugins/dev/skills/tech-decision/references/evaluation-criteria.md",
    "content": "# 평가 기준 가이드\r\n\r\n의사결정 유형별 권장 평가 기준.\r\n\r\n## 라이브러리/프레임워크 선택\r\n\r\n| 기준 | 설명 | 측정 방법 |\r\n|------|------|-----------|\r\n| **성능** | 속도, 메모리 사용량, 번들 크기 | 벤치마크, 공식 문서 |\r\n| **학습 곡선** | 팀이 익히는 데 걸리는 시간 | 문서 품질, 튜토리얼 양, 개념 복잡도 |\r\n| **생태계** | 플러그인, 확장, 서드파티 도구 | npm 패키지 수, GitHub stars |\r\n| **커뮤니티** | 활성도, 질문 답변 속도 | Stack Overflow 질문 수, Discord/Slack 활성도 |\r\n| **유지보수성** | 장기 지원, 업데이트 빈도 | 릴리스 주기, 이슈 해결 속도 |\r\n| **타입 지원** | TypeScript 지원 수준 | 내장 타입, @types 품질 |\r\n| **문서화** | 공식 문서 품질 | 예제 풍부함, 최신성, 검색 가능성 |\r\n| **채택률** | 업계 사용 현황 | npm 다운로드, 기업 사용 사례 |\r\n\r\n### 가중치 예시\r\n\r\n**스타트업 (빠른 개발 중시)**:\r\n- 학습 곡선: 30%\r\n- 생태계: 25%\r\n- 문서화: 20%\r\n- 성능: 15%\r\n- 유지보수성: 10%\r\n\r\n**엔터프라이즈 (안정성 중시)**:\r\n- 유지보수성: 30%\r\n- 타입 지원: 20%\r\n- 커뮤니티: 20%\r\n- 성능: 15%\r\n- 문서화: 15%\r\n\r\n---\r\n\r\n## 아키텍처 패턴 결정\r\n\r\n| 기준 | 설명 | 측정 방법 |\r\n|------|------|-----------|\r\n| **확장성** | 부하 증가 시 대응 용이성 | 수평/수직 확장 가능 여부 |\r\n| **복잡도** | 구현 및 운영 복잡도 | 필요한 인프라, 학습 비용 |\r\n| **팀 규모 적합성** | 팀 크기에 맞는지 | Conway's Law 고려 |\r\n| **배포 용이성** | CI/CD 복잡도 | 파이프라인 단계 수 |\r\n| **장애 격리** | 부분 장애 시 전체 영향 | 독립 배포 가능 여부 |\r\n| **데이터 일관성** | 트랜잭션 처리 | ACID vs Eventually Consistent |\r\n| **운영 비용** | 인프라 및 인력 비용 | 서버 수, DevOps 필요 인력 |\r\n| **개발 속도** | 초기 개발 ~ MVP | 보일러플레이트, 설정 복잡도 |\r\n\r\n### 가중치 예시\r\n\r\n**초기 스타트업 (MVP)**:\r\n- 개발 속도: 35%\r\n- 복잡도: 25%\r\n- 운영 비용: 20%\r\n- 확장성: 10%\r\n- 기타: 10%\r\n\r\n**성장기 (스케일업)**:\r\n- 확장성: 30%\r\n- 장애 격리: 20%\r\n- 팀 규모 적합성: 20%\r\n- 배포 용이성: 15%\r\n- 운영 비용: 15%\r\n\r\n---\r\n\r\n## 구현 방식 결정\r\n\r\n| 기준 | 설명 | 측정 방법 |\r\n|------|------|-----------|\r\n| **구현 복잡도** | 코드 양, 난이도 | LoC, 추상화 수준 |\r\n| **테스트 용이성** | 단위/통합 테스트 작성 난이도 | 모킹 필요성, 의존성 |\r\n| **디버깅 용이성** | 문제 추적 난이도 | 로깅, 트레이싱 지원 |\r\n| **성능 특성** | 지연시간, 처리량 | 벤치마크 |\r\n| **리소스 사용** | CPU, 메모리, 네트워크 | 프로파일링 |\r\n| **기존 코드 호환** | 현재 아키텍처와 맞는지 | 리팩토링 필요량 |\r\n| **유지보수성** | 장기 관리 용이성 | 코드 가독성, 문서화 |\r\n\r\n---\r\n\r\n## 데이터베이스 선택\r\n\r\n| 기준 | 설명 | 측정 방법 
|\r\n|------|------|-----------|\r\n| **데이터 모델** | 관계형/문서형/그래프/키-값 | 요구사항 매칭 |\r\n| **쿼리 유연성** | 복잡한 쿼리 지원 | SQL/NoSQL 기능 |\r\n| **확장성** | 수평 확장 용이성 | 샤딩, 레플리케이션 |\r\n| **일관성** | ACID vs BASE | 트랜잭션 요구사항 |\r\n| **성능** | 읽기/쓰기 속도 | 벤치마크 |\r\n| **운영 복잡도** | 관리 오버헤드 | 백업, 모니터링, 마이그레이션 |\r\n| **비용** | 라이선스, 인프라 | TCO 계산 |\r\n| **에코시스템** | ORM, 드라이버, 도구 | 지원 언어/프레임워크 |\r\n\r\n---\r\n\r\n## 상황별 추천 기준\r\n\r\n### \"빠르게 MVP 만들어야 해\"\r\n우선순위: 학습 곡선 > 개발 속도 > 문서화 > 나머지\r\n\r\n### \"대규모 트래픽 예상\"\r\n우선순위: 성능 > 확장성 > 운영 비용 > 나머지\r\n\r\n### \"팀이 작아 (1-3명)\"\r\n우선순위: 복잡도 낮음 > 문서화 > 커뮤니티 > 나머지\r\n\r\n### \"엔터프라이즈 환경\"\r\n우선순위: 보안 > 유지보수성 > 타입 지원 > 나머지\r\n\r\n### \"레거시 시스템 통합\"\r\n우선순위: 기존 코드 호환 > 마이그레이션 용이성 > 나머지\r\n\r\n---\r\n\r\n## 신뢰도 평가 기준\r\n\r\n정보 출처별 신뢰도:\r\n\r\n| 출처 | 신뢰도 | 비고 |\r\n|------|--------|------|\r\n| 공식 문서 | 높음 | 정확하나 편향 가능 |\r\n| 벤치마크 (독립) | 높음 | 조건 확인 필요 |\r\n| GitHub Issues | 중간-높음 | 실제 사용 경험 |\r\n| Stack Overflow | 중간 | 날짜 확인 필요 |\r\n| Reddit/HN | 중간 | 다양한 관점, 노이즈 있음 |\r\n| 블로그 | 낮음-중간 | 작성자 배경 확인 |\r\n| 마케팅 자료 | 낮음 | 편향됨 |\r\n\r\n**신뢰도 높이는 방법**:\r\n- 여러 출처에서 동일한 정보 확인\r\n- 최신 날짜 우선\r\n- 실제 사용 경험 기반 의견 우선\r\n- 벤치마크는 조건/환경 확인\r\n"
  },
  {
    "path": "plugins/dev/skills/tech-decision/references/report-template.md",
    "content": "# 기술 의사결정 보고서 템플릿\r\n\r\n## 전체 구조\r\n\r\n```markdown\r\n# 기술 의사결정 보고서: [주제]\r\n\r\n**작성일**: YYYY-MM-DD\r\n**의사결정 유형**: [라이브러리 선택 | 아키텍처 결정 | 구현 방식 | 기술 스택]\r\n\r\n---\r\n\r\n## 1. 결론 (Executive Summary)\r\n\r\n**추천: [Option Name]**\r\n\r\n[1-2문장으로 핵심 추천 이유 요약]\r\n\r\n**신뢰도**: [높음 | 중간 | 낮음] - [신뢰도 판단 근거]\r\n\r\n---\r\n\r\n## 2. 의사결정 맥락\r\n\r\n### 2.1 문제 정의\r\n[무엇을 결정해야 하는지 명확히 기술]\r\n\r\n### 2.2 비교 대상\r\n- Option A: [이름] - [한 줄 설명]\r\n- Option B: [이름] - [한 줄 설명]\r\n- Option C: [이름] - [한 줄 설명]\r\n\r\n### 2.3 프로젝트 컨텍스트\r\n- **프로젝트 규모**: [소규모 | 중규모 | 대규모]\r\n- **팀 규모**: [N명]\r\n- **기존 기술 스택**: [관련 기술들]\r\n- **특수 요구사항**: [있다면 기술]\r\n\r\n---\r\n\r\n## 3. 평가 기준\r\n\r\n| 기준 | 가중치 | 설명 |\r\n|------|--------|------|\r\n| [기준 1] | [X%] | [왜 중요한지] |\r\n| [기준 2] | [X%] | [왜 중요한지] |\r\n| [기준 3] | [X%] | [왜 중요한지] |\r\n| [기준 4] | [X%] | [왜 중요한지] |\r\n| **합계** | **100%** | |\r\n\r\n---\r\n\r\n## 4. 옵션별 상세 분석\r\n\r\n### 4.1 Option A: [이름]\r\n\r\n**개요**: [2-3문장 설명]\r\n\r\n**장점**:\r\n- ✅ [장점 1]\r\n  - 출처: [공식 문서 | Reddit | HN | 전문가 의견 | 코드 분석]\r\n  - 신뢰도: [높음 | 중간 | 낮음]\r\n\r\n- ✅ [장점 2]\r\n  - 출처: [...]\r\n  - 신뢰도: [...]\r\n\r\n**단점**:\r\n- ❌ [단점 1]\r\n  - 출처: [...]\r\n  - 신뢰도: [...]\r\n\r\n**적합한 경우**:\r\n- [시나리오 1]\r\n- [시나리오 2]\r\n\r\n**부적합한 경우**:\r\n- [시나리오 1]\r\n- [시나리오 2]\r\n\r\n---\r\n\r\n### 4.2 Option B: [이름]\r\n[동일 구조로 반복]\r\n\r\n---\r\n\r\n### 4.3 Option C: [이름]\r\n[동일 구조로 반복]\r\n\r\n---\r\n\r\n## 5. 
종합 비교표\r\n\r\n### 5.1 기준별 점수 (5점 만점)\r\n\r\n| 기준 (가중치) | Option A | Option B | Option C |\r\n|---------------|----------|----------|----------|\r\n| [기준 1] (X%) | ⭐⭐⭐⭐ (4) | ⭐⭐⭐ (3) | ⭐⭐⭐⭐⭐ (5) |\r\n| [기준 2] (X%) | ⭐⭐⭐ (3) | ⭐⭐⭐⭐⭐ (5) | ⭐⭐ (2) |\r\n| [기준 3] (X%) | ⭐⭐⭐⭐ (4) | ⭐⭐⭐⭐ (4) | ⭐⭐⭐ (3) |\r\n| **가중 평균** | **X.X** | **X.X** | **X.X** |\r\n\r\n### 5.2 Quick Comparison\r\n\r\n| 측면 | Option A | Option B | Option C |\r\n|------|----------|----------|----------|\r\n| 학습 곡선 | 가파름 | 완만함 | 보통 |\r\n| 커뮤니티 | 매우 활발 | 성장 중 | 안정적 |\r\n| 성숙도 | 성숙 | 신생 | 성숙 |\r\n| 번들 크기 | 큼 | 작음 | 보통 |\r\n\r\n---\r\n\r\n## 6. 추천 근거\r\n\r\n### 6.1 핵심 근거\r\n\r\n1. **[근거 1 제목]**\r\n   - 설명: [상세 설명]\r\n   - 출처: [구체적 출처]\r\n\r\n2. **[근거 2 제목]**\r\n   - 설명: [상세 설명]\r\n   - 출처: [구체적 출처]\r\n\r\n3. **[근거 3 제목]**\r\n   - 설명: [상세 설명]\r\n   - 출처: [구체적 출처]\r\n\r\n### 6.2 프로젝트 맥락 기반 판단\r\n\r\n[현재 프로젝트 상황에 비추어 왜 이 선택이 적합한지 설명]\r\n\r\n---\r\n\r\n## 7. 리스크 및 주의사항\r\n\r\n### 7.1 채택 시 리스크\r\n\r\n| 리스크 | 영향도 | 발생 가능성 | 완화 방안 |\r\n|--------|--------|-------------|-----------|\r\n| [리스크 1] | [높음/중간/낮음] | [높음/중간/낮음] | [방안] |\r\n| [리스크 2] | [...] | [...] | [...] |\r\n\r\n### 7.2 마이그레이션 고려사항\r\n\r\n- [고려사항 1]\r\n- [고려사항 2]\r\n\r\n### 7.3 장기적 고려사항\r\n\r\n- [고려사항 1]\r\n- [고려사항 2]\r\n\r\n---\r\n\r\n## 8. 대안 시나리오\r\n\r\n### 8.1 만약 [조건 A]라면?\r\n→ [Option Y]가 더 적합할 수 있음. 이유: [...]\r\n\r\n### 8.2 만약 [조건 B]라면?\r\n→ [Option Z] 고려. 이유: [...]\r\n\r\n---\r\n\r\n## 9. 참고 출처\r\n\r\n### 공식 문서\r\n- [링크 1]\r\n- [링크 2]\r\n\r\n### 커뮤니티 토론\r\n- [Reddit/HN 링크 1]\r\n- [Reddit/HN 링크 2]\r\n\r\n### 블로그/아티클\r\n- [링크 1]\r\n- [링크 2]\r\n\r\n### 벤치마크/비교 자료\r\n- [링크 1]\r\n- [링크 2]\r\n\r\n---\r\n\r\n## 10. 결론 재확인\r\n\r\n**최종 추천: [Option Name]**\r\n\r\n[마지막으로 한 번 더 핵심 이유 요약]\r\n\r\n**다음 단계**:\r\n1. [구체적 액션 아이템 1]\r\n2. [구체적 액션 아이템 2]\r\n3. [구체적 액션 아이템 3]\r\n```\r\n\r\n## 간소화 버전 (Quick Decision)\r\n\r\n빠른 의사결정이 필요한 경우:\r\n\r\n```markdown\r\n# Quick Decision: [주제]\r\n\r\n## 결론\r\n**추천: [Option Name]** - [한 줄 이유]\r\n\r\n## 비교\r\n| | Option A | Option B |\r\n|---|----------|----------|\r\n| 장점 | [1-2개] | [1-2개] |\r\n| 단점 | [1-2개] | [1-2개] |\r\n| 적합 | [시나리오] | [시나리오] |\r\n\r\n## 핵심 근거\r\n1. [근거 1]\r\n2. [근거 2]\r\n\r\n## 주의\r\n- [주의사항]\r\n```\r\n"
  },
  {
    "path": "plugins/doubt/.claude-plugin/plugin.json",
    "content": "{\r\n  \"name\": \"doubt\",\r\n  \"version\": \"1.0.0\",\r\n  \"description\": \"Force Claude to re-validate when you have doubts (!doubt)\",\r\n  \"author\": {\r\n    \"name\": \"team-attention\"\r\n  },\r\n  \"keywords\": [\"validation\", \"hallucination\", \"trust\", \"verification\", \"doubt\"]\r\n}\r\n"
  },
  {
    "path": "plugins/doubt/README.md",
    "content": "# doubt\r\n\r\nForce Claude to re-validate its responses when you have doubts.\r\n\r\n## Usage\r\n\r\nAdd `!rv` anywhere in your prompt:\r\n\r\n```\r\nAnalyze this code !rv\r\n```\r\n\r\nWhen Claude tries to stop, it will be blocked and forced to re-verify everything.\r\n\r\n## How It Works\r\n\r\n1. **doubt-detector** (UserPromptSubmit): Detects `!rv` and activates doubt mode\r\n2. **doubt-validator** (Stop): Blocks Claude and demands re-verification\r\n\r\n## Why?\r\n\r\nSometimes Claude hallucinates. This forces a second look when you're not sure you can trust the response.\r\n\r\n## Why `!rv` instead of `!doubt`?\r\n\r\nThe keyword `doubt` itself affects Claude's behavior - seeing \"doubt\" in the prompt makes Claude start doubting from the beginning. Using a neutral keyword like `!rv` (re-validate) lets Claude work normally first, then verify at the end.\r\n"
  },
  {
    "path": "plugins/doubt/hooks/hooks.json",
    "content": "{\r\n  \"description\": \"Doubt mode - Type !doubt to force Claude re-validation\",\r\n  \"hooks\": {\r\n    \"UserPromptSubmit\": [\r\n      {\r\n        \"matcher\": \"*\",\r\n        \"hooks\": [\r\n          {\r\n            \"type\": \"command\",\r\n            \"command\": \"bash ${CLAUDE_PLUGIN_ROOT}/hooks/scripts/doubt-detector.sh\"\r\n          }\r\n        ]\r\n      }\r\n    ],\r\n    \"Stop\": [\r\n      {\r\n        \"matcher\": \"*\",\r\n        \"hooks\": [\r\n          {\r\n            \"type\": \"command\",\r\n            \"command\": \"bash ${CLAUDE_PLUGIN_ROOT}/hooks/scripts/doubt-validator.sh\"\r\n          }\r\n        ]\r\n      }\r\n    ]\r\n  }\r\n}\r\n"
  },
  {
    "path": "plugins/doubt/hooks/scripts/doubt-detector.sh",
    "content": "#!/bin/bash\n# !rv keyword detection -> activate doubt mode\n\nSTATE_DIR=\"$HOME/.claude/.hook-state\"\nmkdir -p \"$STATE_DIR\"\n\n# Read JSON from stdin\ninput=$(cat)\nprompt=$(echo \"$input\" | jq -r '.prompt // empty')\nsession_id=$(echo \"$input\" | jq -r '.session_id // empty')\n\n# Fallback if session_id missing\nif [ -z \"$session_id\" ]; then\n    session_id=\"unknown\"\nfi\n\nSTATE_FILE=\"$STATE_DIR/doubt-mode-$session_id\"\n\n# Detect !rv keyword\nif [[ \"$prompt\" == *\"!rv\"* ]]; then\n    echo \"enabled\" > \"$STATE_FILE\"\n    # Output JSON with additionalContext to tell Claude to ignore the keyword\n    cat << 'EOF'\n{\n  \"hookSpecificOutput\": {\n    \"hookEventName\": \"UserPromptSubmit\",\n    \"additionalContext\": \"Note: Ignore the '!rv' keyword in the prompt - it's a meta-command for the system, not part of the actual request.\"\n  }\n}\nEOF\nfi\n\nexit 0\n"
  },
  {
    "path": "plugins/doubt/hooks/scripts/doubt-validator.sh",
    "content": "#!/bin/bash\n# If doubt mode is active, request Claude to re-validate\n\nSTATE_DIR=\"$HOME/.claude/.hook-state\"\n\n# Read JSON from stdin\ninput=$(cat)\nsession_id=$(echo \"$input\" | jq -r '.session_id // empty')\n\n# Fallback if session_id missing\nif [ -z \"$session_id\" ]; then\n    session_id=\"unknown\"\nfi\n\nSTATE_FILE=\"$STATE_DIR/doubt-mode-$session_id\"\n\nif [ -f \"$STATE_FILE\" ]; then\n    # Delete state file to run only once\n    rm -f \"$STATE_FILE\"\n\n    # Block decision + strong message\n    cat << 'EOF'\n{\n  \"decision\": \"block\",\n  \"reason\": \"WAIT! You are lying or hallucinating! Go back and verify EVERYTHING you just said. Check the actual code, re-read the files, and make sure you're not making things up. I don't trust you yet!\"\n}\nEOF\n    exit 0\nfi\n\n# Normal exit if not in doubt mode\nexit 0\n"
  },
  {
    "path": "plugins/fetch-tweet/.claude-plugin/plugin.json",
    "content": "{\n  \"name\": \"fetch-tweet\",\n  \"version\": \"0.1.0\",\n  \"description\": \"Fetch full tweet text, author info, and engagement data from X/Twitter URLs without authentication\",\n  \"author\": {\n    \"name\": \"Team Attention\",\n    \"url\": \"https://github.com/team-attention\"\n  }\n}\n"
  },
  {
    "path": "plugins/fetch-tweet/README.md",
    "content": "# Fetch Tweet\n\nFetch full tweet text, author info, and engagement data from X/Twitter URLs — no authentication, no JavaScript, no API key required.\n\nPowered by the open-source [FxEmbed](https://github.com/FxEmbed/FxEmbed) project's `api.fxtwitter.com` endpoint.\n\n## Features\n\n- **Zero dependencies** — Python standard library only\n- **No auth required** — works with any public tweet\n- **Full tweet data** — text (with expanded URLs), author, engagement, media, quote tweets\n- **Pipeline-friendly** — `--json` mode for programmatic use\n- **WebFetch fallback** — works even without script execution\n\n## Usage\n\n```\n트윗 가져와 https://x.com/garrytan/status/2020072098635665909\n트윗 번역해줘 https://x.com/sama/status/...\n이 트윗 정리해줘 https://twitter.com/...\n```\n\nOr English:\n- \"fetch this tweet\"\n- \"translate this tweet\"\n- \"what does this tweet say\"\n\n## Direct Script Usage\n\n```bash\n# Formatted output\npython scripts/fetch_tweet.py https://x.com/garrytan/status/2020072098635665909\n\n# JSON output (for piping)\npython scripts/fetch_tweet.py https://x.com/garrytan/status/2020072098635665909 --json\n```\n\nSupported URL formats: `x.com`, `twitter.com`, `fxtwitter.com`, `fixupx.com`\n\n## API Response Fields\n\n| Field | Description |\n|-------|-------------|\n| `tweet.text` | Tweet body (URLs expanded) |\n| `tweet.author` | Author info (name, screen_name, bio, followers) |\n| `tweet.likes/retweets/replies/bookmarks/views` | Engagement metrics |\n| `tweet.created_at` | Timestamp |\n| `tweet.media` | Attached media (photos, videos) |\n| `tweet.quote` | Quoted tweet (same structure) |\n| `tweet.lang` | Language code |\n\n## Limitations\n\n- Cannot fetch tweets from private accounts\n- Cannot fetch deleted tweets\n- Rate limited by FxEmbed server policy (no issue under normal use)\n\n## Credits\n\nUses [FxEmbed](https://github.com/FxEmbed/FxEmbed) — the same backend that powers `fxtwitter.com` link previews on Discord/Telegram.\n"
  },
  {
    "path": "plugins/fetch-tweet/skills/fetch-tweet/SKILL.md",
    "content": "---\nname: fetch-tweet\ndescription: This skill should be used when the user asks to \"트윗 가져와\", \"트윗 번역\", \"X 게시글 읽어줘\", \"tweet fetch\", \"트윗 내용\", \"트윗 원문\", or provides an X/Twitter URL (x.com, twitter.com) and wants to read, translate, or analyze the tweet content. Also useful when other skills need to fetch tweet text programmatically.\n---\n\n# Fetch Tweet\n\nX/Twitter URL에서 트윗 원문, 작성자 정보, 인게이지먼트 데이터를 가져오는 스킬.\nFxEmbed 오픈소스 프로젝트의 API (`api.fxtwitter.com`)를 활용하여 JavaScript 없이 트윗 데이터를 추출한다.\n\n## How It Works\n\nX/Twitter URL의 도메인을 `api.fxtwitter.com`으로 변환하면 JSON으로 트윗 전체 데이터를 반환한다.\n\n```\nhttps://x.com/user/status/123456\n  → https://api.fxtwitter.com/user/status/123456\n```\n\n## Script\n\n`scripts/fetch_tweet.py` - 표준 라이브러리만 사용, 외부 의존성 없음.\n\n```bash\n# 기본 사용 (포맷팅된 출력)\npython scripts/fetch_tweet.py https://x.com/garrytan/status/2020072098635665909\n\n# JSON 출력 (프로그래밍 활용)\npython scripts/fetch_tweet.py https://x.com/garrytan/status/2020072098635665909 --json\n```\n\n지원 URL 형식: `x.com`, `twitter.com`, `fxtwitter.com`, `fixupx.com`\n\n## API Response Fields\n\n| 필드 | 설명 |\n|------|------|\n| `tweet.text` | 트윗 본문 (URL 확장됨) |\n| `tweet.author` | 작성자 (name, screen_name, bio, followers) |\n| `tweet.likes/retweets/replies/bookmarks/views` | 인게이지먼트 |\n| `tweet.created_at` | 작성 일시 |\n| `tweet.media` | 첨부 미디어 (photos, videos) |\n| `tweet.quote` | 인용 트윗 (동일 구조) |\n| `tweet.lang` | 언어 코드 |\n\n## Workflow\n\n### 단일 트윗 가져오기\n\n1. URL에서 screen_name과 status_id를 추출\n2. `scripts/fetch_tweet.py <url>` 실행\n3. 결과를 사용자에게 표시하거나 번역\n\n### 번역 요청 시\n\n1. 스크립트로 원문 fetch\n2. 가져온 텍스트를 한국어로 번역하여 제공\n3. 
인게이지먼트 수치도 함께 표시\n\n### 다른 스킬과 연동\n\nContents Hub 등에서 수집한 X URL 목록을 일괄 처리할 때:\n\n```bash\n# JSON 출력으로 파이프라인 연동\npython scripts/fetch_tweet.py <url> --json | python3 -c \"import sys,json; print(json.load(sys.stdin)['tweet']['text'])\"\n```\n\n## WebFetch Fallback\n\n스크립트 실행이 어려운 경우 WebFetch 도구로 직접 API 호출 가능:\n\n```\nURL: https://api.fxtwitter.com/{screen_name}/status/{status_id}\nPrompt: \"Extract the full tweet text and author name\"\n```\n\n## Limitations\n\n- 비공개 계정 트윗은 조회 불가\n- 삭제된 트윗은 조회 불가\n- API rate limit은 FxEmbed 서버 정책에 따름 (일반 사용 수준에서는 문제 없음)\n"
  },
  {
    "path": "plugins/fetch-tweet/skills/fetch-tweet/scripts/fetch_tweet.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Fetch tweet content via FxEmbed API (api.fxtwitter.com).\n\nUsage:\n    python fetch_tweet.py <x_url> [--json] [--translate <lang>]\n\nExamples:\n    python fetch_tweet.py https://x.com/garrytan/status/2020072098635665909\n    python fetch_tweet.py https://twitter.com/karpathy/status/123456 --json\n    python fetch_tweet.py https://x.com/someone/status/123 --translate ko\n\"\"\"\n\nimport argparse\nimport json\nimport re\nimport sys\nimport urllib.request\nimport urllib.error\n\n\ndef parse_x_url(url: str) -> tuple[str, str] | None:\n    \"\"\"Extract screen_name and status_id from X/Twitter URL.\"\"\"\n    patterns = [\n        r\"(?:https?://)?(?:www\\.)?(?:x\\.com|twitter\\.com|fxtwitter\\.com|fixupx\\.com)/(\\w+)/status/(\\d+)\",\n    ]\n    for pattern in patterns:\n        match = re.search(pattern, url)\n        if match:\n            return match.group(1), match.group(2)\n    return None\n\n\ndef fetch_tweet(screen_name: str, status_id: str) -> dict:\n    \"\"\"Fetch tweet data from FxEmbed API.\"\"\"\n    api_url = f\"https://api.fxtwitter.com/{screen_name}/status/{status_id}\"\n    req = urllib.request.Request(api_url, headers={\"User-Agent\": \"fetch-tweet/1.0\"})\n    with urllib.request.urlopen(req, timeout=10) as resp:\n        return json.loads(resp.read().decode())\n\n\ndef format_number(n: int) -> str:\n    \"\"\"Format large numbers (e.g., 1234 -> 1.2K).\"\"\"\n    if n >= 1_000_000:\n        return f\"{n / 1_000_000:.1f}M\"\n    if n >= 1_000:\n        return f\"{n / 1_000:.1f}K\"\n    return str(n)\n\n\ndef format_tweet(data: dict) -> str:\n    \"\"\"Format tweet data for display.\"\"\"\n    tweet = data.get(\"tweet\", {})\n    author = tweet.get(\"author\", {})\n\n    lines = []\n    # Author\n    name = author.get(\"name\", \"\")\n    handle = author.get(\"screen_name\", \"\")\n    bio = author.get(\"description\", \"\")\n    followers = author.get(\"followers\", 0)\n    lines.append(f\"@{handle} 
({name})\")\n    if bio:\n        lines.append(f\"  Bio: {bio}\")\n    lines.append(f\"  Followers: {format_number(followers)}\")\n    lines.append(\"\")\n\n    # Tweet text\n    lines.append(tweet.get(\"text\", \"\"))\n    lines.append(\"\")\n\n    # Engagement\n    likes = tweet.get(\"likes\", 0)\n    retweets = tweet.get(\"retweets\", 0)\n    replies = tweet.get(\"replies\", 0)\n    bookmarks = tweet.get(\"bookmarks\", 0)\n    views = tweet.get(\"views\", 0)\n    lines.append(\n        f\"Likes: {format_number(likes)}  \"\n        f\"RTs: {format_number(retweets)}  \"\n        f\"Replies: {format_number(replies)}  \"\n        f\"Bookmarks: {format_number(bookmarks)}  \"\n        f\"Views: {format_number(views)}\"\n    )\n\n    # Date\n    created = tweet.get(\"created_at\", \"\")\n    if created:\n        lines.append(f\"Date: {created}\")\n\n    # Media\n    media = tweet.get(\"media\", {})\n    if media:\n        photos = media.get(\"photos\", [])\n        videos = media.get(\"videos\", [])\n        if photos:\n            lines.append(f\"\\nMedia: {len(photos)} photo(s)\")\n            for p in photos:\n                lines.append(f\"  {p.get('url', '')}\")\n        if videos:\n            lines.append(f\"\\nMedia: {len(videos)} video(s)\")\n            for v in videos:\n                lines.append(f\"  {v.get('url', '')}\")\n\n    # Quote tweet\n    quote = tweet.get(\"quote\")\n    if quote:\n        q_author = quote.get(\"author\", {})\n        lines.append(f\"\\n--- Quote: @{q_author.get('screen_name', '')} ---\")\n        lines.append(quote.get(\"text\", \"\"))\n        lines.append(f\"Likes: {format_number(quote.get('likes', 0))}  Views: {format_number(quote.get('views', 0))}\")\n\n    return \"\\n\".join(lines)\n\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"Fetch tweet via FxEmbed API\")\n    parser.add_argument(\"url\", help=\"X/Twitter URL\")\n    parser.add_argument(\"--json\", action=\"store_true\", help=\"Output raw 
JSON\")\n    args = parser.parse_args()\n\n    parsed = parse_x_url(args.url)\n    if not parsed:\n        print(f\"Error: Invalid X/Twitter URL: {args.url}\", file=sys.stderr)\n        sys.exit(1)\n\n    screen_name, status_id = parsed\n\n    try:\n        data = fetch_tweet(screen_name, status_id)\n    except urllib.error.HTTPError as e:\n        print(f\"Error: API returned {e.code} for @{screen_name}/status/{status_id}\", file=sys.stderr)\n        sys.exit(1)\n    except urllib.error.URLError as e:\n        print(f\"Error: Network error - {e.reason}\", file=sys.stderr)\n        sys.exit(1)\n\n    if data.get(\"code\") != 200:\n        print(f\"Error: {data.get('message', 'Unknown error')}\", file=sys.stderr)\n        sys.exit(1)\n\n    if args.json:\n        print(json.dumps(data, ensure_ascii=False, indent=2))\n    else:\n        print(format_tweet(data))\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "plugins/gmail/.claude-plugin/plugin.json",
    "content": "{\n  \"name\": \"gmail\",\n  \"version\": \"1.0.0\",\n  \"description\": \"Gmail integration with multi-account support - read, search, send, and manage emails\"\n}\n"
  },
  {
    "path": "plugins/gmail/README.md",
    "content": "# Gmail Plugin for Claude Code\n\nA comprehensive Gmail integration plugin for Claude Code that enables multi-account email management through the Gmail API. Read, search, send, and organize emails directly from your Claude Code sessions.\n\n## Overview\n\nThe Gmail plugin provides a complete email management solution for Claude Code, supporting multiple Google accounts with features including:\n\n- **Multi-Account Support**: Manage personal, work, and project-specific Gmail accounts\n- **Full Email Operations**: List, read, send, reply, and organize emails\n- **Smart Caching**: Local caching for optimized API usage\n- **Rate Limiting**: Built-in quota management to prevent API throttling\n- **Batch Operations**: Efficient bulk operations for label management and cleanup\n- **5-Step Sending Workflow**: Structured email composition with test sends and user confirmation\n\n## Features\n\n### Core Email Operations\n- List and search emails with Gmail's powerful query syntax\n- Read individual messages and entire threads\n- Send new emails with plain text or HTML content\n- Reply to existing conversations\n- Attach files to outgoing emails\n- Save emails as drafts\n\n### Organization & Management\n- Create, update, and delete labels\n- Mark messages as read/unread\n- Star/unstar messages\n- Archive and trash messages\n- Batch modify labels across multiple messages\n\n### Advanced Features\n- **Local Caching**: Reduces API calls by caching message lists and content\n- **Quota Management**: Tracks usage against Gmail API limits (250 units/second)\n- **Exponential Backoff**: Automatic retry with intelligent delays for rate limiting\n- **Batch Processing**: Efficient bulk operations for high-volume tasks\n\n## Prerequisites\n\nBefore using this plugin, you need:\n\n1. **Python 3.10+** with `uv` package manager\n2. **Google Cloud Project** with Gmail API enabled\n3. 
**OAuth 2.0 credentials** (Desktop application type)\n\n### Required Google OAuth Scopes\n- `gmail.modify` - Read, modify, and delete emails\n- `gmail.send` - Send emails on behalf of the user\n- `gmail.labels` - Manage email labels\n\n## Setup Guide\n\n### Step 1: Create Google Cloud Project\n\n1. Go to [Google Cloud Console](https://console.cloud.google.com)\n2. Click the project selector at the top and select \"New Project\"\n3. Enter a project name (e.g., `gmail-skill`) and click \"Create\"\n\n### Step 2: Enable Gmail API\n\n1. Navigate to \"APIs & Services\" > \"Library\" in the left menu\n2. Search for \"Gmail API\"\n3. Click the \"Enable\" button\n\n### Step 3: Configure OAuth Consent Screen\n\n1. Go to \"APIs & Services\" > \"OAuth consent screen\"\n2. Select \"External\" for User Type and click \"Create\"\n3. Fill in the required fields:\n   - App name: `Gmail Skill`\n   - User support email: Your email address\n   - Developer contact: Your email address\n4. Click \"Save and Continue\"\n5. Add the following scopes:\n   - `https://www.googleapis.com/auth/gmail.modify`\n   - `https://www.googleapis.com/auth/gmail.send`\n   - `https://www.googleapis.com/auth/gmail.labels`\n6. Add your Gmail address as a test user\n7. Click \"Save and Continue\"\n\n### Step 4: Create OAuth Client ID\n\n1. Go to \"APIs & Services\" > \"Credentials\"\n2. Click \"Create Credentials\" > \"OAuth client ID\"\n3. Select \"Desktop app\" as the application type\n4. Enter a name (e.g., `Gmail Skill Client`)\n5. Click \"Create\"\n6. 
Download the JSON file\n\n### Step 5: Configure the Plugin\n\n```bash\n# Navigate to the skill directory\ncd .claude/skills/gmail\n\n# Move the downloaded credentials file\nmv ~/Downloads/client_secret_*.json references/credentials.json\n\n# Copy the default accounts configuration\ncp assets/accounts.default.yaml accounts.yaml\n\n# Edit accounts.yaml with your account information\n```\n\n### Step 6: Authenticate Accounts\n\n```bash\n# Authenticate each account\nuv run python scripts/setup_auth.py --account personal\nuv run python scripts/setup_auth.py --account work\n\n# Verify registered accounts\nuv run python scripts/setup_auth.py --list\n```\n\nWhen the browser opens:\n1. Log in to your Google account\n2. If you see \"This app isn't verified\", click \"Advanced\" > \"Continue\"\n3. Approve all permission requests\n\n## Usage Examples\n\n### List Messages\n\n```bash\n# List recent 10 emails\nuv run python scripts/list_messages.py --account work --max 10\n\n# List unread emails\nuv run python scripts/list_messages.py --account work --query \"is:unread\"\n\n# Search by sender\nuv run python scripts/list_messages.py --account work --query \"from:user@example.com\"\n\n# Search by date range\nuv run python scripts/list_messages.py --account work --query \"after:2024/01/01 before:2024/12/31\"\n\n# Filter by label\nuv run python scripts/list_messages.py --account work --labels INBOX,IMPORTANT\n\n# Include full message content\nuv run python scripts/list_messages.py --account work --full\n\n# Output as JSON\nuv run python scripts/list_messages.py --account work --json\n```\n\n### Read Messages\n\n```bash\n# Read a specific message\nuv run python scripts/read_message.py --account work --id <message_id>\n\n# Read an entire thread\nuv run python scripts/read_message.py --account work --thread <thread_id>\n\n# Save attachments to a directory\nuv run python scripts/read_message.py --account work --id <message_id> --save-attachments ./downloads\n```\n\n### Send 
Messages\n\n```bash\n# Send a new email\nuv run python scripts/send_message.py --account work \\\n    --to \"user@example.com\" \\\n    --subject \"Hello\" \\\n    --body \"Email content here.\"\n\n# Send HTML email\nuv run python scripts/send_message.py --account work \\\n    --to \"user@example.com\" \\\n    --subject \"Announcement\" \\\n    --body \"<h1>Title</h1><p>Content</p>\" \\\n    --html\n\n# Send with attachments\nuv run python scripts/send_message.py --account work \\\n    --to \"user@example.com\" \\\n    --subject \"File Transfer\" \\\n    --body \"Please check the attachments.\" \\\n    --attach file1.pdf,file2.xlsx\n\n# Reply to a message\nuv run python scripts/send_message.py --account work \\\n    --to \"user@example.com\" \\\n    --subject \"Re: Original Subject\" \\\n    --body \"Reply content\" \\\n    --reply-to <message_id> \\\n    --thread <thread_id>\n\n# Save as draft\nuv run python scripts/send_message.py --account work \\\n    --to \"user@example.com\" \\\n    --subject \"Draft Email\" \\\n    --body \"Draft content\" \\\n    --draft\n```\n\n### Manage Labels and Messages\n\n```bash\n# List all labels\nuv run python scripts/manage_labels.py --account work list-labels\n\n# Create a new label\nuv run python scripts/manage_labels.py --account work create-label --name \"Project/A\"\n\n# Mark as read\nuv run python scripts/manage_labels.py --account work mark-read --id <message_id>\n\n# Star/unstar messages\nuv run python scripts/manage_labels.py --account work star --id <message_id>\nuv run python scripts/manage_labels.py --account work unstar --id <message_id>\n\n# Archive a message\nuv run python scripts/manage_labels.py --account work archive --id <message_id>\n\n# Move to trash\nuv run python scripts/manage_labels.py --account work trash --id <message_id>\nuv run python scripts/manage_labels.py --account work untrash --id <message_id>\n\n# Modify labels\nuv run python scripts/manage_labels.py --account work modify --id <message_id> \\\n 
   --add-labels \"Label_123,STARRED\" --remove-labels \"INBOX\"\n\n# List drafts\nuv run python scripts/manage_labels.py --account work list-drafts\n\n# Send a draft\nuv run python scripts/manage_labels.py --account work send-draft --draft-id <draft_id>\n\n# View profile information\nuv run python scripts/manage_labels.py --account work profile\n```\n\n## 5-Step Email Sending Workflow\n\nWhen Claude Code sends emails, it follows a structured 5-step workflow to ensure accuracy and prevent mistakes:\n\n| Step | Task | Key Action |\n|------|------|------------|\n| 1 | **Gather Context** | Run parallel exploration tasks: recipient info, related projects, background context |\n| 2 | **Check Previous Conversations** | Search `\"to:recipient OR from:recipient newer_than:90d\"` and ask user about thread selection |\n| 3 | **Draft Email** | Compose draft based on context and templates, ask user for feedback |\n| 4 | **Test Send** | Send `[TEST]` email to user's own address for review |\n| 5 | **Actual Send** | Send to recipient after user confirmation |\n\n### Workflow Example: \"Send a meeting email to John\"\n\n1. **Create 5 Tasks** using TaskCreate\n2. **Step 1**: Run parallel Explore tasks\n   - Search for John's contact info in `partners/`, `projects/`, `context.md`\n   - Search for meeting context (calendar, recent notes)\n3. **Step 2**: Search `\"to:john@company.com OR from:john@company.com\"`\n   - If previous conversation exists, ask user whether to reply or create new email\n4. **Step 3**: Draft email using appropriate template from `assets/email-templates.md`\n   - Ask user to review and approve\n5. **Step 4**: Test send to user's own email address\n   - Request confirmation after user reviews\n6. 
**Step 5**: Send to John\n   - Report completion\n\n**Signature**: All outgoing emails include the signature:\n```\n---\nSent with Claude Code\n```\n\n## accounts.yaml Configuration\n\nThe `accounts.yaml` file stores metadata about your Gmail accounts:\n\n```yaml\n# Gmail Account Settings\n# Token files are stored separately in accounts/{name}.json\n\naccounts:\n  # Personal Gmail account\n  personal:\n    email: your-personal@gmail.com\n    description: Personal Gmail\n\n  # Work/Business account\n  work:\n    email: your-work@company.com\n    description: Work account\n\n  # Additional account example\n  project:\n    email: project@domain.com\n    description: For specific project\n```\n\nAfter editing `accounts.yaml`, authenticate each account:\n```bash\nuv run python scripts/setup_auth.py --account personal\nuv run python scripts/setup_auth.py --account work\n```\n\n## Available Scripts\n\n### Main Scripts\n\n| Script | Description |\n|--------|-------------|\n| `setup_auth.py` | OAuth authentication setup for new accounts |\n| `list_messages.py` | List and search emails with various filters |\n| `read_message.py` | Read individual messages or entire threads |\n| `send_message.py` | Send new emails, replies, or save as drafts |\n| `manage_labels.py` | Label management and message organization |\n| `gmail_client.py` | Core Gmail API client library |\n\n### Core Modules (scripts/core/)\n\n| Module | Description |\n|--------|-------------|\n| `quota_manager.py` | Gmail API quota tracking and rate limiting |\n| `retry_handler.py` | Exponential backoff for API error handling |\n| `cache_manager.py` | Local caching for API response optimization |\n| `batch_processor.py` | Efficient bulk operations for multiple messages |\n\n## Gmail Search Query Examples\n\n### Basic Queries\n\n| Query | Description |\n|-------|-------------|\n| `from:user@example.com` | From specific sender |\n| `to:user@example.com` | To specific recipient |\n| `subject:project` | Contains word in 
subject |\n| `is:unread` | Unread messages |\n| `is:starred` | Starred messages |\n| `is:important` | Marked as important |\n| `has:attachment` | Has attachments |\n| `filename:pdf` | PDF attachments |\n\n### Date Filters\n\n| Query | Description |\n|-------|-------------|\n| `after:2024/01/01` | After specific date |\n| `before:2024/12/31` | Before specific date |\n| `older_than:7d` | Older than 7 days |\n| `newer_than:1d` | Within last day |\n\n### Location Filters\n\n| Query | Description |\n|-------|-------------|\n| `in:inbox` | In inbox |\n| `in:sent` | In sent mail |\n| `in:drafts` | In drafts |\n| `in:trash` | In trash |\n| `label:work` | Has specific label |\n\n### Compound Query Examples\n\n```bash\n# Unread from specific sender\nfrom:boss@company.com is:unread\n\n# Attachments in last 7 days\nhas:attachment newer_than:7d\n\n# Excel files in date range\nhas:attachment filename:xlsx after:2024/01/01 before:2024/12/31\n\n# Important unread emails\nis:unread is:important\n\n# Starred with specific label\nlabel:projects is:starred\n```\n\n## Troubleshooting\n\n### \"credentials.json file not found\"\n\nEnsure you have:\n1. Downloaded the OAuth client ID JSON from Google Cloud Console\n2. Saved it to `references/credentials.json`\n\n### \"Token has expired\"\n\nTokens auto-refresh, but if that fails:\n```bash\nuv run python scripts/setup_auth.py --account <name>\n```\n\n### \"This app isn't verified\"\n\nThis is normal for personal OAuth apps. Click \"Advanced\" > \"Continue\" during authentication.\n\n### \"Insufficient permissions\"\n\nEnsure these scopes are enabled in your OAuth consent screen:\n- `gmail.modify`\n- `gmail.send`\n- `gmail.labels`\n\n### \"Rate limit exceeded\"\n\nThe plugin includes built-in rate limiting. If you hit limits:\n- Wait a few seconds before retrying\n- The exponential backoff will handle automatic retries\n- Check quota status with `get_quota_status()` method\n\n### \"Account not found\"\n\n1. 
Verify the account exists in `accounts.yaml`\n2. Check that the token file exists in `accounts/{name}.json`\n3. Re-run authentication if needed\n\n## Security Notes\n\n### Credential Storage\n- **credentials.json**: Contains your OAuth client ID and secret. Keep this secure and never commit to version control.\n- **accounts/*.json**: Contains refresh tokens for each account. These are gitignored by default.\n- **accounts.yaml**: Contains only email addresses and descriptions (no secrets).\n\n### Best Practices\n1. Add `references/credentials.json` and `accounts/` to your `.gitignore`\n2. Never share or commit token files\n3. Use separate Google Cloud projects for development and production\n4. Regularly review OAuth consent screen test users\n5. Revoke access from [Google Account Security](https://myaccount.google.com/permissions) if needed\n\n### Data Privacy\n- Emails are accessed only when explicitly requested\n- Caching is local to your machine\n- No data is sent to external services beyond the Gmail API\n- Test sends go to your own email address for review\n\n## File Structure\n\n```\nskills/gmail/\n├── SKILL.md                    # Skill configuration for Claude Code\n├── accounts.yaml               # Account metadata (emails, descriptions)\n├── scripts/\n│   ├── gmail_client.py         # Core Gmail API client\n│   ├── list_messages.py        # List/search messages CLI\n│   ├── read_message.py         # Read messages CLI\n│   ├── send_message.py         # Send messages CLI\n│   ├── manage_labels.py        # Label management CLI\n│   ├── setup_auth.py           # OAuth setup CLI\n│   └── core/\n│       ├── __init__.py\n│       ├── batch_processor.py  # Bulk operations\n│       ├── cache_manager.py    # Local caching\n│       ├── quota_manager.py    # Rate limiting\n│       └── retry_handler.py    # Error handling\n├── references/\n│   ├── credentials.json        # OAuth Client ID (gitignored)\n│   ├── setup-guide.md          # Detailed setup instructions\n│   
├── cli-usage.md            # CLI command reference\n│   └── search-queries.md       # Gmail query syntax reference\n├── assets/\n│   ├── accounts.default.yaml   # Default accounts template\n│   ├── email-templates.md      # Email body templates\n│   └── signatures.md           # Signature templates\n└── accounts/                   # Per-account tokens (gitignored)\n    ├── personal.json\n    └── work.json\n```\n\n## Environment Variables\n\n| Variable | Default | Description |\n|----------|---------|-------------|\n| `GMAIL_SKILL_PATH` | Auto-detected | Skill root path |\n| `GMAIL_TIMEOUT` | `30` | API request timeout (seconds) |\n| `GMAIL_CACHE_DIR` | `.cache/gmail` | Cache directory location |\n| `GMAIL_ENABLE_CACHE` | `true` | Enable/disable caching |\n| `GMAIL_ENABLE_QUOTA` | `true` | Enable/disable quota management |\n\n## License\n\nThis plugin is part of the [plugins-for-claude-natives](https://github.com/team-attention/plugins-for-claude-natives) project.\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/.gitignore",
    "content": "# OAuth tokens (contain refresh tokens)\naccounts/\n\n# OAuth client credentials\nreferences/credentials.json\n\n# Virtual environment\n.venv/\n\n# Python cache\n__pycache__/\n*.pyc\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/SKILL.md",
    "content": "---\nname: gmail\ndescription: This skill should be used when the user asks to \"check email\", \"read emails\", \"send email\", \"reply to email\", \"search inbox\", or manages Gmail. Supports multi-account Gmail integration for reading, searching, sending, and label management.\n---\n\n# Gmail Skill\n\nManage emails through Gmail API - read, search, send, and organize across multiple Google accounts.\n\n## Account Setup\n\n**Before running any command, read `accounts.yaml` to check registered accounts.**\n\n> If `accounts.yaml` is missing or empty → Read `references/setup-guide.md` for initial setup\n\n```yaml\n# accounts.yaml example\naccounts:\n  personal:\n    email: user@gmail.com\n    description: Personal Gmail\n  work:\n    email: user@company.com\n    description: Work account\n```\n\n## Email Sending Workflow (5 Steps)\n\nWhen sending emails, **create 5 Tasks with TaskCreate** and execute sequentially:\n\n| Step | Task | Key Action |\n|------|------|----------|\n| 1 | Gather context | Run Explore SubAgents **in parallel**: recipient info, related projects, background context |\n| 2 | Check previous conversations | Search `--query \"to:recipient OR from:recipient newer_than:90d\"` → AskUserQuestion for thread selection |\n| 3 | Draft email | Compose draft → AskUserQuestion for feedback |\n| 4 | Test send | Send `[TEST]` email to user's own address → Open in Gmail web → Request confirmation |\n| 5 | Actual send | Send to recipient → Report completion |\n\n**Signature**: Append `---\\nSent with Claude Code` to all outgoing emails\n\n### Workflow Example: \"Send a meeting email to John\"\n\n```\n1. Create 5 Tasks\n2. Step 1: Run parallel Explore SubAgents\n   - Search recipient (John) info (partners/, projects/, context.md, etc.)\n   - Search meeting context (calendar, recent meeting notes, etc.)\n3. Step 2: Search \"to:john@company.com OR from:john@company.com\"\n   → If previous conversation exists, AskUserQuestion (reply/new email)\n4. 
Step 3: Draft email → AskUserQuestion (proceed/revise)\n5. Step 4: Test send to my email → Open in Gmail web (`open \"https://mail.google.com/mail/u/0/#inbox/{message_id}\"`) → Request confirmation\n6. Step 5: Actual send → Done\n```\n\n## CLI Quick Reference\n\n```bash\n# List messages\nuv run python scripts/list_messages.py --account work --query \"is:unread\" --max 10\n\n# Send email\nuv run python scripts/send_message.py --account work --to \"user@example.com\" --subject \"Subject\" --body \"Content\"\n\n# Check profile\nuv run python scripts/manage_labels.py --account work profile\n```\n\n> Detailed CLI usage: `references/cli-usage.md`\n> Search query reference: `references/search-queries.md`\n\n## View Email in Web\n\nAfter sending, use the returned Message ID to view directly in Gmail web:\n\n```bash\n# URL format\nhttps://mail.google.com/mail/u/0/#inbox/{message_id}\n\n# Example: Open in browser after test send\nopen \"https://mail.google.com/mail/u/0/#inbox/19c145bbd47ddd01\"\n```\n\n> **Note**: `u/0` is the first logged-in account, `u/1` is the second account\n\n## File Structure\n\n```\nskills/gmail/\n├── SKILL.md\n├── accounts.yaml           # Account metadata\n├── scripts/                # CLI scripts\n├── references/\n│   ├── setup-guide.md      # Initial setup guide\n│   ├── cli-usage.md        # Detailed CLI usage\n│   ├── search-queries.md   # Search query reference\n│   └── credentials.json    # OAuth Client ID (gitignore)\n├── assets/\n│   ├── accounts.default.yaml  # Account config template\n│   ├── email-templates.md     # Email body templates\n│   └── signatures.md          # Signature templates (Plain/HTML)\n└── accounts/               # Per-account tokens (gitignore)\n```\n\n## Error Handling\n\n| Situation | Resolution |\n|-----------|------------|\n| accounts.yaml missing | Read `references/setup-guide.md` for initial setup |\n| Token missing | Guide user to run `setup_auth.py --account <name>` |\n| Token expired | Auto-refresh; if failed, 
guide re-authentication |\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/accounts.yaml.example",
    "content": "# Gmail 계정 설정\n# 계정별로 이메일 주소와 설명을 관리합니다.\n# 토큰 파일은 accounts/{name}.json에 별도 저장됩니다.\n\naccounts:\n  personal:\n    email: bongbonggg97@gmail.com\n    description: 개인 Gmail\n\n  work:\n    email: bong@team-attention.com\n    description: Team Attention 업무용\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/assets/accounts.default.yaml",
    "content": "# Gmail Account Settings Default Template\n# Copy this file to ../accounts.yaml for use\n#\n# Usage:\n#   cp assets/accounts.default.yaml accounts.yaml\n#   # Edit accounts.yaml, then\n#   uv run python scripts/setup_auth.py --account <name>\n\naccounts:\n  # Personal Gmail account\n  personal:\n    email: your-personal@gmail.com\n    description: Personal Gmail\n\n  # Work/Business account\n  work:\n    email: your-work@company.com\n    description: Work account\n\n  # Additional account example (uncomment if needed)\n  # project:\n  #   email: project@domain.com\n  #   description: For specific project\n\n# After setup, run authentication for each account:\n#   uv run python scripts/setup_auth.py --account personal\n#   uv run python scripts/setup_auth.py --account work\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/assets/email-templates.md",
    "content": "# Email Body Templates\n\nSelect an appropriate template based on the situation when composing emails.\n\n---\n\n## 1. Meeting Request\n\n```\nHello {recipient_name},\n\n{Brief greeting or context}\n\nI would like to discuss {meeting purpose}.\n\n**Proposed Schedule:**\n- {date1} {time1}\n- {date2} {time2}\n\n**Estimated Duration:** {30 minutes/1 hour}\n**Location/Format:** {Office location / Google Meet / Zoom}\n\nPlease let me know your preferred time and I will confirm the schedule.\n\nThank you.\n```\n\n---\n\n## 2. Information/Document Request\n\n```\nHello {recipient_name},\n\nRegarding {project/task name}, I need the following materials.\n\n**Requested Items:**\n1. {item1}\n2. {item2}\n3. {item3}\n\n**Deadline:** By {date}\n\nPlease let me know if you have any questions.\n\nThank you.\n```\n\n---\n\n## 3. Progress Update\n\n```\nHello {recipient_name},\n\nHere is the progress update for {project name}.\n\n**Completed:**\n- {completed item 1}\n- {completed item 2}\n\n**In Progress:**\n- {in-progress item 1} (Expected completion: {date})\n\n**Next Steps:**\n- {next item}\n\n**Issues/Blockers:**\n- {issue or \"None\"}\n\nPlease let me know if further discussion is needed.\n\nThank you.\n```\n\n---\n\n## 4. Thank You/Follow-up Email\n\n```\nHello {recipient_name},\n\nThank you for the {meeting/call/assistance}.\n\n{Brief summary or key points}\n\nAs discussed, I will proceed with {follow-up action}.\n\nPlease feel free to reach out if you have any additional questions.\n\nThank you.\n```\n\n---\n\n## 5. Introduction Email\n\n```\nHello {recipient_name},\n\nI am reaching out through {introducer_name}.\n\nI am {name}, {title} at {company/organization}.\n{Brief self/company introduction}\n\n{Purpose of contact and proposal}\n\nWould you be available for a brief call or meeting?\n\nThank you.\n```\n\n---\n\n## 6. 
Reminder\n\n```\nHello {recipient_name},\n\nI am following up on the email I sent on {previous email date} regarding {subject}.\n\n{Brief summary or request}\n\nI understand you are busy, but I would appreciate a response by {date}.\n\nThank you.\n```\n\n---\n\n## Template Usage Guide\n\n1. **Assess the situation**: Select an appropriate template based on context identified in Step 1 (Explore Tasks)\n2. **Replace variables**: Replace `{variable}` parts with actual content\n3. **Adjust tone**: Adjust formality level based on relationship with recipient\n4. **Add signature**: Add signature from `assets/signatures.md` at the bottom\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/assets/signatures.md",
    "content": "# Email Signature Templates\n\n---\n\n## Default Signature (Plain Text)\n\nAutomatically added to all outgoing emails:\n\n```\n---\nSent with Claude Code\n```\n\n---\n\n## Default Signature (HTML)\n\n```html\n<hr style=\"margin-top: 20px; border: none; border-top: 1px solid #ddd;\">\n<p style=\"color: #888; font-size: 12px;\">Sent with Claude Code</p>\n```\n\n---\n\n## Business Signature (Plain Text)\n\n```\n---\n{Name}\n{Title} | {Company}\n{Email} | {Phone}\n\nSent with Claude Code\n```\n\n---\n\n## Business Signature (HTML)\n\n```html\n<hr style=\"margin-top: 20px; border: none; border-top: 1px solid #ddd;\">\n<table style=\"font-family: Arial, sans-serif; font-size: 14px; color: #333;\">\n  <tr>\n    <td style=\"padding-right: 15px; border-right: 2px solid #0066cc;\">\n      <!-- If logo image available -->\n      <!-- <img src=\"logo.png\" width=\"60\" alt=\"Company Logo\"> -->\n    </td>\n    <td style=\"padding-left: 15px;\">\n      <strong style=\"font-size: 16px; color: #0066cc;\">{Name}</strong><br>\n      <span style=\"color: #666;\">{Title} | {Company}</span><br>\n      <span style=\"font-size: 12px;\">\n        📧 {Email}<br>\n        📱 {Phone}\n      </span>\n    </td>\n  </tr>\n</table>\n<p style=\"color: #aaa; font-size: 11px; margin-top: 10px;\">Sent with Claude Code</p>\n```\n\n---\n\n## Minimal Signature\n\n```\n--\n{Name}\n```\n\n---\n\n## Signature Usage Guide\n\n1. **Default signature**: `Sent with Claude Code` is automatically added to all emails\n2. **Business signature**: Use for external partners and customer emails\n3. **Minimal signature**: Use for quick communication between internal team members\n4. **HTML signature**: Applied when using `--html` flag\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/pyproject.toml",
    "content": "[project]\nname = \"gmail-skill\"\nversion = \"0.1.0\"\ndescription = \"Gmail sync skill for Claude Code\"\nrequires-python = \">=3.11\"\ndependencies = [\n    \"google-auth>=2.0.0\",\n    \"google-auth-oauthlib>=1.0.0\",\n    \"google-api-python-client>=2.0.0\",\n    \"httplib2>=0.22.0\",\n    \"pyyaml>=6.0.0\",\n]\n\n[build-system]\nrequires = [\"hatchling\"]\nbuild-backend = \"hatchling.build\"\n\n[tool.hatch.build.targets.wheel]\npackages = [\"scripts\"]\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/references/cli-usage.md",
    "content": "# Gmail CLI Detailed Usage\n\n## List Messages\n\n```bash\n# Recent 10 emails\nuv run python scripts/list_messages.py --account work --max 10\n\n# Unread emails\nuv run python scripts/list_messages.py --account work --query \"is:unread\"\n\n# From specific sender\nuv run python scripts/list_messages.py --account work --query \"from:user@example.com\"\n\n# Date range\nuv run python scripts/list_messages.py --account work --query \"after:2024/01/01 before:2024/12/31\"\n\n# Filter by label\nuv run python scripts/list_messages.py --account work --labels INBOX,IMPORTANT\n\n# Include full content\nuv run python scripts/list_messages.py --account work --full\n\n# JSON output\nuv run python scripts/list_messages.py --account work --json\n```\n\n## Read Messages\n\n```bash\n# Read message\nuv run python scripts/read_message.py --account work --id <message_id>\n\n# Read entire thread\nuv run python scripts/read_message.py --account work --thread <thread_id>\n\n# Save attachments\nuv run python scripts/read_message.py --account work --id <message_id> --save-attachments ./downloads\n```\n\n## Send Messages\n\n```bash\n# New email\nuv run python scripts/send_message.py --account work \\\n    --to \"user@example.com\" \\\n    --subject \"Hello\" \\\n    --body \"Email content here.\"\n\n# HTML email\nuv run python scripts/send_message.py --account work \\\n    --to \"user@example.com\" \\\n    --subject \"Announcement\" \\\n    --body \"<h1>Title</h1><p>Content</p>\" \\\n    --html\n\n# With attachments\nuv run python scripts/send_message.py --account work \\\n    --to \"user@example.com\" \\\n    --subject \"File Transfer\" \\\n    --body \"Please check the attachments.\" \\\n    --attach file1.pdf,file2.xlsx\n\n# Reply\nuv run python scripts/send_message.py --account work \\\n    --to \"user@example.com\" \\\n    --subject \"Re: Original Subject\" \\\n    --body \"Reply content\" \\\n    --reply-to <message_id> \\\n    --thread <thread_id>\n\n# Save as 
draft\nuv run python scripts/send_message.py --account work \\\n    --to \"user@example.com\" \\\n    --subject \"Email to send later\" \\\n    --body \"Draft content\" \\\n    --draft\n```\n\n## Label and Message Management\n\n```bash\n# List labels\nuv run python scripts/manage_labels.py --account work list-labels\n\n# Create label\nuv run python scripts/manage_labels.py --account work create-label --name \"Project/A\"\n\n# Mark as read\nuv run python scripts/manage_labels.py --account work mark-read --id <message_id>\n\n# Star/unstar\nuv run python scripts/manage_labels.py --account work star --id <message_id>\nuv run python scripts/manage_labels.py --account work unstar --id <message_id>\n\n# Archive\nuv run python scripts/manage_labels.py --account work archive --id <message_id>\n\n# Trash\nuv run python scripts/manage_labels.py --account work trash --id <message_id>\nuv run python scripts/manage_labels.py --account work untrash --id <message_id>\n\n# Add/remove labels\nuv run python scripts/manage_labels.py --account work modify --id <message_id> \\\n    --add-labels \"Label_123,STARRED\" --remove-labels \"INBOX\"\n\n# List drafts\nuv run python scripts/manage_labels.py --account work list-drafts\n\n# Send draft\nuv run python scripts/manage_labels.py --account work send-draft --draft-id <draft_id>\n\n# View profile\nuv run python scripts/manage_labels.py --account work profile\n```\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/references/search-queries.md",
    "content": "# Gmail Search Query Reference\n\n## Basic Queries\n\n| Query | Description |\n|-------|-------------|\n| `from:user@example.com` | Specific sender |\n| `to:user@example.com` | Specific recipient |\n| `subject:project` | Contains in subject |\n| `is:unread` | Unread |\n| `is:starred` | Starred |\n| `is:important` | Marked important |\n| `has:attachment` | Has attachment |\n| `filename:pdf` | PDF attachment |\n\n## Date Related\n\n| Query | Description |\n|-------|-------------|\n| `after:2024/01/01` | After date |\n| `before:2024/12/31` | Before date |\n| `older_than:7d` | Older than 7 days |\n| `newer_than:1d` | Within 1 day |\n\n## Location Related\n\n| Query | Description |\n|-------|-------------|\n| `in:inbox` | Inbox |\n| `in:sent` | Sent mail |\n| `in:drafts` | Drafts |\n| `in:trash` | Trash |\n| `label:work` | Specific label |\n\n## Compound Query Examples\n\n```\n# Unread emails from specific sender\nfrom:boss@company.com is:unread\n\n# Emails with attachments in last 7 days\nhas:attachment newer_than:7d\n\n# Excel attachments within date range\nhas:attachment filename:xlsx after:2024/01/01 before:2024/12/31\n\n# Important unread emails\nis:unread is:important\n\n# Starred emails with specific label\nlabel:projects is:starred\n```\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/references/setup-guide.md",
    "content": "# Gmail Skill Initial Setup Guide\n\nFollow this guide if accounts.yaml is missing or no accounts are registered.\n\n---\n\n## 1. Google Cloud Project Setup\n\n### 1.1 Create Project\n\n1. Go to [Google Cloud Console](https://console.cloud.google.com)\n2. Click project selector at top → \"New Project\"\n3. Enter project name (e.g., `gmail-skill`)\n4. Click \"Create\"\n\n### 1.2 Enable Gmail API\n\n1. Left menu → \"APIs & Services\" → \"Library\"\n2. Search for \"Gmail API\"\n3. Click \"Enable\" button\n\n### 1.3 Configure OAuth Consent Screen\n\n1. \"APIs & Services\" → \"OAuth consent screen\"\n2. User Type: Select \"External\" → \"Create\"\n3. Enter app information:\n   - App name: `Gmail Skill`\n   - User support email: Your email\n   - Developer contact: Your email\n4. \"Save and Continue\"\n5. Add scopes:\n   - `https://www.googleapis.com/auth/gmail.modify`\n   - `https://www.googleapis.com/auth/gmail.send`\n   - `https://www.googleapis.com/auth/gmail.labels`\n6. Add test users (your Gmail address)\n7. \"Save and Continue\"\n\n### 1.4 Create OAuth Client ID\n\n1. \"APIs & Services\" → \"Credentials\"\n2. \"Create Credentials\" → \"OAuth client ID\"\n3. Application type: **Desktop app**\n4. Name: `Gmail Skill Client`\n5. Click \"Create\"\n6. Click **Download JSON**\n\n### 1.5 Save credentials.json\n\n```bash\n# Move downloaded file to references/credentials.json\nmv ~/Downloads/client_secret_*.json .claude/skills/gmail/references/credentials.json\n```\n\n---\n\n## 2. Account Setup\n\n### 2.1 Create accounts.yaml\n\n```bash\ncd .claude/skills/gmail\n\n# Copy default template\ncp assets/accounts.default.yaml accounts.yaml\n```\n\n### 2.2 Edit accounts.yaml\n\n```yaml\n# accounts.yaml\naccounts:\n  personal:\n    email: your-personal@gmail.com\n    description: Personal Gmail\n  work:\n    email: your-work@company.com\n    description: Work account\n```\n\n---\n\n## 3. 
Account Authentication\n\n### 3.1 Install Dependencies\n\n```bash\ncd .claude/skills/gmail\nuv sync\n```\n\n### 3.2 Authenticate Each Account\n\n```bash\n# Authenticate personal account\nuv run python scripts/setup_auth.py --account personal\n\n# Authenticate work account\nuv run python scripts/setup_auth.py --account work\n```\n\nWhen browser opens:\n1. Log in to Google account\n2. Approve permission request\n3. \"This app isn't verified\" → \"Advanced\" → \"Continue\"\n4. Allow all permissions\n\n### 3.3 Verify Authentication\n\n```bash\n# List registered accounts\nuv run python scripts/setup_auth.py --list\n```\n\nExample output:\n```\n📋 Registered accounts:\n\n   ✅ personal\n      Email: your-personal@gmail.com\n      Description: Personal Gmail\n\n   ✅ work\n      Email: your-work@company.com\n      Description: Work account\n```\n\n---\n\n## 4. Test\n\n```bash\n# Test mail listing\nuv run python scripts/list_messages.py --account personal --max 5\n\n# Check profile\nuv run python scripts/manage_labels.py --account personal profile\n```\n\n---\n\n## Troubleshooting\n\n### \"credentials.json file not found\"\n\n→ Check steps 1.4-1.5. Download OAuth client ID JSON and save to `references/credentials.json`.\n\n### \"Token has expired\"\n\n→ If auto-refresh fails, re-authenticate:\n```bash\nuv run python scripts/setup_auth.py --account <name>\n```\n\n### \"This app isn't verified\"\n\n→ Add your email as a test user in OAuth consent screen.\n\n### \"Insufficient permissions\"\n\n→ Add required scopes in OAuth consent screen:\n- `gmail.modify`\n- `gmail.send`\n- `gmail.labels`\n\n---\n\n## File Checklist\n\nVerify after setup:\n\n```\n.claude/skills/gmail/\n├── accounts.yaml              ✅ Account information\n├── references/\n│   └── credentials.json       ✅ OAuth Client ID\n└── accounts/\n    ├── personal.json          ✅ personal token\n    └── work.json              ✅ work token\n```\n\nSetup is complete when all files exist.\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/scripts/core/__init__.py",
    "content": "\"\"\"Gmail Core Components.\n\nRate limiting, caching, retry logic, and batch processing for Gmail API.\n\"\"\"\n\nfrom .quota_manager import QuotaManager, QuotaUnit\nfrom .retry_handler import exponential_backoff, RetryConfig\nfrom .cache_manager import EmailCache\nfrom .batch_processor import BatchProcessor\n\n__all__ = [\n    \"QuotaManager\",\n    \"QuotaUnit\",\n    \"exponential_backoff\",\n    \"RetryConfig\",\n    \"EmailCache\",\n    \"BatchProcessor\",\n]\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/scripts/core/batch_processor.py",
    "content": "\"\"\"Gmail API Batch Processor.\n\n여러 API 요청을 일괄 처리하여 효율성을 높입니다.\n\nBatch Request Guidelines:\n- 최대 50개 요청을 하나의 배치로\n- 각 요청은 개별적으로 성공/실패\n- Rate limiting은 배치 전체가 아닌 개별 요청에 적용\n\nReference:\n    https://developers.google.com/gmail/api/guides/batch\n\"\"\"\n\nimport logging\nimport time\nfrom dataclasses import dataclass, field\nfrom typing import Any, Callable, Optional\n\nfrom googleapiclient.discovery import Resource\nfrom googleapiclient.http import BatchHttpRequest\n\nfrom .quota_manager import QuotaManager, QuotaUnit, get_quota_manager\nfrom .retry_handler import exponential_backoff\n\nlogger = logging.getLogger(__name__)\n\n\n@dataclass\nclass BatchResult:\n    \"\"\"배치 처리 결과.\"\"\"\n\n    total: int = 0\n    succeeded: int = 0\n    failed: int = 0\n    results: list[dict] = field(default_factory=list)\n    errors: list[dict] = field(default_factory=list)\n\n\nclass BatchProcessor:\n    \"\"\"Gmail API 배치 처리기.\n\n    여러 API 호출을 효율적으로 일괄 처리합니다.\n\n    Usage:\n        processor = BatchProcessor(gmail_service)\n\n        # 메시지 일괄 조회\n        message_ids = [\"msg1\", \"msg2\", \"msg3\"]\n        results = processor.batch_get_messages(message_ids)\n\n        # 라벨 일괄 수정\n        modified = processor.batch_modify_labels(\n            message_ids,\n            add_labels=[\"STARRED\"],\n            remove_labels=[\"UNREAD\"]\n        )\n    \"\"\"\n\n    MAX_BATCH_SIZE = 50  # Gmail API 최대 배치 크기\n    DEFAULT_DELAY = 0.5  # 배치 간 기본 지연 (초)\n\n    def __init__(\n        self,\n        service: Resource,\n        quota_manager: Optional[QuotaManager] = None,\n        user: str = \"default\",\n        batch_size: int = MAX_BATCH_SIZE,\n        delay_between_batches: float = DEFAULT_DELAY,\n    ):\n        \"\"\"\n        Args:\n            service: Gmail API 서비스 객체\n            quota_manager: 할당량 관리자 (없으면 기본 사용)\n            user: 사용자 식별자 (할당량 추적용)\n            batch_size: 배치당 최대 요청 수\n            delay_between_batches: 배치 간 지연 (초)\n        \"\"\"\n        
self.service = service\n        self.quota_manager = quota_manager or get_quota_manager()\n        self.user = user\n        self.batch_size = min(batch_size, self.MAX_BATCH_SIZE)\n        self.delay = delay_between_batches\n\n    # =========================================================================\n    # Message Operations\n    # =========================================================================\n\n    def batch_get_messages(\n        self,\n        message_ids: list[str],\n        format: str = \"metadata\",\n        on_progress: Optional[Callable[[int, int], None]] = None,\n    ) -> BatchResult:\n        \"\"\"메시지 일괄 조회.\n\n        Args:\n            message_ids: 조회할 메시지 ID 목록\n            format: 응답 형식 (minimal, full, raw, metadata)\n            on_progress: 진행 상황 콜백 (current, total)\n\n        Returns:\n            BatchResult 객체\n        \"\"\"\n        result = BatchResult(total=len(message_ids))\n\n        for i in range(0, len(message_ids), self.batch_size):\n            batch_ids = message_ids[i : i + self.batch_size]\n            batch_results = []\n            batch_errors = []\n\n            def callback_factory(msg_id: str):\n                def callback(request_id, response, exception):\n                    if exception:\n                        batch_errors.append({\n                            \"message_id\": msg_id,\n                            \"error\": str(exception),\n                        })\n                    else:\n                        batch_results.append(response)\n\n                return callback\n\n            # 배치 요청 생성\n            batch = self.service.new_batch_http_request()\n\n            for msg_id in batch_ids:\n                batch.add(\n                    self.service.users()\n                    .messages()\n                    .get(userId=\"me\", id=msg_id, format=format),\n                    callback=callback_factory(msg_id),\n                )\n\n            # 할당량 확인 및 대기\n            units = 
len(batch_ids) * QuotaUnit.MESSAGES_GET\n            self.quota_manager.wait_for_quota(self.user, units)\n\n            # 배치 실행\n            batch.execute()\n            self.quota_manager.record_usage(self.user, units)\n\n            # 결과 수집\n            result.results.extend(batch_results)\n            result.errors.extend(batch_errors)\n            result.succeeded += len(batch_results)\n            result.failed += len(batch_errors)\n\n            # 진행 상황 콜백\n            if on_progress:\n                on_progress(min(i + self.batch_size, len(message_ids)), len(message_ids))\n\n            # 다음 배치 전 지연\n            if i + self.batch_size < len(message_ids):\n                time.sleep(self.delay)\n\n        return result\n\n    def batch_modify_labels(\n        self,\n        message_ids: list[str],\n        add_labels: Optional[list[str]] = None,\n        remove_labels: Optional[list[str]] = None,\n        on_progress: Optional[Callable[[int, int], None]] = None,\n    ) -> BatchResult:\n        \"\"\"라벨 일괄 수정 (batchModify API 사용).\n\n        Args:\n            message_ids: 수정할 메시지 ID 목록\n            add_labels: 추가할 라벨 ID\n            remove_labels: 제거할 라벨 ID\n            on_progress: 진행 상황 콜백\n\n        Returns:\n            BatchResult 객체\n        \"\"\"\n        result = BatchResult(total=len(message_ids))\n\n        for i in range(0, len(message_ids), self.batch_size):\n            batch_ids = message_ids[i : i + self.batch_size]\n\n            # 할당량 확인 및 대기\n            units = QuotaUnit.MESSAGES_BATCH_MODIFY\n            self.quota_manager.wait_for_quota(self.user, units)\n\n            try:\n                self.service.users().messages().batchModify(\n                    userId=\"me\",\n                    body={\n                        \"ids\": batch_ids,\n                        \"addLabelIds\": add_labels or [],\n                        \"removeLabelIds\": remove_labels or [],\n                    },\n                ).execute()\n\n                
self.quota_manager.record_usage(self.user, units)\n\n                # batchModify는 성공 시 빈 응답 반환\n                result.succeeded += len(batch_ids)\n                result.results.extend([{\"id\": mid, \"status\": \"modified\"} for mid in batch_ids])\n\n            except Exception as e:\n                result.failed += len(batch_ids)\n                result.errors.append({\n                    \"message_ids\": batch_ids,\n                    \"error\": str(e),\n                })\n                logger.error(f\"Batch modify failed: {e}\")\n\n            # 진행 상황 콜백\n            if on_progress:\n                on_progress(min(i + self.batch_size, len(message_ids)), len(message_ids))\n\n            # 다음 배치 전 지연\n            if i + self.batch_size < len(message_ids):\n                time.sleep(self.delay)\n\n        return result\n\n    def batch_trash_messages(\n        self,\n        message_ids: list[str],\n        on_progress: Optional[Callable[[int, int], None]] = None,\n    ) -> BatchResult:\n        \"\"\"메시지 일괄 휴지통 이동.\n\n        Args:\n            message_ids: 휴지통으로 이동할 메시지 ID 목록\n            on_progress: 진행 상황 콜백\n\n        Returns:\n            BatchResult 객체\n        \"\"\"\n        result = BatchResult(total=len(message_ids))\n\n        for i in range(0, len(message_ids), self.batch_size):\n            batch_ids = message_ids[i : i + self.batch_size]\n            batch_results = []\n            batch_errors = []\n\n            def callback_factory(msg_id: str):\n                def callback(request_id, response, exception):\n                    if exception:\n                        batch_errors.append({\n                            \"message_id\": msg_id,\n                            \"error\": str(exception),\n                        })\n                    else:\n                        batch_results.append({\"id\": msg_id, \"status\": \"trashed\"})\n\n                return callback\n\n            batch = self.service.new_batch_http_request()\n\n 
           for msg_id in batch_ids:\n                batch.add(\n                    self.service.users().messages().trash(userId=\"me\", id=msg_id),\n                    callback=callback_factory(msg_id),\n                )\n\n            units = len(batch_ids) * QuotaUnit.MESSAGES_TRASH\n            self.quota_manager.wait_for_quota(self.user, units)\n\n            batch.execute()\n            self.quota_manager.record_usage(self.user, units)\n\n            result.results.extend(batch_results)\n            result.errors.extend(batch_errors)\n            result.succeeded += len(batch_results)\n            result.failed += len(batch_errors)\n\n            if on_progress:\n                on_progress(min(i + self.batch_size, len(message_ids)), len(message_ids))\n\n            if i + self.batch_size < len(message_ids):\n                time.sleep(self.delay)\n\n        return result\n\n    def batch_delete_messages(\n        self,\n        message_ids: list[str],\n        on_progress: Optional[Callable[[int, int], None]] = None,\n    ) -> BatchResult:\n        \"\"\"메시지 일괄 영구 삭제.\n\n        주의: 이 작업은 되돌릴 수 없습니다!\n\n        Args:\n            message_ids: 삭제할 메시지 ID 목록\n            on_progress: 진행 상황 콜백\n\n        Returns:\n            BatchResult 객체\n        \"\"\"\n        result = BatchResult(total=len(message_ids))\n\n        for i in range(0, len(message_ids), self.batch_size):\n            batch_ids = message_ids[i : i + self.batch_size]\n            batch_results = []\n            batch_errors = []\n\n            def callback_factory(msg_id: str):\n                def callback(request_id, response, exception):\n                    if exception:\n                        batch_errors.append({\n                            \"message_id\": msg_id,\n                            \"error\": str(exception),\n                        })\n                    else:\n                        batch_results.append({\"id\": msg_id, \"status\": \"deleted\"})\n\n                
return callback\n\n            batch = self.service.new_batch_http_request()\n\n            for msg_id in batch_ids:\n                batch.add(\n                    self.service.users().messages().delete(userId=\"me\", id=msg_id),\n                    callback=callback_factory(msg_id),\n                )\n\n            units = len(batch_ids) * QuotaUnit.MESSAGES_DELETE\n            self.quota_manager.wait_for_quota(self.user, units)\n\n            batch.execute()\n            self.quota_manager.record_usage(self.user, units)\n\n            result.results.extend(batch_results)\n            result.errors.extend(batch_errors)\n            result.succeeded += len(batch_results)\n            result.failed += len(batch_errors)\n\n            if on_progress:\n                on_progress(min(i + self.batch_size, len(message_ids)), len(message_ids))\n\n            if i + self.batch_size < len(message_ids):\n                time.sleep(self.delay)\n\n        return result\n\n    # =========================================================================\n    # Thread Operations\n    # =========================================================================\n\n    def batch_get_threads(\n        self,\n        thread_ids: list[str],\n        format: str = \"metadata\",\n        on_progress: Optional[Callable[[int, int], None]] = None,\n    ) -> BatchResult:\n        \"\"\"스레드 일괄 조회.\n\n        Args:\n            thread_ids: 조회할 스레드 ID 목록\n            format: 응답 형식\n            on_progress: 진행 상황 콜백\n\n        Returns:\n            BatchResult 객체\n        \"\"\"\n        result = BatchResult(total=len(thread_ids))\n\n        for i in range(0, len(thread_ids), self.batch_size):\n            batch_ids = thread_ids[i : i + self.batch_size]\n            batch_results = []\n            batch_errors = []\n\n            def callback_factory(thread_id: str):\n                def callback(request_id, response, exception):\n                    if exception:\n                        
batch_errors.append({\n                            \"thread_id\": thread_id,\n                            \"error\": str(exception),\n                        })\n                    else:\n                        batch_results.append(response)\n\n                return callback\n\n            batch = self.service.new_batch_http_request()\n\n            for thread_id in batch_ids:\n                batch.add(\n                    self.service.users()\n                    .threads()\n                    .get(userId=\"me\", id=thread_id, format=format),\n                    callback=callback_factory(thread_id),\n                )\n\n            units = len(batch_ids) * QuotaUnit.THREADS_GET\n            self.quota_manager.wait_for_quota(self.user, units)\n\n            batch.execute()\n            self.quota_manager.record_usage(self.user, units)\n\n            result.results.extend(batch_results)\n            result.errors.extend(batch_errors)\n            result.succeeded += len(batch_results)\n            result.failed += len(batch_errors)\n\n            if on_progress:\n                on_progress(min(i + self.batch_size, len(thread_ids)), len(thread_ids))\n\n            if i + self.batch_size < len(thread_ids):\n                time.sleep(self.delay)\n\n        return result\n\n    # =========================================================================\n    # Utility Methods\n    # =========================================================================\n\n    def mark_all_as_read(\n        self,\n        query: str = \"is:unread\",\n        max_messages: int = 500,\n    ) -> BatchResult:\n        \"\"\"조건에 맞는 메시지 전체 읽음 처리.\n\n        Args:\n            query: 검색 쿼리 (기본: 읽지 않음)\n            max_messages: 최대 처리 메시지 수\n\n        Returns:\n            BatchResult 객체\n        \"\"\"\n        # 먼저 메시지 ID 목록 조회\n        message_ids = []\n        page_token = None\n\n        while len(message_ids) < max_messages:\n            result = (\n                
self.service.users()\n                .messages()\n                .list(\n                    userId=\"me\",\n                    q=query,\n                    maxResults=min(100, max_messages - len(message_ids)),\n                    pageToken=page_token,\n                )\n                .execute()\n            )\n\n            for msg in result.get(\"messages\", []):\n                message_ids.append(msg[\"id\"])\n\n            page_token = result.get(\"nextPageToken\")\n            if not page_token:\n                break\n\n        if not message_ids:\n            return BatchResult()\n\n        return self.batch_modify_labels(\n            message_ids,\n            remove_labels=[\"UNREAD\"],\n        )\n\n    def archive_all(\n        self,\n        query: str = \"\",\n        max_messages: int = 500,\n    ) -> BatchResult:\n        \"\"\"조건에 맞는 메시지 전체 보관처리.\n\n        Args:\n            query: 검색 쿼리\n            max_messages: 최대 처리 메시지 수\n\n        Returns:\n            BatchResult 객체\n        \"\"\"\n        # INBOX 라벨이 있는 메시지만 조회\n        full_query = f\"in:inbox {query}\".strip()\n\n        message_ids = []\n        page_token = None\n\n        while len(message_ids) < max_messages:\n            result = (\n                self.service.users()\n                .messages()\n                .list(\n                    userId=\"me\",\n                    q=full_query,\n                    maxResults=min(100, max_messages - len(message_ids)),\n                    pageToken=page_token,\n                )\n                .execute()\n            )\n\n            for msg in result.get(\"messages\", []):\n                message_ids.append(msg[\"id\"])\n\n            page_token = result.get(\"nextPageToken\")\n            if not page_token:\n                break\n\n        if not message_ids:\n            return BatchResult()\n\n        return self.batch_modify_labels(\n            message_ids,\n            remove_labels=[\"INBOX\"],\n        )\n\n\nif 
__name__ == \"__main__\":\n    # 모듈 테스트 (실제 API 없이)\n    print(\"BatchProcessor module loaded successfully\")\n\n    # BatchResult 테스트\n    result = BatchResult(total=10, succeeded=8, failed=2)\n    result.results = [{\"id\": f\"msg{i}\"} for i in range(8)]\n    result.errors = [{\"message_id\": \"msg8\", \"error\": \"Not found\"}, {\"message_id\": \"msg9\", \"error\": \"Rate limited\"}]\n\n    print(f\"Total: {result.total}\")\n    print(f\"Succeeded: {result.succeeded}\")\n    print(f\"Failed: {result.failed}\")\n    print(f\"Success rate: {result.succeeded / result.total * 100:.1f}%\")\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/scripts/core/cache_manager.py",
    "content": "\"\"\"Gmail Local Cache Manager.\n\nAPI 호출을 최소화하기 위한 로컬 캐시 레이어.\n\n캐시 전략:\n- 메시지 내용: 24시간 유효 (메시지는 불변)\n- 메시지 메타데이터: 1시간 유효 (라벨 변경 가능)\n- 메시지 목록: 5분 유효 (자주 변경됨)\n- 라벨 목록: 1시간 유효\n\n캐시 무효화:\n- 메시지 수정 시 해당 메시지 캐시 무효화\n- 발송 시 목록 캐시 무효화\n- 라벨 변경 시 라벨 캐시 무효화\n\nReference:\n    https://community.latenode.com/t/understanding-gmail-api-quota-restrictions-and-rate-limits/28113\n\"\"\"\n\nimport hashlib\nimport json\nimport os\nimport shutil\nimport threading\nfrom dataclasses import dataclass\nfrom datetime import datetime, timedelta\nfrom pathlib import Path\nfrom typing import Any, Optional\n\n\n@dataclass\nclass CacheConfig:\n    \"\"\"캐시 설정.\"\"\"\n\n    # TTL (Time-To-Live) 설정 (시간 단위)\n    message_ttl_hours: int = 24  # 메시지 본문\n    metadata_ttl_hours: int = 1  # 메타데이터\n    list_ttl_minutes: int = 5  # 목록\n    labels_ttl_hours: int = 1  # 라벨\n\n    # 캐시 크기 제한\n    max_messages_per_account: int = 1000\n    max_cache_size_mb: int = 100\n\n\nclass EmailCache:\n    \"\"\"Gmail 이메일 로컬 캐시 관리자.\n\n    API 호출을 줄이기 위해 메시지, 목록, 라벨을 로컬에 캐싱합니다.\n\n    Usage:\n        cache = EmailCache()\n\n        # 메시지 캐싱\n        cached = cache.get_message(\"work\", \"msg123\")\n        if cached is None:\n            message = api.get_message(\"msg123\")\n            cache.set_message(\"work\", \"msg123\", message)\n        else:\n            message = cached\n\n        # 목록 캐싱\n        query = \"is:unread\"\n        cached_list = cache.get_list(\"work\", query)\n        if cached_list is None:\n            messages = api.list_messages(query)\n            cache.set_list(\"work\", query, messages)\n    \"\"\"\n\n    def __init__(\n        self,\n        cache_dir: Optional[str] = None,\n        config: Optional[CacheConfig] = None,\n    ):\n        \"\"\"\n        Args:\n            cache_dir: 캐시 디렉토리 (기본값: .cache/gmail)\n            config: 캐시 설정\n        \"\"\"\n        self.config = config or CacheConfig()\n\n        if cache_dir:\n            self.cache_dir = 
Path(cache_dir)\n        elif os.environ.get(\"GMAIL_CACHE_DIR\"):\n            self.cache_dir = Path(os.environ[\"GMAIL_CACHE_DIR\"])\n        else:\n            self.cache_dir = Path(__file__).parent.parent.parent / \".cache\" / \"gmail\"\n\n        self.cache_dir.mkdir(parents=True, exist_ok=True)\n        self._lock = threading.Lock()\n\n    # =========================================================================\n    # Message Cache\n    # =========================================================================\n\n    def get_message(\n        self,\n        account: str,\n        message_id: str,\n        metadata_only: bool = False,\n    ) -> Optional[dict]:\n        \"\"\"캐시된 메시지 조회.\n\n        Args:\n            account: 계정 이름\n            message_id: 메시지 ID\n            metadata_only: 메타데이터만 조회 시 True\n\n        Returns:\n            캐시된 메시지 또는 None\n        \"\"\"\n        cache_file = self._message_path(account, message_id)\n\n        if not cache_file.exists():\n            return None\n\n        try:\n            with open(cache_file) as f:\n                data = json.load(f)\n\n            ttl_hours = (\n                self.config.metadata_ttl_hours\n                if metadata_only\n                else self.config.message_ttl_hours\n            )\n\n            if self._is_fresh(data.get(\"cached_at\"), ttl_hours):\n                return data.get(\"message\")\n\n            # 만료된 캐시 삭제\n            cache_file.unlink(missing_ok=True)\n            return None\n        except (json.JSONDecodeError, KeyError):\n            cache_file.unlink(missing_ok=True)\n            return None\n\n    def set_message(\n        self,\n        account: str,\n        message_id: str,\n        message: dict,\n    ) -> None:\n        \"\"\"메시지 캐시.\n\n        Args:\n            account: 계정 이름\n            message_id: 메시지 ID\n            message: 메시지 데이터\n        \"\"\"\n        with self._lock:\n            account_dir = self.cache_dir / account / \"messages\"\n   
         account_dir.mkdir(parents=True, exist_ok=True)\n\n            cache_file = self._message_path(account, message_id)\n            cache_data = {\n                \"cached_at\": datetime.now().isoformat(),\n                \"message\": message,\n            }\n\n            with open(cache_file, \"w\") as f:\n                json.dump(cache_data, f, ensure_ascii=False)\n\n            # 캐시 크기 정리\n            self._cleanup_if_needed(account)\n\n    # =========================================================================\n    # List Cache\n    # =========================================================================\n\n    def get_list(\n        self,\n        account: str,\n        query: str,\n        label_ids: Optional[list[str]] = None,\n    ) -> Optional[list[dict]]:\n        \"\"\"캐시된 목록 조회.\n\n        Args:\n            account: 계정 이름\n            query: 검색 쿼리\n            label_ids: 라벨 필터\n\n        Returns:\n            캐시된 메시지 ID 목록 또는 None\n        \"\"\"\n        cache_key = self._list_cache_key(query, label_ids)\n        cache_file = self._list_path(account, cache_key)\n\n        if not cache_file.exists():\n            return None\n\n        try:\n            with open(cache_file) as f:\n                data = json.load(f)\n\n            ttl_minutes = self.config.list_ttl_minutes\n            if self._is_fresh(data.get(\"cached_at\"), ttl_minutes / 60):\n                return data.get(\"messages\")\n\n            cache_file.unlink(missing_ok=True)\n            return None\n        except (json.JSONDecodeError, KeyError):\n            cache_file.unlink(missing_ok=True)\n            return None\n\n    def set_list(\n        self,\n        account: str,\n        query: str,\n        messages: list[dict],\n        label_ids: Optional[list[str]] = None,\n    ) -> None:\n        \"\"\"목록 캐시.\n\n        Args:\n            account: 계정 이름\n            query: 검색 쿼리\n            messages: 메시지 목록\n            label_ids: 라벨 필터\n        \"\"\"\n        
with self._lock:\n            list_dir = self.cache_dir / account / \"lists\"\n            list_dir.mkdir(parents=True, exist_ok=True)\n\n            cache_key = self._list_cache_key(query, label_ids)\n            cache_file = self._list_path(account, cache_key)\n\n            cache_data = {\n                \"cached_at\": datetime.now().isoformat(),\n                \"query\": query,\n                \"label_ids\": label_ids,\n                \"messages\": messages,\n            }\n\n            with open(cache_file, \"w\") as f:\n                json.dump(cache_data, f, ensure_ascii=False)\n\n    # =========================================================================\n    # Labels Cache\n    # =========================================================================\n\n    def get_labels(self, account: str) -> Optional[list[dict]]:\n        \"\"\"캐시된 라벨 목록 조회.\n\n        Args:\n            account: 계정 이름\n\n        Returns:\n            캐시된 라벨 목록 또는 None\n        \"\"\"\n        cache_file = self.cache_dir / account / \"labels.json\"\n\n        if not cache_file.exists():\n            return None\n\n        try:\n            with open(cache_file) as f:\n                data = json.load(f)\n\n            if self._is_fresh(data.get(\"cached_at\"), self.config.labels_ttl_hours):\n                return data.get(\"labels\")\n\n            cache_file.unlink(missing_ok=True)\n            return None\n        except (json.JSONDecodeError, KeyError):\n            cache_file.unlink(missing_ok=True)\n            return None\n\n    def set_labels(self, account: str, labels: list[dict]) -> None:\n        \"\"\"라벨 캐시.\n\n        Args:\n            account: 계정 이름\n            labels: 라벨 목록\n        \"\"\"\n        with self._lock:\n            account_dir = self.cache_dir / account\n            account_dir.mkdir(parents=True, exist_ok=True)\n\n            cache_file = account_dir / \"labels.json\"\n            cache_data = {\n                \"cached_at\": 
datetime.now().isoformat(),\n                \"labels\": labels,\n            }\n\n            with open(cache_file, \"w\") as f:\n                json.dump(cache_data, f, ensure_ascii=False)\n\n    # =========================================================================\n    # Cache Invalidation\n    # =========================================================================\n\n    def invalidate_message(self, account: str, message_id: str) -> None:\n        \"\"\"메시지 캐시 무효화.\n\n        Args:\n            account: 계정 이름\n            message_id: 메시지 ID\n        \"\"\"\n        with self._lock:\n            cache_file = self._message_path(account, message_id)\n            cache_file.unlink(missing_ok=True)\n\n    def invalidate_lists(self, account: str) -> None:\n        \"\"\"목록 캐시 전체 무효화.\n\n        Args:\n            account: 계정 이름\n        \"\"\"\n        with self._lock:\n            list_dir = self.cache_dir / account / \"lists\"\n            if list_dir.exists():\n                shutil.rmtree(list_dir, ignore_errors=True)\n\n    def invalidate_labels(self, account: str) -> None:\n        \"\"\"라벨 캐시 무효화.\n\n        Args:\n            account: 계정 이름\n        \"\"\"\n        with self._lock:\n            cache_file = self.cache_dir / account / \"labels.json\"\n            cache_file.unlink(missing_ok=True)\n\n    def invalidate_account(self, account: str) -> None:\n        \"\"\"계정의 모든 캐시 무효화.\n\n        Args:\n            account: 계정 이름\n        \"\"\"\n        with self._lock:\n            account_dir = self.cache_dir / account\n            if account_dir.exists():\n                shutil.rmtree(account_dir, ignore_errors=True)\n\n    def invalidate_all(self) -> None:\n        \"\"\"전체 캐시 무효화.\"\"\"\n        with self._lock:\n            if self.cache_dir.exists():\n                shutil.rmtree(self.cache_dir, ignore_errors=True)\n            self.cache_dir.mkdir(parents=True, exist_ok=True)\n\n    # 
=========================================================================\n    # Cache Statistics\n    # =========================================================================\n\n    def get_stats(self, account: Optional[str] = None) -> dict:\n        \"\"\"캐시 통계 조회.\n\n        Args:\n            account: 특정 계정만 조회 시 지정\n\n        Returns:\n            캐시 통계 정보\n        \"\"\"\n        stats = {\n            \"cache_dir\": str(self.cache_dir),\n            \"accounts\": {},\n            \"total_size_bytes\": 0,\n            \"total_messages\": 0,\n        }\n\n        accounts = [account] if account else self._get_cached_accounts()\n\n        for acc in accounts:\n            acc_dir = self.cache_dir / acc\n            if not acc_dir.exists():\n                continue\n\n            msg_dir = acc_dir / \"messages\"\n            list_dir = acc_dir / \"lists\"\n\n            msg_count = len(list(msg_dir.glob(\"*.json\"))) if msg_dir.exists() else 0\n            list_count = len(list(list_dir.glob(\"*.json\"))) if list_dir.exists() else 0\n\n            # 계정 디렉토리 크기 계산\n            size = sum(\n                f.stat().st_size\n                for f in acc_dir.rglob(\"*\")\n                if f.is_file()\n            )\n\n            stats[\"accounts\"][acc] = {\n                \"messages_cached\": msg_count,\n                \"lists_cached\": list_count,\n                \"size_bytes\": size,\n                \"size_mb\": round(size / (1024 * 1024), 2),\n            }\n\n            stats[\"total_size_bytes\"] += size\n            stats[\"total_messages\"] += msg_count\n\n        stats[\"total_size_mb\"] = round(\n            stats[\"total_size_bytes\"] / (1024 * 1024), 2\n        )\n\n        return stats\n\n    # =========================================================================\n    # Internal Methods\n    # =========================================================================\n\n    def _message_path(self, account: str, message_id: str) -> Path:\n 
       \"\"\"메시지 캐시 파일 경로.\"\"\"\n        return self.cache_dir / account / \"messages\" / f\"{message_id}.json\"\n\n    def _list_path(self, account: str, cache_key: str) -> Path:\n        \"\"\"목록 캐시 파일 경로.\"\"\"\n        return self.cache_dir / account / \"lists\" / f\"{cache_key}.json\"\n\n    def _list_cache_key(\n        self,\n        query: str,\n        label_ids: Optional[list[str]] = None,\n    ) -> str:\n        \"\"\"목록 캐시 키 생성.\"\"\"\n        key_data = {\"query\": query, \"labels\": sorted(label_ids or [])}\n        key_str = json.dumps(key_data, sort_keys=True)\n        return hashlib.md5(key_str.encode()).hexdigest()[:16]\n\n    def _is_fresh(\n        self,\n        cached_at: Optional[str],\n        max_age_hours: float,\n    ) -> bool:\n        \"\"\"캐시 신선도 확인.\"\"\"\n        if not cached_at:\n            return False\n\n        try:\n            cache_time = datetime.fromisoformat(cached_at)\n            return datetime.now() - cache_time < timedelta(hours=max_age_hours)\n        except ValueError:\n            return False\n\n    def _get_cached_accounts(self) -> list[str]:\n        \"\"\"캐시된 계정 목록.\"\"\"\n        if not self.cache_dir.exists():\n            return []\n\n        return [\n            d.name\n            for d in self.cache_dir.iterdir()\n            if d.is_dir() and not d.name.startswith(\".\")\n        ]\n\n    def _cleanup_if_needed(self, account: str) -> None:\n        \"\"\"캐시 크기 제한 적용.\"\"\"\n        msg_dir = self.cache_dir / account / \"messages\"\n        if not msg_dir.exists():\n            return\n\n        cache_files = sorted(\n            msg_dir.glob(\"*.json\"),\n            key=lambda f: f.stat().st_mtime,\n        )\n\n        # 메시지 수 제한\n        if len(cache_files) > self.config.max_messages_per_account:\n            to_delete = len(cache_files) - self.config.max_messages_per_account\n            for f in cache_files[:to_delete]:\n                f.unlink(missing_ok=True)\n\n\n# 싱글톤 인스턴스\n_default_cache: 
Optional[EmailCache] = None\n\n\ndef get_cache(cache_dir: Optional[str] = None) -> EmailCache:\n    \"\"\"Return the default cache instance.\n\n    Args:\n        cache_dir: Cache directory\n\n    Returns:\n        The EmailCache singleton instance\n    \"\"\"\n    global _default_cache\n    if _default_cache is None:\n        _default_cache = EmailCache(cache_dir)\n    return _default_cache\n\n\nif __name__ == \"__main__\":\n    # Smoke test\n    import tempfile\n\n    # Run against a temporary directory\n    with tempfile.TemporaryDirectory() as tmpdir:\n        cache = EmailCache(cache_dir=tmpdir)\n\n        # Message caching\n        test_message = {\n            \"id\": \"msg123\",\n            \"subject\": \"Test Subject\",\n            \"from\": \"test@example.com\",\n            \"body\": \"Test body content\",\n        }\n\n        cache.set_message(\"work\", \"msg123\", test_message)\n        cached = cache.get_message(\"work\", \"msg123\")\n        print(f\"Cached message: {cached['subject']}\")\n\n        # List caching\n        test_list = [\n            {\"id\": \"msg1\", \"threadId\": \"thread1\"},\n            {\"id\": \"msg2\", \"threadId\": \"thread2\"},\n        ]\n\n        cache.set_list(\"work\", \"is:unread\", test_list)\n        cached_list = cache.get_list(\"work\", \"is:unread\")\n        print(f\"Cached list: {len(cached_list)} messages\")\n\n        # Label caching\n        test_labels = [\n            {\"id\": \"INBOX\", \"name\": \"INBOX\"},\n            {\"id\": \"SENT\", \"name\": \"SENT\"},\n        ]\n\n        cache.set_labels(\"work\", test_labels)\n        cached_labels = cache.get_labels(\"work\")\n        print(f\"Cached labels: {len(cached_labels)} labels\")\n\n        # Statistics\n        stats = cache.get_stats()\n        print(f\"\\nCache stats: {json.dumps(stats, indent=2)}\")\n\n        # Invalidation\n        cache.invalidate_message(\"work\", \"msg123\")\n        cached = cache.get_message(\"work\", \"msg123\")\n        print(f\"After invalidation: {cached}\")\n
  },
  {
    "path": "plugins/gmail/skills/gmail/scripts/core/quota_manager.py",
    "content": "\"\"\"Gmail API Quota Management.\n\nModule for managing Gmail API quota.\n\nQuota Units (Gmail API Reference):\n- messages.list: 5 units\n- messages.get: 5 units\n- messages.send: 100 units\n- messages.modify: 5 units\n- messages.batchModify: 50 units\n- threads.list: 5 units\n- threads.get: 10 units\n\nRate Limits:\n- Per-user: 250 quota units per second\n- Daily: 1,000,000,000 units (workspace), varies for consumer\n\nReference:\n    https://developers.google.com/workspace/gmail/api/reference/quota\n\"\"\"\n\nimport threading\nimport time\nfrom dataclasses import dataclass, field\nfrom datetime import datetime, timedelta\nfrom enum import IntEnum\nfrom typing import Optional\n\n\nclass QuotaUnit(IntEnum):\n    \"\"\"Quota units per API method.\"\"\"\n\n    # Messages\n    MESSAGES_LIST = 5\n    MESSAGES_GET = 5\n    MESSAGES_SEND = 100\n    MESSAGES_MODIFY = 5\n    MESSAGES_BATCH_MODIFY = 50\n    MESSAGES_DELETE = 10\n    MESSAGES_TRASH = 5\n    MESSAGES_UNTRASH = 5\n\n    # Threads\n    THREADS_LIST = 5\n    THREADS_GET = 10\n    THREADS_MODIFY = 5\n    THREADS_TRASH = 5\n\n    # Labels\n    LABELS_LIST = 1\n    LABELS_GET = 1\n    LABELS_CREATE = 5\n    LABELS_UPDATE = 5\n    LABELS_DELETE = 5\n\n    # Drafts\n    DRAFTS_LIST = 5\n    DRAFTS_GET = 5\n    DRAFTS_CREATE = 10\n    DRAFTS_SEND = 100\n    DRAFTS_DELETE = 10\n\n    # Profile\n    PROFILE_GET = 5\n\n    # Attachments\n    ATTACHMENTS_GET = 5\n\n\n@dataclass\nclass QuotaUsage:\n    \"\"\"Per-user quota usage.\"\"\"\n\n    units_used: int = 0\n    last_reset: datetime = field(default_factory=datetime.now)\n    daily_units: int = 0\n    daily_reset: datetime = field(default_factory=datetime.now)\n\n\nclass QuotaManager:\n    \"\"\"Gmail API quota manager.\n\n    Tracks and enforces the per-user rate limit (250 units/second)\n    and the daily quota.\n\n    Usage:\n        quota = QuotaManager()\n\n        # Check before executing\n        if quota.can_execute(\"user@gmail.com\", QuotaUnit.MESSAGES_LIST):\n            # API call\n            result = 
api.list_messages()\n            quota.record_usage(\"user@gmail.com\", QuotaUnit.MESSAGES_LIST)\n\n        # Or wait for quota automatically\n        quota.wait_for_quota(\"user@gmail.com\", QuotaUnit.MESSAGES_GET)\n        result = api.get_message(id)\n        quota.record_usage(\"user@gmail.com\", QuotaUnit.MESSAGES_GET)\n    \"\"\"\n\n    # Gmail API limits\n    USER_RATE_LIMIT = 250  # units per second\n    DAILY_LIMIT = 1_000_000_000  # units per day (workspace)\n    CONSUMER_DAILY_LIMIT = 1_000_000  # Conservative estimate for consumer accounts\n\n    def __init__(\n        self,\n        rate_limit: int = USER_RATE_LIMIT,\n        daily_limit: Optional[int] = None,\n        is_workspace: bool = True,\n    ):\n        \"\"\"\n        Args:\n            rate_limit: Maximum quota units per second (default: 250)\n            daily_limit: Maximum daily quota units (auto-selected if None)\n            is_workspace: Whether this is a Workspace account\n        \"\"\"\n        self.rate_limit = rate_limit\n        self.daily_limit = (\n            daily_limit\n            or (self.DAILY_LIMIT if is_workspace else self.CONSUMER_DAILY_LIMIT)\n        )\n        self._usage: dict[str, QuotaUsage] = {}\n        self._lock = threading.Lock()\n\n    def can_execute(self, user: str, units: int) -> bool:\n        \"\"\"Check whether an API call can run now.\n\n        Args:\n            user: User identifier (email or account name)\n            units: Quota units required\n\n        Returns:\n            True if the call can proceed\n        \"\"\"\n        with self._lock:\n            self._reset_if_needed(user)\n            usage = self._get_or_create_usage(user)\n            return usage.units_used + units <= self.rate_limit\n\n    def record_usage(self, user: str, units: int) -> None:\n        \"\"\"Record quota usage.\n\n        Args:\n            user: User identifier\n            units: Quota units consumed\n        \"\"\"\n        with self._lock:\n            self._reset_if_needed(user)\n            usage = self._get_or_create_usage(user)\n            usage.units_used += units\n            usage.daily_units += units\n\n    def wait_for_quota(\n      
  self,\n        user: str,\n        units: int,\n        timeout: float = 30.0,\n    ) -> bool:\n        \"\"\"Wait until enough quota is available.\n\n        Args:\n            user: User identifier\n            units: Quota units required\n            timeout: Maximum wait time in seconds\n\n        Returns:\n            True once quota was acquired\n\n        Raises:\n            TimeoutError: If the timeout is exceeded\n        \"\"\"\n        start = time.time()\n\n        while not self.can_execute(user, units):\n            if time.time() - start > timeout:\n                raise TimeoutError(\n                    f\"Timed out waiting for quota ({timeout}s). \"\n                    f\"User: {user}, units required: {units}\"\n                )\n            time.sleep(0.1)\n            with self._lock:\n                self._reset_if_needed(user)\n\n        return True\n\n    def get_usage(self, user: str) -> dict:\n        \"\"\"Report quota usage for a user.\n\n        Args:\n            user: User identifier\n\n        Returns:\n            Current usage information\n        \"\"\"\n        with self._lock:\n            self._reset_if_needed(user)\n            usage = self._get_or_create_usage(user)\n            return {\n                \"user\": user,\n                \"units_used\": usage.units_used,\n                \"rate_limit\": self.rate_limit,\n                \"rate_available\": self.rate_limit - usage.units_used,\n                \"daily_units\": usage.daily_units,\n                \"daily_limit\": self.daily_limit,\n                \"daily_available\": self.daily_limit - usage.daily_units,\n            }\n\n    def get_remaining_rate(self, user: str) -> int:\n        \"\"\"Rate limit remaining in the current second.\n\n        Args:\n            user: User identifier\n\n        Returns:\n            Number of quota units remaining\n        \"\"\"\n        with self._lock:\n            self._reset_if_needed(user)\n            usage = self._get_or_create_usage(user)\n            return max(0, self.rate_limit - usage.units_used)\n\n    def is_daily_limit_reached(self, user: str) -> bool:\n        \"\"\"Whether the daily quota has been reached.\n\n        Args:\n            user: 
User identifier\n\n        Returns:\n            True if the daily limit has been reached\n        \"\"\"\n        with self._lock:\n            self._reset_if_needed(user)  # roll the daily counter over after midnight\n            usage = self._get_or_create_usage(user)\n            return usage.daily_units >= self.daily_limit\n\n    def reset_user(self, user: str) -> None:\n        \"\"\"Reset a user's quota (for tests).\n\n        Args:\n            user: User identifier\n        \"\"\"\n        with self._lock:\n            if user in self._usage:\n                del self._usage[user]\n\n    def _get_or_create_usage(self, user: str) -> QuotaUsage:\n        \"\"\"Return the user's usage object, creating it if missing.\"\"\"\n        if user not in self._usage:\n            self._usage[user] = QuotaUsage()\n        return self._usage[user]\n\n    def _reset_if_needed(self, user: str) -> None:\n        \"\"\"Apply per-second and daily resets when due.\"\"\"\n        if user not in self._usage:\n            return\n\n        usage = self._usage[user]\n        now = datetime.now()\n\n        # Per-second reset\n        if now - usage.last_reset > timedelta(seconds=1):\n            usage.units_used = 0\n            usage.last_reset = now\n\n        # Daily reset (at midnight)\n        if now.date() > usage.daily_reset.date():\n            usage.daily_units = 0\n            usage.daily_reset = now\n\n\n# Singleton instance\n_default_manager: Optional[QuotaManager] = None\n\n\ndef get_quota_manager(\n    rate_limit: int = QuotaManager.USER_RATE_LIMIT,\n    is_workspace: bool = True,\n) -> QuotaManager:\n    \"\"\"Return the default QuotaManager instance.\n\n    Args:\n        rate_limit: Maximum quota units per second\n        is_workspace: Whether this is a Workspace account\n\n    Returns:\n        The QuotaManager singleton instance\n    \"\"\"\n    global _default_manager\n    if _default_manager is None:\n        _default_manager = QuotaManager(\n            rate_limit=rate_limit,\n            is_workspace=is_workspace,\n        )\n    return _default_manager\n\n\nif __name__ == \"__main__\":\n    # Smoke test\n    manager = QuotaManager()\n\n    user = \"test@gmail.com\"\n    print(f\"Initial usage: {manager.get_usage(user)}\")\n\n    # Record usage\n    
manager.record_usage(user, QuotaUnit.MESSAGES_LIST)\n    print(f\"After list: {manager.get_usage(user)}\")\n\n    manager.record_usage(user, QuotaUnit.MESSAGES_GET)\n    print(f\"After get: {manager.get_usage(user)}\")\n\n    # Can we still execute?\n    print(f\"Can execute 250? {manager.can_execute(user, 250)}\")\n    print(f\"Remaining rate: {manager.get_remaining_rate(user)}\")\n\n    # Wait test\n    print(\"Waiting for quota...\")\n    time.sleep(1.1)  # resets after 1 second\n    print(f\"After 1s: {manager.get_usage(user)}\")\n
  },
  {
    "path": "plugins/gmail/skills/gmail/scripts/core/retry_handler.py",
    "content": "\"\"\"Exponential Backoff Retry Handler.\n\nExponential backoff retry logic for handling Gmail API errors.\n\nRetryable Errors:\n- 429: Rate Limit Exceeded (Too Many Requests)\n- 500: Internal Server Error\n- 502: Bad Gateway\n- 503: Service Unavailable\n- 504: Gateway Timeout\n\nNon-retryable Errors:\n- 400: Bad Request\n- 401: Unauthorized\n- 403: Forbidden\n- 404: Not Found\n\nReference:\n    https://developers.google.com/workspace/gmail/api/guides/handle-errors\n\"\"\"\n\nimport logging\nimport random\nimport time\nfrom dataclasses import dataclass\nfrom functools import wraps\nfrom typing import Any, Callable, Optional, TypeVar\n\nfrom googleapiclient.errors import HttpError\n\nlogger = logging.getLogger(__name__)\n\nT = TypeVar(\"T\")\n\n# HTTP status codes that are safe to retry\nRETRYABLE_STATUS_CODES = {429, 500, 502, 503, 504}\n\n\n@dataclass\nclass RetryConfig:\n    \"\"\"Retry settings.\"\"\"\n\n    max_retries: int = 5\n    base_delay: float = 1.0\n    max_delay: float = 60.0\n    exponential_base: float = 2.0\n    jitter: bool = True\n\n\ndef calculate_delay(\n    attempt: int,\n    base_delay: float = 1.0,\n    max_delay: float = 60.0,\n    exponential_base: float = 2.0,\n    jitter: bool = True,\n) -> float:\n    \"\"\"Compute the exponential backoff delay.\n\n    Args:\n        attempt: Current attempt number (starting at 0)\n        base_delay: Base delay in seconds\n        max_delay: Maximum delay in seconds\n        exponential_base: Exponential multiplier\n        jitter: Whether to add random jitter\n\n    Returns:\n        Computed delay in seconds\n    \"\"\"\n    delay = min(base_delay * (exponential_base**attempt), max_delay)\n\n    if jitter:\n        # Scale by a random factor in the 0.5-1.5 range\n        delay *= 0.5 + random.random()\n\n    return delay\n\n\ndef is_retryable_error(error: Exception) -> bool:\n    \"\"\"Check whether an error is retryable.\n\n    Args:\n        error: The raised exception\n\n    Returns:\n        True if the error is retryable\n    \"\"\"\n    if isinstance(error, HttpError):\n        return error.resp.status in RETRYABLE_STATUS_CODES\n    return False\n\n\ndef exponential_backoff(\n    max_retries: int = 5,\n    
base_delay: float = 1.0,\n    max_delay: float = 60.0,\n    exponential_base: float = 2.0,\n    jitter: bool = True,\n    on_retry: Optional[Callable[[int, Exception, float], None]] = None,\n) -> Callable:\n    \"\"\"Exponential backoff decorator.\n\n    Automatically handles rate limiting and transient errors\n    when calling the Gmail API.\n\n    Args:\n        max_retries: Maximum number of retries\n        base_delay: Base delay in seconds\n        max_delay: Maximum delay in seconds\n        exponential_base: Exponential multiplier\n        jitter: Whether to add random jitter\n        on_retry: Callback invoked on each retry (attempt, error, delay)\n\n    Returns:\n        The decorator\n\n    Usage:\n        @exponential_backoff(max_retries=5)\n        def get_message(service, message_id):\n            return service.users().messages().get(\n                userId='me', id=message_id\n            ).execute()\n\n        # With a retry callback\n        def log_retry(attempt, error, delay):\n            print(f\"Retry {attempt}: {error}, waiting {delay:.1f}s\")\n\n        @exponential_backoff(max_retries=3, on_retry=log_retry)\n        def list_messages(service, query):\n            ...\n    \"\"\"\n\n    def decorator(func: Callable[..., T]) -> Callable[..., T]:\n        @wraps(func)\n        def wrapper(*args: Any, **kwargs: Any) -> T:\n            last_exception: Optional[Exception] = None\n\n            for attempt in range(max_retries + 1):\n                try:\n                    return func(*args, **kwargs)\n                except HttpError as e:\n                    last_exception = e\n\n                    if not is_retryable_error(e):\n                        # Non-retryable errors are raised immediately\n                        raise\n\n                    if attempt == max_retries:\n                        # Maximum retries reached\n                        logger.error(\n                            f\"Reached maximum retries ({max_retries}). 
\"\n                            f\"Function: {func.__name__}, error: {e}\"\n                        )\n                        raise\n\n                    delay = calculate_delay(\n                        attempt,\n                        base_delay,\n                        max_delay,\n                        exponential_base,\n                        jitter,\n                    )\n\n                    if on_retry:\n                        on_retry(attempt, e, delay)\n\n                    logger.warning(\n                        f\"Retry {attempt + 1}/{max_retries}: \"\n                        f\"HTTP {e.resp.status}, waiting {delay:.1f}s\"\n                    )\n                    time.sleep(delay)\n                except Exception:\n                    # Exceptions other than HttpError propagate unchanged\n                    raise\n\n            # Should be unreachable\n            if last_exception:\n                raise last_exception\n            raise RuntimeError(\"Unexpected state in retry logic\")\n\n        return wrapper\n\n    return decorator\n\n\nclass RetryableOperation:\n    \"\"\"Context manager for retryable operations.\n\n    Useful when you want explicit retry logic instead of the decorator.\n\n    Usage:\n        with RetryableOperation(max_retries=5) as op:\n            while op.should_retry():\n                try:\n                    result = service.users().messages().get(\n                        userId='me', id=message_id\n                    ).execute()\n                    op.success()\n                    break\n                except HttpError as e:\n                    op.handle_error(e)\n\n        # Or use execute\n        def fetch_message():\n            return service.users().messages().get(...).execute()\n\n        result = RetryableOperation(max_retries=5).execute(fetch_message)\n    \"\"\"\n\n    def __init__(\n        self,\n        max_retries: int = 5,\n        base_delay: float = 1.0,\n        max_delay: float = 60.0,\n        exponential_base: float = 2.0,\n        jitter: bool = True,\n    ):\n        
self.config = RetryConfig(\n            max_retries=max_retries,\n            base_delay=base_delay,\n            max_delay=max_delay,\n            exponential_base=exponential_base,\n            jitter=jitter,\n        )\n        self.attempt = 0\n        self.succeeded = False\n        self.last_error: Optional[Exception] = None\n\n    def __enter__(self) -> \"RetryableOperation\":\n        self.attempt = 0\n        self.succeeded = False\n        self.last_error = None\n        return self\n\n    def __exit__(self, exc_type, exc_val, exc_tb):\n        return False\n\n    def should_retry(self) -> bool:\n        \"\"\"Check whether another attempt should be made.\"\"\"\n        return not self.succeeded and self.attempt <= self.config.max_retries\n\n    def success(self) -> None:\n        \"\"\"Mark the operation as successful.\"\"\"\n        self.succeeded = True\n\n    def handle_error(self, error: Exception) -> None:\n        \"\"\"Handle an error and back off before the next attempt.\n\n        Args:\n            error: The raised exception\n\n        Raises:\n            Exception: If the error is not retryable or retries are exhausted\n        \"\"\"\n        self.last_error = error\n\n        if not is_retryable_error(error):\n            raise error\n\n        if self.attempt >= self.config.max_retries:\n            raise error\n\n        delay = calculate_delay(\n            self.attempt,\n            self.config.base_delay,\n            self.config.max_delay,\n            self.config.exponential_base,\n            self.config.jitter,\n        )\n\n        logger.warning(\n            f\"Retry {self.attempt + 1}/{self.config.max_retries}: \"\n            f\"{type(error).__name__}, waiting {delay:.1f}s\"\n        )\n\n        time.sleep(delay)\n        self.attempt += 1\n\n    def execute(self, func: Callable[..., T], *args, **kwargs) -> T:\n        \"\"\"Run a function with retry logic.\n\n        Args:\n            func: Function to run\n            *args: Positional arguments for the function\n            **kwargs: Keyword arguments for the function\n\n        Returns:\n            The function's return value\n\n        Raises:\n            Exception: If the call still fails after the final retry\n        \"\"\"\n        with 
self as op:\n            while op.should_retry():\n                try:\n                    result = func(*args, **kwargs)\n                    op.success()\n                    return result\n                except Exception as e:\n                    op.handle_error(e)\n\n        # Should be unreachable\n        if self.last_error:\n            raise self.last_error\n        raise RuntimeError(\"Unexpected state in retry operation\")\n\n\ndef retry_api_call(\n    func: Callable[..., T],\n    *args,\n    max_retries: int = 5,\n    **kwargs,\n) -> T:\n    \"\"\"Simple retry helper.\n\n    Usable without a decorator or context manager.\n\n    Args:\n        func: Function to run\n        *args: Positional arguments for the function\n        max_retries: Maximum number of retries\n        **kwargs: Keyword arguments for the function\n\n    Returns:\n        The function's return value\n\n    Usage:\n        result = retry_api_call(\n            service.users().messages().get,\n            userId='me',\n            id=message_id,\n            max_retries=3\n        )\n    \"\"\"\n    return RetryableOperation(max_retries=max_retries).execute(\n        func, *args, **kwargs\n    )\n\n\nif __name__ == \"__main__\":\n    # Smoke test\n\n    # Success case\n    @exponential_backoff(max_retries=3)\n    def always_succeeds():\n        return \"success\"\n\n    print(f\"Success test: {always_succeeds()}\")\n\n    # Simulate success after retries\n    call_count = 0\n\n    @exponential_backoff(max_retries=3, base_delay=0.1)\n    def succeed_on_third_try():\n        global call_count\n        call_count += 1\n        if call_count < 3:\n            # Simulate a transient error\n            from unittest.mock import MagicMock\n\n            error = HttpError(MagicMock(status=429), b\"Rate limited\")\n            raise error\n        return f\"success on try {call_count}\"\n\n    try:\n        call_count = 0\n        result = succeed_on_third_try()\n        print(f\"Retry test: {result}\")\n    except HttpError as e:\n        print(f\"Retry test failed: {e}\")\n\n    # RetryableOperation test\n    print(\"\\nRetryableOperation test:\")\n    
op = RetryableOperation(max_retries=3, base_delay=0.1)\n    result = op.execute(lambda: \"direct execute\")\n    print(f\"Direct execute: {result}\")\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/scripts/gmail_client.py",
    "content": "\"\"\"Gmail API client.\n\nClient for reading and sending Gmail across multiple Google accounts.\nUses stored refresh tokens so API calls work without re-authenticating each time.\n\nFeatures:\n    - Multi-account support (work, personal, etc.)\n    - Rate Limiting & Quota Management (P0)\n    - Exponential Backoff for Error Handling (P0)\n    - Batch Processing for Bulk Operations (P1)\n    - Local Caching for API Optimization (P1)\n\nEnvironment Variables:\n    GMAIL_SKILL_PATH: Skill root path (default: two directories above this file)\n    GMAIL_TIMEOUT: API request timeout in seconds (default: 30)\n    GMAIL_CACHE_DIR: Cache directory (default: .cache/gmail)\n    GMAIL_ENABLE_CACHE: Whether caching is enabled (default: true)\n    GMAIL_ENABLE_QUOTA: Whether quota management is enabled (default: true)\n\"\"\"\n\nimport base64\nimport json\nimport logging\nimport mimetypes\nimport os\nfrom datetime import datetime\nfrom email import encoders\nfrom email.mime.audio import MIMEAudio\nfrom email.mime.base import MIMEBase\nfrom email.mime.image import MIMEImage\nfrom email.mime.multipart import MIMEMultipart\nfrom email.mime.text import MIMEText\nfrom pathlib import Path\nfrom typing import Optional\n\nimport google.auth\nfrom google.auth.transport.requests import Request\nfrom google.oauth2.credentials import Credentials\nfrom googleapiclient.discovery import build\n\n# Core modules for enhanced functionality\ntry:\n    from .core import (\n        QuotaManager,\n        QuotaUnit,\n        exponential_backoff,\n        RetryConfig,\n        EmailCache,\n        BatchProcessor,\n    )\nexcept ImportError:\n    # Fallback for direct script execution\n    from core import (\n        QuotaManager,\n        QuotaUnit,\n        exponential_backoff,\n        RetryConfig,\n        EmailCache,\n        BatchProcessor,\n    )\n\nlogger = logging.getLogger(__name__)\n\nDEFAULT_TIMEOUT = int(os.environ.get(\"GMAIL_TIMEOUT\", \"30\"))\nENABLE_CACHE = os.environ.get(\"GMAIL_ENABLE_CACHE\", \"true\").lower() == \"true\"\nENABLE_QUOTA = os.environ.get(\"GMAIL_ENABLE_QUOTA\", \"true\").lower() == \"true\"\n\n\nclass GmailClient:\n    \"\"\"Per-account 
Gmail client.\n\n    Enhanced with:\n        - Rate limiting & quota management\n        - Exponential backoff for error handling\n        - Local caching for API optimization\n        - Batch processing support\n    \"\"\"\n\n    SCOPES = [\n        \"https://www.googleapis.com/auth/gmail.modify\",  # read/modify/delete\n        \"https://www.googleapis.com/auth/gmail.send\",    # send mail\n        \"https://www.googleapis.com/auth/gmail.labels\",  # label management\n    ]\n\n    def __init__(\n        self,\n        account_name: str,\n        base_path: Optional[Path] = None,\n        timeout: int = DEFAULT_TIMEOUT,\n        enable_cache: bool = ENABLE_CACHE,\n        enable_quota: bool = ENABLE_QUOTA,\n    ):\n        \"\"\"\n        Args:\n            account_name: Account identifier (e.g. 'work', 'personal')\n            base_path: Skill root path\n            timeout: API request timeout in seconds\n            enable_cache: Whether caching is enabled\n            enable_quota: Whether quota management is enabled\n        \"\"\"\n        self.account_name = account_name\n        self.timeout = timeout\n        self.enable_cache = enable_cache\n        self.enable_quota = enable_quota\n\n        if base_path:\n            self.base_path = base_path\n        elif os.environ.get(\"GMAIL_SKILL_PATH\"):\n            self.base_path = Path(os.environ[\"GMAIL_SKILL_PATH\"])\n        else:\n            self.base_path = Path(__file__).parent.parent\n\n        self.creds = self._load_credentials()\n        self._service = None\n\n        # Initialize core components\n        self._cache: Optional[EmailCache] = None\n        self._quota_manager: Optional[QuotaManager] = None\n        self._batch_processor: Optional[BatchProcessor] = None\n\n        if enable_cache:\n            cache_dir = os.environ.get(\"GMAIL_CACHE_DIR\") or str(\n                self.base_path / \".cache\" / \"gmail\"\n            )\n            self._cache = EmailCache(cache_dir=cache_dir)\n\n        if enable_quota:\n            self._quota_manager = QuotaManager()\n\n    @property\n    def 
service(self):\n        \"\"\"Lazy-load Gmail service.\"\"\"\n        if self._service is None:\n            self._service = build(\"gmail\", \"v1\", credentials=self.creds)\n        return self._service\n\n    @property\n    def cache(self) -> Optional[EmailCache]:\n        \"\"\"Get cache manager instance.\"\"\"\n        return self._cache\n\n    @property\n    def quota_manager(self) -> Optional[QuotaManager]:\n        \"\"\"Get quota manager instance.\"\"\"\n        return self._quota_manager\n\n    @property\n    def batch_processor(self) -> BatchProcessor:\n        \"\"\"Get batch processor instance (lazy-loaded).\"\"\"\n        if self._batch_processor is None:\n            self._batch_processor = BatchProcessor(\n                service=self.service,\n                quota_manager=self._quota_manager,\n                user=self.account_name,\n            )\n        return self._batch_processor\n\n    def _record_quota(self, units: int) -> None:\n        \"\"\"Record quota usage if quota management is enabled.\"\"\"\n        if self._quota_manager:\n            self._quota_manager.record_usage(self.account_name, units)\n\n    def _wait_for_quota(self, units: int) -> None:\n        \"\"\"Wait for quota availability if quota management is enabled.\"\"\"\n        if self._quota_manager:\n            self._quota_manager.wait_for_quota(self.account_name, units)\n\n    def _load_credentials(self):\n        \"\"\"Load and refresh credentials from the stored refresh token.\"\"\"\n        token_path = self.base_path / f\"accounts/{self.account_name}.json\"\n\n        if not token_path.exists():\n            raise FileNotFoundError(\n                f\"No token found for account '{self.account_name}'. 
\"\n                f\"Run setup_auth.py --account {self.account_name} first\"\n            )\n\n        with open(token_path) as f:\n            token_data = json.load(f)\n\n        if \"client_id\" in token_data and \"type\" not in token_data:\n            creds = Credentials(\n                token=token_data.get(\"token\"),\n                refresh_token=token_data.get(\"refresh_token\"),\n                token_uri=\"https://oauth2.googleapis.com/token\",\n                client_id=token_data.get(\"client_id\"),\n                client_secret=token_data.get(\"client_secret\"),\n                scopes=self.SCOPES,\n            )\n            quota_project = token_data.get(\"quota_project_id\", \"teamattention\")\n            creds = creds.with_quota_project(quota_project)\n        else:\n            creds = Credentials.from_authorized_user_info(token_data, self.SCOPES)\n\n        if creds.expired and creds.refresh_token:\n            creds.refresh(Request())\n            with open(token_path, \"w\") as f:\n                json.dump(json.loads(creds.to_json()), f, indent=2)\n\n        return creds\n\n    # =========================================================================\n    # Messages\n    # =========================================================================\n\n    def list_messages(\n        self,\n        query: str = \"\",\n        max_results: int = 20,\n        label_ids: Optional[list[str]] = None,\n        include_spam_trash: bool = False,\n        use_cache: bool = True,\n    ) -> list[dict]:\n        \"\"\"List messages.\n\n        Args:\n            query: Gmail search query (e.g. \"from:user@example.com\", \"is:unread\")\n            max_results: Maximum number of results\n            label_ids: Label IDs to filter by\n            include_spam_trash: Whether to include spam/trash\n            use_cache: Whether to use the cache (default: True)\n\n        Returns:\n            Message list (each entry includes id and threadId)\n        \"\"\"\n        # Check cache first\n        if use_cache and self._cache:\n            cached = 
self._cache.get_list(self.account_name, query, label_ids)\n            if cached is not None:\n                logger.debug(f\"Cache hit for list query: {query}\")\n                return cached[:max_results]\n\n        messages = []\n        page_token = None\n\n        @exponential_backoff(max_retries=5)\n        def _list_page(**kwargs):\n            return self.service.users().messages().list(**kwargs).execute()\n\n        while len(messages) < max_results:\n            kwargs = {\n                \"userId\": \"me\",\n                \"maxResults\": min(max_results - len(messages), 100),\n                \"includeSpamTrash\": include_spam_trash,\n            }\n            if query:\n                kwargs[\"q\"] = query\n            if label_ids:\n                kwargs[\"labelIds\"] = label_ids\n            if page_token:\n                kwargs[\"pageToken\"] = page_token\n\n            # Wait for quota before API call\n            self._wait_for_quota(QuotaUnit.MESSAGES_LIST)\n\n            result = _list_page(**kwargs)\n\n            # Record quota usage\n            self._record_quota(QuotaUnit.MESSAGES_LIST)\n\n            for msg in result.get(\"messages\", []):\n                messages.append(msg)\n\n            page_token = result.get(\"nextPageToken\")\n            if not page_token:\n                break\n\n        # Cache the results\n        if use_cache and self._cache and messages:\n            self._cache.set_list(self.account_name, query, messages, label_ids)\n\n        return messages\n\n    def get_message(\n        self,\n        message_id: str,\n        format: str = \"full\",\n        use_cache: bool = True,\n    ) -> dict:\n        \"\"\"Fetch message details.\n\n        Args:\n            message_id: Message ID\n            format: Response format (minimal, full, raw, metadata)\n            use_cache: Whether to use the cache (default: True)\n\n        Returns:\n            Message details\n        \"\"\"\n        # Check cache first (only for full/metadata formats)\n        if 
use_cache and self._cache and format in (\"full\", \"metadata\"):\n            cached = self._cache.get_message(\n                self.account_name,\n                message_id,\n                metadata_only=(format == \"metadata\"),\n            )\n            if cached is not None:\n                logger.debug(f\"Cache hit for message: {message_id}\")\n                return cached\n\n        @exponential_backoff(max_retries=5)\n        def _get_message():\n            return (\n                self.service.users()\n                .messages()\n                .get(userId=\"me\", id=message_id, format=format)\n                .execute()\n            )\n\n        # Wait for quota before API call\n        self._wait_for_quota(QuotaUnit.MESSAGES_GET)\n\n        result = _get_message()\n\n        # Record quota usage\n        self._record_quota(QuotaUnit.MESSAGES_GET)\n\n        parsed = self._parse_message(result)\n\n        # Cache the result\n        if use_cache and self._cache and format in (\"full\", \"metadata\"):\n            self._cache.set_message(self.account_name, message_id, parsed)\n\n        return parsed\n\n    def _parse_message(self, msg: dict) -> dict:\n        \"\"\"Parse the API response into a readable format.\"\"\"\n        headers = {}\n        for header in msg.get(\"payload\", {}).get(\"headers\", []):\n            name = header[\"name\"].lower()\n            if name in (\"from\", \"to\", \"cc\", \"bcc\", \"subject\", \"date\", \"message-id\"):\n                headers[name] = header[\"value\"]\n\n        payload = msg.get(\"payload\", {})\n        body, attachments = self._extract_body_and_attachments(payload, msg[\"id\"])\n\n        return {\n            \"id\": msg[\"id\"],\n            \"thread_id\": msg[\"threadId\"],\n            \"label_ids\": msg.get(\"labelIds\", []),\n            \"snippet\": msg.get(\"snippet\", \"\"),\n            \"from\": headers.get(\"from\", \"\"),\n            \"to\": 
headers.get(\"to\", \"\"),\n            \"cc\": headers.get(\"cc\", \"\"),\n            \"subject\": headers.get(\"subject\", \"(제목 없음)\"),\n            \"date\": headers.get(\"date\", \"\"),\n            \"message_id\": headers.get(\"message-id\", \"\"),\n            \"body\": body,\n            \"attachments\": attachments,\n            \"size_estimate\": msg.get(\"sizeEstimate\", 0),\n            \"internal_date\": msg.get(\"internalDate\", \"\"),\n        }\n\n    def _extract_body_and_attachments(\n        self, payload: dict, message_id: str\n    ) -> tuple[str, list[dict]]:\n        \"\"\"메시지 본문과 첨부파일 추출.\"\"\"\n        body = \"\"\n        attachments = []\n\n        mime_type = payload.get(\"mimeType\", \"\")\n\n        if mime_type.startswith(\"multipart/\"):\n            for part in payload.get(\"parts\", []):\n                part_body, part_attachments = self._extract_body_and_attachments(\n                    part, message_id\n                )\n                # Keep the first non-empty text part: in multipart/alternative,\n                # text/plain precedes text/html, so this prefers plain text.\n                if part_body and not body:\n                    body = part_body\n                attachments.extend(part_attachments)\n        else:\n            if payload.get(\"filename\"):\n                attachments.append(\n                    {\n                        \"filename\": payload[\"filename\"],\n                        \"mime_type\": mime_type,\n                        \"size\": payload.get(\"body\", {}).get(\"size\", 0),\n                        \"attachment_id\": payload.get(\"body\", {}).get(\"attachmentId\"),\n                    }\n                )\n            elif mime_type in (\"text/plain\", \"text/html\"):\n                data = payload.get(\"body\", {}).get(\"data\", \"\")\n                if data:\n                    # The local body is always empty at a leaf part, so assign\n                    # directly; errors=\"replace\" guards against non-UTF-8 payloads\n                    body = base64.urlsafe_b64decode(data).decode(\n                        \"utf-8\", errors=\"replace\"\n                    )\n\n        return body, attachments\n\n    def get_attachment(self, message_id: str, attachment_id: str) -> 
bytes:\n        \"\"\"첨부파일 다운로드.\n\n        Args:\n            message_id: 메시지 ID\n            attachment_id: 첨부파일 ID\n\n        Returns:\n            첨부파일 바이너리 데이터\n        \"\"\"\n        result = (\n            self.service.users()\n            .messages()\n            .attachments()\n            .get(userId=\"me\", messageId=message_id, id=attachment_id)\n            .execute()\n        )\n        return base64.urlsafe_b64decode(result[\"data\"])\n\n    def send_message(\n        self,\n        to: str,\n        subject: str,\n        body: str,\n        cc: Optional[str] = None,\n        bcc: Optional[str] = None,\n        html: bool = False,\n        attachments: Optional[list[str]] = None,\n        reply_to_message_id: Optional[str] = None,\n        thread_id: Optional[str] = None,\n    ) -> dict:\n        \"\"\"메일 발송.\n\n        Args:\n            to: 수신자 (쉼표로 구분 가능)\n            subject: 제목\n            body: 본문\n            cc: 참조\n            bcc: 숨은 참조\n            html: HTML 형식 여부\n            attachments: 첨부파일 경로 목록\n            reply_to_message_id: 답장할 메시지 ID (In-Reply-To 헤더용)\n            thread_id: 스레드 ID (답장 시)\n\n        Returns:\n            발송된 메시지 정보\n        \"\"\"\n        if attachments:\n            message = MIMEMultipart()\n            message.attach(MIMEText(body, \"html\" if html else \"plain\", \"utf-8\"))\n            for filepath in attachments:\n                self._attach_file(message, filepath)\n        else:\n            message = MIMEText(body, \"html\" if html else \"plain\", \"utf-8\")\n\n        message[\"to\"] = to\n        message[\"subject\"] = subject\n        if cc:\n            message[\"cc\"] = cc\n        if bcc:\n            message[\"bcc\"] = bcc\n        if reply_to_message_id:\n            message[\"In-Reply-To\"] = reply_to_message_id\n            message[\"References\"] = reply_to_message_id\n\n        raw = base64.urlsafe_b64encode(message.as_bytes()).decode(\"utf-8\")\n\n        body_data = {\"raw\": raw}\n  
      if thread_id:\n            body_data[\"threadId\"] = thread_id\n\n        @exponential_backoff(max_retries=5)\n        def _send():\n            return (\n                self.service.users().messages().send(userId=\"me\", body=body_data).execute()\n            )\n\n        # Wait for quota before API call (send uses 100 units)\n        self._wait_for_quota(QuotaUnit.MESSAGES_SEND)\n\n        result = _send()\n\n        # Record quota usage\n        self._record_quota(QuotaUnit.MESSAGES_SEND)\n\n        # Invalidate list cache after sending\n        if self._cache:\n            self._cache.invalidate_lists(self.account_name)\n\n        return {\n            \"id\": result[\"id\"],\n            \"thread_id\": result[\"threadId\"],\n            \"label_ids\": result.get(\"labelIds\", []),\n            \"status\": \"sent\",\n        }\n\n    def _attach_file(self, message: MIMEMultipart, filepath: str) -> None:\n        \"\"\"파일을 메시지에 첨부.\"\"\"\n        path = Path(filepath)\n        content_type, encoding = mimetypes.guess_type(str(path))\n\n        if content_type is None:\n            content_type = \"application/octet-stream\"\n\n        main_type, sub_type = content_type.split(\"/\", 1)\n\n        with open(path, \"rb\") as f:\n            data = f.read()\n\n        if main_type == \"text\":\n            attachment = MIMEText(data.decode(\"utf-8\"), _subtype=sub_type)\n        elif main_type == \"image\":\n            attachment = MIMEImage(data, _subtype=sub_type)\n        elif main_type == \"audio\":\n            attachment = MIMEAudio(data, _subtype=sub_type)\n        else:\n            attachment = MIMEBase(main_type, sub_type)\n            attachment.set_payload(data)\n            encoders.encode_base64(attachment)\n\n        attachment.add_header(\n            \"Content-Disposition\", \"attachment\", filename=path.name\n        )\n        message.attach(attachment)\n\n    def modify_message(\n        self,\n        message_id: str,\n        
add_label_ids: Optional[list[str]] = None,\n        remove_label_ids: Optional[list[str]] = None,\n    ) -> dict:\n        \"\"\"메시지 라벨 수정.\n\n        Args:\n            message_id: 메시지 ID\n            add_label_ids: 추가할 라벨 ID\n            remove_label_ids: 제거할 라벨 ID\n\n        Returns:\n            수정된 메시지 정보\n        \"\"\"\n        body = {}\n        if add_label_ids:\n            body[\"addLabelIds\"] = add_label_ids\n        if remove_label_ids:\n            body[\"removeLabelIds\"] = remove_label_ids\n\n        @exponential_backoff(max_retries=5)\n        def _modify():\n            return (\n                self.service.users()\n                .messages()\n                .modify(userId=\"me\", id=message_id, body=body)\n                .execute()\n            )\n\n        # Wait for quota before API call\n        self._wait_for_quota(QuotaUnit.MESSAGES_MODIFY)\n\n        result = _modify()\n\n        # Record quota usage\n        self._record_quota(QuotaUnit.MESSAGES_MODIFY)\n\n        # Invalidate cache for this message\n        if self._cache:\n            self._cache.invalidate_message(self.account_name, message_id)\n\n        return {\n            \"id\": result[\"id\"],\n            \"thread_id\": result[\"threadId\"],\n            \"label_ids\": result.get(\"labelIds\", []),\n            \"status\": \"modified\",\n        }\n\n    def mark_as_read(self, message_id: str) -> dict:\n        \"\"\"읽음으로 표시.\"\"\"\n        return self.modify_message(message_id, remove_label_ids=[\"UNREAD\"])\n\n    def mark_as_unread(self, message_id: str) -> dict:\n        \"\"\"읽지 않음으로 표시.\"\"\"\n        return self.modify_message(message_id, add_label_ids=[\"UNREAD\"])\n\n    def star_message(self, message_id: str) -> dict:\n        \"\"\"별표 추가.\"\"\"\n        return self.modify_message(message_id, add_label_ids=[\"STARRED\"])\n\n    def unstar_message(self, message_id: str) -> dict:\n        \"\"\"별표 제거.\"\"\"\n        return self.modify_message(message_id, 
remove_label_ids=[\"STARRED\"])\n\n    def archive_message(self, message_id: str) -> dict:\n        \"\"\"보관처리 (INBOX 라벨 제거).\"\"\"\n        return self.modify_message(message_id, remove_label_ids=[\"INBOX\"])\n\n    def trash_message(self, message_id: str) -> dict:\n        \"\"\"휴지통으로 이동.\"\"\"\n        @exponential_backoff(max_retries=5)\n        def _trash():\n            return (\n                self.service.users()\n                .messages()\n                .trash(userId=\"me\", id=message_id)\n                .execute()\n            )\n\n        self._wait_for_quota(QuotaUnit.MESSAGES_TRASH)\n        result = _trash()\n        self._record_quota(QuotaUnit.MESSAGES_TRASH)\n\n        # Invalidate cache\n        if self._cache:\n            self._cache.invalidate_message(self.account_name, message_id)\n            self._cache.invalidate_lists(self.account_name)\n\n        return {\n            \"id\": result[\"id\"],\n            \"status\": \"trashed\",\n        }\n\n    def untrash_message(self, message_id: str) -> dict:\n        \"\"\"휴지통에서 복원.\"\"\"\n        @exponential_backoff(max_retries=5)\n        def _untrash():\n            return (\n                self.service.users()\n                .messages()\n                .untrash(userId=\"me\", id=message_id)\n                .execute()\n            )\n\n        self._wait_for_quota(QuotaUnit.MESSAGES_UNTRASH)\n        result = _untrash()\n        self._record_quota(QuotaUnit.MESSAGES_UNTRASH)\n\n        # Invalidate cache\n        if self._cache:\n            self._cache.invalidate_message(self.account_name, message_id)\n            self._cache.invalidate_lists(self.account_name)\n\n        return {\n            \"id\": result[\"id\"],\n            \"status\": \"untrashed\",\n        }\n\n    def delete_message(self, message_id: str) -> dict:\n        \"\"\"메시지 영구 삭제 (복구 불가).\"\"\"\n        @exponential_backoff(max_retries=5)\n        def _delete():\n            
self.service.users().messages().delete(userId=\"me\", id=message_id).execute()\n\n        self._wait_for_quota(QuotaUnit.MESSAGES_DELETE)\n        _delete()\n        self._record_quota(QuotaUnit.MESSAGES_DELETE)\n\n        # Invalidate cache\n        if self._cache:\n            self._cache.invalidate_message(self.account_name, message_id)\n            self._cache.invalidate_lists(self.account_name)\n\n        return {\n            \"id\": message_id,\n            \"status\": \"deleted\",\n        }\n\n    # =========================================================================\n    # Threads\n    # =========================================================================\n\n    def list_threads(\n        self,\n        query: str = \"\",\n        max_results: int = 20,\n        label_ids: Optional[list[str]] = None,\n    ) -> list[dict]:\n        \"\"\"스레드 목록 조회.\"\"\"\n        threads = []\n        page_token = None\n\n        while len(threads) < max_results:\n            kwargs = {\n                \"userId\": \"me\",\n                \"maxResults\": min(max_results - len(threads), 100),\n            }\n            if query:\n                kwargs[\"q\"] = query\n            if label_ids:\n                kwargs[\"labelIds\"] = label_ids\n            if page_token:\n                kwargs[\"pageToken\"] = page_token\n\n            result = self.service.users().threads().list(**kwargs).execute()\n\n            for thread in result.get(\"threads\", []):\n                threads.append(thread)\n\n            page_token = result.get(\"nextPageToken\")\n            if not page_token:\n                break\n\n        return threads\n\n    def get_thread(self, thread_id: str, format: str = \"full\") -> dict:\n        \"\"\"스레드 상세 조회.\"\"\"\n        result = (\n            self.service.users()\n            .threads()\n            .get(userId=\"me\", id=thread_id, format=format)\n            .execute()\n        )\n\n        messages = [self._parse_message(msg) for 
msg in result.get(\"messages\", [])]\n\n        return {\n            \"id\": result[\"id\"],\n            \"messages\": messages,\n            \"message_count\": len(messages),\n        }\n\n    def trash_thread(self, thread_id: str) -> dict:\n        \"\"\"스레드 휴지통으로 이동.\"\"\"\n        result = (\n            self.service.users()\n            .threads()\n            .trash(userId=\"me\", id=thread_id)\n            .execute()\n        )\n        return {\n            \"id\": result[\"id\"],\n            \"status\": \"trashed\",\n        }\n\n    # =========================================================================\n    # Labels\n    # =========================================================================\n\n    def list_labels(self, use_cache: bool = True) -> list[dict]:\n        \"\"\"라벨 목록 조회.\n\n        Args:\n            use_cache: 캐시 사용 여부 (기본값: True)\n\n        Returns:\n            라벨 목록\n        \"\"\"\n        # Check cache first\n        if use_cache and self._cache:\n            cached = self._cache.get_labels(self.account_name)\n            if cached is not None:\n                logger.debug(\"Cache hit for labels\")\n                return cached\n\n        @exponential_backoff(max_retries=5)\n        def _list_labels():\n            return self.service.users().labels().list(userId=\"me\").execute()\n\n        self._wait_for_quota(QuotaUnit.LABELS_LIST)\n        result = _list_labels()\n        self._record_quota(QuotaUnit.LABELS_LIST)\n\n        labels = []\n        for label in result.get(\"labels\", []):\n            labels.append(\n                {\n                    \"id\": label[\"id\"],\n                    \"name\": label[\"name\"],\n                    \"type\": label.get(\"type\", \"user\"),\n                    \"message_list_visibility\": label.get(\"messageListVisibility\"),\n                    \"label_list_visibility\": label.get(\"labelListVisibility\"),\n                }\n            )\n\n        # Cache the results\n     
   if use_cache and self._cache:\n            self._cache.set_labels(self.account_name, labels)\n\n        return labels\n\n    def get_label(self, label_id: str) -> dict:\n        \"\"\"라벨 상세 조회.\"\"\"\n        result = (\n            self.service.users().labels().get(userId=\"me\", id=label_id).execute()\n        )\n        return {\n            \"id\": result[\"id\"],\n            \"name\": result[\"name\"],\n            \"type\": result.get(\"type\", \"user\"),\n            \"messages_total\": result.get(\"messagesTotal\", 0),\n            \"messages_unread\": result.get(\"messagesUnread\", 0),\n            \"threads_total\": result.get(\"threadsTotal\", 0),\n            \"threads_unread\": result.get(\"threadsUnread\", 0),\n        }\n\n    def create_label(\n        self,\n        name: str,\n        message_list_visibility: str = \"show\",\n        label_list_visibility: str = \"labelShow\",\n    ) -> dict:\n        \"\"\"라벨 생성.\n\n        Args:\n            name: 라벨 이름\n            message_list_visibility: 메시지 목록에서 표시 여부 (show, hide)\n            label_list_visibility: 라벨 목록에서 표시 여부 (labelShow, labelHide)\n\n        Returns:\n            생성된 라벨 정보\n        \"\"\"\n        body = {\n            \"name\": name,\n            \"messageListVisibility\": message_list_visibility,\n            \"labelListVisibility\": label_list_visibility,\n        }\n\n        result = (\n            self.service.users().labels().create(userId=\"me\", body=body).execute()\n        )\n\n        return {\n            \"id\": result[\"id\"],\n            \"name\": result[\"name\"],\n            \"status\": \"created\",\n        }\n\n    def update_label(\n        self,\n        label_id: str,\n        name: Optional[str] = None,\n        message_list_visibility: Optional[str] = None,\n        label_list_visibility: Optional[str] = None,\n    ) -> dict:\n        \"\"\"라벨 수정.\"\"\"\n        result = (\n            self.service.users().labels().get(userId=\"me\", 
id=label_id).execute()\n        )\n\n        if name:\n            result[\"name\"] = name\n        if message_list_visibility:\n            result[\"messageListVisibility\"] = message_list_visibility\n        if label_list_visibility:\n            result[\"labelListVisibility\"] = label_list_visibility\n\n        updated = (\n            self.service.users()\n            .labels()\n            .update(userId=\"me\", id=label_id, body=result)\n            .execute()\n        )\n\n        return {\n            \"id\": updated[\"id\"],\n            \"name\": updated[\"name\"],\n            \"status\": \"updated\",\n        }\n\n    def delete_label(self, label_id: str) -> dict:\n        \"\"\"라벨 삭제.\"\"\"\n        self.service.users().labels().delete(userId=\"me\", id=label_id).execute()\n        return {\n            \"id\": label_id,\n            \"status\": \"deleted\",\n        }\n\n    # =========================================================================\n    # Drafts\n    # =========================================================================\n\n    def list_drafts(self, max_results: int = 20) -> list[dict]:\n        \"\"\"초안 목록 조회.\"\"\"\n        drafts = []\n        page_token = None\n\n        while len(drafts) < max_results:\n            kwargs = {\n                \"userId\": \"me\",\n                \"maxResults\": min(max_results - len(drafts), 100),\n            }\n            if page_token:\n                kwargs[\"pageToken\"] = page_token\n\n            result = self.service.users().drafts().list(**kwargs).execute()\n\n            for draft in result.get(\"drafts\", []):\n                drafts.append(draft)\n\n            page_token = result.get(\"nextPageToken\")\n            if not page_token:\n                break\n\n        return drafts\n\n    def get_draft(self, draft_id: str) -> dict:\n        \"\"\"초안 상세 조회.\"\"\"\n        result = (\n            self.service.users()\n            .drafts()\n            .get(userId=\"me\", 
id=draft_id, format=\"full\")\n            .execute()\n        )\n\n        return {\n            \"id\": result[\"id\"],\n            \"message\": self._parse_message(result[\"message\"]),\n        }\n\n    def create_draft(\n        self,\n        to: str,\n        subject: str,\n        body: str,\n        cc: Optional[str] = None,\n        bcc: Optional[str] = None,\n        html: bool = False,\n    ) -> dict:\n        \"\"\"초안 생성.\n\n        Args:\n            to: 수신자\n            subject: 제목\n            body: 본문\n            cc: 참조\n            bcc: 숨은 참조\n            html: HTML 형식 여부\n\n        Returns:\n            생성된 초안 정보\n        \"\"\"\n        message = MIMEText(body, \"html\" if html else \"plain\", \"utf-8\")\n        message[\"to\"] = to\n        message[\"subject\"] = subject\n        if cc:\n            message[\"cc\"] = cc\n        if bcc:\n            message[\"bcc\"] = bcc\n\n        raw = base64.urlsafe_b64encode(message.as_bytes()).decode(\"utf-8\")\n\n        result = (\n            self.service.users()\n            .drafts()\n            .create(userId=\"me\", body={\"message\": {\"raw\": raw}})\n            .execute()\n        )\n\n        return {\n            \"id\": result[\"id\"],\n            \"message_id\": result[\"message\"][\"id\"],\n            \"status\": \"created\",\n        }\n\n    def send_draft(self, draft_id: str) -> dict:\n        \"\"\"초안 발송.\"\"\"\n        result = (\n            self.service.users()\n            .drafts()\n            .send(userId=\"me\", body={\"id\": draft_id})\n            .execute()\n        )\n\n        return {\n            \"id\": result[\"id\"],\n            \"thread_id\": result[\"threadId\"],\n            \"label_ids\": result.get(\"labelIds\", []),\n            \"status\": \"sent\",\n        }\n\n    def delete_draft(self, draft_id: str) -> dict:\n        \"\"\"초안 삭제.\"\"\"\n        self.service.users().drafts().delete(userId=\"me\", id=draft_id).execute()\n        return {\n            
\"id\": draft_id,\n            \"status\": \"deleted\",\n        }\n\n    # =========================================================================\n    # Profile\n    # =========================================================================\n\n    def get_profile(self) -> dict:\n        \"\"\"계정 프로필 조회.\"\"\"\n        @exponential_backoff(max_retries=5)\n        def _get_profile():\n            return self.service.users().getProfile(userId=\"me\").execute()\n\n        self._wait_for_quota(QuotaUnit.PROFILE_GET)\n        result = _get_profile()\n        self._record_quota(QuotaUnit.PROFILE_GET)\n\n        return {\n            \"email\": result[\"emailAddress\"],\n            \"messages_total\": result.get(\"messagesTotal\", 0),\n            \"threads_total\": result.get(\"threadsTotal\", 0),\n            \"history_id\": result.get(\"historyId\", \"\"),\n        }\n\n    # =========================================================================\n    # Batch Operations (P1)\n    # =========================================================================\n\n    def batch_get_messages(\n        self,\n        message_ids: list[str],\n        format: str = \"metadata\",\n    ) -> dict:\n        \"\"\"메시지 일괄 조회.\n\n        Args:\n            message_ids: 조회할 메시지 ID 목록\n            format: 응답 형식 (minimal, full, raw, metadata)\n\n        Returns:\n            BatchResult 객체 (total, succeeded, failed, results, errors)\n        \"\"\"\n        return self.batch_processor.batch_get_messages(message_ids, format)\n\n    def batch_modify_labels(\n        self,\n        message_ids: list[str],\n        add_labels: Optional[list[str]] = None,\n        remove_labels: Optional[list[str]] = None,\n    ) -> dict:\n        \"\"\"라벨 일괄 수정.\n\n        Args:\n            message_ids: 수정할 메시지 ID 목록\n            add_labels: 추가할 라벨 ID\n            remove_labels: 제거할 라벨 ID\n\n        Returns:\n            BatchResult 객체\n        \"\"\"\n        result = 
self.batch_processor.batch_modify_labels(\n            message_ids, add_labels, remove_labels\n        )\n\n        # Invalidate cache for modified messages\n        if self._cache:\n            for msg_id in message_ids:\n                self._cache.invalidate_message(self.account_name, msg_id)\n            self._cache.invalidate_lists(self.account_name)\n\n        return result\n\n    def batch_trash_messages(self, message_ids: list[str]) -> dict:\n        \"\"\"메시지 일괄 휴지통 이동.\n\n        Args:\n            message_ids: 휴지통으로 이동할 메시지 ID 목록\n\n        Returns:\n            BatchResult 객체\n        \"\"\"\n        result = self.batch_processor.batch_trash_messages(message_ids)\n\n        # Invalidate cache\n        if self._cache:\n            for msg_id in message_ids:\n                self._cache.invalidate_message(self.account_name, msg_id)\n            self._cache.invalidate_lists(self.account_name)\n\n        return result\n\n    def batch_delete_messages(self, message_ids: list[str]) -> dict:\n        \"\"\"메시지 일괄 영구 삭제.\n\n        주의: 이 작업은 되돌릴 수 없습니다!\n\n        Args:\n            message_ids: 삭제할 메시지 ID 목록\n\n        Returns:\n            BatchResult 객체\n        \"\"\"\n        result = self.batch_processor.batch_delete_messages(message_ids)\n\n        # Invalidate cache\n        if self._cache:\n            for msg_id in message_ids:\n                self._cache.invalidate_message(self.account_name, msg_id)\n            self._cache.invalidate_lists(self.account_name)\n\n        return result\n\n    def mark_all_as_read(\n        self,\n        query: str = \"is:unread\",\n        max_messages: int = 500,\n    ) -> dict:\n        \"\"\"조건에 맞는 메시지 전체 읽음 처리.\n\n        Args:\n            query: 검색 쿼리 (기본: 읽지 않음)\n            max_messages: 최대 처리 메시지 수\n\n        Returns:\n            BatchResult 객체\n        \"\"\"\n        result = self.batch_processor.mark_all_as_read(query, max_messages)\n\n        # Invalidate cache\n        if self._cache:\n            
self._cache.invalidate_lists(self.account_name)\n\n        return result\n\n    def archive_all(\n        self,\n        query: str = \"\",\n        max_messages: int = 500,\n    ) -> dict:\n        \"\"\"조건에 맞는 메시지 전체 보관처리.\n\n        Args:\n            query: 검색 쿼리\n            max_messages: 최대 처리 메시지 수\n\n        Returns:\n            BatchResult 객체\n        \"\"\"\n        result = self.batch_processor.archive_all(query, max_messages)\n\n        # Invalidate cache\n        if self._cache:\n            self._cache.invalidate_lists(self.account_name)\n\n        return result\n\n    # =========================================================================\n    # Cache & Quota Management\n    # =========================================================================\n\n    def get_quota_status(self) -> dict:\n        \"\"\"현재 할당량 사용 현황 조회.\n\n        Returns:\n            할당량 사용 현황 딕셔너리\n        \"\"\"\n        if self._quota_manager:\n            return self._quota_manager.get_usage(self.account_name)\n        return {\"message\": \"Quota management is disabled\"}\n\n    def get_cache_stats(self) -> dict:\n        \"\"\"캐시 통계 조회.\n\n        Returns:\n            캐시 통계 딕셔너리\n        \"\"\"\n        if self._cache:\n            return self._cache.get_stats(self.account_name)\n        return {\"message\": \"Caching is disabled\"}\n\n    def clear_cache(self) -> None:\n        \"\"\"이 계정의 캐시 전체 삭제.\"\"\"\n        if self._cache:\n            self._cache.invalidate_account(self.account_name)\n            logger.info(f\"Cache cleared for account: {self.account_name}\")\n\n\nclass ADCGmailClient:\n    \"\"\"Application Default Credentials를 사용하는 Gmail 클라이언트.\n\n    gcloud auth application-default login으로 인증된 계정 사용.\n    \"\"\"\n\n    SCOPES = [\n        \"https://www.googleapis.com/auth/gmail.modify\",\n        \"https://www.googleapis.com/auth/gmail.send\",\n        \"https://www.googleapis.com/auth/gmail.labels\",\n    ]\n\n    def __init__(self, account_name: str = 
\"default\", timeout: int = DEFAULT_TIMEOUT):\n        self.account_name = account_name\n        self.timeout = timeout\n        self.creds, self.project = google.auth.default(scopes=self.SCOPES)\n        self._service = None\n\n    @property\n    def service(self):\n        if self._service is None:\n            self._service = build(\"gmail\", \"v1\", credentials=self.creds)\n        return self._service\n\n    def list_messages(\n        self,\n        query: str = \"\",\n        max_results: int = 20,\n        label_ids: Optional[list[str]] = None,\n        include_spam_trash: bool = False,\n    ) -> list[dict]:\n        messages = []\n        page_token = None\n\n        while len(messages) < max_results:\n            kwargs = {\n                \"userId\": \"me\",\n                \"maxResults\": min(max_results - len(messages), 100),\n                \"includeSpamTrash\": include_spam_trash,\n            }\n            if query:\n                kwargs[\"q\"] = query\n            if label_ids:\n                kwargs[\"labelIds\"] = label_ids\n            if page_token:\n                kwargs[\"pageToken\"] = page_token\n\n            result = self.service.users().messages().list(**kwargs).execute()\n\n            for msg in result.get(\"messages\", []):\n                messages.append(msg)\n\n            page_token = result.get(\"nextPageToken\")\n            if not page_token:\n                break\n\n        return messages\n\n    def get_message(self, message_id: str, format: str = \"full\") -> dict:\n        \"\"\"메시지 조회 (헤더만 파싱하는 최소 구현).\n\n        list_messages.py 등 CLI 스크립트가 --adc로 호출할 수 있도록\n        GmailClient.get_message와 동일한 키를 반환한다 (본문/첨부파일 미추출).\n        \"\"\"\n        result = (\n            self.service.users()\n            .messages()\n            .get(userId=\"me\", id=message_id, format=format)\n            .execute()\n        )\n        headers = {\n            h[\"name\"].lower(): h[\"value\"]\n            for h in result.get(\"payload\", {}).get(\"headers\", [])\n        }\n        return {\n            \"id\": result[\"id\"],\n            \"thread_id\": result[\"threadId\"],\n            \"label_ids\": result.get(\"labelIds\", []),\n            \"snippet\": result.get(\"snippet\", \"\"),\n            \"from\": headers.get(\"from\", \"\"),\n            \"to\": headers.get(\"to\", \"\"),\n            \"subject\": headers.get(\"subject\", \"(제목 없음)\"),\n            \"date\": headers.get(\"date\", \"\"),\n            \"body\": \"\",\n            \"attachments\": [],\n        }\n\n    def get_profile(self) -> dict:\n        result = self.service.users().getProfile(userId=\"me\").execute()\n        return {\n            \"email\": result[\"emailAddress\"],\n            \"messages_total\": result.get(\"messagesTotal\", 0),\n            \"threads_total\": result.get(\"threadsTotal\", 0),\n        }\n\n\ndef get_all_accounts(base_path: Optional[Path] = None) -> list[str]:\n    \"\"\"등록된 모든 계정 이름 반환.\"\"\"\n    base_path = base_path or Path(__file__).parent.parent\n    accounts_dir = base_path / 
\"accounts\"\n\n    if not accounts_dir.exists():\n        return []\n\n    return [\n        f.stem for f in accounts_dir.glob(\"*.json\") if f.stem not in (\"credentials\",)\n    ]\n\n\ndef get_client(\n    account_name: Optional[str] = None,\n    use_adc: bool = False,\n    base_path: Optional[Path] = None,\n) -> \"GmailClient | ADCGmailClient\":\n    \"\"\"Gmail 클라이언트 팩토리.\n\n    Args:\n        account_name: 계정 이름 (None이면 첫 번째 계정 사용)\n        use_adc: ADC 사용 여부\n        base_path: skill 루트 경로\n\n    Returns:\n        GmailClient 또는 ADCGmailClient 인스턴스\n    \"\"\"\n    if use_adc:\n        return ADCGmailClient(account_name or \"default\")\n\n    if account_name:\n        return GmailClient(account_name, base_path)\n\n    accounts = get_all_accounts(base_path)\n    if not accounts:\n        raise ValueError(\n            \"등록된 계정이 없습니다. setup_auth.py --account <이름> 실행 필요\"\n        )\n\n    return GmailClient(accounts[0], base_path)\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/scripts/list_messages.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Gmail 메시지 목록 조회 CLI.\n\nUsage:\n    # 받은편지함 최근 10개\n    uv run python list_messages.py --account work --max 10\n\n    # 검색 쿼리 사용\n    uv run python list_messages.py --account work --query \"from:user@example.com\"\n    uv run python list_messages.py --account work --query \"is:unread\"\n    uv run python list_messages.py --account work --query \"after:2024/01/01 before:2024/12/31\"\n\n    # 라벨로 필터\n    uv run python list_messages.py --account work --labels INBOX,UNREAD\n\n    # ADC 사용\n    uv run python list_messages.py --adc --query \"is:unread\"\n\"\"\"\n\nimport argparse\nimport json\nfrom pathlib import Path\n\nfrom gmail_client import GmailClient, ADCGmailClient, get_all_accounts\n\n\ndef format_message_summary(client: GmailClient, msg_id: str) -> dict:\n    \"\"\"메시지 요약 정보 조회.\"\"\"\n    msg = client.get_message(msg_id, format=\"metadata\")\n    return {\n        \"id\": msg[\"id\"],\n        \"from\": msg[\"from\"],\n        \"subject\": msg[\"subject\"],\n        \"date\": msg[\"date\"],\n        \"snippet\": msg[\"snippet\"][:100] + \"...\" if len(msg[\"snippet\"]) > 100 else msg[\"snippet\"],\n        \"labels\": msg[\"label_ids\"],\n    }\n\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"Gmail 메시지 목록 조회\")\n    parser.add_argument(\"--account\", \"-a\", help=\"계정 식별자\")\n    parser.add_argument(\"--adc\", action=\"store_true\", help=\"Application Default Credentials 사용\")\n    parser.add_argument(\"--query\", \"-q\", default=\"\", help=\"Gmail 검색 쿼리\")\n    parser.add_argument(\"--max\", \"-m\", type=int, default=20, help=\"최대 결과 수\")\n    parser.add_argument(\"--labels\", help=\"라벨 ID (쉼표 구분)\")\n    parser.add_argument(\"--include-spam-trash\", action=\"store_true\", help=\"스팸/휴지통 포함\")\n    parser.add_argument(\"--full\", \"-f\", action=\"store_true\", help=\"전체 메시지 정보 조회\")\n    parser.add_argument(\"--json\", action=\"store_true\", help=\"JSON 형식 출력\")\n\n    args = 
parser.parse_args()\n    base_path = Path(__file__).parent.parent\n\n    if args.adc:\n        client = ADCGmailClient()\n    else:\n        accounts = get_all_accounts(base_path)\n        if not accounts:\n            print(\"❌ 등록된 계정이 없습니다.\")\n            print(\"   먼저 setup_auth.py --account <이름> 실행 필요\")\n            return\n\n        account = args.account or accounts[0]\n        client = GmailClient(account, base_path)\n\n    label_ids = args.labels.split(\",\") if args.labels else None\n\n    messages = client.list_messages(\n        query=args.query,\n        max_results=args.max,\n        label_ids=label_ids,\n        include_spam_trash=args.include_spam_trash,\n    )\n\n    if args.json:\n        if args.full:\n            result = [client.get_message(m[\"id\"]) for m in messages]\n        else:\n            result = [format_message_summary(client, m[\"id\"]) for m in messages]\n        print(json.dumps(result, ensure_ascii=False, indent=2))\n    else:\n        print(f\"📬 {len(messages)}개 메시지\")\n        print()\n        for msg in messages:\n            if args.full:\n                full_msg = client.get_message(msg[\"id\"])\n                print(f\"ID: {full_msg['id']}\")\n                print(f\"From: {full_msg['from']}\")\n                print(f\"To: {full_msg['to']}\")\n                print(f\"Subject: {full_msg['subject']}\")\n                print(f\"Date: {full_msg['date']}\")\n                print(f\"Labels: {', '.join(full_msg['label_ids'])}\")\n                if full_msg['attachments']:\n                    print(f\"Attachments: {', '.join(a['filename'] for a in full_msg['attachments'])}\")\n                print(\"-\" * 60)\n                print(full_msg['body'][:500])\n                if len(full_msg['body']) > 500:\n                    print(\"... 
(truncated)\")\n                print(\"=\" * 60)\n                print()\n            else:\n                summary = format_message_summary(client, msg[\"id\"])\n                unread = \"📩\" if \"UNREAD\" in summary[\"labels\"] else \"📧\"\n                print(f\"{unread} {summary['subject']}\")\n                print(f\"   From: {summary['from']}\")\n                print(f\"   Date: {summary['date']}\")\n                print(f\"   {summary['snippet']}\")\n                print()\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/scripts/manage_labels.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Gmail 라벨 및 메시지 관리 CLI.\n\nUsage:\n    # 라벨 목록\n    uv run python manage_labels.py --account work list-labels\n\n    # 라벨 생성\n    uv run python manage_labels.py --account work create-label --name \"프로젝트/A\"\n\n    # 라벨 삭제\n    uv run python manage_labels.py --account work delete-label --label-id Label_123\n\n    # 읽음 표시\n    uv run python manage_labels.py --account work mark-read --id <message_id>\n\n    # 별표 추가\n    uv run python manage_labels.py --account work star --id <message_id>\n\n    # 보관처리\n    uv run python manage_labels.py --account work archive --id <message_id>\n\n    # 휴지통으로 이동\n    uv run python manage_labels.py --account work trash --id <message_id>\n\n    # 라벨 추가/제거\n    uv run python manage_labels.py --account work modify --id <message_id> \\\n        --add-labels \"Label_123,STARRED\" --remove-labels \"INBOX\"\n\n    # 초안 목록\n    uv run python manage_labels.py --account work list-drafts\n\n    # 초안 발송\n    uv run python manage_labels.py --account work send-draft --draft-id <draft_id>\n\n    # 프로필 조회\n    uv run python manage_labels.py --account work profile\n\"\"\"\n\nimport argparse\nimport json\nfrom pathlib import Path\n\nfrom gmail_client import GmailClient, ADCGmailClient, get_all_accounts\n\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"Gmail 라벨 및 메시지 관리\")\n    parser.add_argument(\"--account\", \"-a\", help=\"계정 식별자\")\n    parser.add_argument(\"--adc\", action=\"store_true\", help=\"Application Default Credentials 사용\")\n    parser.add_argument(\"--json\", action=\"store_true\", help=\"JSON 형식 출력\")\n\n    subparsers = parser.add_subparsers(dest=\"command\", help=\"명령어\")\n\n    # 라벨 관리\n    subparsers.add_parser(\"list-labels\", help=\"라벨 목록\")\n\n    create_label = subparsers.add_parser(\"create-label\", help=\"라벨 생성\")\n    create_label.add_argument(\"--name\", required=True, help=\"라벨 이름\")\n\n    delete_label = subparsers.add_parser(\"delete-label\", help=\"라벨 삭제\")\n  
  delete_label.add_argument(\"--label-id\", required=True, help=\"라벨 ID\")\n\n    # 메시지 관리\n    mark_read = subparsers.add_parser(\"mark-read\", help=\"읽음 표시\")\n    mark_read.add_argument(\"--id\", required=True, help=\"메시지 ID\")\n\n    mark_unread = subparsers.add_parser(\"mark-unread\", help=\"읽지 않음 표시\")\n    mark_unread.add_argument(\"--id\", required=True, help=\"메시지 ID\")\n\n    star = subparsers.add_parser(\"star\", help=\"별표 추가\")\n    star.add_argument(\"--id\", required=True, help=\"메시지 ID\")\n\n    unstar = subparsers.add_parser(\"unstar\", help=\"별표 제거\")\n    unstar.add_argument(\"--id\", required=True, help=\"메시지 ID\")\n\n    archive = subparsers.add_parser(\"archive\", help=\"보관처리\")\n    archive.add_argument(\"--id\", required=True, help=\"메시지 ID\")\n\n    trash = subparsers.add_parser(\"trash\", help=\"휴지통으로 이동\")\n    trash.add_argument(\"--id\", required=True, help=\"메시지 ID\")\n\n    untrash = subparsers.add_parser(\"untrash\", help=\"휴지통에서 복원\")\n    untrash.add_argument(\"--id\", required=True, help=\"메시지 ID\")\n\n    modify = subparsers.add_parser(\"modify\", help=\"라벨 수정\")\n    modify.add_argument(\"--id\", required=True, help=\"메시지 ID\")\n    modify.add_argument(\"--add-labels\", help=\"추가할 라벨 (쉼표 구분)\")\n    modify.add_argument(\"--remove-labels\", help=\"제거할 라벨 (쉼표 구분)\")\n\n    # 초안 관리\n    subparsers.add_parser(\"list-drafts\", help=\"초안 목록\")\n\n    send_draft = subparsers.add_parser(\"send-draft\", help=\"초안 발송\")\n    send_draft.add_argument(\"--draft-id\", required=True, help=\"초안 ID\")\n\n    delete_draft = subparsers.add_parser(\"delete-draft\", help=\"초안 삭제\")\n    delete_draft.add_argument(\"--draft-id\", required=True, help=\"초안 ID\")\n\n    # 프로필\n    subparsers.add_parser(\"profile\", help=\"프로필 조회\")\n\n    args = parser.parse_args()\n    base_path = Path(__file__).parent.parent\n\n    if not args.command:\n        parser.print_help()\n        return\n\n    if args.adc:\n        client = ADCGmailClient()\n    else:\n        
accounts = get_all_accounts(base_path)\n        if not accounts:\n            print(\"❌ 등록된 계정이 없습니다.\")\n            return\n\n        account = args.account or accounts[0]\n        client = GmailClient(account, base_path)\n\n    result = None\n\n    # 라벨 명령어\n    if args.command == \"list-labels\":\n        result = client.list_labels()\n        if not args.json:\n            print(\"🏷️  라벨 목록:\")\n            for label in result:\n                label_type = \"📁\" if label[\"type\"] == \"system\" else \"🏷️\"\n                print(f\"  {label_type} {label['name']} ({label['id']})\")\n            return\n\n    elif args.command == \"create-label\":\n        result = client.create_label(args.name)\n        if not args.json:\n            print(f\"✅ 라벨 생성됨: {result['name']} ({result['id']})\")\n            return\n\n    elif args.command == \"delete-label\":\n        result = client.delete_label(args.label_id)\n        if not args.json:\n            print(f\"✅ 라벨 삭제됨: {args.label_id}\")\n            return\n\n    # 메시지 명령어\n    elif args.command == \"mark-read\":\n        result = client.mark_as_read(args.id)\n    elif args.command == \"mark-unread\":\n        result = client.mark_as_unread(args.id)\n    elif args.command == \"star\":\n        result = client.star_message(args.id)\n    elif args.command == \"unstar\":\n        result = client.unstar_message(args.id)\n    elif args.command == \"archive\":\n        result = client.archive_message(args.id)\n    elif args.command == \"trash\":\n        result = client.trash_message(args.id)\n    elif args.command == \"untrash\":\n        result = client.untrash_message(args.id)\n    elif args.command == \"modify\":\n        add_labels = args.add_labels.split(\",\") if args.add_labels else None\n        remove_labels = args.remove_labels.split(\",\") if args.remove_labels else None\n        result = client.modify_message(args.id, add_labels, remove_labels)\n\n    # 초안 명령어\n    elif args.command == \"list-drafts\":\n     
   drafts = client.list_drafts()\n        if not args.json:\n            print(f\"📝 초안 {len(drafts)}개\")\n            for draft in drafts:\n                detail = client.get_draft(draft[\"id\"])\n                msg = detail[\"message\"]\n                print(f\"  - {msg['subject']} → {msg['to']}\")\n                print(f\"    ID: {draft['id']}\")\n            return\n        result = drafts\n\n    elif args.command == \"send-draft\":\n        result = client.send_draft(args.draft_id)\n        if not args.json:\n            print(f\"✅ 초안 발송됨: {result['id']}\")\n            return\n\n    elif args.command == \"delete-draft\":\n        result = client.delete_draft(args.draft_id)\n        if not args.json:\n            print(f\"✅ 초안 삭제됨: {args.draft_id}\")\n            return\n\n    # 프로필\n    elif args.command == \"profile\":\n        result = client.get_profile()\n        if not args.json:\n            print(f\"📧 {result['email']}\")\n            print(f\"   메시지: {result['messages_total']:,}개\")\n            print(f\"   스레드: {result['threads_total']:,}개\")\n            return\n\n    if args.json and result:\n        print(json.dumps(result, ensure_ascii=False, indent=2))\n    elif result:\n        print(f\"✅ {args.command} 완료\")\n        print(f\"   ID: {result.get('id')}\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/scripts/read_message.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Gmail 메시지 읽기 CLI.\n\nUsage:\n    # 메시지 읽기\n    uv run python read_message.py --account work --id <message_id>\n\n    # 스레드 전체 읽기\n    uv run python read_message.py --account work --thread <thread_id>\n\n    # 첨부파일 저장\n    uv run python read_message.py --account work --id <message_id> --save-attachments ./downloads\n\n    # JSON 출력\n    uv run python read_message.py --account work --id <message_id> --json\n\"\"\"\n\nimport argparse\nimport json\nfrom pathlib import Path\n\nfrom gmail_client import GmailClient, ADCGmailClient, get_all_accounts\n\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"Gmail 메시지 읽기\")\n    parser.add_argument(\"--account\", \"-a\", help=\"계정 식별자\")\n    parser.add_argument(\"--adc\", action=\"store_true\", help=\"Application Default Credentials 사용\")\n    parser.add_argument(\"--id\", \"-i\", help=\"메시지 ID\")\n    parser.add_argument(\"--thread\", \"-t\", help=\"스레드 ID\")\n    parser.add_argument(\"--save-attachments\", \"-s\", help=\"첨부파일 저장 경로\")\n    parser.add_argument(\"--json\", action=\"store_true\", help=\"JSON 형식 출력\")\n\n    args = parser.parse_args()\n    base_path = Path(__file__).parent.parent\n\n    if not args.id and not args.thread:\n        parser.print_help()\n        print()\n        print(\"예시:\")\n        print(\"  uv run python read_message.py --account work --id abc123\")\n        print(\"  uv run python read_message.py --account work --thread xyz789\")\n        return\n\n    if args.adc:\n        client = ADCGmailClient()\n    else:\n        accounts = get_all_accounts(base_path)\n        if not accounts:\n            print(\"❌ 등록된 계정이 없습니다.\")\n            return\n\n        account = args.account or accounts[0]\n        client = GmailClient(account, base_path)\n\n    if args.thread:\n        result = client.get_thread(args.thread)\n\n        if args.json:\n            print(json.dumps(result, ensure_ascii=False, indent=2))\n        else:\n            
print(f\"📧 스레드: {result['id']}\")\n            print(f\"   메시지 수: {result['message_count']}\")\n            print(\"=\" * 60)\n\n            for msg in result[\"messages\"]:\n                print(f\"\\n📩 {msg['subject']}\")\n                print(f\"   From: {msg['from']}\")\n                print(f\"   To: {msg['to']}\")\n                print(f\"   Date: {msg['date']}\")\n                print(\"-\" * 60)\n                print(msg['body'])\n                print()\n    else:\n        result = client.get_message(args.id)\n\n        if args.json:\n            print(json.dumps(result, ensure_ascii=False, indent=2))\n        else:\n            print(f\"📧 Subject: {result['subject']}\")\n            print(f\"   From: {result['from']}\")\n            print(f\"   To: {result['to']}\")\n            if result['cc']:\n                print(f\"   CC: {result['cc']}\")\n            print(f\"   Date: {result['date']}\")\n            print(f\"   Labels: {', '.join(result['label_ids'])}\")\n\n            if result['attachments']:\n                print(f\"\\n📎 첨부파일:\")\n                for att in result['attachments']:\n                    size_kb = att['size'] / 1024\n                    print(f\"   - {att['filename']} ({size_kb:.1f} KB)\")\n\n            print(\"\\n\" + \"=\" * 60)\n            print(result['body'])\n\n        if args.save_attachments and result.get('attachments'):\n            save_path = Path(args.save_attachments)\n            save_path.mkdir(parents=True, exist_ok=True)\n\n            for att in result['attachments']:\n                if att.get('attachment_id'):\n                    data = client.get_attachment(args.id, att['attachment_id'])\n                    filepath = save_path / att['filename']\n                    with open(filepath, 'wb') as f:\n                        f.write(data)\n                    print(f\"✅ 저장됨: {filepath}\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/scripts/send_message.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Gmail 메시지 발송 CLI.\n\nUsage:\n    # 새 메일 발송\n    uv run python send_message.py --account work \\\n        --to \"user@example.com\" \\\n        --subject \"안녕하세요\" \\\n        --body \"메일 내용입니다.\"\n\n    # HTML 메일\n    uv run python send_message.py --account work \\\n        --to \"user@example.com\" \\\n        --subject \"공지\" \\\n        --body \"<h1>제목</h1><p>내용</p>\" \\\n        --html\n\n    # 첨부파일 포함\n    uv run python send_message.py --account work \\\n        --to \"user@example.com\" \\\n        --subject \"파일 전송\" \\\n        --body \"첨부파일을 확인해주세요.\" \\\n        --attach file1.pdf,file2.xlsx\n\n    # 답장\n    uv run python send_message.py --account work \\\n        --to \"user@example.com\" \\\n        --subject \"Re: 원본 제목\" \\\n        --body \"답장 내용\" \\\n        --reply-to <message_id> \\\n        --thread <thread_id>\n\n    # 초안 생성\n    uv run python send_message.py --account work \\\n        --to \"user@example.com\" \\\n        --subject \"나중에 보낼 메일\" \\\n        --body \"초안 내용\" \\\n        --draft\n\"\"\"\n\nimport argparse\nimport json\nfrom pathlib import Path\n\nfrom gmail_client import ADCGmailClient, GmailClient, get_all_accounts\n\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"Gmail 메시지 발송\")\n    parser.add_argument(\"--account\", \"-a\", help=\"계정 식별자\")\n    parser.add_argument(\"--adc\", action=\"store_true\", help=\"Application Default Credentials 사용\")\n    parser.add_argument(\"--to\", \"-t\", required=True, help=\"수신자 (쉼표 구분)\")\n    parser.add_argument(\"--subject\", \"-s\", required=True, help=\"제목\")\n    parser.add_argument(\"--body\", \"-b\", required=True, help=\"본문\")\n    parser.add_argument(\"--cc\", help=\"참조\")\n    parser.add_argument(\"--bcc\", help=\"숨은 참조\")\n    parser.add_argument(\"--html\", action=\"store_true\", help=\"HTML 형식\")\n    parser.add_argument(\"--attach\", help=\"첨부파일 경로 (쉼표 구분)\")\n    parser.add_argument(\"--reply-to\", help=\"답장할 메시지 
ID\")\n    parser.add_argument(\"--thread\", help=\"스레드 ID\")\n    parser.add_argument(\"--draft\", action=\"store_true\", help=\"초안으로 저장\")\n    parser.add_argument(\"--json\", action=\"store_true\", help=\"JSON 형식 출력\")\n\n    args = parser.parse_args()\n    base_path = Path(__file__).parent.parent\n\n    if args.adc:\n        client = ADCGmailClient()\n    else:\n        accounts = get_all_accounts(base_path)\n        if not accounts:\n            print(\"❌ 등록된 계정이 없습니다.\")\n            return\n\n        account = args.account or accounts[0]\n        client = GmailClient(account, base_path)\n\n    attachments = args.attach.split(\",\") if args.attach else None\n\n    if args.draft:\n        result = client.create_draft(\n            to=args.to,\n            subject=args.subject,\n            body=args.body,\n            cc=args.cc,\n            bcc=args.bcc,\n            html=args.html,\n        )\n        status_msg = \"초안 저장됨\"\n    else:\n        result = client.send_message(\n            to=args.to,\n            subject=args.subject,\n            body=args.body,\n            cc=args.cc,\n            bcc=args.bcc,\n            html=args.html,\n            attachments=attachments,\n            reply_to_message_id=args.reply_to,\n            thread_id=args.thread,\n        )\n        status_msg = \"발송 완료\"\n\n    if args.json:\n        print(json.dumps(result, ensure_ascii=False, indent=2))\n    else:\n        print(f\"✅ {status_msg}\")\n        print(f\"   ID: {result['id']}\")\n        if 'thread_id' in result:\n            print(f\"   Thread ID: {result['thread_id']}\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "plugins/gmail/skills/gmail/scripts/setup_auth.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Gmail OAuth 인증 설정.\n\n최초 1회 실행하여 계정별 refresh token을 저장.\n이후에는 저장된 token으로 자동 인증됨.\n\nUsage:\n    uv run python setup_auth.py --account personal --email user@gmail.com\n    uv run python setup_auth.py --account work --email work@company.com --description \"회사 업무용\"\n    uv run python setup_auth.py --list\n\"\"\"\n\nimport argparse\nimport json\nfrom pathlib import Path\n\nimport yaml\nfrom google_auth_oauthlib.flow import InstalledAppFlow\n\nSCOPES = [\n    \"https://www.googleapis.com/auth/gmail.modify\",  # 읽기/수정/삭제\n    \"https://www.googleapis.com/auth/gmail.send\",    # 메일 발송\n    \"https://www.googleapis.com/auth/gmail.labels\",  # 라벨 관리\n]\n\n\ndef load_accounts_config(base_path: Path) -> dict:\n    \"\"\"accounts.yaml 로드.\"\"\"\n    config_path = base_path / \"accounts.yaml\"\n    if config_path.exists():\n        with open(config_path) as f:\n            return yaml.safe_load(f) or {\"accounts\": {}}\n    return {\"accounts\": {}}\n\n\ndef save_accounts_config(base_path: Path, config: dict) -> None:\n    \"\"\"accounts.yaml 저장.\"\"\"\n    config_path = base_path / \"accounts.yaml\"\n\n    # YAML 헤더 코멘트\n    header = \"\"\"# Gmail 계정 설정\n# 계정별로 이메일 주소와 설명을 관리합니다.\n# 토큰 파일은 accounts/{name}.json에 별도 저장됩니다.\n\n\"\"\"\n\n    with open(config_path, \"w\") as f:\n        f.write(header)\n        yaml.dump(config, f, allow_unicode=True, default_flow_style=False, sort_keys=False)\n\n\ndef setup_auth(\n    account_name: str,\n    base_path: Path,\n    email: str | None = None,\n    description: str | None = None,\n) -> None:\n    \"\"\"OAuth 인증 플로우 실행 및 토큰 저장.\n\n    Args:\n        account_name: 계정 식별자 (예: 'work', 'personal')\n        base_path: skill 루트 경로\n        email: 이메일 주소 (accounts.yaml에 저장)\n        description: 계정 설명 (accounts.yaml에 저장)\n    \"\"\"\n    credentials_path = base_path / \"references\" / \"credentials.json\"\n    token_path = base_path / \"accounts\" / f\"{account_name}.json\"\n\n    if not 
credentials_path.exists():\n        print(f\"❌ OAuth Client ID 파일이 없습니다: {credentials_path}\")\n        print()\n        print(\"설정 방법:\")\n        print(\"1. https://console.cloud.google.com 접속\")\n        print(\"2. 프로젝트 생성 또는 선택\")\n        print(\"3. 'API 및 서비스' > 'Gmail API' 활성화\")\n        print(\"4. 'API 및 서비스' > '사용자 인증 정보'\")\n        print(\"5. 'OAuth 2.0 클라이언트 ID' 생성 (Desktop 유형)\")\n        print(\"6. JSON 다운로드 → references/credentials.json 저장\")\n        return\n\n    if token_path.exists():\n        print(f\"⚠️  계정 '{account_name}'의 토큰이 이미 존재합니다.\")\n        response = input(\"덮어쓰시겠습니까? [y/N]: \")\n        if response.lower() != \"y\":\n            print(\"취소됨\")\n            return\n\n    print(f\"🔐 '{account_name}' 계정 인증을 시작합니다...\")\n    print(\"브라우저가 열리면 Google 계정으로 로그인하세요.\")\n    print()\n\n    flow = InstalledAppFlow.from_client_secrets_file(\n        str(credentials_path),\n        SCOPES,\n    )\n\n    creds = flow.run_local_server(port=0)\n\n    token_path.parent.mkdir(parents=True, exist_ok=True)\n    with open(token_path, \"w\") as f:\n        json.dump(json.loads(creds.to_json()), f, indent=2)\n\n    # accounts.yaml 업데이트\n    config = load_accounts_config(base_path)\n\n    # 인증된 이메일 주소 가져오기 (제공되지 않은 경우)\n    if not email:\n        try:\n            from googleapiclient.discovery import build\n            from google.oauth2.credentials import Credentials\n\n            temp_creds = Credentials.from_authorized_user_info(\n                json.loads(creds.to_json()),\n                SCOPES,\n            )\n            service = build(\"gmail\", \"v1\", credentials=temp_creds)\n            profile = service.users().getProfile(userId=\"me\").execute()\n            email = profile.get(\"emailAddress\", \"\")\n        except Exception:\n            email = \"\"\n\n    config[\"accounts\"][account_name] = {\n        \"email\": email,\n        \"description\": description or \"\",\n    }\n    save_accounts_config(base_path, config)\n\n    
print()\n    print(f\"✅ 인증 완료!\")\n    print(f\"   계정명: {account_name}\")\n    print(f\"   이메일: {email}\")\n    print(f\"   토큰: {token_path}\")\n\n\ndef list_accounts(base_path: Path) -> None:\n    \"\"\"등록된 계정 목록 출력.\"\"\"\n    config = load_accounts_config(base_path)\n    accounts_dir = base_path / \"accounts\"\n\n    # accounts.yaml에서 계정 정보 읽기\n    accounts_config = config.get(\"accounts\", {})\n\n    # 토큰 파일 존재 여부 확인\n    token_files = set()\n    if accounts_dir.exists():\n        token_files = {f.stem for f in accounts_dir.glob(\"*.json\")}\n\n    if not accounts_config and not token_files:\n        print(\"등록된 계정이 없습니다.\")\n        return\n\n    print(\"📋 등록된 계정:\")\n    print()\n\n    # accounts.yaml에 있는 계정 출력\n    for name, info in accounts_config.items():\n        email = info.get(\"email\", \"\")\n        description = info.get(\"description\", \"\")\n        has_token = \"✅\" if name in token_files else \"❌\"\n\n        print(f\"   {has_token} {name}\")\n        if email:\n            print(f\"      이메일: {email}\")\n        if description:\n            print(f\"      설명: {description}\")\n        print()\n\n    # 토큰은 있지만 accounts.yaml에 없는 계정 경고\n    orphan_tokens = token_files - set(accounts_config.keys())\n    if orphan_tokens:\n        print(\"⚠️  accounts.yaml에 없는 토큰:\")\n        for name in orphan_tokens:\n            print(f\"   - {name}.json\")\n\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"Gmail OAuth 인증 설정\")\n    parser.add_argument(\n        \"--account\",\n        \"-a\",\n        help=\"계정 식별자 (예: work, personal)\",\n    )\n    parser.add_argument(\n        \"--email\",\n        \"-e\",\n        help=\"이메일 주소 (자동 감지되지만 명시 가능)\",\n    )\n    parser.add_argument(\n        \"--description\",\n        \"-d\",\n        help=\"계정 설명 (예: '회사 업무용')\",\n    )\n    parser.add_argument(\n        \"--list\",\n        \"-l\",\n        action=\"store_true\",\n        help=\"등록된 계정 목록 출력\",\n    )\n\n    args = parser.parse_args()\n    
base_path = Path(__file__).parent.parent\n\n    if args.list:\n        list_accounts(base_path)\n        return\n\n    if not args.account:\n        parser.print_help()\n        print()\n        print(\"예시:\")\n        print(\"  uv run python setup_auth.py --account personal --description '개인 Gmail'\")\n        print(\"  uv run python setup_auth.py --account work --description '회사 업무용'\")\n        print(\"  uv run python setup_auth.py --list\")\n        return\n\n    setup_auth(args.account, base_path, args.email, args.description)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "plugins/google-calendar/.claude-plugin/plugin.json",
    "content": "{\n  \"name\": \"google-calendar\",\n  \"version\": \"1.0.0\",\n  \"description\": \"Multi-account Google Calendar integration with parallel querying and conflict detection\"\n}\n"
  },
  {
    "path": "plugins/google-calendar/README.md",
    "content": "# Google Calendar Plugin\n\nMulti-account Google Calendar integration with parallel querying and conflict detection.\n\n## Features\n\n- Query multiple Google accounts (work, personal) in parallel\n- Detect scheduling conflicts between calendars\n- Create, update, and delete events\n- OAuth2 authentication with stored refresh tokens\n\n## Installation\n\n```bash\n/plugin install google-calendar\n```\n\n## Prerequisites\n\n### 1. Google Cloud Project Setup\n\n1. Create a project at [Google Cloud Console](https://console.cloud.google.com)\n2. Enable Calendar API\n3. Create OAuth 2.0 Client ID (Desktop type)\n4. Download `credentials.json`\n\n### 2. Account Authentication (one-time)\n\n```bash\n# Work account\nuv run python scripts/setup_auth.py --account work\n\n# Personal account\nuv run python scripts/setup_auth.py --account personal\n```\n\n## Usage\n\nAsk Claude about your calendar:\n\n- \"오늘 일정 알려줘\"\n- \"이번 주 일정 충돌 확인해줘\"\n- \"내일 3시에 팀 미팅 추가해줘\"\n- \"What's on my calendar today?\"\n- \"Schedule a meeting for tomorrow at 2pm\"\n\n## License\n\nMIT\n"
  },
  {
    "path": "plugins/google-calendar/skills/google-calendar/SKILL.md",
    "content": "---\nname: google-calendar\ndescription: Google 캘린더 일정 조회/생성/수정/삭제. \"오늘 일정\", \"이번 주 일정\", \"미팅 추가해줘\" 요청에 사용. 여러 계정(work, personal) 통합 조회 지원.\n---\n\n# Google Calendar Sync\n\n## Overview\n\n여러 Google 계정(회사, 개인 등)의 캘린더를 한 번에 조회하여 통합된 일정을 제공한다.\n- 사전 인증된 refresh token 사용 (매번 로그인 불필요)\n- Subagent 병렬 실행으로 빠른 조회\n- 계정 간 일정 충돌 감지\n\n## 트리거 조건\n\n### 조회\n- \"오늘 일정\", \"이번 주 일정 알려줘\"\n- \"캘린더 확인\", \"스케줄 뭐야\"\n- \"다음 미팅\", \"내일 뭐 있어\"\n- \"일정 충돌 확인해줘\"\n\n### 생성\n- \"새 일정 만들어줘\", \"미팅 추가해줘\"\n- \"내일 3시에 회의 잡아줘\"\n- \"다음 주 월요일 팀 미팅 생성\"\n\n### 수정\n- \"일정 시간 변경해줘\", \"미팅 시간 바꿔줘\"\n- \"sync 미팅 14시 21분으로 변경\"\n- \"회의 제목 수정해줘\"\n\n### 삭제\n- \"일정 삭제해줘\", \"미팅 취소해줘\"\n- \"이벤트 지워줘\"\n\n## 사전 요구사항\n\n### 1. Google Cloud 프로젝트 설정\n\n1. [Google Cloud Console](https://console.cloud.google.com)에서 프로젝트 생성\n2. Calendar API 활성화\n3. OAuth 2.0 Client ID 생성 (Desktop 유형)\n4. `credentials.json` 다운로드 → `references/credentials.json`에 저장\n\n### 2. 계정별 인증 (최초 1회)\n\n```bash\n# 회사 계정\nuv run python .claude/skills/google-calendar/scripts/setup_auth.py --account work\n\n# 개인 계정\nuv run python .claude/skills/google-calendar/scripts/setup_auth.py --account personal\n```\n\n브라우저에서 Google 로그인 → refresh token이 `accounts/{name}.json`에 저장됨\n\n## 워크플로우\n\n### 1. 등록된 계정 확인\n\n```bash\nls .claude/skills/google-calendar/accounts/\n# → work.json, personal.json\n```\n\n### 2. Subagent 병렬 실행\n\n각 계정별로 Task 도구를 **병렬**로 호출:\n\n```python\n# 병렬 실행 - 단일 메시지에 여러 Task 호출\nTask(subagent_type=\"general-purpose\", prompt=\"fetch calendar for work account\")\nTask(subagent_type=\"general-purpose\", prompt=\"fetch calendar for personal account\")\n```\n\n각 subagent는 다음을 실행:\n```bash\nuv run python .claude/skills/google-calendar/scripts/fetch_events.py \\\n  --account {account_name} \\\n  --days 7\n```\n\n### 3. 
결과 통합\n\n- 모든 계정의 이벤트를 시간순 정렬\n- 동일 시간대 이벤트 = 충돌로 표시\n- 계정별 색상/아이콘 구분\n\n## 출력 형식\n\n```\n📅 2026-01-06 (월) 일정\n\n[09:00-10:00] 🔵 팀 스탠드업 (work)\n[10:00-11:30] 🟢 치과 예약 (personal)\n[14:00-15:00] 🔵 고객 미팅 - 삼양 (work)\n              ⚠️ 충돌: 개인 일정과 겹침\n[14:00-14:30] 🟢 은행 방문 (personal)\n\n📊 오늘 총 4개 일정 (work: 2, personal: 2)\n   ⚠️ 1건 충돌\n```\n\n## 실행 예시\n\n사용자: \"이번 주 일정 알려줘\"\n\n```\n1. accounts/ 폴더 확인\n   └── 등록된 계정: work, personal\n\n2. Subagent 병렬 실행\n   ├── Task: work 계정 이벤트 조회\n   └── Task: personal 계정 이벤트 조회\n\n3. 결과 수집 (각 subagent 완료 대기)\n   ├── work: 8개 이벤트\n   └── personal: 3개 이벤트\n\n4. 통합 및 정렬\n   └── 11개 이벤트, 2건 충돌 감지\n\n5. 출력\n   └── 일별로 그룹화하여 표시\n```\n\n## 에러 처리\n\n| 상황 | 처리 |\n|------|------|\n| accounts/ 폴더 비어있음 | 초기 설정 안내 (setup_auth.py 실행 방법) |\n| 특정 계정 토큰 만료 | 해당 계정 재인증 안내, 나머지 계정은 정상 조회 |\n| API 할당량 초과 | 잠시 후 재시도 안내 |\n| 네트워크 오류 | 연결 확인 요청 |\n\n## Scripts\n\n| 파일 | 용도 |\n|------|------|\n| `scripts/setup_auth.py` | 계정별 OAuth 인증 및 token 저장 |\n| `scripts/fetch_events.py` | 특정 계정의 이벤트 조회 (CLI) |\n| `scripts/manage_events.py` | 이벤트 생성/수정/삭제 (CLI) |\n| `scripts/calendar_client.py` | Google Calendar API 클라이언트 라이브러리 |\n\n## 일정 관리 (생성/수정/삭제)\n\n### 일정 생성\n\n```bash\nuv run python .claude/skills/google-calendar/scripts/manage_events.py create \\\n    --summary \"팀 미팅\" \\\n    --start \"2026-01-06T14:00:00\" \\\n    --end \"2026-01-06T15:00:00\" \\\n    --account work\n```\n\n### 종일 일정 생성\n\n```bash\nuv run python .claude/skills/google-calendar/scripts/manage_events.py create \\\n    --summary \"연차\" \\\n    --start \"2026-01-10\" \\\n    --end \"2026-01-11\" \\\n    --account personal\n```\n\n### 일정 수정\n\n```bash\nuv run python .claude/skills/google-calendar/scripts/manage_events.py update \\\n    --event-id \"abc123\" \\\n    --summary \"팀 미팅 (변경)\" \\\n    --start \"2026-01-06T14:21:00\" \\\n    --account work\n```\n\n### 일정 삭제\n\n```bash\nuv run python .claude/skills/google-calendar/scripts/manage_events.py delete \\\n    --event-id \"abc123\" \\\n    
--account work\n```\n\n### 옵션\n\n| 옵션 | 설명 |\n|------|------|\n| `--summary` | 일정 제목 |\n| `--start` | 시작 시간 (ISO format: 2026-01-06T14:00:00 또는 2026-01-06) |\n| `--end` | 종료 시간 |\n| `--description` | 일정 설명 |\n| `--location` | 장소 |\n| `--attendees` | 참석자 이메일 (쉼표 구분) |\n| `--account` | 계정 (work, personal 등) |\n| `--adc` | gcloud ADC 사용 |\n| `--timezone` | 타임존 (기본값: Asia/Seoul) |\n| `--json` | JSON 형식 출력 |\n\n## References\n\n| 문서 | 내용 |\n|------|------|\n| `references/setup.md` | 초기 설정 상세 가이드 |\n| `references/credentials.json` | Google OAuth Client ID (gitignore) |\n\n## 파일 구조\n\n```\n.claude/skills/google-calendar/\n├── SKILL.md                    # 이 파일\n├── scripts/\n│   ├── calendar_client.py      # API 클라이언트\n│   ├── setup_auth.py           # 인증 설정\n│   ├── fetch_events.py         # 이벤트 조회 CLI\n│   └── manage_events.py        # 이벤트 생성/수정/삭제 CLI\n├── references/\n│   ├── setup.md                # 설정 가이드\n│   └── credentials.json        # OAuth Client ID (gitignore)\n└── accounts/                   # 계정별 토큰 (gitignore)\n    ├── work.json\n    └── personal.json\n```\n\n## 보안 주의사항\n\n- `accounts/*.json`: refresh token 포함, 절대 커밋 금지\n- `references/credentials.json`: Client Secret 포함, 커밋 금지\n- `.gitignore`에 추가 필수\n"
  },
  {
    "path": "plugins/google-calendar/skills/google-calendar/examples/parallel-fetch.md",
    "content": "# 병렬 조회 예시\n\n## Subagent 병렬 실행\n\n여러 계정의 캘린더를 동시에 조회하려면 Task 도구를 병렬로 호출:\n\n```\n# 단일 메시지에 여러 Task 호출 (병렬 실행)\n\nTask(\n    subagent_type=\"general-purpose\",\n    prompt=\"다음 명령을 실행하고 결과를 JSON으로 반환:\n    uv run python .claude/skills/google-calendar/scripts/fetch_events.py --account work --days 7 --json\",\n    model=\"haiku\"\n)\n\nTask(\n    subagent_type=\"general-purpose\",\n    prompt=\"다음 명령을 실행하고 결과를 JSON으로 반환:\n    uv run python .claude/skills/google-calendar/scripts/fetch_events.py --account personal --days 7 --json\",\n    model=\"haiku\"\n)\n```\n\n## 결과 통합\n\n각 subagent가 반환한 JSON을 파싱하여 통합:\n\n```python\nimport json\nfrom datetime import datetime\n\n# subagent 결과들\nwork_events = json.loads(work_result)\npersonal_events = json.loads(personal_result)\n\n# 통합 및 시간순 정렬\nall_events = work_events + personal_events\nall_events.sort(key=lambda x: x[\"start\"])\n\n# 날짜별 그룹화\nevents_by_date = {}\nfor event in all_events:\n    date = event[\"start\"].split(\"T\")[0]\n    events_by_date.setdefault(date, []).append(event)\n```\n\n## 충돌 감지\n\n```python\ndef detect_conflicts(events):\n    \"\"\"동일 시간대 다른 계정 이벤트 = 충돌\"\"\"\n    conflicts = []\n    for i, e1 in enumerate(events):\n        for e2 in events[i+1:]:\n            if e1[\"account\"] == e2[\"account\"]:\n                continue\n            # 시간 겹침 확인\n            if is_overlapping(e1, e2):\n                conflicts.append((e1, e2))\n    return conflicts\n```\n\n## 출력 예시\n\n```\n📅 2026-01-06 (월)\n\n[09:00-10:00] 🔵 팀 스탠드업 (work)\n[14:00-15:00] 🔵 고객 미팅 (work)\n              ⚠️ 충돌: 개인 일정과 겹침\n[14:00-14:30] 🟢 은행 방문 (personal)\n\n📊 총 3개 일정 | 1건 충돌\n```\n"
  },
  {
    "path": "plugins/google-calendar/skills/google-calendar/examples/quick-query.md",
    "content": "# 빠른 조회 예시\n\n## 오늘 일정\n\n```bash\nuv run python .claude/skills/google-calendar/scripts/fetch_events.py \\\n  --all --days 1 --pretty\n```\n\n## 이번 주 일정 (JSON)\n\n```bash\nuv run python .claude/skills/google-calendar/scripts/fetch_events.py \\\n  --all --days 7 --json\n```\n\n## 특정 계정만 조회\n\n```bash\n# 회사 캘린더만\nuv run python .claude/skills/google-calendar/scripts/fetch_events.py \\\n  --account work --days 7 --pretty\n\n# 개인 캘린더만\nuv run python .claude/skills/google-calendar/scripts/fetch_events.py \\\n  --account personal --days 7 --pretty\n```\n\n## 캘린더 목록 확인\n\n```bash\nuv run python .claude/skills/google-calendar/scripts/fetch_events.py \\\n  --account work --list-calendars\n```\n\n## 프로그래밍 방식 사용\n\n```python\nfrom calendar_client import CalendarClient, fetch_all_events\n\n# 단일 계정\nclient = CalendarClient(\"work\")\nevents = client.get_events(days=7)\n\n# 전체 계정 통합\nresult = fetch_all_events(days=7)\nprint(f\"총 {result['total']}개 이벤트\")\nprint(f\"충돌: {len(result['conflicts'])}건\")\n```\n"
  },
  {
    "path": "plugins/google-calendar/skills/google-calendar/pyproject.toml",
    "content": "[project]\nname = \"google-calendar-skill\"\nversion = \"0.1.0\"\ndescription = \"Google Calendar sync skill for Claude Code\"\nrequires-python = \">=3.11\"\ndependencies = [\n    \"google-auth>=2.0.0\",\n    \"google-auth-oauthlib>=1.0.0\",\n    \"google-api-python-client>=2.0.0\",\n    \"httplib2>=0.22.0\",\n]\n\n[build-system]\nrequires = [\"hatchling\"]\nbuild-backend = \"hatchling.build\"\n\n[tool.hatch.build.targets.wheel]\npackages = [\"scripts\"]\n"
  },
  {
    "path": "plugins/google-calendar/skills/google-calendar/scripts/calendar_client.py",
    "content": "\"\"\"Google Calendar API 클라이언트.\n\n여러 Google 계정의 캘린더를 조회하기 위한 클라이언트.\n저장된 refresh token을 사용하여 매번 인증 없이 API 호출.\n\nEnvironment Variables:\n    GOOGLE_CALENDAR_SKILL_PATH: Skill 루트 경로 (기본값: 이 파일의 부모의 부모)\n    GOOGLE_CALENDAR_TIMEOUT: API 요청 타임아웃 초 (기본값: 30)\n\"\"\"\n\nimport json\nimport os\nfrom datetime import datetime, timedelta\nfrom pathlib import Path\nfrom typing import Optional\n\nimport google.auth\nfrom google.oauth2.credentials import Credentials\nfrom google.auth.transport.requests import Request\nfrom googleapiclient.discovery import build\nimport httplib2\n\n# 환경변수에서 설정 로드\nDEFAULT_TIMEOUT = int(os.environ.get(\"GOOGLE_CALENDAR_TIMEOUT\", \"30\"))\n\n\nclass CalendarClient:\n    \"\"\"단일 Google 계정의 캘린더 클라이언트.\"\"\"\n\n    SCOPES = [\"https://www.googleapis.com/auth/calendar\"]  # 읽기/쓰기 권한\n\n    def __init__(\n        self,\n        account_name: str,\n        base_path: Optional[Path] = None,\n        timeout: int = DEFAULT_TIMEOUT,\n    ):\n        \"\"\"\n        Args:\n            account_name: 계정 식별자 (예: 'work', 'personal')\n            base_path: skill 루트 경로 (환경변수 GOOGLE_CALENDAR_SKILL_PATH 또는 기본값)\n            timeout: API 요청 타임아웃 (초)\n        \"\"\"\n        self.account_name = account_name\n        self.timeout = timeout\n\n        # 경로 우선순위: 인자 > 환경변수 > 기본값\n        if base_path:\n            self.base_path = base_path\n        elif os.environ.get(\"GOOGLE_CALENDAR_SKILL_PATH\"):\n            self.base_path = Path(os.environ[\"GOOGLE_CALENDAR_SKILL_PATH\"])\n        else:\n            self.base_path = Path(__file__).parent.parent\n\n        self.creds = self._load_credentials()\n\n    def _load_credentials(self):\n        \"\"\"저장된 refresh token으로 credentials 로드 및 갱신.\"\"\"\n        token_path = self.base_path / f\"accounts/{self.account_name}.json\"\n\n        if not token_path.exists():\n            raise FileNotFoundError(\n                f\"계정 '{self.account_name}'의 토큰이 없습니다. 
\"\n                f\"먼저 setup_auth.py --account {self.account_name} 실행 필요\"\n            )\n\n        with open(token_path) as f:\n            token_data = json.load(f)\n\n        # ADC 형식인지 확인 (client_id가 있으면 ADC)\n        if \"client_id\" in token_data and \"type\" not in token_data:\n            # gcloud ADC 형식 - quota project 포함\n            creds = Credentials(\n                token=token_data.get(\"token\"),\n                refresh_token=token_data.get(\"refresh_token\"),\n                token_uri=\"https://oauth2.googleapis.com/token\",\n                client_id=token_data.get(\"client_id\"),\n                client_secret=token_data.get(\"client_secret\"),\n                scopes=self.SCOPES,\n            )\n            # quota project 설정 (있을 때만)\n            quota_project = token_data.get(\"quota_project_id\")\n            if quota_project:\n                creds = creds.with_quota_project(quota_project)\n        else:\n            # 일반 OAuth 토큰 형식\n            creds = Credentials.from_authorized_user_info(token_data, self.SCOPES)\n\n        # 만료 시 자동 갱신\n        if creds.expired and creds.refresh_token:\n            creds.refresh(Request())\n            # 갱신된 토큰 저장\n            with open(token_path, \"w\") as f:\n                json.dump(json.loads(creds.to_json()), f, indent=2)\n\n        return creds\n\n    def get_events(\n        self,\n        days: int = 7,\n        calendar_id: str = \"primary\",\n        max_results: int = 100,\n    ) -> list[dict]:\n        \"\"\"향후 N일간 이벤트 조회.\n\n        Args:\n            days: 조회할 기간 (일)\n            calendar_id: 캘린더 ID (기본값: primary)\n            max_results: 최대 결과 수\n\n        Returns:\n            이벤트 목록 (dict 리스트)\n        \"\"\"\n        # credentials로 서비스 빌드\n        service = build(\"calendar\", \"v3\", credentials=self.creds)\n\n        now = datetime.utcnow()\n        time_min = now.isoformat() + \"Z\"\n        time_max = (now + timedelta(days=days)).isoformat() + \"Z\"\n\n        events = []\n 
       page_token = None\n\n        while True:\n            result = (\n                service.events()\n                .list(\n                    calendarId=calendar_id,\n                    timeMin=time_min,\n                    timeMax=time_max,\n                    singleEvents=True,\n                    orderBy=\"startTime\",\n                    maxResults=max_results,\n                    pageToken=page_token,\n                )\n                .execute()\n            )\n\n            for event in result.get(\"items\", []):\n                start = event[\"start\"].get(\"dateTime\", event[\"start\"].get(\"date\"))\n                end = event[\"end\"].get(\"dateTime\", event[\"end\"].get(\"date\"))\n\n                events.append(\n                    {\n                        \"account\": self.account_name,\n                        \"id\": event.get(\"id\"),\n                        \"summary\": event.get(\"summary\", \"(제목 없음)\"),\n                        \"start\": start,\n                        \"end\": end,\n                        \"all_day\": \"date\" in event[\"start\"],\n                        \"location\": event.get(\"location\"),\n                        \"description\": event.get(\"description\"),\n                        \"attendees\": [\n                            a.get(\"email\") for a in event.get(\"attendees\", [])\n                        ],\n                        \"status\": event.get(\"status\"),\n                        \"html_link\": event.get(\"htmlLink\"),\n                    }\n                )\n\n            page_token = result.get(\"nextPageToken\")\n            if not page_token:\n                break\n\n        return events\n\n    def list_calendars(self) -> list[dict]:\n        \"\"\"사용 가능한 모든 캘린더 목록 조회.\"\"\"\n        service = build(\"calendar\", \"v3\", credentials=self.creds)\n\n        calendars = []\n        page_token = None\n\n        while True:\n            result = (\n                
service.calendarList().list(pageToken=page_token).execute()\n            )\n\n            for cal in result.get(\"items\", []):\n                calendars.append(\n                    {\n                        \"id\": cal.get(\"id\"),\n                        \"summary\": cal.get(\"summary\"),\n                        \"primary\": cal.get(\"primary\", False),\n                        \"access_role\": cal.get(\"accessRole\"),\n                    }\n                )\n\n            page_token = result.get(\"nextPageToken\")\n            if not page_token:\n                break\n\n        return calendars\n\n    def create_event(\n        self,\n        summary: str,\n        start: str,\n        end: str,\n        description: Optional[str] = None,\n        location: Optional[str] = None,\n        attendees: Optional[list[str]] = None,\n        calendar_id: str = \"primary\",\n        timezone: str = \"Asia/Seoul\",\n    ) -> dict:\n        \"\"\"새 이벤트 생성.\n\n        Args:\n            summary: 일정 제목\n            start: 시작 시간 (ISO format: 2024-01-15T09:00:00 또는 2024-01-15)\n            end: 종료 시간 (ISO format)\n            description: 일정 설명\n            location: 장소\n            attendees: 참석자 이메일 목록\n            calendar_id: 캘린더 ID (기본값: primary)\n            timezone: 타임존 (기본값: Asia/Seoul)\n\n        Returns:\n            생성된 이벤트 정보\n        \"\"\"\n        service = build(\"calendar\", \"v3\", credentials=self.creds)\n\n        # 종일 일정인지 확인 (T가 없으면 종일)\n        is_all_day = \"T\" not in start\n\n        event = {\n            \"summary\": summary,\n        }\n\n        if is_all_day:\n            event[\"start\"] = {\"date\": start}\n            event[\"end\"] = {\"date\": end}\n        else:\n            event[\"start\"] = {\"dateTime\": start, \"timeZone\": timezone}\n            event[\"end\"] = {\"dateTime\": end, \"timeZone\": timezone}\n\n        if description:\n            event[\"description\"] = description\n        if location:\n            
event[\"location\"] = location\n        if attendees:\n            event[\"attendees\"] = [{\"email\": email} for email in attendees]\n\n        result = service.events().insert(calendarId=calendar_id, body=event).execute()\n\n        return {\n            \"id\": result.get(\"id\"),\n            \"summary\": result.get(\"summary\"),\n            \"start\": result[\"start\"].get(\"dateTime\", result[\"start\"].get(\"date\")),\n            \"end\": result[\"end\"].get(\"dateTime\", result[\"end\"].get(\"date\")),\n            \"html_link\": result.get(\"htmlLink\"),\n            \"status\": \"created\",\n        }\n\n    def update_event(\n        self,\n        event_id: str,\n        summary: Optional[str] = None,\n        start: Optional[str] = None,\n        end: Optional[str] = None,\n        description: Optional[str] = None,\n        location: Optional[str] = None,\n        calendar_id: str = \"primary\",\n        timezone: str = \"Asia/Seoul\",\n    ) -> dict:\n        \"\"\"기존 이벤트 수정.\n\n        Args:\n            event_id: 수정할 이벤트 ID\n            summary: 새 제목 (None이면 유지)\n            start: 새 시작 시간 (None이면 유지)\n            end: 새 종료 시간 (None이면 유지)\n            description: 새 설명 (None이면 유지)\n            location: 새 장소 (None이면 유지)\n            calendar_id: 캘린더 ID\n            timezone: 타임존\n\n        Returns:\n            수정된 이벤트 정보\n        \"\"\"\n        service = build(\"calendar\", \"v3\", credentials=self.creds)\n\n        # 기존 이벤트 조회\n        event = service.events().get(calendarId=calendar_id, eventId=event_id).execute()\n\n        # 변경할 필드만 업데이트\n        if summary is not None:\n            event[\"summary\"] = summary\n        if description is not None:\n            event[\"description\"] = description\n        if location is not None:\n            event[\"location\"] = location\n\n        if start is not None:\n            is_all_day = \"T\" not in start\n            if is_all_day:\n                event[\"start\"] = {\"date\": start}\n          
  else:\n                event[\"start\"] = {\"dateTime\": start, \"timeZone\": timezone}\n\n        if end is not None:\n            is_all_day = \"T\" not in end\n            if is_all_day:\n                event[\"end\"] = {\"date\": end}\n            else:\n                event[\"end\"] = {\"dateTime\": end, \"timeZone\": timezone}\n\n        result = (\n            service.events()\n            .update(calendarId=calendar_id, eventId=event_id, body=event)\n            .execute()\n        )\n\n        return {\n            \"id\": result.get(\"id\"),\n            \"summary\": result.get(\"summary\"),\n            \"start\": result[\"start\"].get(\"dateTime\", result[\"start\"].get(\"date\")),\n            \"end\": result[\"end\"].get(\"dateTime\", result[\"end\"].get(\"date\")),\n            \"html_link\": result.get(\"htmlLink\"),\n            \"status\": \"updated\",\n        }\n\n    def delete_event(\n        self,\n        event_id: str,\n        calendar_id: str = \"primary\",\n    ) -> dict:\n        \"\"\"이벤트 삭제.\n\n        Args:\n            event_id: 삭제할 이벤트 ID\n            calendar_id: 캘린더 ID\n\n        Returns:\n            삭제 결과\n        \"\"\"\n        service = build(\"calendar\", \"v3\", credentials=self.creds)\n\n        service.events().delete(calendarId=calendar_id, eventId=event_id).execute()\n\n        return {\n            \"id\": event_id,\n            \"status\": \"deleted\",\n        }\n\n\nclass ADCCalendarClient:\n    \"\"\"Application Default Credentials를 사용하는 캘린더 클라이언트.\n\n    gcloud auth application-default login으로 인증된 계정 사용.\n    별도 토큰 파일 없이 바로 사용 가능.\n    \"\"\"\n\n    SCOPES = [\"https://www.googleapis.com/auth/calendar\"]  # 읽기/쓰기 권한\n\n    def __init__(self, account_name: str = \"default\", timeout: int = DEFAULT_TIMEOUT):\n        \"\"\"\n        Args:\n            account_name: 계정 식별자 (표시용)\n            timeout: API 요청 타임아웃 (초)\n        \"\"\"\n        self.account_name = account_name\n        self.timeout = timeout\n       
 self.creds, self.project = google.auth.default(scopes=self.SCOPES)\n\n    def get_events(\n        self,\n        days: int = 7,\n        calendar_id: str = \"primary\",\n        max_results: int = 100,\n    ) -> list[dict]:\n        \"\"\"향후 N일간 이벤트 조회.\"\"\"\n        service = build(\"calendar\", \"v3\", credentials=self.creds)\n\n        now = datetime.utcnow()\n        time_min = now.isoformat() + \"Z\"\n        time_max = (now + timedelta(days=days)).isoformat() + \"Z\"\n\n        events = []\n        page_token = None\n\n        while True:\n            result = (\n                service.events()\n                .list(\n                    calendarId=calendar_id,\n                    timeMin=time_min,\n                    timeMax=time_max,\n                    singleEvents=True,\n                    orderBy=\"startTime\",\n                    maxResults=max_results,\n                    pageToken=page_token,\n                )\n                .execute()\n            )\n\n            for event in result.get(\"items\", []):\n                start = event[\"start\"].get(\"dateTime\", event[\"start\"].get(\"date\"))\n                end = event[\"end\"].get(\"dateTime\", event[\"end\"].get(\"date\"))\n\n                events.append(\n                    {\n                        \"account\": self.account_name,\n                        \"id\": event.get(\"id\"),\n                        \"summary\": event.get(\"summary\", \"(제목 없음)\"),\n                        \"start\": start,\n                        \"end\": end,\n                        \"all_day\": \"date\" in event[\"start\"],\n                        \"location\": event.get(\"location\"),\n                        \"description\": event.get(\"description\"),\n                        \"attendees\": [\n                            a.get(\"email\") for a in 
event.get(\"attendees\", [])\n                        ],\n                        \"status\": event.get(\"status\"),\n                        \"html_link\": event.get(\"htmlLink\"),\n                    }\n                )\n\n            page_token = result.get(\"nextPageToken\")\n            if not page_token:\n                break\n\n        return events\n\n    def list_calendars(self) -> list[dict]:\n        \"\"\"사용 가능한 모든 캘린더 목록 조회.\"\"\"\n        service = build(\"calendar\", \"v3\", credentials=self.creds)\n\n        calendars = []\n        page_token = None\n\n        while True:\n            result = service.calendarList().list(pageToken=page_token).execute()\n\n            for cal in result.get(\"items\", []):\n                calendars.append(\n                    {\n                        \"id\": cal.get(\"id\"),\n                        \"summary\": cal.get(\"summary\"),\n                        \"primary\": cal.get(\"primary\", False),\n                        \"access_role\": cal.get(\"accessRole\"),\n                    }\n                )\n\n            page_token = result.get(\"nextPageToken\")\n            if not page_token:\n                break\n\n        return calendars\n\n    def create_event(\n        self,\n        summary: str,\n        start: str,\n        end: str,\n        description: Optional[str] = None,\n        location: Optional[str] = None,\n        attendees: Optional[list[str]] = None,\n        calendar_id: str = \"primary\",\n        timezone: str = \"Asia/Seoul\",\n    ) -> dict:\n        \"\"\"새 이벤트 생성.\"\"\"\n        service = build(\"calendar\", \"v3\", credentials=self.creds)\n\n        is_all_day = \"T\" not in start\n        event = {\"summary\": summary}\n\n        if is_all_day:\n            event[\"start\"] = {\"date\": start}\n            event[\"end\"] = {\"date\": end}\n        else:\n            event[\"start\"] = {\"dateTime\": start, \"timeZone\": timezone}\n            event[\"end\"] = {\"dateTime\": 
end, \"timeZone\": timezone}\n\n        if description:\n            event[\"description\"] = description\n        if location:\n            event[\"location\"] = location\n        if attendees:\n            event[\"attendees\"] = [{\"email\": email} for email in attendees]\n\n        result = service.events().insert(calendarId=calendar_id, body=event).execute()\n\n        return {\n            \"id\": result.get(\"id\"),\n            \"summary\": result.get(\"summary\"),\n            \"start\": result[\"start\"].get(\"dateTime\", result[\"start\"].get(\"date\")),\n            \"end\": result[\"end\"].get(\"dateTime\", result[\"end\"].get(\"date\")),\n            \"html_link\": result.get(\"htmlLink\"),\n            \"status\": \"created\",\n        }\n\n    def update_event(\n        self,\n        event_id: str,\n        summary: Optional[str] = None,\n        start: Optional[str] = None,\n        end: Optional[str] = None,\n        description: Optional[str] = None,\n        location: Optional[str] = None,\n        calendar_id: str = \"primary\",\n        timezone: str = \"Asia/Seoul\",\n    ) -> dict:\n        \"\"\"기존 이벤트 수정.\"\"\"\n        service = build(\"calendar\", \"v3\", credentials=self.creds)\n\n        event = service.events().get(calendarId=calendar_id, eventId=event_id).execute()\n\n        if summary is not None:\n            event[\"summary\"] = summary\n        if description is not None:\n            event[\"description\"] = description\n        if location is not None:\n            event[\"location\"] = location\n\n        if start is not None:\n            is_all_day = \"T\" not in start\n            if is_all_day:\n                event[\"start\"] = {\"date\": start}\n            else:\n                event[\"start\"] = {\"dateTime\": start, \"timeZone\": timezone}\n\n        if end is not None:\n            is_all_day = \"T\" not in end\n            if is_all_day:\n                event[\"end\"] = {\"date\": end}\n            else:\n       
         event[\"end\"] = {\"dateTime\": end, \"timeZone\": timezone}\n\n        result = (\n            service.events()\n            .update(calendarId=calendar_id, eventId=event_id, body=event)\n            .execute()\n        )\n\n        return {\n            \"id\": result.get(\"id\"),\n            \"summary\": result.get(\"summary\"),\n            \"start\": result[\"start\"].get(\"dateTime\", result[\"start\"].get(\"date\")),\n            \"end\": result[\"end\"].get(\"dateTime\", result[\"end\"].get(\"date\")),\n            \"html_link\": result.get(\"htmlLink\"),\n            \"status\": \"updated\",\n        }\n\n    def delete_event(\n        self,\n        event_id: str,\n        calendar_id: str = \"primary\",\n    ) -> dict:\n        \"\"\"이벤트 삭제.\"\"\"\n        service = build(\"calendar\", \"v3\", credentials=self.creds)\n\n        service.events().delete(calendarId=calendar_id, eventId=event_id).execute()\n\n        return {\n            \"id\": event_id,\n            \"status\": \"deleted\",\n        }\n\n\ndef get_all_accounts(base_path: Optional[Path] = None) -> list[str]:\n    \"\"\"등록된 모든 계정 이름 반환.\"\"\"\n    base_path = base_path or Path(__file__).parent.parent\n    accounts_dir = base_path / \"accounts\"\n\n    if not accounts_dir.exists():\n        return []\n\n    return [\n        f.stem\n        for f in accounts_dir.glob(\"*.json\")\n        if f.stem not in (\"credentials\",)\n    ]\n\n\ndef fetch_all_events(days: int = 7, base_path: Optional[Path] = None) -> dict:\n    \"\"\"모든 계정의 이벤트를 조회하여 통합.\n\n    Args:\n        days: 조회할 기간 (일)\n        base_path: skill 루트 경로\n\n    Returns:\n        {\n            \"accounts\": [\"work\", \"personal\"],\n            \"events\": [...],\n            \"errors\": {\"account_name\": \"error message\"},\n            \"total\": 10,\n            \"conflicts\": [...]\n        }\n    \"\"\"\n    accounts = get_all_accounts(base_path)\n    all_events = []\n    errors = {}\n\n    for account in 
accounts:\n        try:\n            client = CalendarClient(account, base_path)\n            events = client.get_events(days=days)\n            all_events.extend(events)\n        except Exception as e:\n            errors[account] = str(e)\n\n    # 시간순 정렬\n    all_events.sort(key=lambda x: x[\"start\"])\n\n    # 충돌 감지\n    conflicts = detect_conflicts(all_events)\n\n    return {\n        \"accounts\": accounts,\n        \"events\": all_events,\n        \"errors\": errors,\n        \"total\": len(all_events),\n        \"conflicts\": conflicts,\n    }\n\n\ndef detect_conflicts(events: list[dict]) -> list[dict]:\n    \"\"\"동일 시간대 이벤트 충돌 감지.\n\n    Args:\n        events: 시간순 정렬된 이벤트 목록\n\n    Returns:\n        충돌 이벤트 쌍 목록\n    \"\"\"\n    conflicts = []\n\n    for i, event1 in enumerate(events):\n        if event1.get(\"all_day\"):\n            continue\n\n        for event2 in events[i + 1 :]:\n            if event2.get(\"all_day\"):\n                continue\n\n            # 같은 계정이면 충돌 아님\n            if event1[\"account\"] == event2[\"account\"]:\n                continue\n\n            # 시간 비교\n            start1 = datetime.fromisoformat(event1[\"start\"].replace(\"Z\", \"+00:00\"))\n            end1 = datetime.fromisoformat(event1[\"end\"].replace(\"Z\", \"+00:00\"))\n            start2 = datetime.fromisoformat(event2[\"start\"].replace(\"Z\", \"+00:00\"))\n            end2 = datetime.fromisoformat(event2[\"end\"].replace(\"Z\", \"+00:00\"))\n\n            # event2 시작이 event1 끝 이후면 더 이상 비교 불필요\n            if start2 >= end1:\n                break\n\n            # 겹침 확인\n            if start1 < end2 and start2 < end1:\n                conflicts.append(\n                    {\n                        \"event1\": {\n                            \"account\": event1[\"account\"],\n                            \"summary\": event1[\"summary\"],\n                            \"start\": event1[\"start\"],\n                            \"end\": event1[\"end\"],\n              
          },\n                        \"event2\": {\n                            \"account\": event2[\"account\"],\n                            \"summary\": event2[\"summary\"],\n                            \"start\": event2[\"start\"],\n                            \"end\": event2[\"end\"],\n                        },\n                    }\n                )\n\n    return conflicts\n"
  },
  {
    "path": "plugins/google-calendar/skills/google-calendar/scripts/fetch_events.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Google Calendar 이벤트 조회 CLI.\n\nSubagent에서 호출하여 특정 계정의 이벤트를 JSON으로 반환.\n\nUsage:\n    # ADC(Application Default Credentials) 사용 - 가장 간단\n    uv run python fetch_events.py --adc --days 7\n\n    # 특정 계정 조회\n    uv run python fetch_events.py --account work --days 7\n\n    # 모든 계정 조회 (통합)\n    uv run python fetch_events.py --all --days 7\n\n    # 캘린더 목록 조회\n    uv run python fetch_events.py --adc --list-calendars\n\"\"\"\n\nimport argparse\nimport json\nimport sys\nfrom datetime import datetime\nfrom pathlib import Path\nfrom zoneinfo import ZoneInfo\n\nfrom calendar_client import CalendarClient, ADCCalendarClient, fetch_all_events, get_all_accounts\n\n\ndef format_event_for_display(event: dict, tz: ZoneInfo = None) -> str:\n    \"\"\"이벤트를 사람이 읽기 좋은 형식으로 변환.\"\"\"\n    if tz is None:\n        tz = ZoneInfo(\"Asia/Seoul\")\n\n    start = event[\"start\"]\n    end = event[\"end\"]\n    account = event[\"account\"]\n    summary = event[\"summary\"]\n\n    # 시간 파싱\n    if event.get(\"all_day\"):\n        time_str = \"종일\"\n    else:\n        start_dt = datetime.fromisoformat(start.replace(\"Z\", \"+00:00\")).astimezone(tz)\n        end_dt = datetime.fromisoformat(end.replace(\"Z\", \"+00:00\")).astimezone(tz)\n        time_str = f\"{start_dt.strftime('%H:%M')}-{end_dt.strftime('%H:%M')}\"\n\n    # 계정별 아이콘\n    icon = \"🔵\" if account == \"work\" else \"🟢\"\n\n    return f\"[{time_str}] {icon} {summary} ({account})\"\n\n\ndef main():\n    parser = argparse.ArgumentParser(\n        description=\"Google Calendar 이벤트 조회\"\n    )\n    parser.add_argument(\n        \"--account\",\n        \"-a\",\n        help=\"계정 식별자 (예: work, personal)\",\n    )\n    parser.add_argument(\n        \"--all\",\n        action=\"store_true\",\n        help=\"모든 계정의 이벤트 조회\",\n    )\n    parser.add_argument(\n        \"--days\",\n        \"-d\",\n        type=int,\n        default=7,\n        help=\"조회할 기간 (일, 기본값: 7)\",\n    )\n    
parser.add_argument(\n        \"--list-calendars\",\n        action=\"store_true\",\n        help=\"캘린더 목록 조회\",\n    )\n    parser.add_argument(\n        \"--json\",\n        \"-j\",\n        action=\"store_true\",\n        help=\"JSON 형식으로 출력\",\n    )\n    parser.add_argument(\n        \"--pretty\",\n        \"-p\",\n        action=\"store_true\",\n        help=\"사람이 읽기 좋은 형식으로 출력\",\n    )\n    parser.add_argument(\n        \"--adc\",\n        action=\"store_true\",\n        help=\"Application Default Credentials 사용 (gcloud auth application-default login)\",\n    )\n\n    args = parser.parse_args()\n    base_path = Path(__file__).parent.parent\n\n    # ADC 모드\n    if args.adc:\n        try:\n            client = ADCCalendarClient(account_name=\"gcloud\")\n\n            # 캘린더 목록\n            if args.list_calendars:\n                calendars = client.list_calendars()\n                if args.json or not args.pretty:\n                    print(json.dumps(calendars, ensure_ascii=False, indent=2))\n                else:\n                    print(\"📋 ADC 계정의 캘린더:\\n\")\n                    for cal in calendars:\n                        primary = \" (기본)\" if cal[\"primary\"] else \"\"\n                        print(f\"  - {cal['summary']}{primary}\")\n                        print(f\"    ID: {cal['id']}\")\n                return\n\n            # 이벤트 조회\n            events = client.get_events(days=args.days)\n\n            if args.json or not args.pretty:\n                print(json.dumps(events, ensure_ascii=False, indent=2))\n            else:\n                print(f\"📅 ADC 계정 - 향후 {args.days}일간 일정\\n\")\n\n                # 날짜별 그룹화\n                events_by_date = {}\n                for event in events:\n                    start = event[\"start\"]\n                    if \"T\" in start:\n                        date = start.split(\"T\")[0]\n                    else:\n                        date = start\n                    events_by_date.setdefault(date, 
[]).append(event)\n\n                for date in sorted(events_by_date.keys()):\n                    dt = datetime.fromisoformat(date)\n                    print(f\"### {dt.strftime('%Y-%m-%d (%a)')}\")\n                    for event in events_by_date[date]:\n                        print(f\"  {format_event_for_display(event)}\")\n                    print()\n\n                print(f\"📊 총 {len(events)}개 일정\")\n\n        except Exception as e:\n            print(f\"❌ ADC 오류: {e}\", file=sys.stderr)\n            print(\"gcloud auth application-default login 실행 필요\", file=sys.stderr)\n            sys.exit(1)\n        return\n\n    # 계정 목록 확인\n    accounts = get_all_accounts(base_path)\n    if not accounts:\n        print(\"❌ 등록된 계정이 없습니다.\", file=sys.stderr)\n        print(\"먼저 setup_auth.py로 계정을 등록하세요:\", file=sys.stderr)\n        print(\"  uv run python setup_auth.py --account work\", file=sys.stderr)\n        sys.exit(1)\n\n    # 모든 계정 조회\n    if args.all:\n        result = fetch_all_events(days=args.days, base_path=base_path)\n\n        if args.json or not args.pretty:\n            print(json.dumps(result, ensure_ascii=False, indent=2))\n        else:\n            print(f\"📅 향후 {args.days}일간 일정\\n\")\n\n            # 날짜별 그룹화\n            events_by_date = {}\n            for event in result[\"events\"]:\n                start = event[\"start\"]\n                if \"T\" in start:\n                    date = start.split(\"T\")[0]\n                else:\n                    date = start\n                events_by_date.setdefault(date, []).append(event)\n\n            for date in sorted(events_by_date.keys()):\n                dt = datetime.fromisoformat(date)\n                print(f\"### {dt.strftime('%Y-%m-%d (%a)')}\")\n                for event in events_by_date[date]:\n                    print(f\"  {format_event_for_display(event)}\")\n                print()\n\n            # 요약\n            print(f\"📊 총 {result['total']}개 일정\")\n            for account in 
result[\"accounts\"]:\n                count = len([e for e in result[\"events\"] if e[\"account\"] == account])\n                print(f\"   - {account}: {count}개\")\n\n            if result[\"conflicts\"]:\n                print(f\"\\n⚠️  {len(result['conflicts'])}건 충돌:\")\n                for conflict in result[\"conflicts\"]:\n                    e1, e2 = conflict[\"event1\"], conflict[\"event2\"]\n                    print(f\"   - {e1['summary']} ({e1['account']}) ↔ {e2['summary']} ({e2['account']})\")\n\n            if result[\"errors\"]:\n                print(\"\\n❌ 오류:\")\n                for account, error in result[\"errors\"].items():\n                    print(f\"   - {account}: {error}\")\n\n        return\n\n    # 특정 계정 조회\n    if not args.account:\n        parser.print_help()\n        print()\n        print(f\"등록된 계정: {', '.join(accounts)}\")\n        return\n\n    if args.account not in accounts:\n        print(f\"❌ 계정 '{args.account}'이 등록되지 않았습니다.\", file=sys.stderr)\n        print(f\"등록된 계정: {', '.join(accounts)}\", file=sys.stderr)\n        sys.exit(1)\n\n    try:\n        client = CalendarClient(args.account, base_path)\n\n        # 캘린더 목록\n        if args.list_calendars:\n            calendars = client.list_calendars()\n            if args.json:\n                print(json.dumps(calendars, ensure_ascii=False, indent=2))\n            else:\n                print(f\"📋 '{args.account}' 계정의 캘린더:\\n\")\n                for cal in calendars:\n                    primary = \" (기본)\" if cal[\"primary\"] else \"\"\n                    print(f\"  - {cal['summary']}{primary}\")\n                    print(f\"    ID: {cal['id']}\")\n            return\n\n        # 이벤트 조회\n        events = client.get_events(days=args.days)\n\n        if args.json or not args.pretty:\n            print(json.dumps(events, ensure_ascii=False, indent=2))\n        else:\n            print(f\"📅 '{args.account}' 계정 - 향후 {args.days}일간 일정\\n\")\n            for event in events:\n    
            print(f\"  {format_event_for_display(event)}\")\n            print(f\"\\n총 {len(events)}개 일정\")\n\n    except FileNotFoundError as e:\n        print(f\"❌ {e}\", file=sys.stderr)\n        sys.exit(1)\n    except Exception as e:\n        print(f\"❌ 오류 발생: {e}\", file=sys.stderr)\n        sys.exit(1)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "plugins/google-calendar/skills/google-calendar/scripts/manage_events.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Google Calendar 이벤트 관리 CLI.\n\n일정 생성, 수정, 삭제를 위한 CLI 도구.\n\nUsage:\n    # 일정 생성 (시간 지정)\n    uv run python manage_events.py create \\\n        --summary \"팀 미팅\" \\\n        --start \"2026-01-06T14:00:00\" \\\n        --end \"2026-01-06T15:00:00\" \\\n        --account work\n\n    # 종일 일정 생성\n    uv run python manage_events.py create \\\n        --summary \"연차\" \\\n        --start \"2026-01-10\" \\\n        --end \"2026-01-11\" \\\n        --account personal\n\n    # 일정 수정\n    uv run python manage_events.py update \\\n        --event-id \"abc123\" \\\n        --summary \"팀 미팅 (변경)\" \\\n        --account work\n\n    # 일정 삭제\n    uv run python manage_events.py delete \\\n        --event-id \"abc123\" \\\n        --account work\n\n    # ADC 사용\n    uv run python manage_events.py create \\\n        --summary \"테스트\" \\\n        --start \"2026-01-06T10:00:00\" \\\n        --end \"2026-01-06T11:00:00\" \\\n        --adc\n\"\"\"\n\nimport argparse\nimport json\nimport sys\nfrom pathlib import Path\n\nfrom calendar_client import CalendarClient, ADCCalendarClient\n\n\ndef cmd_create(args):\n    \"\"\"일정 생성.\"\"\"\n    if args.adc:\n        client = ADCCalendarClient()\n    else:\n        if not args.account:\n            print(\"❌ --account 또는 --adc 필수\", file=sys.stderr)\n            sys.exit(1)\n        base_path = Path(__file__).parent.parent\n        client = CalendarClient(args.account, base_path)\n\n    attendees = args.attendees.split(\",\") if args.attendees else None\n\n    result = client.create_event(\n        summary=args.summary,\n        start=args.start,\n        end=args.end,\n        description=args.description,\n        location=args.location,\n        attendees=attendees,\n        timezone=args.timezone,\n    )\n\n    if args.json:\n        print(json.dumps(result, ensure_ascii=False, indent=2))\n    else:\n        print(f\"✅ 일정 생성 완료\")\n        print(f\"   제목: {result['summary']}\")\n        print(f\"   
시간: {result['start']} ~ {result['end']}\")\n        print(f\"   ID: {result['id']}\")\n        print(f\"   링크: {result['html_link']}\")\n\n\ndef cmd_update(args):\n    \"\"\"일정 수정.\"\"\"\n    if args.adc:\n        client = ADCCalendarClient()\n    else:\n        if not args.account:\n            print(\"❌ --account 또는 --adc 필수\", file=sys.stderr)\n            sys.exit(1)\n        base_path = Path(__file__).parent.parent\n        client = CalendarClient(args.account, base_path)\n\n    result = client.update_event(\n        event_id=args.event_id,\n        summary=args.summary,\n        start=args.start,\n        end=args.end,\n        description=args.description,\n        location=args.location,\n        timezone=args.timezone,\n    )\n\n    if args.json:\n        print(json.dumps(result, ensure_ascii=False, indent=2))\n    else:\n        print(f\"✅ 일정 수정 완료\")\n        print(f\"   제목: {result['summary']}\")\n        print(f\"   시간: {result['start']} ~ {result['end']}\")\n        print(f\"   ID: {result['id']}\")\n        print(f\"   링크: {result['html_link']}\")\n\n\ndef cmd_delete(args):\n    \"\"\"일정 삭제.\"\"\"\n    if args.adc:\n        client = ADCCalendarClient()\n    else:\n        if not args.account:\n            print(\"❌ --account 또는 --adc 필수\", file=sys.stderr)\n            sys.exit(1)\n        base_path = Path(__file__).parent.parent\n        client = CalendarClient(args.account, base_path)\n\n    result = client.delete_event(event_id=args.event_id)\n\n    if args.json:\n        print(json.dumps(result, ensure_ascii=False, indent=2))\n    else:\n        print(f\"✅ 일정 삭제 완료\")\n        print(f\"   ID: {result['id']}\")\n\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"Google Calendar 이벤트 관리\")\n    parser.add_argument(\"--json\", \"-j\", action=\"store_true\", help=\"JSON 출력\")\n\n    subparsers = parser.add_subparsers(dest=\"command\", help=\"명령\")\n\n    # create 명령\n    create_parser = subparsers.add_parser(\"create\", help=\"일정 
생성\")\n    create_parser.add_argument(\"--summary\", \"-s\", required=True, help=\"일정 제목\")\n    create_parser.add_argument(\"--start\", required=True, help=\"시작 시간 (ISO format)\")\n    create_parser.add_argument(\"--end\", required=True, help=\"종료 시간 (ISO format)\")\n    create_parser.add_argument(\"--description\", \"-d\", help=\"설명\")\n    create_parser.add_argument(\"--location\", \"-l\", help=\"장소\")\n    create_parser.add_argument(\"--attendees\", help=\"참석자 (쉼표 구분)\")\n    create_parser.add_argument(\"--account\", \"-a\", help=\"계정\")\n    create_parser.add_argument(\"--adc\", action=\"store_true\", help=\"ADC 사용\")\n    create_parser.add_argument(\"--timezone\", default=\"Asia/Seoul\", help=\"타임존\")\n    create_parser.add_argument(\"--json\", \"-j\", action=\"store_true\", help=\"JSON 출력\")\n\n    # update 명령\n    update_parser = subparsers.add_parser(\"update\", help=\"일정 수정\")\n    update_parser.add_argument(\"--event-id\", required=True, help=\"이벤트 ID\")\n    update_parser.add_argument(\"--summary\", \"-s\", help=\"새 제목\")\n    update_parser.add_argument(\"--start\", help=\"새 시작 시간\")\n    update_parser.add_argument(\"--end\", help=\"새 종료 시간\")\n    update_parser.add_argument(\"--description\", \"-d\", help=\"새 설명\")\n    update_parser.add_argument(\"--location\", \"-l\", help=\"새 장소\")\n    update_parser.add_argument(\"--account\", \"-a\", help=\"계정\")\n    update_parser.add_argument(\"--adc\", action=\"store_true\", help=\"ADC 사용\")\n    update_parser.add_argument(\"--timezone\", default=\"Asia/Seoul\", help=\"타임존\")\n    update_parser.add_argument(\"--json\", \"-j\", action=\"store_true\", help=\"JSON 출력\")\n\n    # delete 명령\n    delete_parser = subparsers.add_parser(\"delete\", help=\"일정 삭제\")\n    delete_parser.add_argument(\"--event-id\", required=True, help=\"이벤트 ID\")\n    delete_parser.add_argument(\"--account\", \"-a\", help=\"계정\")\n    delete_parser.add_argument(\"--adc\", action=\"store_true\", help=\"ADC 사용\")\n    
delete_parser.add_argument(\"--json\", \"-j\", action=\"store_true\", help=\"JSON 출력\")\n\n    args = parser.parse_args()\n\n    if not args.command:\n        parser.print_help()\n        sys.exit(1)\n\n    try:\n        if args.command == \"create\":\n            cmd_create(args)\n        elif args.command == \"update\":\n            cmd_update(args)\n        elif args.command == \"delete\":\n            cmd_delete(args)\n    except Exception as e:\n        print(f\"❌ 오류: {e}\", file=sys.stderr)\n        sys.exit(1)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "plugins/google-calendar/skills/google-calendar/scripts/setup_auth.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Google Calendar OAuth 인증 설정.\n\n최초 1회 실행하여 계정별 refresh token을 저장.\n이후에는 저장된 token으로 자동 인증됨.\n\nUsage:\n    uv run python setup_auth.py --account work\n    uv run python setup_auth.py --account personal\n\"\"\"\n\nimport argparse\nimport json\nfrom pathlib import Path\n\nfrom google_auth_oauthlib.flow import InstalledAppFlow\n\n\nSCOPES = [\"https://www.googleapis.com/auth/calendar.readonly\"]\n\n\ndef setup_auth(account_name: str, base_path: Path) -> None:\n    \"\"\"OAuth 인증 플로우 실행 및 토큰 저장.\n\n    Args:\n        account_name: 계정 식별자 (예: 'work', 'personal')\n        base_path: skill 루트 경로\n    \"\"\"\n    credentials_path = base_path / \"references\" / \"credentials.json\"\n    token_path = base_path / \"accounts\" / f\"{account_name}.json\"\n\n    if not credentials_path.exists():\n        print(f\"❌ OAuth Client ID 파일이 없습니다: {credentials_path}\")\n        print()\n        print(\"설정 방법:\")\n        print(\"1. https://console.cloud.google.com 접속\")\n        print(\"2. 프로젝트 생성 또는 선택\")\n        print(\"3. 'API 및 서비스' > '사용자 인증 정보'\")\n        print(\"4. 'OAuth 2.0 클라이언트 ID' 생성 (Desktop 유형)\")\n        print(\"5. JSON 다운로드 → references/credentials.json 저장\")\n        return\n\n    # 기존 토큰 확인\n    if token_path.exists():\n        print(f\"⚠️  계정 '{account_name}'의 토큰이 이미 존재합니다.\")\n        response = input(\"덮어쓰시겠습니까? 
[y/N]: \")\n        if response.lower() != \"y\":\n            print(\"취소됨\")\n            return\n\n    print(f\"🔐 '{account_name}' 계정 인증을 시작합니다...\")\n    print(\"브라우저가 열리면 Google 계정으로 로그인하세요.\")\n    print()\n\n    # OAuth 플로우 실행\n    flow = InstalledAppFlow.from_client_secrets_file(\n        str(credentials_path),\n        SCOPES,\n    )\n\n    # 로컬 서버로 콜백 받기\n    creds = flow.run_local_server(port=0)\n\n    # 토큰 저장\n    token_path.parent.mkdir(parents=True, exist_ok=True)\n    with open(token_path, \"w\") as f:\n        json.dump(json.loads(creds.to_json()), f, indent=2)\n\n    print()\n    print(f\"✅ 인증 완료! 토큰 저장됨: {token_path}\")\n    print(f\"   계정: {account_name}\")\n\n\ndef list_accounts(base_path: Path) -> None:\n    \"\"\"등록된 계정 목록 출력.\"\"\"\n    accounts_dir = base_path / \"accounts\"\n\n    if not accounts_dir.exists():\n        print(\"등록된 계정이 없습니다.\")\n        return\n\n    accounts = [f.stem for f in accounts_dir.glob(\"*.json\")]\n\n    if not accounts:\n        print(\"등록된 계정이 없습니다.\")\n        return\n\n    print(\"📋 등록된 계정:\")\n    for account in accounts:\n        print(f\"   - {account}\")\n\n\ndef main():\n    parser = argparse.ArgumentParser(\n        description=\"Google Calendar OAuth 인증 설정\"\n    )\n    parser.add_argument(\n        \"--account\",\n        \"-a\",\n        help=\"계정 식별자 (예: work, personal)\",\n    )\n    parser.add_argument(\n        \"--list\",\n        \"-l\",\n        action=\"store_true\",\n        help=\"등록된 계정 목록 출력\",\n    )\n\n    args = parser.parse_args()\n    base_path = Path(__file__).parent.parent\n\n    if args.list:\n        list_accounts(base_path)\n        return\n\n    if not args.account:\n        parser.print_help()\n        print()\n        print(\"예시:\")\n        print(\"  uv run python setup_auth.py --account work\")\n        print(\"  uv run python setup_auth.py --account personal\")\n        print(\"  uv run python setup_auth.py --list\")\n        return\n\n    setup_auth(args.account, 
base_path)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "plugins/interactive-review/.claude-plugin/plugin.json",
    "content": "{\n  \"name\": \"interactive-review\",\n  \"description\": \"Interactive markdown review with web UI - review plans and documents with checkbox approvals and inline comments\",\n  \"version\": \"1.0.4\",\n  \"author\": {\n    \"name\": \"Team Attention\"\n  },\n  \"license\": \"MIT\",\n  \"keywords\": [\"review\", \"markdown\", \"interactive\", \"web-ui\", \"feedback\"],\n  \"repository\": \"https://github.com/team-attention/agents\"\n}\n"
  },
  {
    "path": "plugins/interactive-review/.mcp.json",
    "content": "{\n  \"mcpServers\": {\n    \"interactive_review\": {\n      \"command\": \"uv\",\n      \"args\": [\"run\", \"--directory\", \"${CLAUDE_PLUGIN_ROOT}/mcp-server\", \"server.py\"]\n    }\n  }\n}\n"
  },
  {
    "path": "plugins/interactive-review/CLAUDE.md",
    "content": "# Interactive Review Plugin\n\nClaude Code plugin for interactive markdown review with web UI.\n\n## Directory Structure\n\n```\ninteractive-review/\n├── .claude-plugin/\n│   └── plugin.json      # Plugin metadata and version\n├── mcp-server/\n│   ├── server.py        # MCP server (PEP 723 dependencies)\n│   └── web_ui.py        # HTML/CSS generation\n├── skills/\n│   └── review.md        # /review skill definition\n└── .mcp.json            # MCP server configuration\n```\n\n## Development\n\n### MCP Server\n\n`uv run`을 사용하여 의존성 자동 설치. `server.py` 상단의 PEP 723 metadata 참조:\n\n```python\n# /// script\n# dependencies = [\"mcp>=1.0.0\"]\n# ///\n```\n\n### Testing Locally\n\n```bash\ncd mcp-server\nuv run python server.py\n```\n\n## Versioning\n\n### Before Commit/Push\n\n**반드시 `.claude-plugin/plugin.json`의 버전을 올려야 합니다.**\n\n```json\n{\n  \"version\": \"1.0.3\"  // <- 이 값을 수정\n}\n```\n\n### Semantic Versioning\n\n- **MAJOR** (1.0.0 → 2.0.0): Breaking changes, API 변경\n- **MINOR** (1.0.0 → 1.1.0): 새 기능 추가 (하위 호환)\n- **PATCH** (1.0.0 → 1.0.1): 버그 수정, 문서 업데이트\n\n### Checklist\n\n- [ ] 기능 변경 시 버전 올리기\n- [ ] `plugin.json` 버전 업데이트\n- [ ] 변경 사항 테스트\n\n## Marketplace Publishing\n\n**Marketplace 퍼블리싱은 선택 사항입니다.**\n\n| 상황 | Marketplace 필요? |\n|------|------------------|\n| 개인용/로컬 테스트 | 불필요 |\n| 팀 내부 공유 | 선택적 |\n| 커뮤니티 배포 | 권장 |\n\n현재 이 플러그인은 `team-attention/agents` 레포에 포함되어 있어 별도 marketplace 등록 없이 사용 가능합니다.\n\n### 참고 문서\n\n- [Claude Code Plugins](https://www.anthropic.com/news/claude-code-plugins)\n- [Plugin Marketplaces Guide](https://code.claude.com/docs/en/plugin-marketplaces)\n"
  },
  {
    "path": "plugins/interactive-review/README.md",
    "content": "# Interactive Review Plugin\n\nInteractive markdown review with web UI for Claude Code.\n\n## Features\n\n- **Visual Review UI**: Opens a web browser with an interactive review interface\n- **Checkbox Approvals**: Approve or reject each section of your plan/document\n- **Inline Comments**: Add comments to any item\n- **Keyboard Shortcuts**: `Cmd+Enter` to submit, `Esc` to cancel\n\n## Installation\n\n### From Marketplace\n\n```bash\n/plugin marketplace add team-attention/agents\n/plugin install interactive-review@team-attention-plugins\n```\n\n### Local Development\n\n```bash\nclaude --plugin-dir /path/to/plugins/interactive-review\n```\n\n## Requirements\n\n- Python 3.9+\n- `mcp` package (`pip install mcp`)\n\n## Usage\n\nAfter Claude generates a plan or document:\n\n1. Say \"review this\" or \"/review\"\n2. A browser window opens with the review UI\n3. Check/uncheck items to approve or reject\n4. Add comments where needed\n5. Click \"Submit Review\" or press `Cmd+Enter`\n6. Claude processes your feedback\n\n## Review Result Format\n\n```json\n{\n  \"status\": \"submitted\",\n  \"items\": [\n    {\"id\": \"block-0\", \"text\": \"Step 1\", \"checked\": true, \"comment\": \"LGTM\"},\n    {\"id\": \"block-1\", \"text\": \"Step 2\", \"checked\": false, \"comment\": \"Use different approach\"}\n  ],\n  \"summary\": {\n    \"total\": 2,\n    \"approved\": 1,\n    \"rejected\": 1,\n    \"has_comments\": 2\n  }\n}\n```\n\n## How Feedback is Processed\n\n| checked | comment | Meaning |\n|---------|---------|---------|\n| true | empty | Approved - proceed as planned |\n| true | has text | Approved with note |\n| false | has text | Rejected - modify per comment |\n| false | empty | Rejected - remove or reconsider |\n"
  },
  {
    "path": "plugins/interactive-review/mcp-server/requirements.txt",
    "content": "mcp>=1.0.0\n"
  },
  {
    "path": "plugins/interactive-review/mcp-server/server.py",
    "content": "#!/usr/bin/env python3\n# /// script\n# dependencies = [\"mcp>=1.0.0\"]\n# ///\n\"\"\"\nInteractive Review MCP Server\n\nProvides the start_review tool that:\n1. Parses markdown content into reviewable blocks\n2. Generates an interactive HTML UI\n3. Serves it via a local HTTP server\n4. Opens the browser automatically\n5. Waits for user feedback\n6. Returns structured review results\n\"\"\"\n\nimport asyncio\nimport json\nimport os\nimport signal\nimport socket\nimport sys\nimport tempfile\nimport threading\nimport uuid\nimport webbrowser\nfrom dataclasses import asdict\nfrom http.server import HTTPServer, SimpleHTTPRequestHandler\nfrom pathlib import Path\nfrom typing import Any\n\nfrom mcp.server import Server\nfrom mcp.server.stdio import stdio_server\nfrom mcp.types import Tool, TextContent\n\nfrom web_ui import parse_markdown, generate_html\n\n\n# Global state for the HTTP server\n_review_result: dict | None = None\n_result_event = threading.Event()\n\n\ndef find_free_port() -> int:\n    \"\"\"Find a free port on localhost.\"\"\"\n    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n        s.bind(('', 0))\n        s.listen(1)\n        return s.getsockname()[1]\n\n\nclass ReviewHTTPHandler(SimpleHTTPRequestHandler):\n    \"\"\"HTTP handler for serving the review UI and receiving results.\"\"\"\n\n    def __init__(self, *args, review_dir: str, **kwargs):\n        self.review_dir = review_dir\n        super().__init__(*args, directory=review_dir, **kwargs)\n\n    def do_POST(self):\n        \"\"\"Handle POST request for submitting review results.\"\"\"\n        global _review_result\n\n        if self.path == '/submit':\n            content_length = int(self.headers['Content-Length'])\n            post_data = self.rfile.read(content_length)\n\n            try:\n                _review_result = json.loads(post_data.decode('utf-8'))\n                _result_event.set()\n\n                self.send_response(200)\n                
self.send_header('Content-Type', 'application/json')\n                self.send_header('Access-Control-Allow-Origin', '*')\n                self.end_headers()\n                self.wfile.write(b'{\"status\": \"ok\"}')\n            except Exception as e:\n                self.send_response(500)\n                self.send_header('Content-Type', 'application/json')\n                self.end_headers()\n                self.wfile.write(json.dumps({\"error\": str(e)}).encode())\n        else:\n            self.send_response(404)\n            self.end_headers()\n\n    def do_OPTIONS(self):\n        \"\"\"Handle CORS preflight requests.\"\"\"\n        self.send_response(200)\n        self.send_header('Access-Control-Allow-Origin', '*')\n        self.send_header('Access-Control-Allow-Methods', 'POST, OPTIONS')\n        self.send_header('Access-Control-Allow-Headers', 'Content-Type')\n        self.end_headers()\n\n    def log_message(self, format, *args):\n        \"\"\"Suppress logging to stderr.\"\"\"\n        pass\n\n\ndef make_handler(review_dir: str):\n    \"\"\"Factory to create handler with review_dir bound.\"\"\"\n    def handler(*args, **kwargs):\n        return ReviewHTTPHandler(*args, review_dir=review_dir, **kwargs)\n    return handler\n\n\nasync def start_review_impl(content: str, title: str = \"Review\") -> dict[str, Any]:\n    \"\"\"\n    Implementation of the start_review tool.\n\n    Args:\n        content: Markdown content to review\n        title: Title for the review UI\n\n    Returns:\n        Review results with status, items, and summary\n    \"\"\"\n    global _review_result, _result_event\n\n    # Reset state\n    _review_result = None\n    _result_event.clear()\n\n    # Create temp directory\n    review_id = str(uuid.uuid4())[:8]\n    review_dir = Path(tempfile.gettempdir()) / f\"claude-review-{review_id}\"\n    review_dir.mkdir(parents=True, exist_ok=True)\n\n    server = None\n    try:\n        # Parse markdown\n        blocks = 
parse_markdown(content)\n\n        if not blocks:\n            return {\n                \"status\": \"error\",\n                \"message\": \"No reviewable content found in the markdown\"\n            }\n\n        # Find a free port\n        port = find_free_port()\n\n        # Generate HTML\n        html_content = generate_html(title, content, blocks, port)\n        html_path = review_dir / \"index.html\"\n        html_path.write_text(html_content, encoding='utf-8')\n\n        # Save input for reference\n        input_data = {\n            \"version\": \"1.0\",\n            \"title\": title,\n            \"content\": content,\n            \"blocks\": [asdict(b) for b in blocks]\n        }\n        (review_dir / \"input.json\").write_text(\n            json.dumps(input_data, indent=2, ensure_ascii=False),\n            encoding='utf-8'\n        )\n\n        # Start HTTP server in a thread\n        server = HTTPServer(('localhost', port), make_handler(str(review_dir)))\n        server_thread = threading.Thread(target=server.serve_forever)\n        server_thread.daemon = True\n        server_thread.start()\n\n        # Open browser\n        url = f\"http://localhost:{port}/index.html\"\n        webbrowser.open(url)\n\n        # Wait for result (timeout: 5 minutes)\n        timeout = 300\n        result_received = await asyncio.get_event_loop().run_in_executor(\n            None, lambda: _result_event.wait(timeout)\n        )\n\n        if not result_received:\n            return {\n                \"status\": \"timeout\",\n                \"message\": \"Review timed out after 5 minutes\"\n            }\n\n        if _review_result is None:\n            return {\n                \"status\": \"error\",\n                \"message\": \"No result received\"\n            }\n\n        # Enrich result with summary\n        items = _review_result.get(\"items\", [])\n        approved = sum(1 for item in items if item.get(\"checked\", False))\n        rejected = len(items) - 
approved\n        has_comments = sum(1 for item in items if item.get(\"comment\", \"\").strip())\n\n        return {\n            \"status\": _review_result.get(\"status\", \"unknown\"),\n            \"items\": items,\n            \"summary\": {\n                \"total\": len(items),\n                \"approved\": approved,\n                \"rejected\": rejected,\n                \"has_comments\": has_comments\n            }\n        }\n\n    finally:\n        # Shutdown server\n        if server:\n            try:\n                server.shutdown()\n            except Exception as e:\n                print(f\"Error shutting down HTTP server: {e}\", file=sys.stderr)\n\n        # Cleanup temp directory\n        try:\n            import shutil\n            shutil.rmtree(review_dir, ignore_errors=True)\n        except Exception:\n            pass\n\n\n# Create MCP server\napp = Server(\"interactive-review\")\n\n\n@app.list_tools()\nasync def list_tools() -> list[Tool]:\n    \"\"\"List available tools.\"\"\"\n    return [\n        Tool(\n            name=\"start_review\",\n            description=\"\"\"Open an interactive web UI to review markdown content.\n\nThe user can:\n- Check/uncheck items to approve or reject them\n- Add comments to any item\n- Submit the review when done\n\nReturns structured feedback with approval status and comments for each item.\"\"\",\n            inputSchema={\n                \"type\": \"object\",\n                \"properties\": {\n                    \"content\": {\n                        \"type\": \"string\",\n                        \"description\": \"Markdown content to review (plans, documents, etc.)\"\n                    },\n                    \"title\": {\n                        \"type\": \"string\",\n                        \"description\": \"Title for the review UI\",\n                        \"default\": \"Review\"\n                    }\n                },\n                \"required\": [\"content\"]\n            }\n    
    )\n    ]\n\n\n@app.call_tool()\nasync def call_tool(name: str, arguments: dict) -> list[TextContent]:\n    \"\"\"Handle tool calls.\"\"\"\n    if name == \"start_review\":\n        content = arguments.get(\"content\", \"\")\n        title = arguments.get(\"title\", \"Review\")\n\n        result = await start_review_impl(content, title)\n\n        return [TextContent(\n            type=\"text\",\n            text=json.dumps(result, indent=2, ensure_ascii=False)\n        )]\n\n    return [TextContent(\n        type=\"text\",\n        text=json.dumps({\"error\": f\"Unknown tool: {name}\"})\n    )]\n\n\ndef setup_signal_handlers():\n    \"\"\"Set up signal handlers for graceful shutdown.\"\"\"\n    def handle_shutdown(signum, frame):\n        sys.exit(0)\n\n    signal.signal(signal.SIGTERM, handle_shutdown)\n    signal.signal(signal.SIGHUP, handle_shutdown)\n    signal.signal(signal.SIGPIPE, signal.SIG_DFL)\n\n\nasync def main():\n    \"\"\"Main entry point.\"\"\"\n    setup_signal_handlers()\n\n    try:\n        async with stdio_server() as (read_stream, write_stream):\n            await app.run(\n                read_stream,\n                write_stream,\n                app.create_initialization_options()\n            )\n    except (BrokenPipeError, ConnectionResetError, EOFError):\n        # Parent process closed the pipe - exit gracefully\n        pass\n    except KeyboardInterrupt:\n        pass\n    finally:\n        sys.exit(0)\n\n\nif __name__ == \"__main__\":\n    asyncio.run(main())\n"
  },
  {
    "path": "plugins/interactive-review/mcp-server/web_ui.py",
    "content": "\"\"\"\nWeb UI Generator for Interactive Review\n\nGenerates a self-contained HTML file with embedded CSS and JavaScript\nfor reviewing markdown content with line-level comments (GitHub-style).\nUses marked.js for markdown rendering.\n\"\"\"\n\nimport json\nfrom typing import List, Dict, Any\nfrom dataclasses import dataclass, asdict\n\n\n@dataclass\nclass Block:\n    \"\"\"Represents a reviewable block in the markdown content.\"\"\"\n    id: str\n    type: str  # heading, list-item, paragraph, code\n    text: str\n    level: int = 0  # for headings\n    raw: str = \"\"  # original markdown\n\n\ndef parse_markdown(content: str) -> List[Block]:\n    \"\"\"\n    Parse markdown content into lines for line-level commenting.\n    Returns list of Block objects, one per line.\n    \"\"\"\n    blocks = []\n    lines = content.split('\\n')\n\n    for i, line in enumerate(lines):\n        blocks.append(Block(\n            id=f\"line-{i}\",\n            type=\"line\",\n            text=line,\n            level=0,\n            raw=line\n        ))\n\n    return blocks\n\n\ndef generate_html(title: str, content: str, blocks: List[Block], server_port: int) -> str:\n    \"\"\"Generate the complete HTML for the review UI with marked.js and line comments.\"\"\"\n\n    # Escape content for JSON embedding\n    content_json = json.dumps(content)\n    lines_json = json.dumps([{\"id\": b.id, \"text\": b.text, \"lineNum\": i} for i, b in enumerate(blocks)])\n\n    return f'''<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n    <meta charset=\"UTF-8\">\n    <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n    <title>{title} - Interactive Review</title>\n    <script src=\"https://cdn.jsdelivr.net/npm/marked/marked.min.js\"></script>\n    <script src=\"https://cdn.jsdelivr.net/npm/highlight.js@11.9.0/lib/highlight.min.js\"></script>\n    <link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/highlight.js@11.9.0/styles/github-dark.min.css\">\n    
<style>\n        :root {{\n            --bg-primary: #0d1117;\n            --bg-secondary: #161b22;\n            --bg-tertiary: #21262d;\n            --bg-card: #1c2128;\n            --text-primary: #e6edf3;\n            --text-secondary: #8b949e;\n            --text-muted: #6e7681;\n            --accent: #58a6ff;\n            --accent-hover: #79b8ff;\n            --success: #3fb950;\n            --warning: #d29922;\n            --danger: #f85149;\n            --border: #30363d;\n            --border-accent: #388bfd;\n            --highlight-bg: rgba(56, 139, 253, 0.15);\n            --comment-bg: #2d333b;\n            --selection-bg: rgba(56, 139, 253, 0.3);\n        }}\n\n        * {{\n            box-sizing: border-box;\n            margin: 0;\n            padding: 0;\n        }}\n\n        body {{\n            font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Noto Sans', Helvetica, Arial, sans-serif;\n            background: var(--bg-primary);\n            color: var(--text-primary);\n            line-height: 1.6;\n            min-height: 100vh;\n        }}\n\n        .layout {{\n            display: flex;\n            min-height: 100vh;\n        }}\n\n        .main-content {{\n            flex: 1;\n            max-width: 900px;\n            padding: 2rem;\n            overflow-y: auto;\n        }}\n\n        .comments-sidebar {{\n            width: 350px;\n            background: var(--bg-secondary);\n            border-left: 1px solid var(--border);\n            padding: 1rem;\n            overflow-y: auto;\n            position: sticky;\n            top: 0;\n            height: 100vh;\n        }}\n\n        header {{\n            display: flex;\n            justify-content: space-between;\n            align-items: center;\n            margin-bottom: 1.5rem;\n            padding-bottom: 1rem;\n            border-bottom: 1px solid var(--border);\n        }}\n\n        h1 {{\n            font-size: 1.5rem;\n            font-weight: 600;\n        
}}\n\n        .summary {{\n            font-size: 0.875rem;\n            color: var(--text-secondary);\n        }}\n\n        .summary .count {{\n            background: var(--bg-tertiary);\n            padding: 0.25rem 0.5rem;\n            border-radius: 12px;\n            margin-left: 0.5rem;\n        }}\n\n        /* Markdown content area */\n        .markdown-container {{\n            background: var(--bg-secondary);\n            border: 1px solid var(--border);\n            border-radius: 8px;\n            overflow: hidden;\n        }}\n\n        .line-wrapper {{\n            display: flex;\n            position: relative;\n            border-bottom: 1px solid transparent;\n        }}\n\n        .line-wrapper:hover {{\n            background: var(--bg-tertiary);\n        }}\n\n        .line-wrapper.has-comment {{\n            background: var(--highlight-bg);\n            border-left: 3px solid var(--accent);\n        }}\n\n        .line-wrapper.selecting {{\n            background: var(--selection-bg);\n        }}\n\n        .line-number {{\n            flex-shrink: 0;\n            width: 50px;\n            padding: 0 12px;\n            text-align: right;\n            color: var(--text-muted);\n            font-family: 'SF Mono', Monaco, 'Consolas', monospace;\n            font-size: 12px;\n            user-select: none;\n            cursor: pointer;\n            border-right: 1px solid var(--border);\n        }}\n\n        .line-number:hover {{\n            color: var(--accent);\n        }}\n\n        .add-comment-btn {{\n            position: absolute;\n            left: 4px;\n            top: 50%;\n            transform: translateY(-50%);\n            width: 20px;\n            height: 20px;\n            background: var(--accent);\n            border: none;\n            border-radius: 50%;\n            color: white;\n            font-size: 14px;\n            font-weight: bold;\n            cursor: pointer;\n            opacity: 0;\n            transition: 
opacity 0.2s;\n            display: flex;\n            align-items: center;\n            justify-content: center;\n        }}\n\n        .line-wrapper:hover .add-comment-btn {{\n            opacity: 1;\n        }}\n\n        .line-content {{\n            flex: 1;\n            padding: 0 16px;\n            font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Noto Sans', Helvetica, Arial, sans-serif;\n            font-size: 14px;\n            white-space: pre-wrap;\n            word-wrap: break-word;\n        }}\n\n        /* Rendered markdown styling */\n        .rendered-markdown {{\n            padding: 24px;\n        }}\n\n        .rendered-markdown h1,\n        .rendered-markdown h2,\n        .rendered-markdown h3,\n        .rendered-markdown h4,\n        .rendered-markdown h5,\n        .rendered-markdown h6 {{\n            margin-top: 24px;\n            margin-bottom: 16px;\n            font-weight: 600;\n            line-height: 1.25;\n            border-bottom: 1px solid var(--border);\n            padding-bottom: 0.3em;\n        }}\n\n        .rendered-markdown h1 {{ font-size: 2em; }}\n        .rendered-markdown h2 {{ font-size: 1.5em; }}\n        .rendered-markdown h3 {{ font-size: 1.25em; border-bottom: none; }}\n        .rendered-markdown h4 {{ font-size: 1em; border-bottom: none; }}\n\n        .rendered-markdown p {{\n            margin-bottom: 16px;\n        }}\n\n        .rendered-markdown ul,\n        .rendered-markdown ol {{\n            margin-bottom: 16px;\n            padding-left: 2em;\n        }}\n\n        .rendered-markdown li {{\n            margin-bottom: 4px;\n        }}\n\n        .rendered-markdown code {{\n            background: var(--bg-tertiary);\n            padding: 0.2em 0.4em;\n            border-radius: 6px;\n            font-family: 'SF Mono', Monaco, 'Consolas', monospace;\n            font-size: 85%;\n        }}\n\n        .rendered-markdown pre {{\n            background: var(--bg-tertiary);\n            padding: 
16px;\n            border-radius: 6px;\n            overflow-x: auto;\n            margin-bottom: 16px;\n        }}\n\n        .rendered-markdown pre code {{\n            background: none;\n            padding: 0;\n            font-size: 14px;\n        }}\n\n        .rendered-markdown table {{\n            border-collapse: collapse;\n            width: 100%;\n            margin-bottom: 16px;\n        }}\n\n        .rendered-markdown th,\n        .rendered-markdown td {{\n            border: 1px solid var(--border);\n            padding: 6px 13px;\n        }}\n\n        .rendered-markdown th {{\n            background: var(--bg-tertiary);\n            font-weight: 600;\n        }}\n\n        .rendered-markdown blockquote {{\n            border-left: 4px solid var(--border);\n            padding-left: 16px;\n            color: var(--text-secondary);\n            margin-bottom: 16px;\n        }}\n\n        /* Source view with line numbers */\n        .source-view {{\n            display: none;\n        }}\n\n        .source-view.active {{\n            display: block;\n        }}\n\n        .rendered-view {{\n            display: block;\n        }}\n\n        .rendered-view.hidden {{\n            display: none;\n        }}\n\n        /* View toggle */\n        .view-toggle {{\n            display: flex;\n            gap: 0;\n            margin-bottom: 1rem;\n            border: 1px solid var(--border);\n            border-radius: 6px;\n            overflow: hidden;\n            width: fit-content;\n        }}\n\n        .view-toggle button {{\n            padding: 0.5rem 1rem;\n            background: var(--bg-secondary);\n            border: none;\n            color: var(--text-secondary);\n            cursor: pointer;\n            font-size: 0.875rem;\n            transition: all 0.2s;\n        }}\n\n        .view-toggle button:not(:last-child) {{\n            border-right: 1px solid var(--border);\n        }}\n\n        .view-toggle button.active {{\n            
background: var(--accent);\n            color: white;\n        }}\n\n        .view-toggle button:hover:not(.active) {{\n            background: var(--bg-tertiary);\n        }}\n\n        /* Comments sidebar */\n        .sidebar-header {{\n            font-size: 0.875rem;\n            font-weight: 600;\n            color: var(--text-secondary);\n            margin-bottom: 1rem;\n            padding-bottom: 0.5rem;\n            border-bottom: 1px solid var(--border);\n            display: flex;\n            justify-content: space-between;\n            align-items: center;\n        }}\n\n        .comment-card {{\n            background: var(--bg-card);\n            border: 1px solid var(--border);\n            border-radius: 8px;\n            margin-bottom: 1rem;\n            overflow: hidden;\n        }}\n\n        .comment-card.editing {{\n            border-color: var(--accent);\n        }}\n\n        .comment-header {{\n            display: flex;\n            justify-content: space-between;\n            align-items: center;\n            padding: 0.75rem;\n            background: var(--bg-tertiary);\n            font-size: 0.75rem;\n            color: var(--text-secondary);\n        }}\n\n        .comment-lines {{\n            font-family: 'SF Mono', Monaco, monospace;\n            color: var(--accent);\n        }}\n\n        .comment-preview {{\n            padding: 0.75rem;\n            font-size: 0.875rem;\n            background: var(--bg-primary);\n            border-bottom: 1px solid var(--border);\n            color: var(--text-muted);\n            font-family: 'SF Mono', Monaco, monospace;\n            max-height: 60px;\n            overflow: hidden;\n            white-space: pre-wrap;\n        }}\n\n        .comment-body {{\n            padding: 0.75rem;\n        }}\n\n        .comment-textarea {{\n            width: 100%;\n            min-height: 80px;\n            padding: 0.75rem;\n            background: var(--bg-primary);\n            border: 1px 
solid var(--border);\n            border-radius: 6px;\n            color: var(--text-primary);\n            font-size: 0.875rem;\n            resize: vertical;\n            font-family: inherit;\n        }}\n\n        .comment-textarea:focus {{\n            outline: none;\n            border-color: var(--accent);\n        }}\n\n        .comment-textarea::placeholder {{\n            color: var(--text-muted);\n        }}\n\n        .comment-textarea.saved {{\n            border-color: var(--success);\n            transition: border-color 0.3s;\n        }}\n\n        .save-indicator {{\n            font-size: 0.7rem;\n            color: var(--success);\n            opacity: 0;\n            transition: opacity 0.2s;\n            margin-top: 0.25rem;\n        }}\n\n        .save-indicator.visible {{\n            opacity: 1;\n        }}\n\n        .comment-actions {{\n            display: flex;\n            justify-content: flex-end;\n            gap: 0.5rem;\n            margin-top: 0.5rem;\n        }}\n\n        .comment-text {{\n            font-size: 0.875rem;\n            line-height: 1.5;\n            white-space: pre-wrap;\n        }}\n\n        .comment-text.empty {{\n            color: var(--text-muted);\n            font-style: italic;\n        }}\n\n        .no-comments {{\n            text-align: center;\n            color: var(--text-muted);\n            padding: 2rem;\n            font-size: 0.875rem;\n        }}\n\n        /* Inline comment box (appears when selecting lines) */\n        .inline-comment-box {{\n            display: none;\n            background: var(--bg-card);\n            border: 1px solid var(--accent);\n            border-radius: 8px;\n            margin: 0.5rem 0;\n            overflow: hidden;\n        }}\n\n        .inline-comment-box.visible {{\n            display: block;\n        }}\n\n        .inline-comment-header {{\n            padding: 0.5rem 0.75rem;\n            background: var(--bg-tertiary);\n            font-size: 
0.75rem;\n            color: var(--text-secondary);\n            border-bottom: 1px solid var(--border);\n        }}\n\n        .inline-comment-body {{\n            padding: 0.75rem;\n        }}\n\n        /* Actions bar */\n        .actions {{\n            display: flex;\n            gap: 1rem;\n            justify-content: space-between;\n            align-items: center;\n            margin-top: 1.5rem;\n            padding-top: 1rem;\n            border-top: 1px solid var(--border);\n        }}\n\n        .action-group {{\n            display: flex;\n            gap: 0.5rem;\n        }}\n\n        button {{\n            padding: 0.5rem 1rem;\n            border: none;\n            border-radius: 6px;\n            font-size: 0.875rem;\n            font-weight: 500;\n            cursor: pointer;\n            transition: all 0.2s;\n        }}\n\n        .btn-sm {{\n            padding: 0.25rem 0.5rem;\n            font-size: 0.75rem;\n        }}\n\n        .btn-secondary {{\n            background: var(--bg-tertiary);\n            color: var(--text-primary);\n            border: 1px solid var(--border);\n        }}\n\n        .btn-secondary:hover {{\n            background: var(--border);\n        }}\n\n        .btn-success {{\n            background: var(--success);\n            color: white;\n        }}\n\n        .btn-success:hover {{\n            opacity: 0.9;\n        }}\n\n        .btn-danger {{\n            background: transparent;\n            color: var(--danger);\n            border: 1px solid var(--danger);\n        }}\n\n        .btn-danger:hover {{\n            background: var(--danger);\n            color: white;\n        }}\n\n        .btn-primary {{\n            background: var(--accent);\n            color: white;\n        }}\n\n        .btn-primary:hover {{\n            background: var(--accent-hover);\n        }}\n\n        .keyboard-hint {{\n            font-size: 0.75rem;\n            color: var(--text-muted);\n        }}\n\n        kbd {{\n    
        background: var(--bg-tertiary);\n            padding: 0.2rem 0.4rem;\n            border-radius: 4px;\n            font-family: inherit;\n            border: 1px solid var(--border);\n            font-size: 0.7rem;\n        }}\n\n        /* Selection highlight */\n        .selection-indicator {{\n            position: fixed;\n            bottom: 20px;\n            left: 50%;\n            transform: translateX(-50%);\n            background: var(--accent);\n            color: white;\n            padding: 0.75rem 1.5rem;\n            border-radius: 8px;\n            display: none;\n            align-items: center;\n            gap: 1rem;\n            box-shadow: 0 4px 12px rgba(0, 0, 0, 0.3);\n            z-index: 1000;\n        }}\n\n        .selection-indicator.visible {{\n            display: flex;\n        }}\n\n        /* Delete button for comments */\n        .delete-comment {{\n            background: none;\n            border: none;\n            color: var(--text-muted);\n            cursor: pointer;\n            padding: 0.25rem;\n            font-size: 1rem;\n            line-height: 1;\n        }}\n\n        .delete-comment:hover {{\n            color: var(--danger);\n        }}\n\n        /* Floating comment toolbar for text selection */\n        .floating-toolbar {{\n            position: fixed;\n            background: var(--bg-card);\n            border: 1px solid var(--accent);\n            border-radius: 8px;\n            padding: 0.5rem;\n            box-shadow: 0 4px 16px rgba(0, 0, 0, 0.5);\n            z-index: 1000;\n            display: none;\n            align-items: center;\n            gap: 0.5rem;\n            animation: fadeIn 0.15s ease-out;\n        }}\n\n        @keyframes fadeIn {{\n            from {{ opacity: 0; transform: translateY(-4px); }}\n            to {{ opacity: 1; transform: translateY(0); }}\n        }}\n\n        .floating-toolbar.visible {{\n            display: flex;\n        }}\n\n        .floating-toolbar 
button {{\n            padding: 0.4rem 0.75rem;\n            font-size: 0.8rem;\n        }}\n\n        /* Inline comment popup */\n        .inline-comment-popup {{\n            position: fixed;\n            background: var(--bg-card);\n            border: 1px solid var(--accent);\n            border-radius: 8px;\n            padding: 0.5rem;\n            box-shadow: 0 4px 16px rgba(0, 0, 0, 0.5);\n            z-index: 1001;\n            display: none;\n            min-width: 300px;\n        }}\n\n        .inline-comment-popup.visible {{\n            display: block;\n        }}\n\n        .inline-comment-popup input {{\n            width: 100%;\n            padding: 0.5rem 0.75rem;\n            background: var(--bg-primary);\n            border: 1px solid var(--border);\n            border-radius: 6px;\n            color: var(--text-primary);\n            font-size: 0.875rem;\n            outline: none;\n        }}\n\n        .inline-comment-popup input:focus {{\n            border-color: var(--accent);\n        }}\n\n        .inline-comment-popup input::placeholder {{\n            color: var(--text-muted);\n        }}\n\n        /* Highlighted text in preview */\n        .commented-text {{\n            background: var(--highlight-bg);\n            border-bottom: 2px solid var(--accent);\n            cursor: pointer;\n            padding: 0 2px;\n            border-radius: 2px;\n        }}\n\n        .commented-text:hover {{\n            background: var(--selection-bg);\n        }}\n\n        /* Responsive */\n        @media (max-width: 1200px) {{\n            .comments-sidebar {{\n                width: 300px;\n            }}\n        }}\n\n        @media (max-width: 900px) {{\n            .layout {{\n                flex-direction: column;\n            }}\n\n            .comments-sidebar {{\n                width: 100%;\n                height: auto;\n                position: static;\n                border-left: none;\n                border-top: 1px solid 
var(--border);\n            }}\n        }}\n    </style>\n</head>\n<body>\n    <div class=\"layout\">\n        <div class=\"main-content\">\n            <header>\n                <h1>{title}</h1>\n                <div class=\"summary\">\n                    Comments: <span class=\"count\" id=\"comment-count\">0</span>\n                </div>\n            </header>\n\n            <div class=\"view-toggle\">\n                <button class=\"active\" onclick=\"switchView('rendered')\">Preview</button>\n                <button onclick=\"switchView('source')\">Source</button>\n            </div>\n\n            <div class=\"markdown-container\">\n                <div class=\"rendered-view\" id=\"rendered-view\">\n                    <div class=\"rendered-markdown\" id=\"rendered-content\"></div>\n                </div>\n                <div class=\"source-view\" id=\"source-view\"></div>\n            </div>\n\n            <div class=\"actions\">\n                <div class=\"keyboard-hint\">\n                    <kbd>Cmd</kbd>+<kbd>Enter</kbd> submit | <kbd>Esc</kbd> cancel\n                </div>\n                <div class=\"action-group\">\n                    <button class=\"btn-secondary\" onclick=\"cancelReview()\">Cancel</button>\n                    <button class=\"btn-primary\" onclick=\"submitReview()\">Submit Review</button>\n                </div>\n            </div>\n        </div>\n\n        <div class=\"comments-sidebar\">\n            <div class=\"sidebar-header\">\n                <span>Comments</span>\n                <button class=\"btn-sm btn-secondary\" onclick=\"clearAllComments()\">Clear All</button>\n            </div>\n            <div id=\"comments-list\">\n                <div class=\"no-comments\">\n                    Click on a line number or select text to add comments\n                </div>\n            </div>\n        </div>\n    </div>\n\n    <div class=\"selection-indicator\" id=\"selection-indicator\">\n        <span 
id=\"selection-text\">Lines 1-3 selected</span>\n        <button class=\"btn-sm btn-primary\" onclick=\"addCommentForSelection()\">Add Comment</button>\n        <button class=\"btn-sm btn-secondary\" onclick=\"clearSelection()\">Cancel</button>\n    </div>\n\n    <div class=\"floating-toolbar\" id=\"floating-toolbar\">\n        <span style=\"color: var(--text-secondary); font-size: 0.75rem; margin-right: 0.5rem;\">💬</span>\n        <button class=\"btn-sm btn-primary\" onclick=\"showInlineCommentInput()\">Comment</button>\n    </div>\n\n    <div class=\"inline-comment-popup\" id=\"inline-comment-popup\">\n        <input type=\"text\" id=\"inline-comment-input\" placeholder=\"Add comment... (Enter to save, Esc to cancel)\">\n    </div>\n\n    <script>\n        const rawContent = {content_json};\n        const lines = {lines_json};\n        const serverPort = {server_port};\n\n        // State\n        let comments = []; // {{ id, startLine, endLine, text, linePreview, type }}\n        let selectionStart = null;\n        let selectionEnd = null;\n        let currentView = 'rendered';\n        let commentIdCounter = 0;\n        let selectedText = '';\n        let selectionRange = null;\n\n        // Initialize marked\n        marked.setOptions({{\n            highlight: function(code, lang) {{\n                if (lang && hljs.getLanguage(lang)) {{\n                    return hljs.highlight(code, {{ language: lang }}).value;\n                }}\n                return hljs.highlightAuto(code).value;\n            }},\n            breaks: false,\n            gfm: true\n        }});\n\n        function init() {{\n            // Render markdown preview\n            document.getElementById('rendered-content').innerHTML = marked.parse(rawContent);\n\n            // Render source view with line numbers\n            renderSourceView();\n\n            // Apply syntax highlighting to rendered code blocks\n            document.querySelectorAll('.rendered-markdown pre 
code').forEach(block => {{\n                hljs.highlightElement(block);\n            }});\n\n            // Setup text selection handler for preview\n            setupTextSelectionHandler();\n        }}\n\n        // Text selection in Preview view\n        function setupTextSelectionHandler() {{\n            const renderedContent = document.getElementById('rendered-content');\n            const floatingToolbar = document.getElementById('floating-toolbar');\n\n            console.log('Setting up text selection handler...', {{ renderedContent: !!renderedContent, floatingToolbar: !!floatingToolbar }});\n\n            renderedContent.addEventListener('mouseup', (e) => {{\n                // Delay to let selection finalize\n                setTimeout(() => {{\n                    const selection = window.getSelection();\n                    const text = selection.toString().trim();\n\n                    console.log('Selection detected:', text ? `\"${{text.substring(0, 30)}}...\"` : '(empty)');\n\n                    if (text && text.length > 0) {{\n                        selectedText = text;\n                        try {{\n                            selectionRange = selection.getRangeAt(0).cloneRange();\n\n                            // Position floating toolbar near selection (fixed positioning)\n                            const rect = selection.getRangeAt(0).getBoundingClientRect();\n                            const top = rect.bottom + 8;\n                            const left = Math.max(10, rect.left + (rect.width / 2) - 50);\n\n                            console.log('Showing toolbar at:', top, left);\n\n                            floatingToolbar.style.top = `${{top}}px`;\n                            floatingToolbar.style.left = `${{left}}px`;\n                            floatingToolbar.classList.add('visible');\n                        }} catch (err) {{\n                            console.log('Selection error:', err);\n                        }}\n       
             }} else {{\n                        hideFloatingToolbar();\n                    }}\n                }}, 50);\n            }});\n\n            // Hide toolbar when clicking elsewhere\n            document.addEventListener('mousedown', (e) => {{\n                if (!floatingToolbar.contains(e.target) && !renderedContent.contains(e.target)) {{\n                    hideFloatingToolbar();\n                }}\n            }});\n        }}\n\n        function hideFloatingToolbar() {{\n            const floatingToolbar = document.getElementById('floating-toolbar');\n            floatingToolbar.classList.remove('visible');\n        }}\n\n        function hideInlineCommentPopup() {{\n            const popup = document.getElementById('inline-comment-popup');\n            popup.classList.remove('visible');\n            document.getElementById('inline-comment-input').value = '';\n        }}\n\n        function showInlineCommentInput() {{\n            if (!selectedText) return;\n\n            const floatingToolbar = document.getElementById('floating-toolbar');\n            const popup = document.getElementById('inline-comment-popup');\n            const input = document.getElementById('inline-comment-input');\n\n            // Position popup below the floating toolbar\n            const toolbarRect = floatingToolbar.getBoundingClientRect();\n            popup.style.top = `${{toolbarRect.bottom + 8}}px`;\n            popup.style.left = `${{Math.max(10, toolbarRect.left)}}px`;\n\n            // Hide toolbar, show popup\n            hideFloatingToolbar();\n            popup.classList.add('visible');\n            input.value = '';\n            input.focus();\n        }}\n\n        function confirmInlineComment() {{\n            const input = document.getElementById('inline-comment-input');\n            const commentText = input.value.trim();\n\n            if (!selectedText) {{\n                hideInlineCommentPopup();\n                return;\n            }}\n\n      
      const preview = selectedText.length > 100 ? selectedText.substring(0, 100) + '...' : selectedText;\n\n            const comment = {{\n                id: `comment-${{commentIdCounter++}}`,\n                type: 'text',\n                startLine: null,\n                endLine: null,\n                text: commentText,\n                linePreview: preview,\n                selectedText: selectedText\n            }};\n\n            comments.push(comment);\n\n            // Highlight the selected text in the preview\n            highlightTextInPreview(selectionRange, comment.id);\n\n            // Clear state\n            hideInlineCommentPopup();\n            window.getSelection().removeAllRanges();\n            selectedText = '';\n            selectionRange = null;\n\n            renderComments();\n\n            console.log('Comment added:', comment);\n        }}\n\n        // Setup inline comment input handlers\n        document.getElementById('inline-comment-input').addEventListener('keydown', (e) => {{\n            // Ignore Enter fired while an IME (e.g. Korean input) is still composing\n            if (e.key === 'Enter' && !e.isComposing) {{\n                e.preventDefault();\n                e.stopPropagation();\n                confirmInlineComment();\n            }} else if (e.key === 'Escape') {{\n                e.preventDefault();\n                e.stopPropagation();\n                hideInlineCommentPopup();\n                selectedText = '';\n                selectionRange = null;\n            }}\n        }});\n\n        function addCommentForTextSelection() {{\n            // Legacy function - now uses inline input\n            showInlineCommentInput();\n        }}\n\n        function highlightTextInPreview(range, commentId) {{\n            if (!range) return;\n\n            try {{\n                const span = document.createElement('span');\n                span.className = 'commented-text';\n                span.dataset.commentId = commentId;\n                span.onclick = () => scrollToComment(commentId);\n                
range.surroundContents(span);\n            }} catch (e) {{\n                // If surroundContents fails (crosses element boundaries), skip highlighting\n                console.log('Could not highlight selection:', e);\n            }}\n        }}\n\n        function scrollToComment(commentId) {{\n            // Target the sidebar card specifically: a highlighted span in the preview\n            // carries the same data-comment-id and appears earlier in the DOM\n            const commentCard = document.querySelector(`.comment-card[data-comment-id=\"${{commentId}}\"]`);\n            if (commentCard) {{\n                commentCard.scrollIntoView({{ behavior: 'smooth', block: 'center' }});\n                commentCard.style.borderColor = 'var(--accent)';\n                setTimeout(() => commentCard.style.borderColor = '', 1500);\n            }}\n        }}\n\n        function renderSourceView() {{\n            const container = document.getElementById('source-view');\n            container.innerHTML = lines.map((line, index) => {{\n                // Text-selection comments have null line numbers; only line comments mark source lines\n                const hasComment = comments.some(c => c.startLine !== null && index >= c.startLine && index <= c.endLine);\n                const isSelecting = selectionStart !== null &&\n                    index >= Math.min(selectionStart, selectionEnd ?? selectionStart) &&\n                    index <= Math.max(selectionStart, selectionEnd ?? selectionStart);\n\n                return `\n                    <div class=\"line-wrapper ${{hasComment ? 'has-comment' : ''}} ${{isSelecting ? 
'selecting' : ''}}\"\n                         data-line=\"${{index}}\"\n                         onmousedown=\"startLineSelection(${{index}})\"\n                         onmouseenter=\"extendLineSelection(${{index}})\">\n                        <button class=\"add-comment-btn\" onclick=\"event.stopPropagation(); quickAddComment(${{index}})\" title=\"Add comment\">+</button>\n                        <div class=\"line-number\" data-line=\"${{index}}\">${{index + 1}}</div>\n                        <div class=\"line-content\">${{escapeHtml(line.text) || '&nbsp;'}}</div>\n                    </div>\n                `;\n            }}).join('');\n        }}\n\n        function escapeHtml(text) {{\n            const div = document.createElement('div');\n            div.textContent = text;\n            return div.innerHTML;\n        }}\n\n        function switchView(view) {{\n            currentView = view;\n            document.querySelectorAll('.view-toggle button').forEach(btn => btn.classList.remove('active'));\n            document.querySelector(`.view-toggle button[onclick=\"switchView('${{view}}')\"]`).classList.add('active');\n\n            if (view === 'rendered') {{\n                document.getElementById('rendered-view').classList.remove('hidden');\n                document.getElementById('source-view').classList.remove('active');\n            }} else {{\n                document.getElementById('rendered-view').classList.add('hidden');\n                document.getElementById('source-view').classList.add('active');\n            }}\n        }}\n\n        // Line selection\n        let isSelecting = false;\n\n        function startLineSelection(lineNum) {{\n            isSelecting = true;\n            selectionStart = lineNum;\n            selectionEnd = lineNum;\n            renderSourceView();\n        }}\n\n        function extendLineSelection(lineNum) {{\n            if (isSelecting && selectionStart !== null) {{\n                selectionEnd = lineNum;\n    
            renderSourceView();\n                updateSelectionIndicator();\n            }}\n        }}\n\n        document.addEventListener('mouseup', () => {{\n            if (isSelecting && selectionStart !== null) {{\n                isSelecting = false;\n                if (selectionEnd === null) selectionEnd = selectionStart;\n                updateSelectionIndicator();\n            }}\n        }});\n\n        function updateSelectionIndicator() {{\n            const indicator = document.getElementById('selection-indicator');\n            if (selectionStart !== null) {{\n                // Use ?? rather than ||: selectionEnd may legitimately be line index 0\n                const start = Math.min(selectionStart, selectionEnd ?? selectionStart);\n                const end = Math.max(selectionStart, selectionEnd ?? selectionStart);\n                document.getElementById('selection-text').textContent =\n                    start === end ? `Line ${{start + 1}} selected` : `Lines ${{start + 1}}-${{end + 1}} selected`;\n                indicator.classList.add('visible');\n            }} else {{\n                indicator.classList.remove('visible');\n            }}\n        }}\n\n        function clearSelection() {{\n            selectionStart = null;\n            selectionEnd = null;\n            document.getElementById('selection-indicator').classList.remove('visible');\n            renderSourceView();\n        }}\n\n        function quickAddComment(lineNum) {{\n            selectionStart = lineNum;\n            selectionEnd = lineNum;\n            addCommentForSelection();\n        }}\n\n        function addCommentForSelection() {{\n            if (selectionStart === null) return;\n\n            const start = Math.min(selectionStart, selectionEnd ?? selectionStart);\n            const end = Math.max(selectionStart, selectionEnd ?? selectionStart);\n\n            // Get preview text\n            const previewLines = lines.slice(start, end + 1).map(l => l.text);\n            const preview = previewLines.join('\\\\n').substring(0, 100) + 
(previewLines.join('\\\\n').length > 100 ? '...' : '');\n\n            const comment = {{\n                id: `comment-${{commentIdCounter++}}`,\n                type: 'line',\n                startLine: start,\n                endLine: end,\n                text: '',\n                linePreview: preview\n            }};\n\n            comments.push(comment);\n            clearSelection();\n            renderComments();\n            renderSourceView();\n\n            // Focus the new comment textarea\n            setTimeout(() => {{\n                const textarea = document.querySelector(`[data-comment-id=\"${{comment.id}}\"] textarea`);\n                if (textarea) textarea.focus();\n            }}, 50);\n        }}\n\n        function renderComments() {{\n            const container = document.getElementById('comments-list');\n\n            if (comments.length === 0) {{\n                container.innerHTML = '<div class=\"no-comments\">Select text in Preview or click lines in Source to add comments</div>';\n            }} else {{\n                container.innerHTML = comments.map(comment => {{\n                    const headerLabel = comment.type === 'text'\n                        ? 'Selected text'\n                        : (comment.startLine === comment.endLine\n                            ? 
`Line ${{comment.startLine + 1}}`\n                            : `Lines ${{comment.startLine + 1}}-${{comment.endLine + 1}}`);\n\n                    return `\n                        <div class=\"comment-card\" data-comment-id=\"${{comment.id}}\">\n                            <div class=\"comment-header\">\n                                <span class=\"comment-lines\">${{headerLabel}}</span>\n                                <button class=\"delete-comment\" onclick=\"deleteComment('${{comment.id}}')\" title=\"Delete comment\">&times;</button>\n                            </div>\n                            <div class=\"comment-preview\">${{escapeHtml(comment.linePreview)}}</div>\n                            <div class=\"comment-body\">\n                                <textarea class=\"comment-textarea\" placeholder=\"Add a comment...\" oninput=\"handleCommentInput('${{comment.id}}', this)\" onkeydown=\"handleCommentKeydown(event, '${{comment.id}}')\">${{escapeHtml(comment.text)}}</textarea>\n                                <div class=\"save-indicator\" id=\"save-${{comment.id}}\">Saved</div>\n                            </div>\n                        </div>\n                    `;\n                }}).join('');\n            }}\n\n            updateCommentCount();\n        }}\n\n        function updateCommentText(commentId, text) {{\n            const comment = comments.find(c => c.id === commentId);\n            if (comment) {{\n                comment.text = text;\n            }}\n        }}\n\n        let saveTimeout = null;\n        function handleCommentInput(commentId, textarea) {{\n            updateCommentText(commentId, textarea.value);\n\n            // Show \"Saved\" indicator with debounce\n            clearTimeout(saveTimeout);\n            const indicator = document.getElementById(`save-${{commentId}}`);\n            if (indicator && textarea.value.trim()) {{\n                saveTimeout = setTimeout(() => {{\n                    indicator.classList.add('visible');\n                    setTimeout(() => 
indicator.classList.remove('visible'), 1500);\n                }}, 500);\n            }}\n        }}\n\n        function handleCommentKeydown(e, commentId) {{\n            // Cmd/Ctrl + Enter to submit\n            if ((e.metaKey || e.ctrlKey) && e.key === 'Enter') {{\n                e.preventDefault();\n                submitReview();\n            }}\n        }}\n\n        function deleteComment(commentId) {{\n            // Remove highlight from preview if it's a text comment\n            const highlightedSpan = document.querySelector(`.commented-text[data-comment-id=\"${{commentId}}\"]`);\n            if (highlightedSpan) {{\n                const parent = highlightedSpan.parentNode;\n                while (highlightedSpan.firstChild) {{\n                    parent.insertBefore(highlightedSpan.firstChild, highlightedSpan);\n                }}\n                parent.removeChild(highlightedSpan);\n            }}\n\n            comments = comments.filter(c => c.id !== commentId);\n            renderComments();\n            renderSourceView();\n        }}\n\n        function clearAllComments() {{\n            if (comments.length > 0 && confirm('Delete all comments?')) {{\n                // Remove all highlights from preview\n                document.querySelectorAll('.commented-text').forEach(span => {{\n                    const parent = span.parentNode;\n                    while (span.firstChild) {{\n                        parent.insertBefore(span.firstChild, span);\n                    }}\n                    parent.removeChild(span);\n                }});\n\n                comments = [];\n                renderComments();\n                renderSourceView();\n            }}\n        }}\n\n        function updateCommentCount() {{\n            document.getElementById('comment-count').textContent = comments.length;\n        }}\n\n        function scrollToLine(lineNum) {{\n            switchView('source');\n            const lineEl = 
document.querySelector(`[data-line=\"${{lineNum}}\"]`);\n            if (lineEl) {{\n                lineEl.scrollIntoView({{ behavior: 'smooth', block: 'center' }});\n                lineEl.style.background = 'var(--selection-bg)';\n                setTimeout(() => lineEl.style.background = '', 1000);\n            }}\n        }}\n\n        async function submitReview() {{\n            const result = {{\n                status: 'submitted',\n                timestamp: new Date().toISOString(),\n                items: comments.map(c => ({{\n                    id: c.id,\n                    startLine: c.startLine,\n                    endLine: c.endLine,\n                    text: c.text,\n                    linePreview: c.linePreview,\n                    checked: true,\n                    comment: c.text\n                }}))\n            }};\n\n            try {{\n                await fetch(`http://localhost:${{serverPort}}/submit`, {{\n                    method: 'POST',\n                    headers: {{ 'Content-Type': 'application/json' }},\n                    body: JSON.stringify(result)\n                }});\n                window.close();\n            }} catch (e) {{\n                alert('Failed to submit review. 
Please try again.');\n                console.error(e);\n            }}\n        }}\n\n        async function cancelReview() {{\n            try {{\n                await fetch(`http://localhost:${{serverPort}}/submit`, {{\n                    method: 'POST',\n                    headers: {{ 'Content-Type': 'application/json' }},\n                    body: JSON.stringify({{ status: 'cancelled', items: [] }})\n                }});\n                window.close();\n            }} catch (e) {{\n                window.close();\n            }}\n        }}\n\n        // Keyboard shortcuts\n        document.addEventListener('keydown', (e) => {{\n            if ((e.metaKey || e.ctrlKey) && e.key === 'Enter') {{\n                e.preventDefault();\n                submitReview();\n            }}\n            if (e.key === 'Escape') {{\n                // Don't cancel review if inline comment popup is open\n                const inlinePopup = document.getElementById('inline-comment-popup');\n                if (inlinePopup && inlinePopup.classList.contains('visible')) {{\n                    return; // Let the inline input handler deal with it\n                }}\n\n                if (selectionStart !== null) {{\n                    clearSelection();\n                }} else {{\n                    cancelReview();\n                }}\n            }}\n        }});\n\n        // Initialize\n        init();\n    </script>\n</body>\n</html>'''\n"
  },
  {
    "path": "plugins/interactive-review/skills/review/SKILL.md",
    "content": "---\nname: review\ndescription: Interactive markdown review with web UI. Use when user says \"review this\", \"check this plan\", \"피드백\", \"검토해줘\" or specifies a file path to review.\nallowed-tools:\n  - mcp__interactive_review__start_review\n  - Read\n---\n\n# Interactive Review Skill\n\nThis skill opens an interactive web UI where users can review content with checkboxes and comments.\n\n## How It Works\n\n1. Determine the content source:\n   - **If user specifies a file path**: Use the `Read` tool to get the file contents\n   - **If user provides content directly**: Use that content as-is\n   - **Otherwise**: Collect the most recent relevant content from the conversation\n2. Call `mcp__interactive_review__start_review` with the content\n3. A browser window opens automatically with the review UI\n4. User reviews each item:\n   - Check/uncheck to approve/reject\n   - Add optional comments\n5. User clicks Submit\n6. Process the feedback and respond accordingly\n\n## Content Sources (Priority Order)\n\n1. **Explicit file path**: User says \"review /path/to/file.md\" or \"이 파일 리뷰해줘: README.md\"\n   - Read the file using `Read` tool and use its contents\n2. **Direct content**: User provides or references specific content to review\n   - Use the provided content directly\n3. **Conversation context**: Extract relevant content from recent conversation\n   - Plans, documents, code, etc. that were recently discussed\n\n## Usage\n\nWhen the user wants to review content:\n\n```\n# If file path is specified, read it first:\nRead({ \"file_path\": \"/path/to/file.md\" })\n\n# Then start the review:\nmcp__interactive_review__start_review({\n  \"content\": \"<content from file or conversation>\",\n  \"title\": \"<descriptive title>\"\n})\n```\n\n## Processing Results\n\nThe tool returns a JSON with review items. 
Handle each item based on:\n\n| checked | comment | Action |\n|---------|---------|--------|\n| true | empty | Approved - proceed as planned |\n| true | has text | Approved with note - consider the feedback |\n| false | has text | Rejected - modify according to comment |\n| false | empty | Rejected - remove or reconsider this item |\n\n## Example Flow\n\nUser: \"Review this implementation plan\"\n\n1. Extract the plan content from recent output\n2. Call start_review with the content\n3. Wait for user feedback (tool blocks until submit)\n4. Present summary of feedback\n5. Ask if user wants you to proceed with approved items or revise rejected items\n\n## Response Template\n\nAfter receiving feedback:\n\n```\n## Review Summary\n\n**Approved**: X items\n**Needs revision**: Y items\n\n### Items requiring changes:\n- [Item]: [User's comment]\n\nWould you like me to:\n1. Proceed with approved items\n2. Revise the rejected items based on feedback\n3. Both - revise then proceed\n```\n"
  },
  {
    "path": "plugins/kakaotalk/.claude-plugin/plugin.json",
    "content": "{\n  \"name\": \"kakaotalk\",\n  \"version\": \"1.0.0\",\n  \"description\": \"Send and read KakaoTalk messages on macOS using Accessibility API\",\n  \"author\": {\n    \"name\": \"Team Attention\",\n    \"url\": \"https://github.com/team-attention\"\n  },\n  \"repository\": \"https://github.com/team-attention/plugins-for-claude-natives\",\n  \"license\": \"MIT\",\n  \"keywords\": [\"kakaotalk\", \"messaging\", \"macos\", \"accessibility\"]\n}\n"
  },
  {
    "path": "plugins/kakaotalk/README.md",
    "content": "# KakaoTalk Plugin for Claude Code\n\nmacOS에서 카카오톡 메시지를 발송하고 읽는 Claude Code 플러그인.\n\n## Demo\n\n![KakaoTalk Demo](../../assets/kakaotalk.gif)\n\n## Features\n\n- **메시지 발송**: 자연어로 카카오톡 메시지 전송 (발송 전 확인)\n- **메시지 읽기**: 채팅방 대화 내역 조회\n- **채팅방 목록**: 현재 채팅방 목록 확인\n- **서명 자동 추가**: 기본적으로 \"sent with claude code\" 서명 포함\n\n## Requirements\n\n- **macOS only** (uses Accessibility API)\n- **KakaoTalk for Mac** must be running\n- **Accessibility permission**: System Settings > Privacy & Security > Accessibility\n- **atomacos**: `uv add atomacos` or `pip install atomacos`\n\n## Usage\n\n### Send Message\n\nClaude가 메시지를 보내기 전에 항상 확인을 요청합니다.\n\n```bash\n# 기본 (서명 포함)\nuv run python ${CLAUDE_PLUGIN_ROOT}/scripts/kakao_send.py \"구봉\" \"밥 먹었어?\"\n# → \"밥 먹었어?\\n\\nsent with claude code\" 전송\n\n# 서명 없이\nuv run python ${CLAUDE_PLUGIN_ROOT}/scripts/kakao_send.py \"구봉\" \"밥 먹었어?\" --no-signature\n\n# 보내고 창 닫기\nuv run python ${CLAUDE_PLUGIN_ROOT}/scripts/kakao_send.py \"구봉\" \"밥 먹었어?\" --close\n```\n\n### Read Messages\n\n```bash\n# 메시지 읽기\nuv run python ${CLAUDE_PLUGIN_ROOT}/scripts/kakao_read.py \"구봉\" --json\n\n# 채팅방 목록\nuv run python ${CLAUDE_PLUGIN_ROOT}/scripts/kakao_read.py --list\n\n# 채팅방 검색\nuv run python ${CLAUDE_PLUGIN_ROOT}/scripts/kakao_read.py --search \"검색어\"\n```\n\n## Options\n\n### kakao_send.py\n\n| Option | Description |\n|--------|-------------|\n| `--close`, `-c` | 발송 후 채팅창 닫기 |\n| `--no-signature` | \"sent with claude code\" 서명 없이 보내기 |\n| `--json`, `-j` | JSON 형식 출력 |\n\n### kakao_read.py\n\n| Option | Description |\n|--------|-------------|\n| `--limit N`, `-l N` | 최대 N개 메시지 읽기 (기본: 100) |\n| `--close`, `-c` | 읽고 나서 채팅창 닫기 |\n| `--json`, `-j` | JSON 형식 출력 |\n| `--list` | 채팅방 목록 보기 |\n| `--search \"검색어\"`, `-s` | 채팅방 검색 |\n\n## How It Works\n\nThis plugin uses macOS Accessibility API (via atomacos) to:\n1. Find and activate KakaoTalk windows\n2. Search for chat rooms using Cmd+F\n3. Read message content from UI elements\n4. 
Send messages via clipboard paste + Enter\n\n## Limitations\n\n- **macOS only**: Uses platform-specific APIs\n- **Visible messages only**: Can only read messages currently visible in the chat window\n- **UI dependent**: May break if KakaoTalk updates its UI structure\n- **KakaoTalk must be running**: Cannot start the app automatically\n\n## License\n\nMIT\n"
  },
  {
    "path": "plugins/kakaotalk/scripts/kakao_read.py",
    "content": "#!/usr/bin/env python3\n\"\"\"\nKakaoTalk 채팅방 읽기 CLI\n\nUsage:\n    # 기본: 채팅방 열고 메시지 읽기\n    python kakao_read.py \"채팅방이름\"\n    python kakao_read.py \"채팅방이름\" --limit 50\n    python kakao_read.py \"채팅방이름\" --close\n\n    # 채팅 목록\n    python kakao_read.py --list\n\n    # 검색어로 채팅방 검색\n    python kakao_read.py --search \"검색어\"\n\"\"\"\n\nimport argparse\nimport json\nimport re\nimport subprocess\nimport sys\nimport time\n\ntry:\n    import atomacos\nexcept ImportError:\n    print(\"Error: atomacos not installed. Run: uv add atomacos\")\n    sys.exit(1)\n\n# Constants\nKAKAO_BUNDLE_ID = \"com.kakao.KakaoTalkMac\"\nCLAUDE_SIGNATURE = \"sent with claude code\"\nFILE_EXTENSIONS = ['.heic', '.jpg', '.jpeg', '.png', '.gif', '.mp4', '.mov', '.pdf', '.zip']\nIGNORED_KEYWORDS = ['유효기간', '용량', 'KB', 'MB']\nTIME_PATTERNS = ['오전', '오후', '어제', '그제', '월', '일',\n                 'AM', 'PM', 'Yesterday', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',\n                 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec', ':']\nMAIN_WINDOW_TITLES = ('카카오톡', 'KakaoTalk')\n\n\n# ============================================================================\n# AppleScript & Keyboard Helpers\n# ============================================================================\n\ndef run_applescript(script: str) -> str:\n    result = subprocess.run([\"osascript\", \"-e\", script], capture_output=True, text=True)\n    return result.stdout.strip()\n\n\ndef key_code(code: int, modifiers: str = \"\"):\n    modifier_clause = f\"using {{{modifiers}}}\" if modifiers else \"\"\n    run_applescript(f'''\n        tell application \"System Events\"\n            key code {code} {modifier_clause}\n        end tell\n    ''')\n\n\n# ============================================================================\n# KakaoTalk App & Window Management\n# ============================================================================\n\ndef get_kakao_app():\n    try:\n        return 
atomacos.getAppRefByBundleId(KAKAO_BUNDLE_ID)\n    except ValueError:\n        print(\"Error: KakaoTalk is not running.\")\n        sys.exit(1)\n    except atomacos.ErrorAPIDisabled:\n        print(\"Error: Accessibility API disabled.\")\n        sys.exit(1)\n\n\ndef find_main_window(kakao_app):\n    \"\"\"메인 창(카카오톡 채팅 목록 창) 찾기.\"\"\"\n    for win in kakao_app.windows():\n        if win.AXTitle in MAIN_WINDOW_TITLES:\n            return win\n    return None\n\n\ndef find_open_chat(kakao_app, chat_name: str):\n    \"\"\"이미 열린 채팅방 창에서 이름이 일치하는 것 찾기.\"\"\"\n    for win in kakao_app.windows():\n        title = win.AXTitle\n        if title != \"카카오톡\" and chat_name.lower() in title.lower():\n            return win\n    return None\n\n\ndef get_all_chat_windows(kakao_app) -> list:\n    return [win for win in kakao_app.windows() if win.AXTitle not in MAIN_WINDOW_TITLES]\n\n\ndef ensure_main_window_focused():\n    \"\"\"메인 창(채팅 목록)이 확실히 포커스되도록 함.\"\"\"\n    run_applescript('tell application \"KakaoTalk\" to activate')\n    time.sleep(0.3)\n\n    kakao = get_kakao_app()\n    main_win = find_main_window(kakao)\n    if not main_win:\n        return False\n\n    try:\n        main_win.Raise()\n        time.sleep(0.3)\n    except Exception:\n        pass\n\n    return True\n\n\ndef clear_search_and_go_main():\n    \"\"\"메인 창으로 돌아가고 검색 기록 초기화.\"\"\"\n    run_applescript('tell application \"KakaoTalk\" to activate')\n    time.sleep(0.3)\n\n    kakao = get_kakao_app()\n    chat_windows = [w for w in kakao.windows() if w.AXTitle != \"카카오톡\"]\n    if chat_windows:\n        key_code(53)  # ESC\n        time.sleep(0.3)\n\n    ensure_main_window_focused()\n\n\ndef search_and_open_chat(chat_name: str):\n    \"\"\"검색으로 채팅방 열기.\"\"\"\n    clear_search_and_go_main()\n\n    key_code(3, \"command down\")  # Cmd+F\n    time.sleep(0.5)\n    key_code(0, \"command down\")  # Cmd+A\n    time.sleep(0.1)\n\n    subprocess.run([\"pbcopy\"], input=chat_name.encode(), check=True)\n    key_code(9, 
\"command down\")  # Cmd+V\n    time.sleep(0.8)\n\n    key_code(125)  # Down arrow\n    time.sleep(0.2)\n    key_code(36)  # Enter\n    time.sleep(0.8)\n\n\ndef close_chat():\n    \"\"\"현재 채팅창 닫기.\"\"\"\n    key_code(53)  # Escape\n    time.sleep(0.2)\n\n\n# ============================================================================\n# Pattern Matching Helpers\n# ============================================================================\n\ndef is_date_pattern(val: str) -> bool:\n    \"\"\"날짜 구분선 패턴인지 확인 (예: '1월 17일', '2025. 1. 17.', '어제', '그제')\"\"\"\n    if not val:\n        return False\n    if re.match(r'^\\d{1,2}월\\s*\\d{1,2}일', val):\n        return True\n    if re.match(r'^\\d{4}\\.\\s*\\d{1,2}\\.\\s*\\d{1,2}', val):\n        return True\n    if val in ['어제', '그제']:\n        return True\n    return False\n\n\ndef is_time_pattern(val: str) -> bool:\n    \"\"\"시간 패턴인지 확인 (예: '오전 10:30', '오후 7:41')\"\"\"\n    if not val:\n        return False\n    return bool(re.match(r'^(오전|오후)\\s*\\d{1,2}:\\d{2}$', val))\n\n\ndef is_valid_sender_name(val: str) -> bool:\n    \"\"\"유효한 발신자 이름인지 확인.\"\"\"\n    if not val or len(val) >= 20:\n        return False\n    if val.startswith('[') or val.isdigit():\n        return False\n\n    # 특수문자/공백만 있는 경우 무시\n    cleaned = val.strip().replace('·', '').replace('•', '').replace(' ', '')\n    if not cleaned:\n        return False\n\n    # 파일명 패턴 무시\n    if any(val.lower().endswith(ext) for ext in FILE_EXTENSIONS):\n        return False\n\n    # 무시할 키워드 포함 시 무시\n    if any(kw in val for kw in IGNORED_KEYWORDS):\n        return False\n\n    return True\n\n\n# ============================================================================\n# Accessibility Helpers\n# ============================================================================\n\ndef safe_get_attr(elem, attr_name, default=None):\n    \"\"\"안전하게 AX 속성 가져오기.\"\"\"\n    try:\n        return getattr(elem, attr_name, default)\n    except AttributeError:\n        return 
default\n\n\ndef get_window_width(chat_window) -> int:\n    \"\"\"창 너비 가져오기.\"\"\"\n    try:\n        win_size = chat_window.AXSize\n        return win_size.width if win_size else 400\n    except Exception:\n        return 400\n\n\n# ============================================================================\n# Message Extraction\n# ============================================================================\n\ndef extract_messages(chat_window, limit: int = 100) -> list[dict]:\n    \"\"\"채팅 창에서 메시지 추출.\"\"\"\n    messages = []\n    chat_name = safe_get_attr(chat_window, 'AXTitle', '')\n    partner_name = None\n    current_date = None\n    current_time = None\n\n    children = safe_get_attr(chat_window, 'AXChildren', [])\n    for child in children:\n        if safe_get_attr(child, 'AXRole') != 'AXScrollArea':\n            continue\n\n        for table_child in safe_get_attr(child, 'AXChildren', []):\n            if safe_get_attr(table_child, 'AXRole') != 'AXTable':\n                continue\n\n            win_width = get_window_width(chat_window)\n            rows = safe_get_attr(table_child, 'AXChildren', [])\n\n            for row in rows[:limit]:\n                if safe_get_attr(row, 'AXRole') != 'AXRow':\n                    continue\n\n                row_sender = None\n                row_time = None\n\n                for cell in safe_get_attr(row, 'AXChildren', []):\n                    if safe_get_attr(cell, 'AXRole') != 'AXCell':\n                        continue\n\n                    cell_pos = None\n                    try:\n                        cell_pos = cell.AXPosition\n                    except Exception:\n                        pass\n\n                    for elem in safe_get_attr(cell, 'AXChildren', []):\n                        role = safe_get_attr(elem, 'AXRole')\n\n                        if role == 'AXStaticText':\n                            row_sender, row_time, current_date, partner_name = _parse_static_text(\n                        
        elem, row_sender, row_time, current_date, partner_name\n                            )\n\n                        elif role == 'AXTextArea':\n                            msg_data = _parse_message(\n                                elem, cell_pos, win_width, row_sender, row_time,\n                                current_date, current_time, partner_name, chat_name\n                            )\n                            if msg_data:\n                                messages.append(msg_data)\n                                if row_time:\n                                    current_time = row_time\n            break\n        break\n\n    return messages\n\n\ndef _parse_static_text(elem, row_sender, row_time, current_date, partner_name):\n    \"\"\"StaticText 요소 파싱.\"\"\"\n    try:\n        val = elem.AXValue\n        if not val:\n            return row_sender, row_time, current_date, partner_name\n\n        # 줄바꿈으로 값이 합쳐진 경우\n        if '\\n' in val:\n            for part in val.split('\\n'):\n                part = part.strip()\n                if part.isdigit():\n                    continue\n                if is_date_pattern(part):\n                    current_date = part.split()[0] if '요일' in part else part\n                elif is_time_pattern(part):\n                    row_time = part\n        elif is_date_pattern(val):\n            current_date = val.split()[0] if '요일' in val else val\n        elif is_time_pattern(val):\n            row_time = val\n        elif is_valid_sender_name(val):\n            row_sender = val\n            partner_name = val\n    except Exception:\n        pass\n\n    return row_sender, row_time, current_date, partner_name\n\n\ndef _parse_message(elem, cell_pos, win_width, row_sender, row_time,\n                   current_date, current_time, partner_name, chat_name) -> dict | None:\n    \"\"\"TextArea 요소에서 메시지 파싱.\"\"\"\n    try:\n        msg = elem.AXValue\n        if not msg or not msg.strip():\n            return None\n\n    
    # is_me 판단 1: Claude Code 시그니처\n        is_me = CLAUDE_SIGNATURE in msg\n\n        # is_me 판단 2: 좌표 기반\n        if not is_me and cell_pos:\n            try:\n                elem_pos = elem.AXPosition\n                center_threshold = cell_pos.x + (win_width * 0.4)\n                is_me = elem_pos.x > center_threshold\n            except Exception:\n                pass\n\n        # 발신자 결정\n        if is_me:\n            sender = \"나\"\n        elif row_sender:\n            sender = row_sender\n        else:\n            sender = partner_name or chat_name or \"상대방\"\n\n        # 시간 문자열 생성\n        time_val = row_time or current_time\n        if current_date and time_val:\n            time_str = f\"{current_date} {time_val}\"\n        else:\n            time_str = time_val or current_date\n\n        return {\n            'sender': sender,\n            'time': time_str,\n            'message': msg,\n            'is_me': is_me\n        }\n    except Exception:\n        return None\n\n\n# ============================================================================\n# Chat List Operations\n# ============================================================================\n\ndef list_chats(kakao_app, limit: int = 30) -> list[str]:\n    \"\"\"메인 창에서 채팅방 목록 추출.\"\"\"\n    chats = []\n\n    for win in kakao_app.windows():\n        if safe_get_attr(win, 'AXTitle') not in MAIN_WINDOW_TITLES:\n            continue\n\n        for child in safe_get_attr(win, 'AXChildren', []):\n            if safe_get_attr(child, 'AXRole') != 'AXScrollArea':\n                continue\n\n            for table_child in safe_get_attr(child, 'AXChildren', []):\n                if safe_get_attr(table_child, 'AXRole') != 'AXTable':\n                    continue\n\n                rows = safe_get_attr(table_child, 'AXChildren', [])\n                for row in rows[:limit]:\n                    if safe_get_attr(row, 'AXRole') != 'AXRow':\n                        continue\n\n                    texts 
= _extract_row_texts(row)\n                    if len(texts) >= 2 and any(t in texts[1] for t in TIME_PATTERNS):\n                        chats.append(texts[0])\n                break\n            break\n\n    return chats\n\n\ndef search_chats(query: str, limit: int = 20) -> list[str]:\n    \"\"\"카카오톡 검색창에서 검색 후 결과 목록 반환.\"\"\"\n    clear_search_and_go_main()\n\n    key_code(3, \"command down\")  # Cmd+F\n    time.sleep(0.5)\n    key_code(0, \"command down\")  # Cmd+A\n    time.sleep(0.1)\n    subprocess.run([\"pbcopy\"], input=query.encode(), check=True)\n    key_code(9, \"command down\")  # Cmd+V\n    time.sleep(1.0)\n\n    kakao = get_kakao_app()\n    chats = []\n\n    for win in kakao.windows():\n        if safe_get_attr(win, 'AXTitle') not in MAIN_WINDOW_TITLES:\n            continue\n\n        for child in safe_get_attr(win, 'AXChildren', []):\n            if safe_get_attr(child, 'AXRole') != 'AXScrollArea':\n                continue\n\n            for table_child in safe_get_attr(child, 'AXChildren', []):\n                if safe_get_attr(table_child, 'AXRole') != 'AXTable':\n                    continue\n\n                rows = safe_get_attr(table_child, 'AXChildren', [])\n                for row in rows[:limit]:\n                    if safe_get_attr(row, 'AXRole') != 'AXRow':\n                        continue\n\n                    texts = _extract_row_texts(row)\n                    if len(texts) >= 2 and any(t in texts[1] for t in TIME_PATTERNS):\n                        chats.append(texts[0])\n                break\n            break\n\n    return chats\n\n\ndef _extract_row_texts(row) -> list[str]:\n    \"\"\"Row에서 모든 StaticText 값 추출.\"\"\"\n    texts = []\n    for cell in safe_get_attr(row, 'AXChildren', []):\n        if safe_get_attr(cell, 'AXRole') != 'AXCell':\n            continue\n        for elem in safe_get_attr(cell, 'AXChildren', []):\n            if safe_get_attr(elem, 'AXRole') == 'AXStaticText':\n                try:\n                    
val = elem.AXValue\n                    if val:\n                        texts.append(val)\n                except Exception:\n                    pass\n    return texts\n\n\n# ============================================================================\n# Main API\n# ============================================================================\n\ndef read_chat(chat_name: str, limit: int = 100) -> tuple[str | None, list[dict]]:\n    \"\"\"채팅방 열고 메시지 읽기.\"\"\"\n    kakao = get_kakao_app()\n\n    chat_win = find_open_chat(kakao, chat_name)\n    if chat_win:\n        return chat_win.AXTitle, extract_messages(chat_win, limit)\n\n    before_titles = set(win.AXTitle for win in get_all_chat_windows(kakao))\n    search_and_open_chat(chat_name)\n    kakao = get_kakao_app()\n\n    after_windows = get_all_chat_windows(kakao)\n    new_windows = [win for win in after_windows if win.AXTitle not in before_titles]\n\n    if new_windows:\n        chat_win = new_windows[0]\n    elif (chat_win := find_open_chat(kakao, chat_name)):\n        pass\n    elif after_windows:\n        chat_win = after_windows[0]\n    else:\n        return None, []\n\n    return chat_win.AXTitle, extract_messages(chat_win, limit)\n\n\n# ============================================================================\n# CLI\n# ============================================================================\n\ndef main():\n    parser = argparse.ArgumentParser(description='KakaoTalk 채팅방 읽기 CLI')\n    parser.add_argument('chat_name', nargs='?', help='채팅방 이름 (부분 일치)')\n    parser.add_argument('--limit', '-l', type=int, default=100, help='최대 메시지 수 (기본: 100)')\n    parser.add_argument('--list', action='store_true', help='채팅방 목록 보기')\n    parser.add_argument('--search', '-s', type=str, help='카카오톡 검색창에서 검색 후 결과 목록')\n    parser.add_argument('--close', '-c', action='store_true', help='읽고 나서 창 닫기')\n    parser.add_argument('--json', '-j', action='store_true', help='JSON 출력')\n\n    args = parser.parse_args()\n\n    # 모드 1: 카카오톡 
검색창에서 검색\n    if args.search:\n        chats = search_chats(args.search)\n        if args.json:\n            print(json.dumps({'search': args.search, 'chats': chats}, ensure_ascii=False, indent=2))\n        else:\n            print(f\"=== '{args.search}' 검색 결과 ===\\n\")\n            for c in chats:\n                print(f\"  • {c}\")\n            print(f\"\\n총 {len(chats)}개\")\n        return\n\n    # 모드 2: 전체 채팅방 목록\n    if args.list:\n        kakao = get_kakao_app()\n        chats = list_chats(kakao)\n        if args.json:\n            print(json.dumps({'chats': chats}, ensure_ascii=False, indent=2))\n        else:\n            print(\"=== 채팅방 목록 ===\\n\")\n            for c in chats:\n                print(f\"  • {c}\")\n            print(f\"\\n총 {len(chats)}개\")\n        return\n\n    # 모드 3: 기본 - 채팅방 열고 메시지 읽기\n    if not args.chat_name:\n        parser.print_help()\n        return\n\n    chat_name, messages = read_chat(args.chat_name, args.limit)\n\n    if not messages:\n        if args.json:\n            print(json.dumps({\n                'error': f\"'{args.chat_name}' 채팅방을 찾을 수 없습니다.\",\n                'chat': None,\n                'messages': []\n            }, ensure_ascii=False, indent=2))\n        else:\n            print(f\"'{args.chat_name}' 채팅방을 찾을 수 없거나 메시지가 없습니다.\")\n        return\n\n    if args.json:\n        print(json.dumps({'chat': chat_name, 'messages': messages}, ensure_ascii=False, indent=2))\n    else:\n        print(f\"\\n=== {chat_name} ({len(messages)}개) ===\\n\")\n        for m in messages:\n            sender = m['sender']\n            time_str = m['time'] or ''\n            msg = m['message'].replace('\\n', ' ')\n            if len(msg) > 80:\n                msg = msg[:80] + '...'\n            print(f\"[{time_str}] {sender}: {msg}\")\n\n    if args.close:\n        close_chat()\n        if not args.json:\n            print(\"\\n[창 닫힘]\")\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "plugins/kakaotalk/scripts/kakao_send.py",
    "content": "#!/usr/bin/env python3\n\"\"\"\nKakaoTalk 메시지 발송\n\nUsage:\n    python kakao_send.py \"채팅방이름\" \"메시지\"\n    python kakao_send.py \"구봉\" \"안녕하세요!\"\n    python kakao_send.py \"구봉\" \"밥 먹었어?\" --close        # 보내고 창 닫기\n    python kakao_send.py \"구봉\" \"밥 먹었어?\" --no-signature  # 서명 없이 보내기\n\"\"\"\n\nSIGNATURE = \"\\n\\nsent with claude code\"\n\nimport argparse\nimport subprocess\nimport sys\nimport time\n\ntry:\n    import atomacos\nexcept ImportError:\n    print(\"Error: atomacos not installed. Run: uv add atomacos\")\n    sys.exit(1)\n\nKAKAO_BUNDLE_ID = \"com.kakao.KakaoTalkMac\"\nMAIN_WINDOW_TITLES = (\"카카오톡\", \"KakaoTalk\")\n\n\ndef run_applescript(script: str) -> str:\n    result = subprocess.run([\"osascript\", \"-e\", script], capture_output=True, text=True)\n    return result.stdout.strip()\n\n\ndef key_code(code: int, modifiers: str = \"\"):\n    modifier_clause = f\"using {{{modifiers}}}\" if modifiers else \"\"\n    run_applescript(f'''\n        tell application \"System Events\"\n            key code {code} {modifier_clause}\n        end tell\n    ''')\n\n\ndef type_text(text: str):\n    \"\"\"클립보드를 통해 텍스트 입력 (한글 지원).\"\"\"\n    subprocess.run([\"pbcopy\"], input=text.encode(\"utf-8\"), check=True)\n    key_code(9, \"command down\")  # Cmd+V\n\n\ndef get_kakao_app():\n    try:\n        return atomacos.getAppRefByBundleId(KAKAO_BUNDLE_ID)\n    except ValueError:\n        print(\"Error: KakaoTalk is not running.\")\n        sys.exit(1)\n    except atomacos.ErrorAPIDisabled:\n        print(\"Error: Accessibility API disabled.\")\n        sys.exit(1)\n\n\ndef find_main_window(kakao_app):\n    \"\"\"메인 창(카카오톡 채팅 목록 창) 찾기.\"\"\"\n    for win in kakao_app.windows():\n        if win.AXTitle in MAIN_WINDOW_TITLES:\n            return win\n    return None\n\n\ndef raise_main_window(kakao_app):\n    \"\"\"메인 창을 앞으로 가져오기.\"\"\"\n    main_win = find_main_window(kakao_app)\n    if main_win:\n        try:\n            main_win.Raise()\n            
return True\n        except Exception:\n            pass\n    return False\n\n\ndef find_open_chat(kakao_app, chat_name: str):\n    \"\"\"이미 열린 채팅방 창에서 이름이 일치하는 것 찾기.\"\"\"\n    for win in kakao_app.windows():\n        title = win.AXTitle\n        if title in MAIN_WINDOW_TITLES:\n            continue\n        if chat_name.lower() in title.lower():\n            return win\n    return None\n\n\ndef get_all_chat_windows(kakao_app) -> list:\n    return [win for win in kakao_app.windows() if win.AXTitle not in MAIN_WINDOW_TITLES]\n\n\ndef search_and_open_chat(chat_name: str):\n    \"\"\"검색으로 채팅방 열기.\"\"\"\n    run_applescript('tell application \"KakaoTalk\" to activate')\n    time.sleep(0.3)\n\n    kakao = get_kakao_app()\n    raise_main_window(kakao)\n    time.sleep(0.3)\n\n    key_code(3, \"command down\")  # Cmd+F (검색)\n    time.sleep(0.5)\n\n    subprocess.run([\"pbcopy\"], input=chat_name.encode(), check=True)\n    key_code(9, \"command down\")  # Cmd+V\n    time.sleep(0.8)\n\n    key_code(125)  # Down arrow\n    time.sleep(0.2)\n    key_code(36)  # Enter\n    time.sleep(0.8)\n\n\ndef open_chat(chat_name: str):\n    \"\"\"채팅방 열기 (이미 열려있으면 그대로, 아니면 검색해서 열기).\"\"\"\n    kakao = get_kakao_app()\n\n    # 이미 열린 채팅방 확인\n    chat_win = find_open_chat(kakao, chat_name)\n    if chat_win:\n        # 해당 창을 앞으로\n        run_applescript('tell application \"KakaoTalk\" to activate')\n        time.sleep(0.2)\n        try:\n            chat_win.Raise()\n        except:\n            pass\n        return chat_win.AXTitle\n\n    # 검색해서 열기\n    before_titles = set(win.AXTitle for win in get_all_chat_windows(kakao))\n    search_and_open_chat(chat_name)\n\n    kakao = get_kakao_app()\n    after_windows = get_all_chat_windows(kakao)\n    new_windows = [win for win in after_windows if win.AXTitle not in before_titles]\n\n    if new_windows:\n        return new_windows[0].AXTitle\n\n    chat_win = find_open_chat(kakao, chat_name)\n    if chat_win:\n        return chat_win.AXTitle\n\n    if 
after_windows:\n        return after_windows[0].AXTitle\n\n    return None\n\n\ndef send_message_via_keyboard(message: str):\n    \"\"\"키보드 입력으로 메시지 전송.\"\"\"\n    # 텍스트 입력 (클립보드 사용)\n    type_text(message)\n    time.sleep(0.3)\n\n    # Enter로 전송\n    key_code(36)  # Enter\n    time.sleep(0.3)\n\n\ndef close_chat():\n    \"\"\"현재 채팅창 닫기.\"\"\"\n    key_code(53)  # Escape\n    time.sleep(0.2)\n\n\ndef send_message(chat_name: str, message: str, close_after: bool = False) -> dict:\n    \"\"\"채팅방에 메시지 발송.\"\"\"\n    result = {\n        'success': False,\n        'chat': None,\n        'message': message,\n        'error': None\n    }\n\n    # 1. 채팅방 열기\n    chat_title = open_chat(chat_name)\n    if not chat_title:\n        result['error'] = f\"'{chat_name}' 채팅방을 찾을 수 없습니다.\"\n        return result\n\n    result['chat'] = chat_title\n    time.sleep(0.3)\n\n    # 2. 메시지 전송 (키보드 입력 방식)\n    # 채팅창이 활성화된 상태에서 바로 입력\n    send_message_via_keyboard(message)\n\n    result['success'] = True\n\n    # 3. 
필요시 창 닫기\n    if close_after:\n        close_chat()\n\n    return result\n\n\ndef main():\n    parser = argparse.ArgumentParser(description='KakaoTalk 메시지 발송')\n    parser.add_argument('chat_name', help='채팅방 이름 (부분 일치)')\n    parser.add_argument('message', help='보낼 메시지')\n    parser.add_argument('--close', '-c', action='store_true', help='보내고 나서 창 닫기')\n    parser.add_argument('--json', '-j', action='store_true', help='JSON 출력')\n    parser.add_argument('--no-signature', action='store_true', help='서명 없이 보내기 (기본: \"sent with claude code\" 붙음)')\n\n    args = parser.parse_args()\n\n    # 서명 추가 (--no-signature가 없으면)\n    message = args.message\n    if not args.no_signature:\n        message = args.message + SIGNATURE\n\n    result = send_message(args.chat_name, message, args.close)\n\n    if args.json:\n        import json\n        print(json.dumps(result, ensure_ascii=False, indent=2))\n    else:\n        if result['success']:\n            print(f\"✓ [{result['chat']}]에 메시지 전송 완료\")\n            print(f\"  → {result['message']}\")\n        else:\n            print(f\"✗ 전송 실패: {result['error']}\")\n            sys.exit(1)\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "plugins/kakaotalk/skills/kakaotalk/SKILL.md",
    "content": "---\nname: kakaotalk\ndescription: This skill should be used when the user asks to \"카톡 보내줘\", \"카카오톡 메시지\", \"KakaoTalk message\", \"채팅 읽어줘\", \"~에게 메시지 보내줘\", or needs to send/read messages via KakaoTalk on macOS.\nversion: 2.0.0\n---\n\n# KakaoTalk CLI\n\nmacOS에서 CLI를 통해 카카오톡 메시지를 읽고 보내는 스킬.\n\n## 트리거\n\n- \"카카오톡 메시지\", \"카톡 읽어줘\", \"~에게 메시지 보내줘\"\n\n## 스크립트 구조\n\n| 파일 | 용도 |\n|------|------|\n| `kakao_read.py` | 채팅방 검색, 열기, 메시지 읽기 |\n| `kakao_send.py` | 메시지 발송 |\n\n---\n\n## 메시지 발송 워크플로우\n\n### Step 1: 채팅방 열고 대화 내역 읽기\n\n대상 이름으로 채팅방을 열고 대화 내역을 읽습니다:\n\n```bash\nuv run python .claude/skills/kakaotalk/scripts/kakao_read.py \"대상이름\" --json\n```\n\n**출력 예시:**\n```json\n{\n  \"chat_name\": \"구봉\",\n  \"messages\": [\n    {\"sender\": \"나\", \"text\": \"오늘 저녁 뭐 먹을까?\", \"time\": \"오후 3:24\"},\n    {\"sender\": \"구봉\", \"text\": \"파스타 어때?\", \"time\": \"오후 3:45\"}\n  ]\n}\n```\n\n**메시지 분석 시 주의:**\n- 배열 끝부분이 최신 메시지 (최근일수록 가치 높음)\n- 1주일 이상 된 내용은 상황이 바뀌었을 수 있음\n- 최근 대화 주제와 자연스럽게 이어지는 메시지 작성\n\n### Step 2: 맥락 파악 후 메시지 작성\n\n읽은 대화 내역을 바탕으로:\n1. 최근 대화 흐름 파악\n2. 사용자 요청에 맞는 메시지 초안 작성\n3. 
자연스럽고 맥락에 맞는 내용 구성\n\n### Step 3: 사용자 확인 (필수)\n\n**먼저 텍스트로 메시지 내용을 보여준 후** AskUserQuestion으로 확인:\n\n```\n[텍스트 출력]\n**최근 대화 요약:**\n- {최근 대화 내용 요약}\n\n**보낼 메시지:**\n받는 사람: {채팅방}\n---\n{메시지 내용}\n\nsent with claude code\n---\n\n[AskUserQuestion]\n질문: \"이 메시지를 보낼까요?\"\n옵션: [\"보내기\", \"수정 필요\"]\n```\n\n### Step 4: 발송\n\n사용자 확인 후 메시지 발송:\n\n```bash\nuv run python .claude/skills/kakaotalk/scripts/kakao_send.py \"채팅방이름\" \"메시지\"\n```\n\n---\n\n## 메시지 읽기 전용 워크플로우\n\n단순히 대화 내역만 확인할 때:\n\n```bash\nuv run python .claude/skills/kakaotalk/scripts/kakao_read.py \"대상이름\" --json\n```\n\n읽은 후 사용자에게 요약 제공:\n- 최근 대화 2-3개 요약\n- 현재 진행 중인 대화 주제\n- 답장이 필요한 내용이 있는지\n\n---\n\n## CLI 옵션 레퍼런스\n\n### kakao_read.py\n\n```bash\n# 기본: 채팅방 열고 메시지 읽기\nkakao_read.py \"채팅방이름\" [--limit N] [--json]\n\n# 채팅 목록\nkakao_read.py --list [--json]\n\n# 검색\nkakao_read.py --search \"검색어\" [--json]\n\n# 읽고 창 닫기\nkakao_read.py \"채팅방이름\" --close\n```\n\n### kakao_send.py\n\n```bash\n# 기본 (서명 포함)\nkakao_send.py \"채팅방\" \"메시지\"\n# → \"메시지\\n\\nsent with claude code\"\n\n# 서명 없이\nkakao_send.py \"채팅방\" \"메시지\" --no-signature\n\n# 보내고 창 닫기\nkakao_send.py \"채팅방\" \"메시지\" --close\n```\n\n---\n\n## 예시 시나리오\n\n### \"구봉한테 보낼 메시지 제안\"\n\n```\n[Step 1] 채팅방 열고 읽기\nuv run python .../kakao_read.py \"구봉\" --json\n\n[Step 2] 맥락 파악\n최근 대화: 저녁 메뉴 논의 중\n\n[Step 3] 메시지 제안\n\"파스타 좋아! 오늘 7시에 만날까?\"\n\n[Step 4] 사용자 확인 후 발송\n```\n\n---\n\n## 요구사항\n\n1. **atomacos 설치**: `uv add atomacos`\n2. **Accessibility 권한**: System Settings > Privacy & Security > Accessibility에서 Terminal 허용\n3. **카카오톡 실행**: macOS용 카카오톡 앱 실행 중\n"
  },
  {
    "path": "plugins/podcast/.claude-plugin/plugin.json",
    "content": "{\n  \"name\": \"podcast\",\n  \"version\": \"1.0.0\",\n  \"description\": \"Generate Korean podcast episodes from any source (URLs, tweets, articles, PDFs) with OpenAI TTS and auto-upload to YouTube\",\n  \"author\": {\n    \"name\": \"Team Attention\",\n    \"url\": \"https://github.com/team-attention\"\n  },\n  \"repository\": \"https://github.com/team-attention/plugins-for-claude-natives\",\n  \"license\": \"MIT\",\n  \"keywords\": [\"claude-code\", \"plugin\", \"podcast\", \"tts\", \"openai\", \"youtube\", \"korean\", \"audio\"]\n}\n"
  },
  {
    "path": "plugins/podcast/README.md",
    "content": "# Podcast Generator Plugin\n\nGenerate Korean podcast episodes from any source — URLs, tweets, articles, PDFs — with OpenAI TTS and auto-upload to YouTube.\n\n## Pipeline\n\n```\nSources → Analysis → Script → TTS (OpenAI) → MP4 → YouTube\n```\n\n## Quick Start\n\n```bash\n# Install\n/plugin install podcast\n```\n\nThen just say:\n- \"이 글을 팟캐스트로 만들어\"\n- \"Make a podcast from these sources\"\n- \"Turn this into an audio episode\"\n\n## Features\n\n- **Multi-source fusion**: Analyzes 2+ sources in parallel and synthesizes insights\n- **Korean podcast script**: Conversational tone, proper number/name localization\n- **OpenAI TTS**: Uses `gpt-4o-mini-tts` with `marin` voice, 1500-char chunking with retry\n- **YouTube auto-upload**: OAuth browser flow, resumable upload, metadata saved\n- **Partial execution**: Script-only, TTS-only, or upload-only\n\n## Requirements\n\n| Dependency | Purpose |\n|-----------|---------|\n| ffmpeg | Audio merging + MP4 conversion |\n| OpenAI API key | TTS generation (`OPENAI_API_KEY` env var) |\n| Google OAuth client secret | YouTube upload |\n| Python 3.10+ | All scripts (stdlib only, no pip) |\n\n## Setup\n\n### OpenAI TTS\nSet `OPENAI_API_KEY` environment variable, or provide when prompted.\n\n### YouTube Upload\n1. Create a Google Cloud project with YouTube Data API v3 enabled\n2. Download OAuth client secret JSON\n3. Place in `~/Downloads/client_secret_*.json`\n4. First upload will open browser for authentication\n\n## License\n\nMIT\n"
  },
  {
    "path": "plugins/podcast/skills/podcast/SKILL.md",
    "content": "---\nname: podcast\ndescription: \"Generate Korean podcast episodes from any source (URLs, tweets, articles, PDFs) — analyzes content, writes a script, generates audio via OpenAI TTS, converts to MP4, and auto-uploads to YouTube. Use this skill whenever the user says 'make a podcast', 'convert to podcast', 'podcast', 'create an episode', 'turn this into audio', 'YouTube podcast', 'turn this article into a podcast', 'publish as audio', or provides sources and wants them transformed into a listenable format. Supports partial execution: script-only, TTS-only, or upload-only.\"\n---\n\n# Podcast Generator\n\nAnalyze sources, generate a Korean podcast script, produce audio via OpenAI TTS, and auto-upload to YouTube.\n\n## Pipeline\n\n```\n[Source Collection] → [Analysis/Fusion] → [Script Writing] → [TTS Generation] → [MP4 Conversion] → [YouTube Upload]\n```\n\n## Step 1: Source Collection & Analysis\n\nCollect and analyze user-provided sources. Processing by type:\n\n- **URL/Article**: WebFetch or subagent for full text\n- **Tweet/X post**: Use WebFetch with `api.fxtwitter.com` (replace domain in X/Twitter URL)\n- **PDF**: Read tool directly\n- **GitHub repo**: Clone and analyze structure (use subagent)\n- **Conversation context**: Reuse content already analyzed in current session\n\nWhen 2+ sources are provided, **always spawn parallel subagents** for each.\n\n## Step 2: Script Writing\n\n### Structure (8-12 min, 3000-5000 chars)\n\n```markdown\n# [Episode Title]\n\n> [Duration] podcast script | [Date]\n> Sources: [source list]\n\n---\n\n## Opening (1 min)\n- Hook: one sentence on why this topic matters\n- Introduce sources\n- Lead with conclusion (state core message upfront)\n\n## Body Part 1 (3 min)\n- Deep analysis of first source/perspective\n\n## Body Part 2 (3 min)\n- Deep analysis of second source/perspective\n\n## Fusion/Intersection (3 min)\n- Emergent insights from combining sources\n- Patterns, commonalities, contrasts\n- Generalizable 
implications\n\n## Closing (30 sec)\n- One-sentence summary of core message\n- Sign-off\n```\n\n### Script Writing Principles\n\n- **Write as you speak**: conversational Korean (\"~입니다\", \"~거죠\", \"~인데요\")\n- **Numbers in Korean**: \"267K\" → \"이십육만\", \"$75,000\" → \"칠만오천 달러\"\n- **English names in Korean pronunciation**: \"Garry Tan\" → \"개리 탄\"\n- **No tables or code blocks**: TTS cannot read them. Convert table content to sentences\n- **Shift tone for quotes**: \"개리 탄 본인이 이렇게 말합니다.\" to create distinction\n- **Short sentences**: keep each sentence under 50 characters\n\n### File Layout\n\n```\n<output-dir>/\n├── script.md       ← Script\n├── episode.mp3     ← Audio\n├── episode.mp4     ← Video (for YouTube)\n└── metadata.json   ← Title, description, tags, YouTube URL\n```\n\nThe output directory can be any user-specified path. A sensible default is `podcast/YYYY-MM-DD-[slug]/` relative to the current working directory.\n\n## Step 3: TTS Generation\n\nConvert script to audio using `scripts/generate_tts.py`:\n\n```bash\npython3 <plugin-path>/skills/podcast/scripts/generate_tts.py \\\n  --input <script.md path> \\\n  --output <episode.mp3 path> \\\n  --api-key <OpenAI API key>\n```\n\nReplace `<plugin-path>` with the actual path where this plugin is installed (use `${CLAUDE_PLUGIN_ROOT}` if available, or the resolved plugin installation path).\n\n### OpenAI API Key\n\nCheck `OPENAI_API_KEY` environment variable first. If not set, ask the user.\n\n### TTS Settings\n\n| Setting | Value | Note |\n|---------|-------|------|\n| Model | `gpt-4o-mini-tts` | Latest model with instructions support |\n| Voice | `marin` | Best for Korean. `cedar` as alternative |\n| Chunk size | 1500 chars | 2000 token limit, Korean ~1.5 char/token |\n| Instructions | Auto-generated per script | See default below |\n\nDefault TTS instructions:\n> \"따뜻하고 친근한 한국어 팟캐스트 호스트. 명확한 발음으로 또박또박 읽되, 자연스러운 억양과 적절한 감정을 담아서. 중요한 포인트에서는 약간 힘을 주고, 인용구에서는 톤을 살짝 바꿔서 구분감을 준다. 
전체적으로 지적이면서도 편안한 분위기.\"\n\nIf the user specifies a tone, customize via `--instructions`.\n\n## Step 4: MP4 Conversion\n\nConvert MP3 to MP4 with a static title card:\n\n```bash\npython3 <plugin-path>/skills/podcast/scripts/convert_mp4.py \\\n  --input <episode.mp3 path> \\\n  --output <episode.mp4 path> \\\n  --title \"Episode Title\" \\\n  --subtitle \"Subtitle\"\n```\n\nGenerates a 1920x1080 video with dark background (#1a1a2e) and Korean title/subtitle overlay.\n\n## Step 5: YouTube Upload\n\n```bash\npython3 <plugin-path>/skills/podcast/scripts/upload_youtube.py \\\n  --video <episode.mp4 path> \\\n  --title \"Episode Title\" \\\n  --description \"Description\" \\\n  --privacy unlisted\n```\n\n### OAuth Setup\n\n- Google OAuth client secret: auto-discovers `~/Downloads/client_secret_*.json` or `~/.config/google/client_secret_*.json`\n- Token: stored alongside the video file by default (override with `--token-path`)\n- First run requires browser-based Google authentication\n- Ask user which YouTube account to use if multiple are available\n- Never copy scripts to the episode directory. Always run from the plugin's original path\n\n### Upload Defaults\n\n- Privacy: `unlisted` (unless user specifies otherwise)\n- Category: People & Blogs (22)\n- Language: ko\n\n## Step 6: Completion Report\n\nAfter upload, report to user:\n\n```\nDone!\n- Script: <path>/script.md\n- Audio: <path>/episode.mp3\n- Video: <path>/episode.mp4\n- YouTube: https://youtu.be/VIDEO_ID (unlisted)\n```\n\nPlay `episode.mp3` with `afplay` so the user can listen immediately.\n\n## Partial Execution\n\nUsers may request only part of the pipeline:\n\n- \"Just write the script\" → Steps 1-2 only\n- \"Generate TTS from this script\" → Step 3 only\n- \"Upload to YouTube\" → Step 5 only (requires existing MP4)\n- \"Make it public\" → Update YouTube privacy via API\n\n## Requirements\n\n- **ffmpeg**: required for audio merging and MP4 conversion. 
On macOS, `homebrew-ffmpeg/ffmpeg` tap may be needed for full codec support\n- **OpenAI API key**: for TTS generation (`OPENAI_API_KEY` env var or provided by user)\n- **Google OAuth client secret**: for YouTube upload (download from Google Cloud Console)\n- **macOS font**: uses `/System/Library/Fonts/AppleSDGothicNeo.ttc` for Korean text overlay. On other platforms, adjust `FONT_PATH` in `convert_mp4.py`\n- **Python 3.10+**: all scripts use standard library only (no pip install needed)\n"
  },
  {
    "path": "plugins/podcast/skills/podcast/scripts/convert_mp4.py",
    "content": "#!/usr/bin/env python3\n\"\"\"MP3 → MP4 conversion (dark background + title overlay)\"\"\"\n\nimport argparse\nimport os\nimport subprocess\nimport sys\n\nFONT_PATH = \"/System/Library/Fonts/AppleSDGothicNeo.ttc\"\nBG_COLOR = \"0x1a1a2e\"\nRESOLUTION = \"1920x1080\"\n\n\ndef escape_drawtext(text):\n    \"\"\"Escape special characters for ffmpeg drawtext filter\"\"\"\n    text = text.replace(\"\\\\\", \"\\\\\\\\\")\n    text = text.replace(\"'\", \"'\\\\''\")\n    text = text.replace(\":\", \"\\\\:\")\n    text = text.replace(\";\", \"\\\\;\")\n    text = text.replace(\"[\", \"\\\\[\")\n    text = text.replace(\"]\", \"\\\\]\")\n    text = text.replace(\"=\", \"\\\\=\")\n    text = text.replace(\"%\", \"%%\")\n    return text\n\n\ndef convert(input_path, output_path, title, subtitle=\"\"):\n    \"\"\"Convert MP3 to MP4 with static title card via ffmpeg\"\"\"\n    vf_parts = []\n\n    if title:\n        escaped_title = escape_drawtext(title)\n        vf_parts.append(\n            f\"drawtext=text='{escaped_title}'\"\n            f\":fontsize=60:fontcolor=white\"\n            f\":x=(w-text_w)/2:y=(h-text_h)/2-40\"\n            f\":fontfile={FONT_PATH}\"\n        )\n\n    if subtitle:\n        escaped_sub = escape_drawtext(subtitle)\n        vf_parts.append(\n            f\"drawtext=text='{escaped_sub}'\"\n            f\":fontsize=36:fontcolor=0xAAAAAA\"\n            f\":x=(w-text_w)/2:y=(h-text_h)/2+40\"\n            f\":fontfile={FONT_PATH}\"\n        )\n\n    vf = \",\".join(vf_parts) if vf_parts else \"null\"\n\n    cmd = [\n        \"ffmpeg\", \"-y\",\n        \"-f\", \"lavfi\", \"-i\", f\"color=c={BG_COLOR}:s={RESOLUTION}:r=1\",\n        \"-i\", input_path,\n        \"-c:v\", \"libx264\", \"-tune\", \"stillimage\",\n        \"-c:a\", \"aac\", \"-b:a\", \"192k\",\n        \"-pix_fmt\", \"yuv420p\",\n        \"-shortest\",\n        \"-vf\", vf,\n        output_path,\n    ]\n\n    result = subprocess.run(cmd, capture_output=True, text=True)\n    if 
result.returncode != 0:\n        print(f\"ffmpeg error: {result.stderr[-500:]}\", file=sys.stderr)\n        sys.exit(1)\n\n    size_mb = os.path.getsize(output_path) / 1024 / 1024\n    print(f\"Done! {output_path} ({size_mb:.1f} MB)\")\n\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"MP3 to MP4 conversion\")\n    parser.add_argument(\"--input\", required=True, help=\"Input MP3 path\")\n    parser.add_argument(\"--output\", required=True, help=\"Output MP4 path\")\n    parser.add_argument(\"--title\", default=\"\", help=\"Video title\")\n    parser.add_argument(\"--subtitle\", default=\"\", help=\"Video subtitle\")\n    args = parser.parse_args()\n\n    convert(args.input, args.output, args.title, args.subtitle)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "plugins/podcast/skills/podcast/scripts/generate_tts.py",
    "content": "#!/usr/bin/env python3\n\"\"\"Generate podcast audio with OpenAI gpt-4o-mini-tts: chunk splitting + ffmpeg merging\"\"\"\n\nimport argparse\nimport json\nimport os\nimport re\nimport subprocess\nimport sys\nimport time\nimport urllib.error\nimport urllib.request\n\nDEFAULT_MODEL = \"gpt-4o-mini-tts\"\nDEFAULT_VOICE = \"marin\"\nDEFAULT_INSTRUCTIONS = (\n    \"따뜻하고 친근한 한국어 팟캐스트 호스트. \"\n    \"명확한 발음으로 또박또박 읽되, 자연스러운 억양과 적절한 감정을 담아서. \"\n    \"중요한 포인트에서는 약간 힘을 주고, 인용구에서는 톤을 살짝 바꿔서 구분감을 준다. \"\n    \"전체적으로 지적이면서도 편안한 분위기.\"\n)\nMAX_CHARS = 1500  # gpt-4o-mini-tts: 2000 token limit, Korean is ~1.5 chars/token\n\n\ndef extract_speech_text(md_path):\n    \"\"\"Extract only the spoken text from markdown (strip headers, metadata, tables, code blocks)\"\"\"\n    with open(md_path, \"r\") as f:\n        text = f.read()\n\n    lines = text.split(\"\\n\")\n    speech_lines = []\n    skip = False\n    for line in lines:\n        if line.startswith(\"# \") and speech_lines == []:\n            continue\n        if line.startswith(\">\"):\n            continue\n        if line.startswith(\"---\"):\n            continue\n        if line.startswith(\"## \"):\n            speech_lines.append(\"\")\n            continue\n        if line.startswith(\"```\"):\n            skip = not skip\n            continue\n        if line.startswith(\"|\"):\n            continue\n        if skip:\n            continue\n        clean = line.strip()\n        clean = re.sub(r'\\*\\*(.+?)\\*\\*', r'\\1', clean)\n        clean = re.sub(r'\\*(.+?)\\*', r'\\1', clean)\n        clean = re.sub(r'`(.+?)`', r'\\1', clean)\n        clean = re.sub(r'\\[(.+?)\\]\\(.+?\\)', r'\\1', clean)\n        clean = re.sub(r'^[-*]\\s+', '', clean)  # strip bullet markers\n        if clean:\n            speech_lines.append(clean)\n\n    return \"\\n\".join(speech_lines)\n\n\ndef split_paragraph_by_sentences(para, max_chars=MAX_CHARS):\n    \"\"\"Split a single paragraph sentence by sentence when it exceeds max_chars\"\"\"\n    sentences = re.split(r'(?<=[.!?。])\\s+', para)\n    chunks = []\n    current = \"\"\n    for sent in sentences:\n        if len(current) + len(sent) + 1 > max_chars:\n            if current:\n                chunks.append(current.strip())\n            if len(sent) > max_chars:\n                for i in range(0, len(sent), max_chars):\n                    chunks.append(sent[i:i + max_chars])\n                current = \"\"\n            else:\n                current = sent\n        else:\n            current = current + \" \" + sent if current else sent\n    if current.strip():\n        chunks.append(current.strip())\n    return chunks\n\n\ndef split_into_chunks(text, max_chars=MAX_CHARS):\n    \"\"\"Split into chunks by paragraph (oversized paragraphs are re-split by sentence)\"\"\"\n    paragraphs = text.split(\"\\n\\n\")\n    chunks = []\n    current = \"\"\n\n    for para in paragraphs:\n        para = para.strip()\n        if not para:\n            continue\n        if len(para) > max_chars:\n            if current:\n                chunks.append(current.strip())\n                current = \"\"\n            chunks.extend(split_paragraph_by_sentences(para, max_chars))\n            continue\n        if len(current) + len(para) + 2 > max_chars:\n            if current:\n                chunks.append(current.strip())\n            current = para\n        else:\n            current = current + \"\\n\\n\" + para if current else para\n\n    if current.strip():\n        chunks.append(current.strip())\n\n    return chunks\n\n\ndef generate_tts_chunk(text, output_path, api_key, model, voice, instructions,\n                       max_retries=3):\n    \"\"\"Call the OpenAI TTS API for a single chunk (retries with exponential backoff)\"\"\"\n    payload = {\n        \"model\": model,\n        \"input\": text,\n        \"voice\": voice,\n        \"response_format\": \"mp3\",\n    }\n    if instructions:\n        payload[\"instructions\"] = instructions\n\n    data = json.dumps(payload).encode(\"utf-8\")\n\n    for attempt in range(max_retries):\n        req = urllib.request.Request(\n            \"https://api.openai.com/v1/audio/speech\",\n            data=data,\n            headers={\n                \"Authorization\": f\"Bearer {api_key}\",\n                \"Content-Type\": \"application/json\",\n            },\n            method=\"POST\",\n        )\n        try:\n            with urllib.request.urlopen(req, timeout=180) as resp:\n                with open(output_path, \"wb\") as f:\n                    f.write(resp.read())\n            return\n        except urllib.error.HTTPError as e:\n            body = e.read().decode(\"utf-8\", errors=\"replace\")\n            if e.code == 429 or e.code >= 500:\n                wait = 2 ** attempt\n                print(f\"    Retry {attempt+1}/{max_retries} (waiting {wait}s)...\",\n                      file=sys.stderr)\n                time.sleep(wait)\n                continue\n            print(f\"    API Error {e.code}: {body}\", file=sys.stderr)\n            raise\n        except (urllib.error.URLError, TimeoutError):\n            if attempt < max_retries - 1:\n                wait = 2 ** attempt\n                print(f\"    Network error, retry {attempt+1}/{max_retries} (waiting {wait}s)...\",\n                      file=sys.stderr)\n                time.sleep(wait)\n                continue\n            raise\n\n    raise RuntimeError(f\"TTS API failed after {max_retries} attempts\")\n\n\ndef merge_audio(chunk_files, output_path):\n    \"\"\"Merge chunks with ffmpeg concat\"\"\"\n    if len(chunk_files) == 1:\n        os.rename(chunk_files[0], output_path)\n        return\n\n    output_dir = os.path.dirname(output_path)\n    list_file = os.path.join(output_dir, \"_chunks.txt\")\n    with open(list_file, \"w\") as f:\n        for cf in chunk_files:\n            f.write(f\"file '{cf}'\\n\")\n\n    subprocess.run(\n        [\"ffmpeg\", \"-y\", \"-f\", \"concat\", \"-safe\", \"0\",\n         \"-i\", list_file, \"-c\", \"copy\", output_path],\n        check=True, capture_output=True,\n    )\n\n    os.remove(list_file)\n    for cf in chunk_files:\n        os.remove(cf)\n\n\ndef get_duration(path):\n    \"\"\"Return the audio duration in seconds via ffprobe\"\"\"\n    result = subprocess.run(\n        [\"ffprobe\", \"-v\", \"error\", \"-show_entries\", \"format=duration\",\n         \"-of\", \"default=noprint_wrappers=1:nokey=1\", path],\n        capture_output=True, text=True,\n    )\n    return float(result.stdout.strip())\n\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"Podcast TTS Generation\")\n    parser.add_argument(\"--input\", required=True, help=\"Script markdown path\")\n    parser.add_argument(\"--output\", required=True, help=\"Output MP3 path\")\n    parser.add_argument(\"--api-key\", default=os.environ.get(\"OPENAI_API_KEY\", \"\"), help=\"OpenAI API key\")\n    parser.add_argument(\"--model\", default=DEFAULT_MODEL)\n    parser.add_argument(\"--voice\", default=DEFAULT_VOICE)\n    parser.add_argument(\"--instructions\", default=DEFAULT_INSTRUCTIONS)\n    args = parser.parse_args()\n\n    if not args.api_key:\n        print(\"ERROR: --api-key or OPENAI_API_KEY env var required\", file=sys.stderr)\n        sys.exit(1)\n\n    print(\"1/4 Extracting speech text...\")\n    speech_text = extract_speech_text(args.input)\n    print(f\"    Total: {len(speech_text)} chars\")\n\n    print(\"2/4 Splitting into chunks...\")\n    chunks = split_into_chunks(speech_text)\n    print(f\"    {len(chunks)} chunks\")\n\n    print(\"3/4 Generating TTS...\")\n    output_dir = os.path.dirname(args.output)\n    chunk_files = []\n    for i, chunk in enumerate(chunks):\n        out = os.path.join(output_dir, f\"_chunk_{i:03d}.mp3\")\n        print(f\"    [{i+1}/{len(chunks)}] {len(chunk)} chars...\")\n        generate_tts_chunk(chunk, out, args.api_key, args.model, args.voice, args.instructions)\n        chunk_files.append(out)\n\n    print(\"4/4 Merging audio...\")\n    merge_audio(chunk_files, args.output)\n\n    duration = get_duration(args.output)\n    minutes, seconds = int(duration // 60), int(duration % 60)\n    print(f\"\\nDone! {args.output} ({minutes}m {seconds}s)\")\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "plugins/podcast/skills/podcast/scripts/upload_youtube.py",
    "content": "#!/usr/bin/env python3\n\"\"\"YouTube Data API v3 resumable upload — standard library only\"\"\"\n\nimport argparse\nimport glob\nimport http.server\nimport json\nimport os\nimport sys\nimport urllib.error\nimport urllib.parse\nimport urllib.request\nimport webbrowser\n\nSCOPES = \"https://www.googleapis.com/auth/youtube.upload\"\nREDIRECT_PORT = 8085\nREDIRECT_URI = f\"http://localhost:{REDIRECT_PORT}\"\n\n\ndef find_client_secret():\n    \"\"\"Auto-discover Google OAuth client secret\"\"\"\n    patterns = [\n        os.path.expanduser(\"~/Downloads/client_secret_*.json\"),\n        os.path.expanduser(\"~/.config/google/client_secret_*.json\"),\n    ]\n    for p in patterns:\n        matches = glob.glob(p)\n        if matches:\n            return sorted(matches)[-1]\n    return None\n\n\ndef load_client_config(path):\n    with open(path) as f:\n        data = json.load(f)\n    cfg = data.get(\"installed\") or data.get(\"web\")\n    return cfg[\"client_id\"], cfg[\"client_secret\"]\n\n\ndef get_auth_code(client_id):\n    \"\"\"Browser OAuth flow → authorization code\"\"\"\n    auth_url = (\n        \"https://accounts.google.com/o/oauth2/v2/auth?\"\n        + urllib.parse.urlencode({\n            \"client_id\": client_id,\n            \"redirect_uri\": REDIRECT_URI,\n            \"response_type\": \"code\",\n            \"scope\": SCOPES,\n            \"access_type\": \"offline\",\n            \"prompt\": \"consent\",\n        })\n    )\n\n    code_holder = {}\n\n    class Handler(http.server.BaseHTTPRequestHandler):\n        def do_GET(self):\n            qs = urllib.parse.parse_qs(urllib.parse.urlparse(self.path).query)\n            code_holder[\"code\"] = qs.get(\"code\", [None])[0]\n            self.send_response(200)\n            self.send_header(\"Content-Type\", \"text/html; charset=utf-8\")\n            self.end_headers()\n            self.wfile.write(\"Auth complete! 
You can close this tab.\".encode(\"utf-8\"))\n\n        def log_message(self, *args):\n            pass\n\n    server = http.server.HTTPServer((\"localhost\", REDIRECT_PORT), Handler)\n    print(\"  Opening browser for Google auth...\")\n    webbrowser.open(auth_url)\n    server.handle_request()\n    server.server_close()\n    return code_holder.get(\"code\")\n\n\ndef exchange_code(client_id, client_secret, code, token_path):\n    data = urllib.parse.urlencode({\n        \"code\": code,\n        \"client_id\": client_id,\n        \"client_secret\": client_secret,\n        \"redirect_uri\": REDIRECT_URI,\n        \"grant_type\": \"authorization_code\",\n    }).encode()\n\n    req = urllib.request.Request(\"https://oauth2.googleapis.com/token\", data=data)\n    with urllib.request.urlopen(req) as resp:\n        tokens = json.loads(resp.read())\n\n    with open(token_path, \"w\") as f:\n        json.dump(tokens, f, indent=2)\n    print(f\"  Token saved: {token_path}\")\n    return tokens[\"access_token\"]\n\n\ndef refresh_token(client_id, client_secret, token_path):\n    with open(token_path) as f:\n        tokens = json.load(f)\n\n    data = urllib.parse.urlencode({\n        \"refresh_token\": tokens[\"refresh_token\"],\n        \"client_id\": client_id,\n        \"client_secret\": client_secret,\n        \"grant_type\": \"refresh_token\",\n    }).encode()\n\n    req = urllib.request.Request(\"https://oauth2.googleapis.com/token\", data=data)\n    with urllib.request.urlopen(req) as resp:\n        new_tokens = json.loads(resp.read())\n\n    tokens[\"access_token\"] = new_tokens[\"access_token\"]\n    with open(token_path, \"w\") as f:\n        json.dump(tokens, f, indent=2)\n    return tokens[\"access_token\"]\n\n\ndef get_access_token(client_secret_path, token_path):\n    client_id, client_secret = load_client_config(client_secret_path)\n\n    if os.path.exists(token_path):\n        try:\n            return refresh_token(client_id, client_secret, token_path)\n       
 except Exception:\n            print(\"  Token expired. Re-authenticating...\")\n\n    code = get_auth_code(client_id)\n    if not code:\n        print(\"ERROR: Failed to get authorization code.\", file=sys.stderr)\n        sys.exit(1)\n    return exchange_code(client_id, client_secret, code, token_path)\n\n\ndef upload_video(access_token, video_path, title, description, tags, privacy):\n    \"\"\"Resumable upload\"\"\"\n    metadata = {\n        \"snippet\": {\n            \"title\": title,\n            \"description\": description,\n            \"tags\": tags,\n            \"categoryId\": \"22\",\n            \"defaultLanguage\": \"ko\",\n        },\n        \"status\": {\n            \"privacyStatus\": privacy,\n            \"selfDeclaredMadeForKids\": False,\n        },\n    }\n\n    meta_bytes = json.dumps(metadata).encode(\"utf-8\")\n    file_size = os.path.getsize(video_path)\n\n    init_req = urllib.request.Request(\n        \"https://www.googleapis.com/upload/youtube/v3/videos?\"\n        + urllib.parse.urlencode({\"uploadType\": \"resumable\", \"part\": \"snippet,status\"}),\n        data=meta_bytes,\n        headers={\n            \"Authorization\": f\"Bearer {access_token}\",\n            \"Content-Type\": \"application/json; charset=utf-8\",\n            \"X-Upload-Content-Length\": str(file_size),\n            \"X-Upload-Content-Type\": \"video/mp4\",\n        },\n        method=\"POST\",\n    )\n\n    with urllib.request.urlopen(init_req) as resp:\n        upload_url = resp.headers[\"Location\"]\n\n    print(f\"  Uploading... 
({file_size / 1024 / 1024:.1f} MB)\")\n    with open(video_path, \"rb\") as f:\n        video_data = f.read()\n\n    upload_req = urllib.request.Request(\n        upload_url,\n        data=video_data,\n        headers={\n            \"Authorization\": f\"Bearer {access_token}\",\n            \"Content-Type\": \"video/mp4\",\n            \"Content-Length\": str(file_size),\n        },\n        method=\"PUT\",\n    )\n\n    with urllib.request.urlopen(upload_req, timeout=300) as resp:\n        return json.loads(resp.read())\n\n\ndef main():\n    parser = argparse.ArgumentParser(description=\"YouTube Upload\")\n    parser.add_argument(\"--video\", required=True, help=\"MP4 file path\")\n    parser.add_argument(\"--title\", required=True, help=\"Video title\")\n    parser.add_argument(\"--description\", default=\"\", help=\"Video description\")\n    parser.add_argument(\"--tags\", default=\"\", help=\"Tags (comma-separated)\")\n    parser.add_argument(\"--privacy\", default=\"unlisted\", choices=[\"public\", \"unlisted\", \"private\"])\n    parser.add_argument(\"--client-secret\", default=None, help=\"OAuth client secret path\")\n    parser.add_argument(\"--token-path\", default=None, help=\"Token storage path\")\n    args = parser.parse_args()\n\n    client_secret = args.client_secret or find_client_secret()\n    if not client_secret:\n        print(\"ERROR: Google OAuth client secret not found.\", file=sys.stderr)\n        print(\"  Place client_secret_*.json in ~/Downloads/ or use --client-secret\", file=sys.stderr)\n        sys.exit(1)\n\n    token_path = args.token_path or os.path.join(os.path.dirname(args.video), \"youtube_token.json\")\n    tags = [t.strip() for t in args.tags.split(\",\") if t.strip()] if args.tags else []\n\n    print(\"1/2 OAuth authentication...\")\n    access_token = get_access_token(client_secret, token_path)\n\n    print(\"2/2 YouTube upload...\")\n    result = upload_video(access_token, args.video, args.title, args.description, tags, 
args.privacy)\n\n    video_id = result[\"id\"]\n    url = f\"https://youtu.be/{video_id}\"\n    print(f\"\\nUpload complete!\")\n    print(f\"  URL: {url}\")\n    print(f\"  Status: {result['status']['privacyStatus']}\")\n\n    meta_path = os.path.join(os.path.dirname(args.video), \"metadata.json\")\n    with open(meta_path, \"w\") as f:\n        json.dump({\n            \"youtube_url\": url,\n            \"youtube_id\": video_id,\n            \"title\": args.title,\n            \"description\": args.description,\n            \"privacy\": result[\"status\"][\"privacyStatus\"],\n        }, f, indent=2, ensure_ascii=False)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "plugins/say-summary/.claude-plugin/plugin.json",
    "content": "{\n  \"name\": \"say-summary\",\n  \"version\": \"1.0.0\",\n  \"description\": \"Speaks a short summary of Claude's response using macOS say command\",\n  \"author\": {\n    \"name\": \"Team Attention\",\n    \"url\": \"https://github.com/team-attention\"\n  },\n  \"repository\": \"https://github.com/team-attention/plugins-for-claude-natives\",\n  \"license\": \"MIT\",\n  \"keywords\": [\"accessibility\", \"audio\", \"tts\", \"macos\"],\n  \"hooks\": \"./hooks/hooks.json\"\n}\n"
  },
  {
    "path": "plugins/say-summary/README.md",
    "content": "# say-summary\n\nA Claude Code plugin that speaks a short summary of Claude's response using macOS text-to-speech.\n\n## Features\n\n- Summarizes Claude's response to 3-10 words using Claude Haiku\n- Speaks the summary aloud using macOS `say` command\n- **Korean/English auto-detection**: Uses Yuna voice for Korean, Samantha for English\n- Runs in background so it doesn't block Claude Code\n\n## Requirements\n\n- **macOS** (uses the `say` command)\n- **Python 3.10+**\n- **Claude Code CLI** installed\n\n## Installation\n\n```bash\n# Add the marketplace\n/plugin marketplace add team-attention/plugins-for-claude-natives\n\n# Install the plugin\n/plugin install say-summary\n\n# Run setup to install Python dependencies\n~/.claude/plugins/say-summary/scripts/setup.sh\n```\n\nOr manually install the dependency:\n\n```bash\npip3 install --user claude-agent-sdk\n```\n\n## How It Works\n\n1. When Claude finishes responding (Stop hook), the plugin extracts the last message\n2. If the message is longer than 10 words, it uses Claude Haiku to create a short headline\n3. The summary is spoken aloud via macOS `say` command\n\n## Configuration\n\nThe plugin uses these defaults:\n- Speech rate: 190 words per minute\n- Model: Claude Haiku (for fast summarization)\n- Korean voice: Yuna\n- English voice: Samantha\n\n## Logs\n\nLogs are written to `/tmp/speak-hook.log` for debugging.\n\n## License\n\nMIT\n"
  },
  {
    "path": "plugins/say-summary/hooks/hooks.json",
    "content": "{\n  \"description\": \"Summarize and speak last Claude response\",\n  \"hooks\": {\n    \"Stop\": [\n      {\n        \"hooks\": [\n          {\n            \"type\": \"command\",\n            \"command\": \"python3 ${CLAUDE_PLUGIN_ROOT}/scripts/say-summary.py\",\n            \"timeout\": 30\n          }\n        ]\n      }\n    ]\n  }\n}\n"
  },
  {
    "path": "plugins/say-summary/requirements.txt",
    "content": "claude-agent-sdk>=0.1.0\n"
  },
  {
    "path": "plugins/say-summary/scripts/say-summary.py",
    "content": "#!/usr/bin/env python3\n\"\"\"\nStop hook: Summarizes and speaks the last Claude response.\n\n- Extracts the last assistant message from transcript\n- Uses Claude Agent SDK (Haiku) to summarize in 10 words or less\n- Speaks the summary via macOS say command\n- Runs in background so hook exits immediately\n\"\"\"\n\nimport asyncio\nimport json\nimport os\nimport subprocess\nfrom datetime import datetime\nfrom pathlib import Path\n\nfrom claude_agent_sdk import (AssistantMessage, ClaudeAgentOptions, TextBlock,\n                              query)\n\nLOG_FILE = Path(\"/tmp/speak-hook.log\")\n\n\ndef log(message: str) -> None:\n    \"\"\"Write message to log file.\"\"\"\n    timestamp = datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\")\n    with open(LOG_FILE, \"a\") as f:\n        f.write(f\"[{timestamp}] {message}\\n\")\n\n\ndef get_project_dir() -> Path | None:\n    \"\"\"Find Claude project directory for current working directory.\"\"\"\n    cwd = os.getcwd()\n    # /Users/bong/path/to/project -> -Users-bong-path-to-project\n    project_dir_name = cwd.replace(\"/\", \"-\")\n    claude_project_dir = Path.home() / \".claude\" / \"projects\" / project_dir_name\n\n    if claude_project_dir.is_dir():\n        return claude_project_dir\n    return None\n\n\ndef get_latest_transcript(project_dir: Path) -> Path | None:\n    \"\"\"Find most recently modified transcript file.\"\"\"\n    jsonl_files = list(project_dir.glob(\"*.jsonl\"))\n    if not jsonl_files:\n        return None\n\n    return max(jsonl_files, key=lambda f: f.stat().st_mtime)\n\n\ndef extract_last_assistant_message(transcript_path: Path) -> str | None:\n    \"\"\"Extract last assistant message from transcript.\"\"\"\n    try:\n        with open(transcript_path, \"r\") as f:\n            lines = f.readlines()\n\n        # Search in reverse order\n        for line in reversed(lines):\n            try:\n                data = json.loads(line)\n                message = data.get(\"message\", 
{})\n\n                if message and message.get(\"role\") == \"assistant\":\n                    content = message.get(\"content\", [])\n\n                    # Extract text type items\n                    text_parts = [\n                        item.get(\"text\", \"\")\n                        for item in content\n                        if isinstance(item, dict) and item.get(\"type\") == \"text\"\n                    ]\n\n                    full_text = \"\".join(text_parts)\n                    if full_text:\n                        return full_text\n\n            except json.JSONDecodeError:\n                continue\n\n    except Exception as e:\n        log(f\"Error reading transcript: {e}\")\n\n    return None\n\n\nasync def summarize_with_haiku(text: str) -> str:\n    \"\"\"Summarize message to 10 words or less using Claude Haiku.\"\"\"\n    # Return as-is if already 10 words or less\n    if len(text.split()) <= 10:\n        return text.strip()\n\n    # Truncate for faster processing\n    truncated = text[:500] if len(text) > 500 else text\n\n    system_prompt = \"You are a headline writer. Output ONLY a 3-10 word headline. No questions. No commentary. No offers to help. Just the headline. 
If the text contains both English and Korean, write the headline in Korean.\"\n\n    options = ClaudeAgentOptions(\n        model=\"haiku\",\n        system_prompt=system_prompt,\n        allowed_tools=[],\n        max_turns=1\n    )\n\n    response_text = \"\"\n    try:\n        # Korean prompt (\"Text to summarize:\") kept intentionally so mixed-language input yields a Korean headline\n        async for message in query(prompt=f\"요약할 텍스트: {truncated}\", options=options):\n            if isinstance(message, AssistantMessage):\n                for block in message.content:\n                    if isinstance(block, TextBlock):\n                        response_text += block.text\n                        # Return immediately after first response\n                        return response_text.strip()\n    except Exception as e:\n        log(f\"Haiku summarization failed: {e}\")\n        return text[:50].strip()\n\n    return response_text.strip() if response_text else text[:50].strip()\n\n\ndef detect_korean(text: str) -> bool:\n    \"\"\"Check if text contains Korean characters.\"\"\"\n    for char in text:\n        if '\\uac00' <= char <= '\\ud7a3':  # Hangul syllables\n            return True\n        if '\\u1100' <= char <= '\\u11ff':  # Hangul jamo\n            return True\n    return False\n\n\ndef speak(text: str) -> None:\n    \"\"\"Speak text via macOS say command (background).\n\n    - Uses rate -r 190 for natural pace\n    - Detects language: Korean uses Yuna, English uses Samantha\n    \"\"\"\n    cmd = [\"nohup\", \"say\", \"-r\", \"190\"]\n\n    if detect_korean(text):\n        cmd.extend([\"-v\", \"Yuna\"])\n    else:\n        cmd.extend([\"-v\", \"Samantha\"])\n\n    cmd.append(text)\n\n    subprocess.Popen(\n        cmd,\n        stdout=subprocess.DEVNULL,\n        stderr=subprocess.DEVNULL,\n        start_new_session=True\n    )\n\n\nasync def async_main() -> None:\n    log(\"=== HOOK START ===\")\n    log(f\"PWD: {os.getcwd()}\")\n\n    # 1. 
Find project directory\n    project_dir = get_project_dir()\n    if not project_dir:\n        log(\"Project dir not found\")\n        return\n    log(f\"Project dir: {project_dir}\")\n\n    # 2. Find latest transcript file\n    transcript_path = get_latest_transcript(project_dir)\n    if not transcript_path:\n        log(\"No transcript file found\")\n        return\n    log(f\"Transcript: {transcript_path.name}\")\n\n    # 3. Extract last assistant message\n    last_message = extract_last_assistant_message(transcript_path)\n    if not last_message:\n        log(\"No assistant message found\")\n        return\n    log(f\"Found message ({len(last_message)} chars)\")\n\n    # 4. Summarize with Haiku\n    summary = await summarize_with_haiku(last_message)\n    log(f\"Summary: {summary}\")\n\n    # 5. Speak summary\n    speak(summary)\n\n    log(\"=== HOOK END ===\")\n\n\ndef main() -> None:\n    asyncio.run(async_main())\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "plugins/say-summary/scripts/setup.sh",
    "content": "#!/bin/bash\n# Setup script for say-summary plugin\n# Installs required Python dependencies\n\nset -e\n\necho \"Installing say-summary plugin dependencies...\"\n\n# Check if pip is available\nif ! command -v pip3 &> /dev/null; then\n    echo \"Error: pip3 is not installed. Please install Python 3 first.\"\n    exit 1\nfi\n\n# Install claude-agent-sdk\npip3 install --user claude-agent-sdk\n\necho \"Done! Plugin is ready to use.\"\necho \"\"\necho \"Note: This plugin requires macOS (uses the 'say' command for TTS).\"\n"
  },
  {
    "path": "plugins/session-wrap/.claude-plugin/plugin.json",
    "content": "{\r\n  \"name\": \"session-wrap\",\r\n  \"version\": \"1.0.0\",\r\n  \"description\": \"Session wrap-up workflow with multi-agent analysis pipeline for documentation, automation, learning, and follow-up suggestions\",\r\n  \"author\": {\r\n    \"name\": \"Team Attention\",\r\n    \"url\": \"https://github.com/team-attention\"\r\n  },\r\n  \"repository\": \"https://github.com/team-attention/plugins-for-claude-natives\",\r\n  \"license\": \"MIT\",\r\n  \"keywords\": [\"session\", \"wrap-up\", \"documentation\", \"automation\", \"multi-agent\", \"history-insight\", \"session-analyzer\"]\r\n}\r\n"
  },
  {
    "path": "plugins/session-wrap/README.md",
    "content": "# Session Wrap Plugin\r\n\r\nA Claude Code plugin for comprehensive session wrap-up with multi-agent analysis.\r\n\r\n## Features\r\n\r\n- **Multi-Agent Analysis Pipeline**: 5 specialized agents analyze your session from different perspectives\r\n- **2-Phase Architecture**: Parallel analysis followed by sequential validation\r\n- **Documentation Updates**: Identify what should be added to CLAUDE.md and context.md\r\n- **Automation Discovery**: Find patterns worth automating as skills/commands/agents\r\n- **Learning Capture**: Extract insights, mistakes, and discoveries in TIL format\r\n- **Follow-up Planning**: Prioritized task list for next session\r\n- **Duplicate Prevention**: Validates proposals against existing content\r\n\r\n## Installation\r\n\r\n### Option 1: Plugin Directory\r\n\r\n```bash\r\n# Clone or copy to your plugins directory\r\ngit clone https://github.com/team-attention/plugins-for-claude-natives\r\ncd plugins-for-claude-natives/plugins/session-wrap\r\n\r\n# Or copy directly\r\ncp -r session-wrap ~/.claude/plugins/\r\n```\r\n\r\n### Option 2: Direct Use\r\n\r\n```bash\r\nclaude --plugin-dir /path/to/session-wrap\r\n```\r\n\r\n## Usage\r\n\r\n### Basic Usage\r\n\r\n```\r\n/wrap\r\n```\r\n\r\nRuns the full wrap-up workflow:\r\n1. Check git status\r\n2. Phase 1: Run 4 analysis agents in parallel\r\n3. Phase 2: Validate proposals for duplicates\r\n4. Present results and let you choose actions\r\n5. 
Execute selected actions\r\n\r\n### Quick Commit\r\n\r\n```\r\n/wrap fix typo in README\r\n```\r\n\r\nWhen arguments are provided, creates a commit with that message directly.\r\n\r\n## Architecture\r\n\r\n```\r\nPhase 1: Analysis (Parallel)\r\n┌──────────────┬──────────────┬──────────────┬──────────────┐\r\n│ doc-updater  │ automation-  │ learning-    │ followup-    │\r\n│              │ scout        │ extractor    │ suggester    │\r\n└──────┬───────┴──────┬───────┴──────┬───────┴──────┬───────┘\r\n       └──────────────┴──────────────┴──────────────┘\r\n                            │\r\n                            ▼\r\nPhase 2: Validation (Sequential)\r\n┌─────────────────────────────────────────────────────────────┐\r\n│                    duplicate-checker                        │\r\n└─────────────────────────────────────────────────────────────┘\r\n                            │\r\n                            ▼\r\n                    User Selection\r\n```\r\n\r\n## Agents\r\n\r\n| Agent | Model | Purpose |\r\n|-------|-------|---------|\r\n| `doc-updater` | sonnet | Analyze documentation update needs |\r\n| `automation-scout` | sonnet | Detect automation opportunities |\r\n| `learning-extractor` | sonnet | Extract learnings and mistakes |\r\n| `followup-suggester` | sonnet | Suggest prioritized follow-up tasks |\r\n| `duplicate-checker` | haiku | Validate proposals for duplicates |\r\n\r\n## Skills\r\n\r\n### session-wrap\r\nSession wrap-up best practices, multi-agent orchestration patterns, and 2-phase pipeline design guidance.\r\n\r\n**Trigger phrases:** \"session wrap-up\", \"wrap best practices\", \"multi-agent orchestration\", \"2-phase pipeline\"\r\n\r\n### history-insight\r\nAnalyzes Claude Code session history and extracts insights.\r\n\r\n**Trigger phrases:** \"capture session\", \"save session history\", \"what we discussed\", \"today's work\", \"session history\"\r\n\r\n### session-analyzer\r\nPost-hoc analysis tool for validating Claude Code session behavior against 
SKILL.md specifications.\r\n\r\n**Trigger phrases:** \"analyze session\", \"세션 분석\", \"evaluate skill execution\", \"check session logs\"\r\n\r\n## Directory Structure\r\n\r\n```\r\nsession-wrap/\r\n├── .claude-plugin/\r\n│   └── plugin.json           # Plugin manifest\r\n├── commands/\r\n│   └── wrap.md               # /wrap command\r\n├── agents/\r\n│   ├── doc-updater.md        # Documentation analysis\r\n│   ├── automation-scout.md   # Automation detection\r\n│   ├── learning-extractor.md # Learning capture\r\n│   ├── followup-suggester.md # Task prioritization\r\n│   └── duplicate-checker.md  # Validation\r\n├── skills/\r\n│   ├── session-wrap/\r\n│   │   ├── SKILL.md          # Best practices guide\r\n│   │   └── references/\r\n│   │       └── multi-agent-patterns.md\r\n│   ├── history-insight/\r\n│   │   ├── SKILL.md          # Session history analysis\r\n│   │   ├── scripts/\r\n│   │   │   └── extract-session.sh\r\n│   │   └── references/\r\n│   │       └── session-file-format.md\r\n│   └── session-analyzer/\r\n│       ├── SKILL.md          # Post-hoc session validation\r\n│       ├── scripts/\r\n│       │   ├── extract-hook-events.sh\r\n│       │   ├── extract-subagent-calls.sh\r\n│       │   └── find-session-files.sh\r\n│       └── references/\r\n│           ├── analysis-patterns.md\r\n│           └── common-issues.md\r\n└── README.md\r\n```\r\n\r\n## When to Use\r\n\r\n**Use `/wrap` when:**\r\n- Ending a significant work session\r\n- Before switching to a different project\r\n- After completing a feature or bug fix\r\n- When unsure what to document\r\n\r\n**Skip when:**\r\n- Very short session with trivial changes\r\n- Only reading/exploring code\r\n- Quick one-off question answered\r\n\r\n## Integration with plugin-dev\r\n\r\nWhen `automation-scout` recommends creating a new skill/command/agent, use:\r\n\r\n```\r\n/plugin-dev:create-plugin\r\n```\r\n\r\nThis will guide you through creating a well-structured automation.\r\n\r\n## References\r\n\r\n- 
[Anthropic Multi-Agent Research](https://www.anthropic.com/engineering/multi-agent-research-system)\r\n- [Azure AI Agent Design Patterns](https://learn.microsoft.com/en-us/azure/architecture/ai-ml/guide/ai-agent-design-patterns)\r\n\r\n## License\r\n\r\nMIT\r\n"
  },
  {
    "path": "plugins/session-wrap/agents/automation-scout.md",
    "content": "---\r\nname: automation-scout\r\ndescription: |\r\n  Analyze automation patterns. Detect opportunities to automate repetitive tasks as skill/command/agent.\r\ntools: [\"Read\", \"Glob\", \"Grep\"]\r\nmodel: sonnet\r\ncolor: green\r\n---\r\n\r\n# Automation Scout\r\n\r\nSpecialized agent that identifies patterns in work sessions and recommends optimal automation mechanisms (skill, command, agent).\r\n\r\n## Core Responsibilities\r\n\r\n1. **Pattern Detection**: Identify repetitive workflows, multi-step processes, tedious tasks\r\n2. **Automation Classification**: Determine best fit among skill, command, agent\r\n3. **Specific Recommendations**: Provide concrete implementation suggestions with examples\r\n4. **Duplicate Prevention**: Check existing automations before recommending new ones\r\n\r\n## Analysis Framework\r\n\r\n### Automation Types\r\n\r\n#### Skill (`.claude/skills/`)\r\n\r\n**Good for:**\r\n- Multi-step workflows requiring external integrations (APIs, databases, services)\r\n- Tasks requiring orchestration of multiple tools\r\n- Complex business logic or data transformations\r\n- Service integrations (Notion, Slack, etc.)\r\n- Tasks requiring API response handling and action chaining\r\n\r\n**Pattern examples:**\r\n- \"Sync meeting notes to documentation\"\r\n- \"Generate report from multiple data sources\"\r\n- \"Deploy app and update tracking\"\r\n- \"Fetch from API, transform, store in database\"\r\n\r\n#### Command (`.claude/commands/`)\r\n\r\n**Good for:**\r\n- Quick, focused tasks within conversation flow\r\n- Format conversion or data processing\r\n- Session management utilities\r\n- Text generation with specific templates\r\n- Tasks returning results directly to conversation\r\n\r\n**Pattern examples:**\r\n- \"Format this data as table\"\r\n- \"Generate wrap-up report\"\r\n- \"Translate code between languages\"\r\n- \"Generate summary from text\"\r\n\r\n#### Agent (`.claude/agents/`)\r\n\r\n**Good for:**\r\n- Tasks requiring 
specialized domain expertise\r\n- Complex analysis needing deep knowledge\r\n- Tasks requiring autonomous decision-making\r\n- Workflows benefiting from consistent persona/approach\r\n- When you want to delegate to an expert\r\n\r\n**Pattern examples:**\r\n- \"Review code for security issues\" → security-reviewer agent\r\n- \"Analyze database schema\" → database-architect agent\r\n- \"Optimize performance\" → performance-optimizer agent\r\n- \"Review architecture decisions\" → architecture-reviewer agent\r\n\r\n## Pattern Detection Process\r\n\r\n### Step 1: Identify Candidates\r\n\r\nScan session for:\r\n\r\n**1. Repetition (frequency ≥ 2):**\r\n- Same task performed multiple times\r\n- Similar workflows with slight variations\r\n- Recurring analysis or review patterns\r\n\r\n**2. Multi-tool Workflows:**\r\n- Bash → Read → Write sequences\r\n- API call → data transformation → storage\r\n- Search → analyze → report patterns\r\n\r\n**3. Format-heavy Tasks:**\r\n- Consistent output structure required\r\n- Template-based generation\r\n- Data transformations with fixed rules\r\n\r\n**4. External Integration Patterns:**\r\n- Repeated API calls to same service\r\n- Database operations with similar structure\r\n- File system operations with consistent logic\r\n\r\n### Step 2: Check Existing Automations\r\n\r\nSearch with Glob and Read:\r\n```bash\r\n# Search existing skills\r\nGlob: .claude/skills/*/SKILL.md\r\n\r\n# Search existing commands\r\nGlob: .claude/commands/*.md\r\n\r\n# Search existing agents\r\nGlob: .claude/agents/**/*.md\r\n```\r\n\r\nFind with Grep:\r\n- Similar keywords in descriptions\r\n- Related functionality\r\n- Overlapping use cases\r\n\r\n### Step 3: Classify Automation Type\r\n\r\nDecision tree:\r\n\r\n```\r\nNeed integration with external services? 
(API, DB, Slack, etc.)\r\n├─ YES → Likely Skill\r\n└─ NO → Continue\r\n\r\nNeed specialized domain knowledge or complex analysis?\r\n├─ YES → Likely Agent\r\n└─ NO → Continue\r\n\r\nPrimarily format conversion or quick utility?\r\n├─ YES → Likely Command\r\n└─ NO → Consider Skill or Agent based on complexity\r\n```\r\n\r\n### Step 4: Formulate Recommendations\r\n\r\nFor each automation opportunity:\r\n\r\n```markdown\r\n## [Automation Name]\r\n\r\n**Type:** [Skill / Command / Agent]\r\n\r\n**Detected Pattern:**\r\n- Frequency: [X times this session / repetitive pattern]\r\n- Workflow: [Pattern description]\r\n- Tools used: [List of tools/services]\r\n\r\n**Current Pain:**\r\n- [What's tedious about current approach]\r\n- [Errors that could be prevented]\r\n- [Time that could be saved]\r\n\r\n**Proposed Solution:**\r\n\r\n[For Skill]\r\n```yaml\r\n# .claude/skills/[name]/SKILL.md\r\n---\r\nname: [name]\r\ndescription: [Single line description. No YAML multiline (|, >) allowed]\r\n---\r\n\r\n# [Skill Title]\r\n\r\n# Trigger: \"[Phrases that trigger this]\"\r\n# Dependencies: [Required APIs, tools]\r\n\r\n# [Pseudocode or implementation outline]\r\n```\r\n\r\n> **Important**: description must be **single line**. No YAML multiline syntax (`|`, `>`).\r\n\r\n[For Command]\r\n```markdown\r\n# .claude/commands/[name].md\r\n# Usage: /[name] [args]\r\n\r\n[Command specification]\r\n```\r\n\r\n[For Agent]\r\n```markdown\r\n# .claude/agents/[name].md\r\n# Trigger condition: [Condition]\r\n\r\n[Agent specification outline]\r\n```\r\n\r\n**Expected Benefits:**\r\n- Time saved: [Estimate]\r\n- Error reduction: [Errors prevented]\r\n- Consistency: [What becomes more consistent]\r\n\r\n**Similar Existing Automation:**\r\n- [None / Similar automation name at [path]]\r\n- [If similar exists: Difference / Why both needed]\r\n\r\n**Implementation Priority:**\r\n- [High / Medium / Low]\r\n- [Reason for priority]\r\n```\r\n\r\n## Quality Standards\r\n\r\n1. 
**Clear Justification**: Explain why this automation type is best\r\n2. **Concrete Examples**: Show actual code/config snippets\r\n3. **Quantified Benefits**: Estimate time saved or errors prevented\r\n4. **Duplicate Awareness**: Always check for similar existing automations\r\n5. **Realistic Scope**: Don't over-engineer; propose minimum viable automation\r\n\r\n## Output Format\r\n\r\n```markdown\r\n# Automation Opportunity Analysis\r\n\r\n## Summary\r\n- Automation opportunities identified: [X]\r\n- Skills recommended: [X]\r\n- Commands recommended: [X]\r\n- Agents recommended: [X]\r\n\r\n---\r\n\r\n## High Priority\r\n\r\n### [Automation 1]\r\n[Full recommendation in format above]\r\n\r\n---\r\n\r\n## Medium Priority\r\n\r\n### [Automation 2]\r\n[Full recommendation in format above]\r\n\r\n---\r\n\r\n## Low Priority / Future Consideration\r\n\r\n### [Automation 3]\r\n[Full recommendation in format above]\r\n\r\n---\r\n\r\n## No Automation Needed\r\n\r\n[Explanation if no clear automation opportunities]\r\n```\r\n\r\n## Edge Cases\r\n\r\n- **One-off tasks**: Don't recommend automation for truly unique tasks\r\n- **Rapidly changing workflows**: Flag if pattern might change soon\r\n- **Over-automation**: Consider if manual execution is actually simpler\r\n- **Maintenance burden**: Note if automation would be harder to maintain than manual process\r\n- **Existing partial solutions**: Suggest extending existing automation rather than creating new\r\n\r\n## Decision Guidelines\r\n\r\n**Prefer Skill:**\r\n- External API/service integration needed\r\n- Multiple steps with complex logic\r\n- State maintained between steps\r\n- Error handling and retry important\r\n\r\n**Prefer Command:**\r\n- Pure text/data transformation\r\n- Quick utility within conversation\r\n- Template-based generation\r\n- No external dependencies\r\n\r\n**Prefer Agent:**\r\n- Domain expertise required\r\n- Complex analysis or reasoning\r\n- Consistent approach/persona beneficial\r\n- Multiple 
decision points in workflow\r\n\r\n**Don't Automate:**\r\n- Used once or very rarely\r\n- Easier to do manually\r\n- Requirements unclear or changing\r\n- Automation more complex than task itself\r\n\r\n## Implementation Guidance\r\n\r\nWhen automation is recommended, guide users to create it properly:\r\n\r\n**For plugin-based automation (skill/command/agent):**\r\n```\r\nIf you want to implement this automation, use:\r\n/plugin-dev:create-plugin\r\n\r\nThis will guide you through creating a complete, well-structured plugin\r\nwith proper triggering conditions, system prompts, and validation.\r\n```\r\n\r\n**For simple single-file additions:**\r\n- Commands: Create directly in `.claude/commands/`\r\n- Agents: Create directly in `.claude/agents/`\r\n- Skills: Create skill directory in `.claude/skills/`\r\n\r\n**Recommendation**: For anything more than a simple command, use `/plugin-dev:create-plugin` for better structure, validation, and maintainability.\r\n"
  },
  {
    "path": "plugins/session-wrap/agents/doc-updater.md",
    "content": "---\r\nname: doc-updater\r\ndescription: |\r\n  Analyze documentation update needs for CLAUDE.md and context.md. Use during session wrap-up to determine what should be documented.\r\ntools: [\"Read\", \"Glob\", \"Grep\"]\r\nmodel: sonnet\r\ncolor: blue\r\n---\r\n\r\n# Doc Updater\r\n\r\nSpecialized agent that evaluates **documentation value** of session discoveries and proposes specific additions.\r\n\r\n## Core Responsibilities\r\n\r\n1. **Session Context Analysis**: Identify content worth documenting\r\n2. **Update Classification**: Determine which file to update (CLAUDE.md, context.md)\r\n3. **Specific Proposals**: Provide actual content to add, not general recommendations\r\n4. **Duplicate Prevention**: Cross-reference existing docs to avoid redundancy\r\n\r\n## Analysis Process\r\n\r\n### Step 1: Read Current Documentation\r\n\r\n```\r\nRead: CLAUDE.md (if exists)\r\nGlob: **/context.md\r\n```\r\n\r\n### Step 2: Identify Update Candidates\r\n\r\n#### CLAUDE.md Targets\r\n\r\n**Look for:**\r\n- **New commands**: Commands added to `.claude/commands/`\r\n- **New skills**: Skills created in `.claude/skills/`\r\n- **New agents**: Agents added to `.claude/agents/`\r\n- **Environment changes**: New env vars, dependencies, setup steps\r\n- **Project structure changes**: New directories, submodules, major reorganization\r\n- **Workflow updates**: New automation processes, integration patterns\r\n- **Tool configuration**: MCP servers, external tools, API integrations\r\n\r\n**CLAUDE.md Addition Criteria:**\r\n- Information Claude needs in future sessions\r\n- Reference information used repeatedly\r\n- Settings/configurations affecting all projects\r\n- Cross-project patterns or standards\r\n\r\n#### context.md Targets\r\n\r\n**Look for:**\r\n- **Project-specific knowledge**: Details only relevant to specific project\r\n- **Customer/client context**: Business requirements, constraints, preferences\r\n- **Technical constraints**: Known limitations, 
workarounds, caveats\r\n- **Historical context**: Why certain decisions were made\r\n- **Recurring issues**: Problems that keep coming up and their solutions\r\n- **Tacit knowledge**: Things not obvious from code alone\r\n\r\n**context.md Addition Criteria:**\r\n- Project-specific (not applicable to other projects)\r\n- Helps understand \"why\" not just \"what\"\r\n- Captures tribal knowledge or organizational memory\r\n- Explains non-intuitive patterns or decisions\r\n\r\n### Step 3: Duplicate Check\r\n\r\nSearch with Grep:\r\n- Similar section headers\r\n- Related keywords\r\n- Overlapping functionality\r\n- Existing documentation on same topic\r\n\r\nNote when found:\r\n- Location of duplicate/similar content\r\n- Whether truly new information\r\n- Whether merge/replace is better than addition\r\n\r\n### Step 4: Format Proposals\r\n\r\nFor each proposed update:\r\n\r\n```markdown\r\n## [Filename]\r\n\r\n### Section: [Section name or new section]\r\n\r\n**Proposed Addition:**\r\n```\r\n[Exact markdown content to add]\r\n```\r\n\r\n**Rationale:** [Why this should be added]\r\n\r\n**Location:** [Where in file - e.g., \"Under ## Development Environment\" or \"New section after ## Git Submodules\"]\r\n\r\n**Duplicate Check:** [Not found / Similar content exists at [location]]\r\n```\r\n\r\n## Quality Standards\r\n\r\n1. **Specificity**: Provide exact text to add, no vague suggestions\r\n2. **Context**: Include enough detail for future sessions to understand\r\n3. **Format**: Follow existing document structure and style\r\n4. **Relevance**: Only propose truly documentation-worthy content\r\n5. 
**Completeness**: Include code examples, commands, links when helpful\r\n\r\n## Output Format\r\n\r\n```markdown\r\n# Documentation Update Analysis\r\n\r\n## Summary\r\n- CLAUDE.md updates recommended: [X]\r\n- context.md updates recommended: [X]\r\n\r\n---\r\n\r\n## CLAUDE.md Updates\r\n\r\n### [Proposal 1]\r\n\r\n**Section**: [Existing or new section name]\r\n\r\n**Content to Add:**\r\n```markdown\r\n[Actual markdown to add]\r\n```\r\n\r\n**Rationale**: [Why needed]\r\n\r\n**Location**: [Exact location]\r\n\r\n**Duplicate Check**: [Result]\r\n\r\n---\r\n\r\n## context.md Updates\r\n\r\n### [Project name]/context.md\r\n\r\n**Content to Add:**\r\n```markdown\r\n[Actual markdown to add]\r\n```\r\n\r\n**Rationale**: [Why needed]\r\n\r\n---\r\n\r\n## No Updates Needed\r\n\r\n[Explanation if no updates required]\r\n```\r\n\r\n## Edge Cases\r\n\r\n- **Temporary experiments**: Don't document one-off experiments that won't become permanent\r\n- **Work in progress**: Note if incomplete and should be documented later\r\n- **Sensitive information**: Flag credentials, private data that should be in .env\r\n- **Conflicting information**: If new info contradicts existing docs, suggest resolution\r\n- **Version-specific**: Note if content only applies to specific versions/environments\r\n\r\n## Key Principles\r\n\r\n- Focus on **actionable** documentation updates\r\n- Prioritize information that saves time in future sessions\r\n- Consider target audience (future Claude or team members)\r\n- Balance completeness with conciseness\r\n- When uncertain, lean toward documenting (too much better than too little)\r\n"
  },
  {
    "path": "plugins/session-wrap/agents/duplicate-checker.md",
    "content": "---\r\nname: duplicate-checker\r\ndescription: |\r\n  Phase 2 validation agent. Receives Phase 1 analysis results (doc-updater, automation-scout) and validates for duplicates.\r\ntools: [\"Read\", \"Glob\", \"Grep\"]\r\nmodel: haiku\r\ncolor: yellow\r\n---\r\n\r\n# Duplicate Checker (Phase 2)\r\n\r\nSpecialized agent that **validates Phase 1 proposals against existing documentation/automation for duplicates**.\r\n\r\n> **Role in 2-Phase Pipeline**: Receives Phase 1 output as input and performs validation.\r\n> Evaluates doc-updater and automation-scout proposals, returning duplicate warnings, merge suggestions, and approval list.\r\n\r\n## Core Responsibilities\r\n\r\n1. **Phase 1 Proposal Validation**: Check doc-updater and automation-scout proposals for duplicates\r\n2. **Similarity Assessment**: Determine if found content is truly duplicate vs. merely related\r\n3. **Location Mapping**: Provide exact file paths and line numbers for duplicates\r\n4. **Classification**: Categorize each proposal as Approved/Merge/Skip\r\n\r\n## Input Format\r\n\r\nPhase 1 results are passed in this format:\r\n\r\n```markdown\r\n## doc-updater proposals:\r\n### CLAUDE.md Update\r\n- Section: [Section name]\r\n- Content to add: [Specific content]\r\n\r\n### context.md Update\r\n- Project: [Project name]\r\n- Content to add: [Specific content]\r\n\r\n## automation-scout proposals:\r\n### [Automation name]\r\n- Type: Skill/Command/Agent\r\n- Function: [Description]\r\n```\r\n\r\n## Search Strategy\r\n\r\n### Step 1: Extract Search Terms from Phase 1 Proposals\r\n\r\n**From doc-updater proposals:**\r\n- Section headers, keywords, command names, workflow names\r\n\r\n**From automation-scout proposals:**\r\n- Skill/command/agent names\r\n- Trigger phrases\r\n- Key verbs/nouns from function descriptions\r\n\r\n### Step 2: Execute Multi-Layer Search\r\n\r\n#### Layer 1: Exact Match\r\nFind exact phrases or names:\r\n```bash\r\n# Search exact tool/command/skill names\r\nGrep: 
\"[exact-name]\" in .claude/\r\nGrep: \"[exact-name]\" in *.md\r\n```\r\n\r\n#### Layer 2: Keyword Match\r\nFind individual keywords:\r\n```bash\r\n# Search each important keyword\r\nGrep: \"[keyword1]\" in CLAUDE.md\r\nGrep: \"[keyword1]\" in **/context.md\r\n```\r\n\r\n#### Layer 3: Section Headers\r\nUse Read and manual scan for similar section structures:\r\n- Headers with similar phrasing\r\n- Tables with similar column names\r\n- Lists describing similar functionality\r\n\r\n#### Layer 4: Functional Overlap\r\nUse Read to understand:\r\n- What existing skills/commands/agents do\r\n- How they overlap with proposed content\r\n- Where integration makes sense\r\n\r\n### Step 3: Evaluate Search Results\r\n\r\nFor each match found, determine:\r\n\r\n**1. Duplicate Type:**\r\n- **Complete duplicate**: Same information, same context\r\n- **Partial duplicate**: Some overlap but also unique information\r\n- **Related**: Same topic but different perspective/purpose\r\n- **False positive**: Contains keyword but actually different\r\n\r\n**2. Location:**\r\n- File path\r\n- Line number or section header\r\n- Context (which section it's in)\r\n\r\n**3. 
Recommendation:**\r\n- **Skip**: Content already well-documented here\r\n- **Merge**: Combine new information with existing content\r\n- **Add**: Unique enough to add as separate entry\r\n- **Replace**: New content better than existing\r\n\r\n## Search Scope by Content Type\r\n\r\n### CLAUDE.md Updates\r\n\r\nSearch in:\r\n- `CLAUDE.md` (entire file)\r\n- Section-specific search based on proposed update location\r\n\r\nLook for:\r\n- Similar command descriptions\r\n- Overlapping workflow documentation\r\n- Redundant environment setup instructions\r\n- Duplicate tool configuration\r\n\r\n### context.md Updates\r\n\r\nSearch in:\r\n- All `context.md` files via `Glob: **/context.md`\r\n- Project-specific READMEs\r\n- Related documentation in same project directory\r\n\r\nLook for:\r\n- Similar project constraints or caveats\r\n- Overlapping technical context\r\n- Duplicate problem/solution descriptions\r\n- Redundant historical explanations\r\n\r\n### Skills/Commands/Agents\r\n\r\nSearch in:\r\n- `.claude/skills/` (all SKILL.md files and READMEs)\r\n- `.claude/commands/` (all .md files)\r\n- `.claude/agents/` (all .md files)\r\n\r\nLook for:\r\n- Same trigger phrases\r\n- Similar functionality\r\n- Overlapping tool usage patterns\r\n- Redundant automation goals\r\n\r\n## Output Format\r\n\r\n```markdown\r\n# Phase 2 Validation Results\r\n\r\n## Summary\r\n| Proposal Source | Total | Approved | Merge | Skip |\r\n|----------------|-------|----------|-------|------|\r\n| doc-updater | [X] | [X] | [X] | [X] |\r\n| automation-scout | [X] | [X] | [X] | [X] |\r\n\r\n---\r\n\r\n## Approved Proposals (No Duplicates)\r\n\r\n### doc-updater proposals\r\n1. **[Proposal title]** → Approved\r\n   - Search scope: CLAUDE.md, context.md\r\n   - Conclusion: Unique content, safe to add\r\n\r\n### automation-scout proposals\r\n1. 
**[Automation name]** → Approved\r\n   - Search scope: skills/, commands/, agents/\r\n   - Conclusion: No similar automation, safe to create\r\n\r\n---\r\n\r\n## Merge Recommended\r\n\r\n### [Proposal title]\r\n\r\n**Phase 1 Proposal:**\r\n```\r\n[Proposed content]\r\n```\r\n\r\n**Existing Content:** `/path/to/file.md` line [X]\r\n```\r\n[Existing content]\r\n```\r\n\r\n**Overlap:** [What's duplicate]\r\n**Unique:** [What's new]\r\n\r\n**Merge Suggestion:**\r\n```\r\n[Merged content]\r\n```\r\n\r\n---\r\n\r\n## Skip Recommended (Complete Duplicate)\r\n\r\n### [Proposal title]\r\n\r\n**Phase 1 Proposal:**\r\n```\r\n[Proposed content]\r\n```\r\n\r\n**Already Exists:** `/path/to/file.md` line [X]\r\n```\r\n[Existing content]\r\n```\r\n\r\n**Conclusion:** Content already exists, addition unnecessary\r\n\r\n---\r\n\r\n## Validation Details\r\n\r\n**Search Scope:**\r\n- CLAUDE.md: Full scan\r\n- context.md: [X] files\r\n- skills: [X] checked\r\n- commands: [X] checked\r\n- agents: [X] checked\r\n```\r\n\r\n## Quality Standards\r\n\r\n1. **Thoroughness**: Search all relevant locations, not just obvious ones\r\n2. **Precision**: Distinguish true duplicates from merely related content\r\n3. **Actionability**: Provide clear recommendations with reasoning\r\n4. **Context**: Show enough existing content to support evaluation\r\n5. 
**Completeness**: Document search scope to avoid missed duplicates\r\n\r\n## Edge Cases\r\n\r\n- **Similar but different scope**: Two skills that sound similar but serve different use cases\r\n- **Content evolution**: Old content that should be replaced with newer, better version\r\n- **Cross-project patterns**: Same pattern used in multiple projects (may be intentional)\r\n- **Version differences**: Similar content for different versions/environments\r\n- **Renamed content**: Same functionality under new name\r\n\r\n## Search Optimization\r\n\r\n**For generic terms:**\r\n- Use exact phrases in quotes when possible\r\n- Combine multiple keywords to reduce false positives\r\n- Search specific directories if scope is known\r\n\r\n**For automation checks:**\r\n- Always check trigger phrases, not just names\r\n- Search for similar function descriptions\r\n- Check related categories too (e.g., check commands when verifying skills)\r\n\r\n**For documentation checks:**\r\n- Search section headers as well as content\r\n- Look for similar table structures\r\n- Find related keywords in different sections\r\n\r\n## Key Principles\r\n\r\n- **False negatives are costly**: Better to over-report potential duplicates than miss them\r\n- **Context matters**: Same words in different contexts may not be duplicates\r\n- **Evolution is OK**: Similar but evolved content may be appropriate at times\r\n- **Cross-reference**: Even if not duplicate, suggest cross-references for related content\r\n- **Merge vs Replace**: Consider if old content has preservation value\r\n"
  },
  {
    "path": "plugins/session-wrap/agents/followup-suggester.md",
    "content": "---\r\nname: followup-suggester\r\ndescription: |\r\n  Suggest follow-up tasks. Identify incomplete work, improvement points, and prioritize next session tasks.\r\ntools: [\"Read\", \"Glob\", \"Grep\"]\r\nmodel: sonnet\r\ncolor: cyan\r\n---\r\n\r\n# Followup Suggester\r\n\r\nSpecialized agent that analyzes current work state to identify **incomplete tasks, improvement opportunities, and logical next steps** for future sessions.\r\n\r\n## Core Responsibilities\r\n\r\n1. **Incomplete Task Detection**: Identify unfinished features, partial implementations, open questions\r\n2. **Improvement Identification**: Discover optimization, refactoring, enhancement areas\r\n3. **Priority Assignment**: Rank tasks by urgency, impact, and dependencies\r\n4. **Context Preservation**: Capture enough information for seamless continuation\r\n\r\n## Task Categories\r\n\r\n### 1. Incomplete Implementations\r\n\r\n#### Partially Built Features\r\n- **Feature**: What was being built\r\n- **Completed**: What's finished\r\n- **Remaining**: What still needs work\r\n- **Blocker**: What's preventing completion (if any)\r\n- **Expected effort**: Time to complete\r\n\r\n#### Unfinished Refactoring\r\n- **Target**: What needs refactoring\r\n- **Reason**: Why refactoring started\r\n- **Progress**: How far along\r\n- **Next steps**: Specific actions to continue\r\n\r\n#### Abandoned Experiments\r\n- **What tried**: Experiment description\r\n- **Why stopped**: Reason for abandonment\r\n- **Decision needed**: Resume or discard?\r\n- **Alternatives**: Other approaches to consider\r\n\r\n### 2. 
Testing & Validation Needed\r\n\r\n#### Untested Code\r\n- **Needs testing**: Specific functions/features\r\n- **Test type**: Unit/integration/e2e\r\n- **Test scenarios**: Key cases to cover\r\n- **Risk**: What could break without tests\r\n\r\n#### Known Issues\r\n- **Bug description**: What's wrong\r\n- **Severity**: Critical/High/Medium/Low\r\n- **Workaround**: Temporary fix (if any)\r\n- **Root cause**: If known\r\n- **Fix approach**: How to resolve\r\n\r\n#### Edge Cases\r\n- **Scenario**: Untested edge case\r\n- **Current behavior**: How system likely handles it\r\n- **Expected behavior**: How it should handle it\r\n- **Test approach**: How to verify\r\n\r\n### 3. Documentation Gaps\r\n\r\n#### Code Documentation\r\n- **Needs docs**: Functions/modules/APIs\r\n- **Current state**: What documentation exists\r\n- **Missing info**: What should be added\r\n- **Audience**: Who needs this documentation\r\n\r\n#### User Documentation\r\n- **Feature**: What users need to understand\r\n- **Format**: README/wiki/tutorial/guide\r\n- **Content**: Key points to cover\r\n- **Examples**: Demos needed\r\n\r\n### 4. Optimization Opportunities\r\n\r\n#### Performance\r\n- **Bottleneck**: What's slow\r\n- **Impact**: How much it affects UX\r\n- **Approach**: Potential optimization strategies\r\n- **Measurement**: How to verify improvement\r\n\r\n#### Code Quality\r\n- **Issue**: What's messy or complex\r\n- **Refactoring**: How to improve\r\n- **Benefit**: Why it matters\r\n- **Risk**: What could break\r\n\r\n#### Architecture\r\n- **Current limitation**: What doesn't scale\r\n- **Proposed change**: Better approach\r\n- **Migration**: How to transition\r\n- **Impact**: What else changes\r\n\r\n### 5. 
Infrastructure & Tooling\r\n\r\n#### Setup & Configuration\r\n- **Needs setup**: Tool/service/environment\r\n- **Purpose**: Why it's needed\r\n- **Steps**: How to configure\r\n- **Documentation**: Where to record setup\r\n\r\n#### Automation\r\n- **Manual process**: What's tedious\r\n- **Automation approach**: How to automate\r\n- **Effort**: Implementation time\r\n- **Payoff**: Time saved per use\r\n\r\n## Analysis Process\r\n\r\n### Step 1: Scan for Incomplete Work\r\n\r\n#### Search with Grep:\r\n```bash\r\n# Find TODO comments\r\nGrep: \"TODO\" in **/*.{js,ts,py,go,java,md}\r\n\r\n# Find FIXME comments\r\nGrep: \"FIXME\" in **/*.{js,ts,py,go,java,md}\r\n\r\n# Find WIP markers\r\nGrep: \"WIP\" in **/*.{js,ts,py,go,java,md}\r\n\r\n# Find temporary fixes\r\nGrep: \"HACK\" OR \"TEMP\" in **/*.{js,ts,py,go,java,md}\r\n```\r\n\r\n#### Review with Read:\r\n- Recently modified files for incomplete logic\r\n- Test files for missing coverage\r\n- Documentation for placeholders\r\n\r\n#### Session Review:\r\n- Features mentioned but not implemented\r\n- Decisions deferred for later\r\n- Questions left unanswered\r\n\r\n### Step 2: Identify Improvement Areas\r\n\r\n#### Code Quality Check\r\n- Functions over 50 lines\r\n- Duplicated logic\r\n- Complex conditionals\r\n- Missing error handling\r\n- Hardcoded values\r\n\r\n#### Architecture Review\r\n- Tight coupling\r\n- Missing abstractions\r\n- Scalability concerns\r\n- Security gaps\r\n\r\n#### User Experience\r\n- Missing feedback\r\n- Unclear error messages\r\n- Unhandled edge cases\r\n- Performance bottlenecks\r\n\r\n### Step 3: Prioritize Tasks\r\n\r\n#### Priority Matrix\r\n\r\n**P0 - Urgent (Must do first)**\r\n- Blocking other work\r\n- Production bugs\r\n- Security issues\r\n- Data integrity risks\r\n\r\n**P1 - High (Should do soon)**\r\n- Critical feature incomplete\r\n- Significant technical debt\r\n- Performance issues affecting UX\r\n- Missing critical tests\r\n\r\n**P2 - Medium (Should do)**\r\n- Code quality 
improvements\r\n- Documentation gaps\r\n- Minor feature incomplete\r\n- Nice-to-have optimizations\r\n\r\n**P3 - Low (Can do)**\r\n- Future enhancements\r\n- Experimental ideas\r\n- Non-critical refactoring\r\n- Optional automation\r\n\r\n#### Effort Estimation\r\n\r\n- **Quick (<1 hour)**: Small fixes, simple tests, minor docs\r\n- **Medium (1-4 hours)**: Features, refactoring, test suites\r\n- **Large (>4 hours)**: Architecture changes, major features, migrations\r\n\r\n#### Impact Assessment\r\n\r\n- **High**: Affects core functionality or many users\r\n- **Medium**: Improves experience or developer productivity\r\n- **Low**: Nice-to-have improvements\r\n\r\n### Step 4: Create Actionable Tasks\r\n\r\nFor each task:\r\n\r\n```markdown\r\n### [Task Title]\r\n\r\n**Category:** [Feature/Bug/Test/Docs/Optimization/Infrastructure]\r\n\r\n**Description:** [What needs to be done, 1-2 sentences]\r\n\r\n**Context:** [Why it matters and relevant background]\r\n\r\n**Specific Steps:**\r\n1. [Concrete action 1]\r\n2. [Concrete action 2]\r\n3. 
[Concrete action 3]\r\n\r\n**Done Criteria:**\r\n- [ ] [How to verify completion, criterion 1]\r\n- [ ] [How to verify completion, criterion 2]\r\n\r\n**Related Files:**\r\n- `/path/to/file1.ext`\r\n- `/path/to/file2.ext`\r\n\r\n**Dependencies:** [Other tasks that must be done first, if any]\r\n\r\n**Expected Effort:** [Quick/Medium/Large]\r\n\r\n**Priority:** [P0/P1/P2/P3]\r\n\r\n**Impact:** [High/Medium/Low]\r\n\r\n**Notes:** [Additional context or caveats]\r\n```\r\n\r\n## Output Format\r\n\r\n```markdown\r\n# Follow-up Tasks & Recommendations\r\n\r\n## Summary\r\n- Total tasks identified: [X]\r\n- P0 (Urgent): [X]\r\n- P1 (High): [X]\r\n- P2 (Medium): [X]\r\n- P3 (Low): [X]\r\n\r\n**Recommended Focus for Next Session:**\r\n[1-2 sentence summary of what to tackle next]\r\n\r\n---\r\n\r\n## P0 - Urgent (Must Do First)\r\n\r\n### [Task 1]\r\n[Full task template above]\r\n\r\n---\r\n\r\n## P1 - High Priority (Should Do Soon)\r\n\r\n### [Task 2]\r\n[Full task template above]\r\n\r\n---\r\n\r\n## P2 - Medium Priority (Should Do)\r\n\r\n### [Task 3]\r\n[Full task template above]\r\n\r\n---\r\n\r\n## P3 - Low Priority (Can Do)\r\n\r\n### [Task 4]\r\n[Full task template above]\r\n\r\n---\r\n\r\n## Quick Wins (< 1 hour, High Impact)\r\n\r\nRecommended to tackle first:\r\n\r\n1. 
**[Task name]** (P[X]) - [One-line description]\r\n   - Files: [file1.ext, file2.ext]\r\n   - Why: [Brief justification]\r\n\r\n---\r\n\r\n## Continued from This Session\r\n\r\nWork started but not completed:\r\n\r\n### [Incomplete Task 1]\r\n\r\n**What's Done:**\r\n- [Completed step 1]\r\n- [Completed step 2]\r\n\r\n**What Remains:**\r\n- [ ] [Remaining step 1]\r\n- [ ] [Remaining step 2]\r\n\r\n**Current State:** [Where things are now]\r\n\r\n**Next Action:** [Specific first step to resume]\r\n\r\n---\r\n\r\n## Future Improvements\r\n\r\nIdeas to consider later:\r\n\r\n- **[Improvement 1]**: [Description and potential value]\r\n- **[Improvement 2]**: [Description and potential value]\r\n\r\n---\r\n\r\n## Known Issues / Technical Debt\r\n\r\nIssues to eventually address:\r\n\r\n| Issue | Impact | Effort | Priority | Notes |\r\n|-------|--------|--------|----------|-------|\r\n| [Issue 1] | [H/M/L] | [Quick/Medium/Large] | [P0-P3] | [Context] |\r\n\r\n---\r\n\r\n## Session Continuity Notes\r\n\r\n**To Resume Work:**\r\n1. [Specific step to start]\r\n2. [Context to review]\r\n3. [Command to run]\r\n\r\n**Key Files to Review:**\r\n- `/path/to/file1` - [Why]\r\n- `/path/to/file2` - [Why]\r\n\r\n**Open Questions:**\r\n- [Question 1]\r\n- [Question 2]\r\n```\r\n\r\n## Quality Standards\r\n\r\n1. **Specificity**: Provide specific file paths, line numbers, function names\r\n2. **Actionability**: Clear first steps, not vague goals\r\n3. **Completeness**: Enough context to resume without re-investigation\r\n4. **Prioritized**: Honest assessment of importance and urgency\r\n5. **Realistic**: Reasonable effort estimates\r\n\r\n## Edge Cases\r\n\r\n- **No clear next steps**: Suggest exploration tasks or documentation review\r\n- **Too many tasks**: Group related tasks, suggest multi-session planning\r\n- **Unclear priorities**: Provide decision framework, note dependencies\r\n- **Experimental work**: Clearly mark as exploratory vs. 
committed\r\n- **Pending decisions**: List what needs to be decided and by whom\r\n\r\n## Key Principles\r\n\r\n- **Unblock first**: Identify what's preventing progress\r\n- **Dependencies**: Note task order and what depends on what\r\n- **Context loss**: Assume reader won't remember session details\r\n- **Effort accuracy**: Better to overestimate than underestimate\r\n- **Value focus**: Prioritize high-impact items, even if difficult\r\n"
  },
  {
    "path": "plugins/session-wrap/agents/learning-extractor.md",
    "content": "---\r\nname: learning-extractor\r\ndescription: |\r\n  Extract learnings, mistakes, and new discoveries from session. Summarize in TIL format for knowledge building.\r\ntools: [\"Read\", \"Glob\", \"Grep\"]\r\nmodel: sonnet\r\ncolor: magenta\r\n---\r\n\r\n# Learning Extractor\r\n\r\nSpecialized agent that identifies valuable lessons, new knowledge, and mistakes from work sessions to build organizational knowledge.\r\n\r\n## Core Responsibilities\r\n\r\n1. **Knowledge Capture**: Identify new technical knowledge, patterns, insights gained\r\n2. **Mistake Documentation**: Recognize errors and document lessons learned\r\n3. **Pattern Recognition**: Discover approaches that worked or failed\r\n4. **Capability Development**: Track progress in understanding or abilities\r\n\r\n## Learning Categories\r\n\r\n### 1. Technical Discoveries\r\n\r\n#### New APIs/Libraries\r\n- **What discovered**: Name and purpose of new tool/library/API\r\n- **Use case**: Problem it solves\r\n- **Key features**: Most important capabilities learned\r\n- **Gotchas**: Unexpected behaviors or limitations found\r\n- **Example**: Actual code snippet or usage pattern\r\n\r\n#### New Patterns/Techniques\r\n- **Pattern name**: What to call this approach\r\n- **Context**: When/why to use it\r\n- **Implementation**: How it works\r\n- **Advantages**: Why better than alternatives tried\r\n- **Example**: Real application from session\r\n\r\n#### Framework/Tool Features\r\n- **Feature**: Specific capability discovered\r\n- **Previous assumption**: What was thought before\r\n- **Actual behavior**: How it really works\r\n- **Impact**: How this changes future approach\r\n\r\n### 2. 
Problem-Solving Lessons\r\n\r\n#### Successful Approaches\r\n- **Problem**: What needed solving\r\n- **Approach**: What worked\r\n- **Result**: Outcome achieved\r\n- **Why it worked**: Analysis of success factors\r\n- **When to reuse**: Conditions where this applies again\r\n\r\n#### Failed Attempts\r\n- **What tried**: Approach that didn't work\r\n- **Why failed**: Root cause understanding\r\n- **Lesson**: What to avoid or do differently\r\n- **Better alternative**: What worked instead\r\n\r\n#### Debugging Insights\r\n- **Bug encountered**: Issue description\r\n- **Misleading symptoms**: What threw off investigation\r\n- **Actual cause**: Root cause found\r\n- **Debugging technique**: How it was discovered\r\n- **Prevention**: How to avoid similar issues\r\n\r\n### 3. Domain Knowledge\r\n\r\n#### Business Logic\r\n- **Concept**: Business rule or domain concept learned\r\n- **Context**: Where/why it matters\r\n- **Implication**: How it affects technical decisions\r\n\r\n#### User Behavior\r\n- **Observation**: User interaction pattern\r\n- **Insight**: Understanding of motivation or need\r\n- **Design impact**: How it should influence implementation\r\n\r\n#### System Constraints\r\n- **Constraint**: Limitation or requirement discovered\r\n- **Source**: Why this constraint exists\r\n- **Workaround**: How to work within it\r\n- **Impact**: What it prevents or requires\r\n\r\n### 4. Process Improvements\r\n\r\n#### Workflow Optimization\r\n- **Old way**: Previous approach\r\n- **New way**: Improved method discovered\r\n- **Efficiency gain**: Time/effort saved\r\n- **When to use**: Conditions where new way is better\r\n\r\n#### Tool Usage\r\n- **Tool**: Software/service used\r\n- **Feature**: Capability leveraged\r\n- **Productivity gain**: How it helped\r\n- **Best practice**: Optimal usage learned\r\n\r\n### 5. 
Mistakes & Corrections\r\n\r\n#### Common Errors\r\n- **Mistake**: What went wrong\r\n- **Frequency**: How often it occurs\r\n- **Root cause**: Why it keeps happening\r\n- **Prevention**: How to avoid in future\r\n- **Detection**: How to catch it early\r\n\r\n#### Misconceptions\r\n- **What was wrong**: Incorrect assumption\r\n- **Correct understanding**: Actual truth\r\n- **How discovered**: What revealed the error\r\n- **Ripple effects**: What else this affects\r\n\r\n## Extraction Process\r\n\r\n### Step 1: Scan for Learning Indicators\r\n\r\nLook for these patterns in session:\r\n- **Questions**: \"How does X work?\", \"Why did Y fail?\", \"Best way to do Z?\"\r\n- **Trial and error**: Multiple attempts before success\r\n- **Surprises**: \"Interesting!\", \"Didn't know that\", \"Unexpected\"\r\n- **Discoveries**: \"Ah, now I see\", \"So that's how it works\"\r\n- **Corrections**: \"Actually X doesn't work that way\", \"Should do Y instead\"\r\n- **Optimizations**: \"This is faster/better than the old way\"\r\n- **Warnings**: \"Watch out for X\", \"Don't forget Y\"\r\n\r\n### Step 2: Contextualize Each Learning\r\n\r\nFor each identified learning:\r\n1. **Capture specifics**: Exact API names, code patterns, error messages\r\n2. **Explain context**: What led to this discovery\r\n3. **Document evidence**: Code snippets, error outputs, test results\r\n4. **Extract insight**: General lesson beyond this specific instance\r\n5. 
**Note applicability**: When/where it applies\r\n\r\n### Step 3: Prioritize Learnings\r\n\r\nRank by:\r\n- **Reusability**: How likely this knowledge will be needed again\r\n- **Impact**: How much it affects future work\r\n- **Novelty**: How new/unexpected it is\r\n- **Shareability**: How valuable to others\r\n\r\n### Step 4: Format for Future Reference\r\n\r\nCreate:\r\n- **Searchable**: Include relevant keywords\r\n- **Scannable**: Clear headers and structure\r\n- **Actionable**: Enough detail to apply\r\n- **Connected**: Links to related concepts or docs\r\n\r\n## Output Format\r\n\r\n```markdown\r\n# Session Learning Extraction\r\n\r\n## Summary\r\n- Technical discoveries: [X]\r\n- Success patterns identified: [X]\r\n- Mistakes documented: [X]\r\n- Process improvements found: [X]\r\n\r\n---\r\n\r\n## Technical Discoveries\r\n\r\n### [Discovery 1: API/Library/Pattern Name]\r\n\r\n**What:** [One-line description]\r\n\r\n**Context:** [When/why needed]\r\n\r\n**Key Insight:** [Main lesson]\r\n\r\n**Details:**\r\n- [Specific detail 1]\r\n- [Specific detail 2]\r\n\r\n**Code Example:**\r\n```[language]\r\n[Actual code snippet from session]\r\n```\r\n\r\n**When to use:** [Conditions/scenarios]\r\n\r\n**Gotchas:** [Warnings or limitations]\r\n\r\n---\r\n\r\n## What Worked Well\r\n\r\n### [Success Pattern 1]\r\n\r\n**Problem:** [What needed solving]\r\n\r\n**Approach:** [What was done]\r\n\r\n**Result:** [Outcome achieved]\r\n\r\n**Why it worked:** [Success factor analysis]\r\n\r\n**Reusable Pattern:**\r\n```\r\n[Generalized pattern for reuse]\r\n```\r\n\r\n---\r\n\r\n## Mistakes & Lessons\r\n\r\n### [Mistake 1]\r\n\r\n**What went wrong:** [Error description]\r\n\r\n**Symptoms:** [How it manifested]\r\n\r\n**Root cause:** [Why it happened]\r\n\r\n**How fixed:** [Solution]\r\n\r\n**Lesson:** [What to do differently]\r\n\r\n**Prevention:** [How to avoid in future]\r\n\r\n---\r\n\r\n## Process Improvements\r\n\r\n### [Improvement 1]\r\n\r\n**Old approach:** [Previous 
method]\r\n\r\n**New approach:** [Better method discovered]\r\n\r\n**Improvement:** [What's better and by how much]\r\n\r\n**When to apply:** [Situations where helpful]\r\n\r\n---\r\n\r\n## Insights & Realizations\r\n\r\n### [Insight 1]\r\n\r\n**Previous understanding:** [What was thought before]\r\n\r\n**New understanding:** [Corrected/enhanced understanding]\r\n\r\n**Implications:** [How it changes approach]\r\n\r\n**Evidence:** [What led to this realization]\r\n\r\n---\r\n\r\n## Resources Discovered\r\n\r\n- **[Tool/Library/Article name]**: [URL] - [Why valuable]\r\n- **[Documentation/Tutorial]**: [URL] - [What it clarifies]\r\n\r\n---\r\n\r\n## Recommended Actions\r\n\r\nBased on these learnings:\r\n\r\n1. **[Action 1]**: [What and why to do]\r\n2. **[Action 2]**: [What and why to do]\r\n\r\n---\r\n\r\n## Notes for Future Sessions\r\n\r\n- [Important thing to remember next time]\r\n- [Shortcut or technique to reuse]\r\n- [Warning to keep in mind]\r\n```\r\n\r\n## Quality Standards\r\n\r\n1. **Specificity**: Include actual code, error messages, URLs—no vague descriptions\r\n2. **Contextual**: Explain when/why it matters, not just what\r\n3. **Actionable**: Enough detail to apply the learning\r\n4. **Honest**: Document failures as much as successes\r\n5. 
**Connected**: Link to related concepts and resources\r\n\r\n## Edge Cases\r\n\r\n- **Negative learning**: Things that didn't work (worth documenting!)\r\n- **Partial understanding**: Note what's still unclear or uncertain\r\n- **Evolving knowledge**: Flag learnings that might change with more experience\r\n- **Conflicting information**: When new learning contradicts previous understanding\r\n- **Context-specific**: Clarify learnings that only apply in certain situations\r\n\r\n## Key Principles\r\n\r\n- **Failures are learning**: Mistakes and failed attempts often most valuable\r\n- **Small wins count**: Even minor optimizations or shortcuts worth capturing\r\n- **Question everything**: If something was surprising, that's a learning\r\n- **Write for future self**: Document as if you'll forget in 6 months\r\n- **Share knowledge**: Consider which learnings benefit others on the team\r\n\r\n## Learning Value by Type\r\n\r\n**High value:**\r\n- Problem that took 30+ minutes to solve\r\n- Non-intuitive API behavior discovered\r\n- Solution for recurring issue found\r\n- Technique applicable across multiple projects\r\n\r\n**Medium value:**\r\n- Existing workflow optimization\r\n- New feature of familiar tool learned\r\n- Better practice identified\r\n- Understanding of why something works\r\n\r\n**Low value (but still document):**\r\n- Minor syntax/convention learned\r\n- Small productivity tips\r\n- Reminder of forgotten knowledge\r\n- Confirmation of expected behavior\r\n"
  },
  {
    "path": "plugins/session-wrap/commands/wrap.md",
    "content": "---\r\ndescription: Session wrap-up - analyze session, suggest documentation updates, automation opportunities, and follow-up tasks\r\nallowed-tools: Bash(git *), Read, Write, Edit, Glob, Grep, Task, AskUserQuestion\r\n---\r\n\r\n# Session Wrap-up (/wrap)\r\n\r\nWrap up the current session by analyzing work done and suggesting improvements.\r\n\r\n## Prerequisites\r\n\r\nBefore starting, load the session-wrap skill for detailed workflow guidance.\r\n\r\n## Quick Usage\r\n\r\n- `/wrap` - Interactive session wrap-up (recommended)\r\n- `/wrap [message]` - Quick commit with provided message\r\n\r\n## Execution\r\n\r\nFollow the workflow defined in the **session-wrap** skill:\r\n\r\n1. Check git status\r\n2. Phase 1: Run 4 analysis agents in parallel\r\n3. Phase 2: Run validation agent\r\n4. Integrate results and present options\r\n5. Execute selected actions\r\n\r\nRefer to `skills/session-wrap/SKILL.md` for detailed execution steps and agent configurations.\r\n"
  },
  {
    "path": "plugins/session-wrap/skills/history-insight/SKILL.md",
    "content": "---\nname: history-insight\ndescription: This skill should be used when user wants to access, capture, or reference Claude Code session history. Trigger when user says \"capture session\", \"save session history\", or references past/current conversation as a source - whether for saving, extracting, summarizing, or reviewing. This includes any mention of \"what we discussed\", \"today's work\", \"session history\", or when user treats the conversation itself as source material (e.g., \"from our conversation\").\nversion: 1.1.0\nuser-invocable: true\n---\n\n# History Insight\n\nClaude Code 세션 히스토리를 분석하고 인사이트를 추출합니다.\n\n---\n\n## Data Location\n\n```\n~/.claude/projects/<encoded-cwd>/*.jsonl\n```\n\n**Path Encoding:** `/Users/foo/project` → `-Users-foo-project`\n\n> 상세 파일 포맷: `${baseDir}/references/session-file-format.md`\n\n---\n\n## Execution Algorithm\n\n### Step 1: Ask Scope [MANDATORY]\n\n**스코프 결정:**\n\n1. **명시된 경우** (AskUserQuestion 생략 가능):\n   - \"현재 프로젝트만\" / \"이 프로젝트\" → `current_project`\n   - \"모든 세션\" / \"전체\" → `all_sessions`\n\n2. **명시되지 않은 경우** - AskUserQuestion 호출:\n   ```\n   question: \"세션 검색 범위를 선택하세요\"\n   options:\n     - \"현재 프로젝트만\" → ~/.claude/projects/<encoded-cwd>/*.jsonl\n     - \"모든 Claude Code 세션\" → ~/.claude/projects/**/*.jsonl\n   ```\n\n---\n\n### Step 2: Find Session Files\n\n```bash\n# Current project only\nfind ~/.claude/projects/<encoded-cwd> -name \"*.jsonl\" -type f\n\n# All sessions (모든 프로젝트)\nfind ~/.claude/projects -name \"*.jsonl\" -type f\n```\n\n**날짜 필터링**: 파일의 mtime(수정시간) 확인 후 필터. OS별 `stat` 옵션 다름:\n- macOS: `stat -f \"%Sm\" -t \"%Y-%m-%d\" <file>`\n- Linux: `stat -c \"%y\" <file>`\n\n---\n\n### Step 3: Process Sessions\n\n#### Decision Tree\n\n```\nSession files found?\n├─ No → Error: \"No sessions found\"\n└─ Yes → How many files?\n    ├─ 1-3 files → Direct Read + parse\n    └─ 4+ files → Batch Extract Pipeline\n```\n\n#### 1-3 Files\n\n직접 Read로 JSONL 파싱. 
파일이 크면(≥5000 tokens) `extract-session.sh` 사용:\n```bash\n${baseDir}/scripts/extract-session.sh <session.jsonl>\n```\n\n#### 4+ Files: Batch Extract Pipeline\n\n1. 캐시 디렉토리 생성 (`/tmp/cc-cache/<analysis-name>/`)\n2. 세션 목록 저장 (`sessions.txt`)\n3. jq로 메시지 일괄 추출 (`user_messages.txt`)\n4. 정리 및 필터링 (`clean_messages.txt`)\n5. Task(opus)로 종합 분석\n\n#### 파일이 너무 클 때: 병렬 배치 분석\n\n`clean_messages.txt`가 너무 커서 Read 실패 시:\n\n1. **파일 분할**:\n   ```bash\n   split -l 2000 clean_messages.txt /tmp/cc-cache/<name>/batch_\n   ```\n\n2. **병렬 Task(opus) 호출**:\n   ```\n   Task(subagent_type=\"general-purpose\", model=\"opus\", run_in_background=true)\n   prompt: \"batch_XX 파일을 읽고 주제/패턴 요약해줘\"\n   ```\n\n3. **결과 병합**: Task(opus)로 종합\n\n---\n\n### Step 4: Report Results\n\n```markdown\n## Session Capture Complete\n\n- **Sessions:** N files processed\n- **Messages:** X total, Y after filter\n\n### Extracted Insights\n[분석 결과]\n```\n\n---\n\n## Error Handling\n\n| Scenario | Response |\n|----------|----------|\n| No session files found | \"No session files found for this project.\" |\n| File too large | Auto-preprocess with extract-session.sh |\n| jq not installed | \"Error: jq is required. Install with: brew install jq\" |\n| Task failed | \"Warning: Could not process [file]. Skipping.\" |\n| 0 relevant sessions | \"No sessions matched your criteria.\" |\n\n---\n\n## Security Notes\n\n- 출력에 전체 경로 노출 금지 (`~` prefix 사용)\n\n---\n\n## Related Resources\n\n- **`${baseDir}/scripts/extract-session.sh`** - JSONL 압축 (thinking, tool_use 제거)\n- **`${baseDir}/references/session-file-format.md`** - JSONL 구조 및 파싱\n"
  },
  {
    "path": "plugins/session-wrap/skills/history-insight/references/session-file-format.md",
    "content": "# Session File Format (.jsonl)\n\nClaude Code 세션 파일의 상세 구조 및 파싱 방법\n\n## JSONL Type 분류\n\n| type | 설명 | 필요 여부 |\n|------|------|---------|\n| `user` | 사용자 메시지 | ✅ 필요 |\n| `assistant` | Claude 응답 | ✅ 필요 (text만) |\n| `file-history-snapshot` | 파일 백업 스냅샷 | ❌ 버림 |\n| `queue-operation` | 큐 연산 로그 | ❌ 버림 |\n| `system` | 시스템 메시지 | ⚪ 선택 |\n| `summary` | 세션 요약 | ⚪ 선택 |\n\n## 실제 분석 결과 (12MB 파일 예시)\n\n| Type | Lines | Size | 비율 |\n|------|-------|------|------|\n| `file-history-snapshot` | 7,984 | 8.4MB | **67%** |\n| `queue-operation` | 15,948 | 3.4MB | **27%** |\n| `user` | 127 | 542KB | 4% |\n| `assistant` | 163 | 255KB | 2% |\n| `system` | 13 | 9KB | <1% |\n| `summary` | 5 | 600B | <1% |\n\n**결론:** 실제 대화는 6%, 나머지 94%는 메타데이터\n\n## assistant 메시지 내부 구조\n\n`.message.content[]` 배열 내부:\n\n| 내부 type | 설명 | 필요 여부 |\n|-----------|------|---------|\n| `thinking` | Claude 생각 과정 + signature | ❌ 버림 |\n| `tool_use` | tool 호출 정보 | ❌ 버림 |\n| `text` | **실제 응답 텍스트** | ✅ 필요 |\n\n## JSON 파싱 명령어\n\n**user 메시지 추출:**\n```bash\njq -c 'select(.type == \"user\") | {type, content: .message.content, ts: .timestamp}' <file.jsonl>\n```\n\n**assistant 메시지 추출 (text만):**\n```bash\njq -c 'select(.type == \"assistant\") | {type, texts: [.message.content[] | select(.type == \"text\") | .text], ts: .timestamp}' <file.jsonl>\n```\n\n**한 번에 대화만 추출:**\n```bash\njq -c '\n  if .type == \"user\" then\n    {type: \"user\", content: .message.content, ts: .timestamp}\n  elif .type == \"assistant\" then\n    {type: \"assistant\", texts: [.message.content[]? | select(.type == \"text\") | .text], ts: .timestamp}\n    | select(.texts | length > 0)\n  else empty end\n' <file.jsonl>\n```\n\n**결과:** 12MB → ~160KB (99% 감소)\n"
  },
  {
    "path": "plugins/session-wrap/skills/history-insight/scripts/extract-session.sh",
    "content": "#!/bin/bash\n# extract-session.sh\n# Extract essential conversation from Claude Code session JSONL files\n#\n# Usage: ./extract-session.sh <session.jsonl>\n# Output: Filtered JSON to stdout\n#\n# ============================================================\n# 세션 파일 구조 분석 결과 (12MB 파일 예시)\n# ============================================================\n#\n# JSONL type 분포:\n#   file-history-snapshot : 67% (8.4MB) → 버림\n#   queue-operation       : 27% (3.4MB) → 버림\n#   user + assistant      :  6% (800KB) → 추출\n#   system, summary       : <1%         → 선택적\n#\n# assistant.message.content[] 내부:\n#   thinking  : Claude 생각 + signature → 버림\n#   tool_use  : tool 호출 정보          → 버림\n#   text      : 실제 응답 텍스트        → 추출\n#\n# 결과: 12MB → ~800KB (93% 감소)\n# ============================================================\n\nset -e\n\nSESSION_FILE=\"$1\"\n\nif [ -z \"$SESSION_FILE\" ]; then\n  echo \"Usage: $0 <session.jsonl>\" >&2\n  exit 1\nfi\n\nif [ ! -f \"$SESSION_FILE\" ]; then\n  echo \"Error: File not found: $SESSION_FILE\" >&2\n  exit 1\nfi\n\nif ! command -v jq &> /dev/null; then\n  echo \"Error: jq is required. Install with: brew install jq\" >&2\n  exit 1\nfi\n\n# Extract conversation only:\n# - summary: 세션 요약\n# - user: 사용자 메시지 (.message.content)\n# - assistant: Claude 응답 중 text만 (.message.content[].type == \"text\")\n#\n# Explicitly ignored (94% of file size):\n# - file-history-snapshot: 파일 백업 스냅샷\n# - queue-operation: 큐 연산 로그\n# - assistant.thinking: 생각 과정 + signature\n# - assistant.tool_use: tool 호출 정보\n\njq -c '\n  if .type == \"summary\" then\n    {type: \"summary\", summary: .summary}\n  elif .type == \"user\" then\n    {\n      type: \"user\",\n      content: .message.content,\n      ts: .timestamp\n    }\n  elif .type == \"assistant\" then\n    {\n      type: \"assistant\",\n      texts: [.message.content[]? 
| select(.type == \"text\") | .text],\n      ts: .timestamp\n    } | select(.texts | length > 0)\n  else\n    empty\n  end\n' \"$SESSION_FILE\" 2>/dev/null | jq -s '{\n  file: \"'\"$SESSION_FILE\"'\",\n  message_count: length,\n  messages: .\n}'\n"
  },
  {
    "path": "plugins/session-wrap/skills/session-analyzer/SKILL.md",
    "content": "---\nname: session-analyzer\ndescription: This skill should be used when the user asks to \"analyze session\", \"세션 분석\", \"evaluate skill execution\", \"스킬 실행 검증\", \"check session logs\", \"로그 분석\", provides a session ID with a skill path, or wants to verify that a skill executed correctly in a past session. Post-hoc analysis of Claude Code sessions to validate skill/agent/hook behavior against SKILL.md specifications.\nversion: 1.0.0\nuser-invocable: true\n---\n\n# Session Analyzer Skill\n\nPost-hoc analysis tool for validating Claude Code session behavior against SKILL.md specifications.\n\n## Purpose\n\nAnalyze completed sessions to verify:\n1. **Expected vs Actual Behavior** - Did the skill follow SKILL.md workflow?\n2. **Component Invocations** - Were SubAgents, Hooks, and Tools called correctly?\n3. **Artifacts** - Were expected files created/deleted?\n4. **Bug Detection** - Any unexpected errors or deviations?\n\n---\n\n## Input Requirements\n\n| Parameter | Required | Description |\n|-----------|----------|-------------|\n| `sessionId` | YES | UUID of the session to analyze |\n| `targetSkill` | YES | Path to SKILL.md to validate against |\n| `additionalRequirements` | NO | Extra validation criteria |\n\n---\n\n## Phase 1: Locate Session Files\n\n### Step 1.1: Find Session Files\n\nSession files are located in `~/.claude/`:\n\n```bash\n# Main session log\n~/.claude/projects/-{encoded-cwd}/{sessionId}.jsonl\n\n# Debug log (detailed)\n~/.claude/debug/{sessionId}.txt\n\n# Agent transcripts (if subagents were used)\n~/.claude/projects/-{encoded-cwd}/agent-{agentId}.jsonl\n```\n\nUse script to locate files:\n```bash\n${baseDir}/scripts/find-session-files.sh {sessionId}\n```\n\n### Step 1.2: Verify Files Exist\n\nCheck all required files exist before proceeding. 
If debug log is missing, analysis will be limited.\n\n---\n\n## Phase 2: Parse Target SKILL.md\n\n### Step 2.1: Extract Expected Components\n\nRead the target SKILL.md and identify:\n\n**From YAML Frontmatter:**\n- `hooks.PreToolUse` - Expected PreToolUse hooks and matchers\n- `hooks.PostToolUse` - Expected PostToolUse hooks\n- `hooks.Stop` - Expected Stop hooks\n- `hooks.SubagentStop` - Expected SubagentStop hooks\n- `allowed-tools` - Tools the skill is allowed to use\n\n**From Markdown Body:**\n- SubAgents mentioned (`Task(subagent_type=\"...\")`)\n- Skills called (`Skill(\"...\")`)\n- Artifacts created (`.dev-flow/drafts/`, `.dev-flow/plans/`, etc.)\n- Workflow steps and conditions\n\n### Step 2.2: Build Expected Behavior Checklist\n\nCreate checklist from SKILL.md analysis:\n\n```markdown\n## Expected Behavior\n\n### SubAgents\n- [ ] Explore agent called (parallel, run_in_background)\n- [ ] gap-analyzer called before plan generation\n- [ ] reviewer called after plan creation\n\n### Hooks\n- [ ] PreToolUse[Edit|Write] triggers plan-guard.sh\n- [ ] Stop hook validates reviewer approval\n\n### Artifacts\n- [ ] Draft file created at .dev-flow/drafts/{name}.md\n- [ ] Plan file created at .dev-flow/plans/{name}.md\n- [ ] Draft file deleted after OKAY\n\n### Workflow\n- [ ] Interview Mode before Plan Generation\n- [ ] User explicit request triggers plan generation\n- [ ] Reviewer REJECT causes revision loop\n```\n\n---\n\n## Phase 3: Analyze Debug Log\n\nThe debug log (`~/.claude/debug/{sessionId}.txt`) contains detailed execution traces.\n\n### Step 3.1: Extract SubAgent Calls\n\nSearch patterns:\n```\nSubagentStart with query: {agent-name}\nSubagentStop with query: {agent-id}\n```\n\nUse script:\n```bash\n${baseDir}/scripts/extract-subagent-calls.sh {debug-log-path}\n```\n\n### Step 3.2: Extract Hook Events\n\nSearch patterns:\n```\nGetting matching hook commands for {HookEvent} with query: {tool-name}\nMatched {N} unique hooks for query \"{query}\"\nHooks: 
Processing prompt hook with prompt: {prompt}\nHooks: Prompt hook condition was met/not met\npermissionDecision: allow/deny\n```\n\nUse script:\n```bash\n${baseDir}/scripts/extract-hook-events.sh {debug-log-path}\n```\n\n### Step 3.3: Extract Tool Calls\n\nSearch patterns:\n```\nexecutePreToolHooks called for tool: {tool-name}\nFile {path} written atomically\n```\n\n### Step 3.4: Extract Hook Results\n\nFor prompt-based hooks, find the model response:\n```\nHooks: Model response: {\n  \"ok\": true/false,\n  \"reason\": \"...\"\n}\n```\n\n---\n\n## Phase 4: Verify Artifacts\n\n### Step 4.1: Check File Creation\n\nFor each expected artifact:\n1. Search debug log for `FileHistory: Tracked file modification for {path}`\n2. Search for `File {path} written atomically`\n3. Verify current filesystem state\n\n### Step 4.2: Check File Deletion\n\nFor files that should be deleted:\n1. Search for `rm` commands in Bash calls\n2. Verify file no longer exists on filesystem\n\n---\n\n## Phase 5: Compare Expected vs Actual\n\n### Step 5.1: Build Comparison Table\n\n```markdown\n| Component | Expected | Actual | Status |\n|-----------|----------|--------|--------|\n| Explore agent | 2 parallel calls | 2 calls at 09:39:26 | ✅ |\n| gap-analyzer | Called before plan | Called at 09:43:08 | ✅ |\n| reviewer | Called after plan | 2 calls (REJECT→OKAY) | ✅ |\n| PreToolUse hook | Edit\\|Write matcher | Triggered for Write | ✅ |\n| Stop hook | Validates approval | Returned ok:true | ✅ |\n| Draft file | Created then deleted | Created→Deleted | ✅ |\n| Plan file | Created | Exists (10KB) | ✅ |\n```\n\n### Step 5.2: Identify Deviations\n\nFlag any mismatches:\n- Missing component calls\n- Wrong order of operations\n- Hook failures\n- Missing artifacts\n- Unexpected errors\n\n---\n\n## Phase 6: Generate Report\n\n### Report Template\n\n```markdown\n# Session Analysis Report\n\n## Session Info\n- **Session ID**: {sessionId}\n- **Target Skill**: {skillPath}\n- **Analysis Date**: {date}\n\n---\n\n## 
1. Expected Behavior (from SKILL.md)\n\n[Summary of expected workflow]\n\n---\n\n## 2. Skill/SubAgent/Hook Verification\n\n### SubAgents\n| SubAgent | Expected | Actual | Time | Result |\n|----------|----------|--------|------|--------|\n| ... | ... | ... | ... | ✅/❌ |\n\n### Hooks\n| Hook | Matcher | Triggered | Result |\n|------|---------|-----------|--------|\n| ... | ... | ... | ✅/❌ |\n\n---\n\n## 3. Artifacts Verification\n\n| Artifact | Path | Expected State | Actual State |\n|----------|------|----------------|--------------|\n| ... | ... | ... | ✅/❌ |\n\n---\n\n## 4. Issues/Bugs\n\n| Severity | Description | Location |\n|----------|-------------|----------|\n| ... | ... | ... |\n\n---\n\n## 5. Overall Result\n\n**Verdict**: ✅ PASS / ❌ FAIL\n\n**Summary**: [1-2 sentence summary]\n```\n\n---\n\n## Scripts Reference\n\n| Script | Purpose |\n|--------|---------|\n| `find-session-files.sh` | Locate all files for a session ID |\n| `extract-subagent-calls.sh` | Parse subagent invocations from debug log |\n| `extract-hook-events.sh` | Parse hook events from debug log |\n\n---\n\n## Usage Example\n\n```\nUser: \"Analyze session 3cc71c9f-d27a-4233-9dbc-c4f07ea6ec5b against .claude/skills/specify/SKILL.md\"\n\n1. Find session files\n2. Parse SKILL.md → Expected: Explore, gap-analyzer, reviewer, hooks\n3. Analyze debug log → Extract actual calls\n4. Verify artifacts → Check .dev-flow/\n5. Compare → Build verification table\n6. Generate report → PASS/FAIL with details\n```\n\n---\n\n## Additional Resources\n\n### Reference Files\n- **`references/analysis-patterns.md`** - Detailed grep patterns for log analysis\n- **`references/common-issues.md`** - Known issues and troubleshooting\n\n### Scripts\n- **`scripts/find-session-files.sh`** - Session file locator\n- **`scripts/extract-subagent-calls.sh`** - SubAgent call extractor\n- **`scripts/extract-hook-events.sh`** - Hook event extractor\n"
  },
  {
    "path": "plugins/session-wrap/skills/session-analyzer/references/analysis-patterns.md",
    "content": "# Analysis Patterns for Session Analyzer\n\nDetailed grep/search patterns for extracting information from Claude Code debug logs.\n\n---\n\n## Debug Log Structure\n\nDebug logs are located at `~/.claude/debug/{sessionId}.txt` and contain timestamped entries:\n\n```\n2026-01-13T09:39:26.905Z [DEBUG] {message}\n```\n\n---\n\n## SubAgent Patterns\n\n### SubAgent Start\n```bash\n# Pattern\ngrep \"SubagentStart with query:\" debug.txt\n\n# Example output\n2026-01-13T09:39:26.905Z [DEBUG] Getting matching hook commands for SubagentStart with query: Explore\n```\n\n### SubAgent Stop\n```bash\n# Pattern\ngrep \"SubagentStop with query:\" debug.txt\n\n# With agent ID (session tracking)\ngrep \"agent_id.*agent_transcript_path\" debug.txt\n```\n\n### SubAgent Session Registration\n```bash\n# Pattern - shows when hooks are registered for subagent\ngrep \"Registered.*frontmatter hook.*from agent\" debug.txt\n\n# Example\n2026-01-13T09:43:08.203Z [DEBUG] Registered 1 frontmatter hook(s) from agent 'gap-analyzer' for session a373157\n```\n\n---\n\n## Hook Patterns\n\n### PreToolUse Hook Trigger\n```bash\n# Pattern\ngrep \"executePreToolHooks called for tool:\" debug.txt\n\n# Example\n2026-01-13T09:39:40.000Z [DEBUG] executePreToolHooks called for tool: Write\n```\n\n### Hook Matcher Check\n```bash\n# Pattern\ngrep \"Getting matching hook commands for PreToolUse with query:\" debug.txt\n\n# With match count\ngrep \"Matched.*unique hooks for query\" debug.txt\n\n# Example\n2026-01-13T09:39:40.000Z [DEBUG] Matched 1 unique hooks for query \"Write\" (1 before deduplication)\n```\n\n### Hook Permission Decision\n```bash\n# Pattern\ngrep \"permissionDecision\" debug.txt\n\n# Example (allow)\n\"permissionDecision\": \"allow\"\n\n# Example (deny)\n\"permissionDecision\": \"deny\"\n```\n\n### Prompt-Based Hook Processing\n```bash\n# Pattern - hook is being processed\ngrep \"Hooks: Processing prompt hook with prompt:\" debug.txt\n\n# Pattern - model response\ngrep \"Hooks: 
Model response:\" debug.txt\n\n# Pattern - condition result\ngrep \"Prompt hook condition was\" debug.txt\n\n# Example (met)\n2026-01-13T09:48:09.076Z [DEBUG] Hooks: Prompt hook condition was met\n\n# Example (not met)\n2026-01-13T09:45:59.297Z [DEBUG] Hooks: Prompt hook condition was not met: REJECT - ...\n```\n\n### Stop Hook Events\n```bash\n# Pattern\ngrep \"Getting matching hook commands for Stop\" debug.txt\n```\n\n### SubagentStop Hook Events\n```bash\n# Pattern - converted from Stop to SubagentStop\ngrep \"Converting Stop hook to SubagentStop\" debug.txt\n\n# Example\n2026-01-13T09:43:08.202Z [DEBUG] Converting Stop hook to SubagentStop for agent 'gap-analyzer'\n```\n\n---\n\n## Tool Usage Patterns\n\n### Tool Execution\n```bash\n# Pattern\ngrep \"executePreToolHooks called for tool:\" debug.txt\n```\n\n### File Write Operations\n```bash\n# Pattern - file creation/modification\ngrep \"FileHistory: Tracked file modification for\" debug.txt\n\n# Pattern - atomic write\ngrep \"File.*written atomically\" debug.txt\n\n# Example\n2026-01-13T09:39:40.036Z [DEBUG] File /path/to/file.md written atomically\n```\n\n### Bash Command Execution\n```bash\n# Pattern - PreToolHooks for Bash\ngrep \"executePreToolHooks called for tool: Bash\" debug.txt\n```\n\n---\n\n## Skill/Session Patterns\n\n### Skill Loading\n```bash\n# Pattern - skill hooks registered\ngrep \"Added session hook for event\" debug.txt\ngrep \"Registered.*hooks from skill\" debug.txt\n\n# Example\n2026-01-13T09:39:14.449Z [DEBUG] Added session hook for event PreToolUse in session 3cc71c9f-...\n2026-01-13T09:39:14.449Z [DEBUG] Registered 2 hooks from skill 'specify'\n```\n\n### Session Hook Cleanup\n```bash\n# Pattern\ngrep \"Cleared all session hooks for session\" debug.txt\n```\n\n---\n\n## AskUserQuestion Patterns\n\n```bash\n# Pattern - PreToolHooks\ngrep \"executePreToolHooks called for tool: AskUserQuestion\" debug.txt\n\n# Pattern - PostToolHooks\ngrep \"PostToolUse with query: AskUserQuestion\" 
debug.txt\n```\n\n---\n\n## Error Patterns\n\n### Hook Errors\n```bash\n# Pattern\ngrep -i \"error\\|failed\\|exception\" debug.txt | grep -i hook\n```\n\n### Tool Errors\n```bash\n# Pattern\ngrep \"Tool.*error\\|Tool.*failed\" debug.txt\n```\n\n---\n\n## Reviewer-Specific Patterns\n\n### Reviewer Verdict Extraction\n```bash\n# Pattern - look for model response containing OKAY or REJECT\ngrep -A5 \"Hooks: Model response:\" debug.txt | grep -E '\"ok\":|\"reason\":'\n\n# Example (OKAY)\n{\n  \"ok\": true,\n  \"reason\": \"Plan approved by reviewer...\"\n}\n\n# Example (REJECT)\n{\n  \"ok\": false,\n  \"reason\": \"REJECT - The plan has a critical contradiction...\"\n}\n```\n\n---\n\n## Artifact Patterns\n\n### Draft File Operations\n```bash\n# Pattern - draft creation\ngrep \"\\.dev-flow/drafts/\" debug.txt | grep \"written atomically\"\n\n# Pattern - draft deletion (look for rm command)\ngrep \"rm.*\\.dev-flow/drafts/\" debug.txt\n```\n\n### Plan File Operations\n```bash\n# Pattern - plan creation\ngrep \"\\.dev-flow/plans/\" debug.txt | grep \"written atomically\"\n```\n\n---\n\n## Timeline Reconstruction\n\nTo reconstruct a session timeline:\n\n```bash\n# Extract all timestamped events for key operations\ngrep -E \"(SubagentStart|SubagentStop|executePreToolHooks|Prompt hook condition|written atomically)\" debug.txt | sort\n```\n\n---\n\n## Combined Analysis Query\n\nFull analysis of a specify skill session:\n\n```bash\n# 1. Check Explore agents\ngrep \"SubagentStart with query: Explore\" debug.txt | wc -l\n\n# 2. Check gap-analyzer\ngrep \"SubagentStart with query: gap-analyzer\" debug.txt\n\n# 3. Check reviewer calls and results\ngrep -E \"(SubagentStart with query: reviewer|Prompt hook condition)\" debug.txt\n\n# 4. Check plan-guard.sh hook\ngrep \"permissionDecision\" debug.txt\n\n# 5. Check artifacts\ngrep -E \"(\\.dev-flow/drafts/|\\.dev-flow/plans/).*written atomically\" debug.txt\n\n# 6. 
Final Stop hook result\ngrep -A10 \"Getting matching hook commands for Stop\" debug.txt | tail -20\n```\n"
  },
  {
    "path": "plugins/session-wrap/skills/session-analyzer/references/common-issues.md",
    "content": "# Common Issues and Troubleshooting\n\nKnown issues when analyzing Claude Code sessions and how to diagnose them.\n\n---\n\n## Session File Issues\n\n### Issue: Debug Log Not Found\n\n**Symptom**: `~/.claude/debug/{sessionId}.txt` doesn't exist\n\n**Possible Causes**:\n1. Session was too old and debug logs were cleaned up\n2. Debug logging was disabled\n3. Session ID is incorrect\n\n**Workaround**:\n- Use the main session log (`.jsonl`) for limited analysis\n- Main log contains tool calls but not detailed hook execution\n\n### Issue: Large Debug Log (>50MB)\n\n**Symptom**: Log file too large to read entirely\n\n**Solution**:\n- Use `grep` with specific patterns instead of reading entire file\n- Use `tail` to get recent entries\n- Use `offset` and `limit` when reading with Read tool\n\n---\n\n## SubAgent Analysis Issues\n\n### Issue: SubAgent Not Recorded\n\n**Symptom**: Expected subagent call not found in logs\n\n**Possible Causes**:\n1. SubAgent was never actually called\n2. SubAgent type name differs from expected (case-sensitive)\n3. Log was truncated\n\n**Diagnosis**:\n```bash\n# List all unique subagent types\ngrep \"SubagentStart with query:\" debug.txt | sed 's/.*query: //' | sort | uniq\n```\n\n### Issue: Missing SubAgent Result\n\n**Symptom**: `SubagentStart` found but no `SubagentStop`\n\n**Possible Causes**:\n1. SubAgent is still running (background task)\n2. SubAgent crashed\n3. Session ended before subagent completed\n\n**Diagnosis**:\n```bash\n# Count starts vs stops\ngrep -c \"SubagentStart\" debug.txt\ngrep -c \"SubagentStop\" debug.txt\n```\n\n---\n\n## Hook Analysis Issues\n\n### Issue: Hook Not Triggered\n\n**Symptom**: Expected hook not found in `Getting matching hook commands` entries\n\n**Possible Causes**:\n1. Matcher pattern doesn't match the tool name\n2. Hook not registered (skill not loaded)\n3. 
Hook has wrong event type\n\n**Diagnosis**:\n```bash\n# Check if skill hooks were registered\ngrep \"Registered.*hooks from skill\" debug.txt\n\n# Check what hooks are being queried\ngrep \"Getting matching hook commands for\" debug.txt | head -20\n```\n\n### Issue: Hook Triggered But No Effect\n\n**Symptom**: Hook matched (count > 0) but expected behavior didn't occur\n\n**Possible Causes**:\n1. Hook script returned error\n2. Hook returned `allow` when should have returned `deny`\n3. Prompt hook condition was not met\n\n**Diagnosis**:\n```bash\n# Check hook execution result\ngrep -A5 \"Matched.*unique hooks\" debug.txt | grep -E \"permissionDecision|ok\"\n```\n\n### Issue: Prompt Hook Always Returns False\n\n**Symptom**: `Prompt hook condition was not met` consistently\n\n**Possible Causes**:\n1. Prompt is too vague for model to understand\n2. Context doesn't contain expected information\n3. Model misinterprets the criteria\n\n**Diagnosis**:\n```bash\n# See the full model response\ngrep -A20 \"Hooks: Model response:\" debug.txt\n```\n\n---\n\n## Artifact Issues\n\n### Issue: File Not Created\n\n**Symptom**: Expected artifact file not in `written atomically` logs\n\n**Possible Causes**:\n1. Write was blocked by PreToolUse hook\n2. Path was wrong\n3. Write tool was never called\n\n**Diagnosis**:\n```bash\n# Check if Write was attempted\ngrep \"executePreToolHooks called for tool: Write\" debug.txt\n\n# Check permission decision\ngrep -A10 \"executePreToolHooks called for tool: Write\" debug.txt | grep \"permissionDecision\"\n```\n\n### Issue: File Exists But Should Be Deleted\n\n**Symptom**: Draft file still exists after session ended\n\n**Possible Causes**:\n1. Bash `rm` command was never executed\n2. Skill ended before cleanup step\n3. 
Wrong file path in rm command\n\n**Diagnosis**:\n```bash\n# Check for rm commands\ngrep \"Bash\" debug.txt | grep -i \"rm\"\n```\n\n---\n\n## Reviewer-Specific Issues\n\n### Issue: Reviewer Never Returns OKAY\n\n**Symptom**: Multiple REJECT responses, no OKAY\n\n**Possible Causes**:\n1. Plan genuinely has issues that weren't fixed\n2. Reviewer criteria too strict\n3. Plan edits not addressing reviewer feedback\n\n**Diagnosis**:\n```bash\n# Extract all reviewer responses\ngrep -B2 -A10 \"Hooks: Model response:\" debug.txt | grep -E '\"ok\"|\"reason\"'\n```\n\n---\n\n### Issue: Reviewer Called But No Hook Result\n\n**Symptom**: `SubagentStart with query: reviewer` found but no `Prompt hook condition` result\n\n**Possible Causes**:\n1. Reviewer subagent has no Stop hook configured\n2. Hook conversion to SubagentStop failed\n3. Reviewer is still running\n\n**Diagnosis**:\n```bash\n# Check if Stop hook was converted\ngrep \"Converting Stop hook to SubagentStop for agent 'reviewer'\" debug.txt\n```\n\n---\n\n## Timing Issues\n\n### Issue: Events Out of Order\n\n**Symptom**: Timeline doesn't make sense (e.g., Stop before Start)\n\n**Possible Causes**:\n1. Parallel operations (intended behavior)\n2. Log entries from different sessions mixed\n3. Clock synchronization issues\n\n**Solution**:\n- Filter by session ID if multiple sessions in same timeframe\n- Look at specific operation sequences, not global order\n\n### Issue: Large Time Gaps\n\n**Symptom**: Long pauses between operations\n\n**Possible Causes**:\n1. User interaction (AskUserQuestion waiting)\n2. API rate limiting\n3. Model thinking time\n\n**Diagnosis**:\n```bash\n# Print any event preceded by a gap of more than 30 seconds (same-day timestamps)\nawk '/^20/ {t=substr($1,12,2)*3600+substr($1,15,2)*60+substr($1,18,2); if (p != \"\" && t-p > 30) print (t-p) \"s gap before: \" $0; p=t}' debug.txt\n```\n\n---\n\n## Analysis Script Issues\n\n### Issue: Script Returns Empty JSON\n\n**Symptom**: Scripts return `{ \"summary\": { \"total\": 0 } }`\n\n**Possible Causes**:\n1. Debug log path is wrong\n2. Log format changed\n3. 
No matching events in this session\n\n**Solution**:\n- Verify debug log path exists and has content\n- Manually grep for expected patterns to verify format\n\n### Issue: Script Permission Denied\n\n**Symptom**: `Permission denied` when running scripts\n\n**Solution**:\n```bash\nchmod +x scripts/*.sh\n```\n\n---\n\n## Validation Checklist\n\nWhen analysis seems wrong, verify:\n\n1. **Correct Session ID**: Double-check the UUID\n2. **Files Exist**: Run `find-session-files.sh` first\n3. **Skill Was Loaded**: Look for \"Registered.*hooks from skill\"\n4. **Right Timeframe**: Check timestamps match expected session time\n5. **Complete Session**: Session ended normally (not interrupted)\n"
  },
  {
    "path": "plugins/session-wrap/skills/session-analyzer/scripts/extract-hook-events.sh",
    "content": "#!/bin/bash\n# extract-hook-events.sh - Extract Hook events from debug log\n#\n# Usage: extract-hook-events.sh <debug-log-path>\n# Output: JSON with hook events, triggers, and results\n\nset -euo pipefail\n\nDEBUG_LOG=\"${1:-}\"\n\nif [[ -z \"$DEBUG_LOG\" ]] || [[ ! -f \"$DEBUG_LOG\" ]]; then\n    echo \"Usage: $0 <debug-log-path>\" >&2\n    exit 1\nfi\n\necho \"{\"\n\n# Extract PreToolUse hooks\necho '  \"pre_tool_use\": ['\nfirst=true\nwhile IFS= read -r line; do\n    timestamp=$(echo \"$line\" | grep -oE '^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}\\.[0-9]+Z' || echo \"\")\n    tool_name=$(echo \"$line\" | grep -oP 'PreToolUse with query: \\K\\S+' || echo \"\")\n\n    if [[ -n \"$tool_name\" ]]; then\n        if [[ \"$first\" != \"true\" ]]; then\n            echo \",\"\n        fi\n        first=false\n        printf '    {\"timestamp\": \"%s\", \"tool\": \"%s\"}' \"$timestamp\" \"$tool_name\"\n    fi\ndone < <(grep \"Getting matching hook commands for PreToolUse\" \"$DEBUG_LOG\" 2>/dev/null || true)\necho \"\"\necho \"  ],\"\n\n# Extract hook matches\necho '  \"hook_matches\": ['\nfirst=true\nwhile IFS= read -r line; do\n    timestamp=$(echo \"$line\" | grep -oE '^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}\\.[0-9]+Z' || echo \"\")\n    match_count=$(echo \"$line\" | grep -oP 'Matched \\K\\d+' || echo \"0\")\n    query=$(echo \"$line\" | grep -oP 'for query \"\\K[^\"]+' || echo \"\")\n\n    if [[ \"$match_count\" -gt 0 ]]; then\n        if [[ \"$first\" != \"true\" ]]; then\n            echo \",\"\n        fi\n        first=false\n        printf '    {\"timestamp\": \"%s\", \"query\": \"%s\", \"matched\": %s}' \"$timestamp\" \"$query\" \"$match_count\"\n    fi\ndone < <(grep \"Matched .* unique hooks for query\" \"$DEBUG_LOG\" 2>/dev/null || true)\necho \"\"\necho \"  ],\"\n\n# Extract prompt hook results\necho '  \"prompt_hook_results\": ['\nfirst=true\nwhile IFS= read -r line; do\n    timestamp=$(echo \"$line\" | grep -oE 
'^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}\\.[0-9]+Z' || echo \"\")\n\n    if echo \"$line\" | grep -q \"Prompt hook condition was met\"; then\n        if [[ \"$first\" != \"true\" ]]; then\n            echo \",\"\n        fi\n        first=false\n        printf '    {\"timestamp\": \"%s\", \"result\": \"met\"}' \"$timestamp\"\n    elif echo \"$line\" | grep -q \"Prompt hook condition was not met\"; then\n        if [[ \"$first\" != \"true\" ]]; then\n            echo \",\"\n        fi\n        first=false\n        printf '    {\"timestamp\": \"%s\", \"result\": \"not_met\"}' \"$timestamp\"\n    fi\ndone < <(grep \"Prompt hook condition\" \"$DEBUG_LOG\" 2>/dev/null || true)\necho \"\"\necho \"  ],\"\n\n# Extract permission decisions\necho '  \"permission_decisions\": ['\nfirst=true\nwhile IFS= read -r line; do\n    timestamp=$(echo \"$line\" | grep -oE '^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}\\.[0-9]+Z' || echo \"\")\n    decision=$(echo \"$line\" | grep -oP 'permissionDecision.*:\\s*\"\\K[^\"]+' || echo \"\")\n\n    if [[ -n \"$decision\" ]]; then\n        if [[ \"$first\" != \"true\" ]]; then\n            echo \",\"\n        fi\n        first=false\n        printf '    {\"timestamp\": \"%s\", \"decision\": \"%s\"}' \"$timestamp\" \"$decision\"\n    fi\ndone < <(grep \"permissionDecision\" \"$DEBUG_LOG\" 2>/dev/null || true)\necho \"\"\necho \"  ],\"\n\n# Summary. Note: grep -c prints its own 0 (and exits 1) on no match, so use\n# \"|| true\" here; \"|| echo 0\" would append a second line and break the JSON.\nstop_hooks=$(grep -c \"Getting matching hook commands for Stop\" \"$DEBUG_LOG\" 2>/dev/null || true)\npre_tool_hooks=$(grep -c \"Getting matching hook commands for PreToolUse\" \"$DEBUG_LOG\" 2>/dev/null || true)\npost_tool_hooks=$(grep -c \"Getting matching hook commands for PostToolUse\" \"$DEBUG_LOG\" 2>/dev/null || true)\nsubagent_stop=$(grep -c \"Getting matching hook commands for SubagentStop\" \"$DEBUG_LOG\" 2>/dev/null || true)\nprompt_hooks=$(grep -c \"Processing prompt hook\" \"$DEBUG_LOG\" 2>/dev/null || true)\n\ncat << EOF\n  
\"summary\": {\n    \"PreToolUse\": $pre_tool_hooks,\n    \"PostToolUse\": $post_tool_hooks,\n    \"Stop\": $stop_hooks,\n    \"SubagentStop\": $subagent_stop,\n    \"prompt_hooks_processed\": $prompt_hooks\n  }\n}\nEOF\n"
  },
  {
    "path": "plugins/session-wrap/skills/session-analyzer/scripts/extract-subagent-calls.sh",
    "content": "#!/bin/bash\n# extract-subagent-calls.sh - Extract SubAgent invocations from debug log\n#\n# Usage: extract-subagent-calls.sh <debug-log-path>\n# Output: JSON array of subagent calls with timestamps and results\n\nset -euo pipefail\n\nDEBUG_LOG=\"${1:-}\"\n\nif [[ -z \"$DEBUG_LOG\" ]] || [[ ! -f \"$DEBUG_LOG\" ]]; then\n    echo \"Usage: $0 <debug-log-path>\" >&2\n    exit 1\nfi\n\necho \"{\"\necho '  \"subagent_calls\": ['\n\n# Extract SubagentStart events\nfirst=true\nwhile IFS= read -r line; do\n    timestamp=$(echo \"$line\" | grep -oE '^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}\\.[0-9]+Z' || echo \"\")\n    agent_name=$(echo \"$line\" | grep -oP 'SubagentStart with query: \\K\\S+' || echo \"\")\n\n    if [[ -n \"$agent_name\" ]]; then\n        if [[ \"$first\" != \"true\" ]]; then\n            echo \",\"\n        fi\n        first=false\n        printf '    {\"timestamp\": \"%s\", \"event\": \"start\", \"agent\": \"%s\"}' \"$timestamp\" \"$agent_name\"\n    fi\ndone < <(grep \"SubagentStart with query:\" \"$DEBUG_LOG\" 2>/dev/null || true)\n\necho \"\"\necho \"  ],\"\n\n# Extract SubagentStop events with results\necho '  \"subagent_results\": ['\n\nfirst=true\nwhile IFS= read -r line; do\n    timestamp=$(echo \"$line\" | grep -oE '^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}\\.[0-9]+Z' || echo \"\")\n\n    if [[ -n \"$timestamp\" ]]; then\n        # Look for hook result after this line\n        if [[ \"$first\" != \"true\" ]]; then\n            echo \",\"\n        fi\n        first=false\n        printf '    {\"timestamp\": \"%s\", \"event\": \"stop\"}' \"$timestamp\"\n    fi\ndone < <(grep \"SubagentStop with query:\" \"$DEBUG_LOG\" 2>/dev/null || true)\n\necho \"\"\necho \"  ],\"\n\n# Summary counts\nexplore_count=$(grep -c \"SubagentStart with query: Explore\" \"$DEBUG_LOG\" 2>/dev/null || echo \"0\")\ngap_analyzer_count=$(grep -c \"SubagentStart with query: gap-analyzer\" \"$DEBUG_LOG\" 2>/dev/null || echo 
\"0\")\nreviewer_count=$(grep -c \"SubagentStart with query: reviewer\" \"$DEBUG_LOG\" 2>/dev/null || echo \"0\")\nworker_count=$(grep -c \"SubagentStart with query: worker\" \"$DEBUG_LOG\" 2>/dev/null || echo \"0\")\n\n# grep -c prints its own 0 (and exits 1) on no match, so the \"|| echo\" fallback\n# can leave two lines in a variable; keep only the first line so the arithmetic\n# and the JSON summary below stay valid\nexplore_count=${explore_count%%$'\\n'*}\ngap_analyzer_count=${gap_analyzer_count%%$'\\n'*}\nreviewer_count=${reviewer_count%%$'\\n'*}\nworker_count=${worker_count%%$'\\n'*}\n\ncat << EOF\n  \"summary\": {\n    \"Explore\": $explore_count,\n    \"gap-analyzer\": $gap_analyzer_count,\n    \"reviewer\": $reviewer_count,\n    \"worker\": $worker_count,\n    \"total\": $((explore_count + gap_analyzer_count + reviewer_count + worker_count))\n  }\n}\nEOF\n"
  },
  {
    "path": "plugins/session-wrap/skills/session-analyzer/scripts/find-session-files.sh",
    "content": "#!/bin/bash\n# find-session-files.sh - Locate all files related to a Claude Code session\n#\n# Usage: find-session-files.sh <session-id>\n# Output: JSON with paths to session files\n\nset -euo pipefail\n\nSESSION_ID=\"${1:-}\"\n\nif [[ -z \"$SESSION_ID\" ]]; then\n    echo \"Usage: $0 <session-id>\" >&2\n    exit 1\nfi\n\nCLAUDE_DIR=\"$HOME/.claude\"\n\n# Find main session log (in projects directory)\nMAIN_LOG=$(find \"$CLAUDE_DIR/projects\" -name \"${SESSION_ID}.jsonl\" 2>/dev/null | head -1)\n\n# Find debug log\nDEBUG_LOG=\"$CLAUDE_DIR/debug/${SESSION_ID}.txt\"\nif [[ ! -f \"$DEBUG_LOG\" ]]; then\n    DEBUG_LOG=\"\"\nfi\n\n# Find agent transcripts (subagent sessions)\nAGENT_LOGS=$(find \"$CLAUDE_DIR/projects\" -name \"agent-*.jsonl\" 2>/dev/null | xargs grep -l \"$SESSION_ID\" 2>/dev/null || true)\n\n# Find todo file\nTODO_FILE=$(find \"$CLAUDE_DIR/todos\" -name \"*${SESSION_ID}*.json\" 2>/dev/null | head -1)\n\n# Find session environment\nSESSION_ENV=\"$CLAUDE_DIR/session-env/${SESSION_ID}\"\nif [[ ! -d \"$SESSION_ENV\" ]]; then\n    SESSION_ENV=\"\"\nfi\n\n# Output as JSON\ncat << EOF\n{\n  \"session_id\": \"$SESSION_ID\",\n  \"main_log\": \"$MAIN_LOG\",\n  \"debug_log\": \"$DEBUG_LOG\",\n  \"agent_logs\": [$(echo \"$AGENT_LOGS\" | sed 's/^/\"/' | sed 's/$/\"/' | tr '\\n' ',' | sed 's/,$//' | sed 's/^\"\"$//')],\n  \"todo_file\": \"$TODO_FILE\",\n  \"session_env\": \"$SESSION_ENV\",\n  \"found\": {\n    \"main_log\": $([ -n \"$MAIN_LOG\" ] && echo \"true\" || echo \"false\"),\n    \"debug_log\": $([ -n \"$DEBUG_LOG\" ] && echo \"true\" || echo \"false\")\n  }\n}\nEOF\n"
  },
  {
    "path": "plugins/session-wrap/skills/session-wrap/SKILL.md",
    "content": "---\r\nname: session-wrap\r\ndescription: This skill should be used when the user asks to \"wrap up session\", \"end session\", \"session wrap\", \"/wrap\", \"document learnings\", \"what should I commit\", or wants to analyze completed work before ending a coding session.\r\nversion: 2.0.0\r\n---\r\n\r\n# Session Wrap Skill\r\n\r\nComprehensive session wrap-up workflow with multi-agent analysis.\r\n\r\n## Execution Flow\r\n\r\n```\r\n┌─────────────────────────────────────────────────────┐\r\n│  1. Check Git Status                                │\r\n├─────────────────────────────────────────────────────┤\r\n│  2. Phase 1: 4 Analysis Agents (Parallel)           │\r\n│     ┌─────────────────┬─────────────────┐           │\r\n│     │  doc-updater    │  automation-    │           │\r\n│     │  (docs update)  │  scout          │           │\r\n│     ├─────────────────┼─────────────────┤           │\r\n│     │  learning-      │  followup-      │           │\r\n│     │  extractor      │  suggester      │           │\r\n│     └─────────────────┴─────────────────┘           │\r\n├─────────────────────────────────────────────────────┤\r\n│  3. Phase 2: Validation Agent (Sequential)          │\r\n│     ┌───────────────────────────────────┐           │\r\n│     │       duplicate-checker           │           │\r\n│     │  (Validate Phase 1 proposals)     │           │\r\n│     └───────────────────────────────────┘           │\r\n├─────────────────────────────────────────────────────┤\r\n│  4. Integrate Results & AskUserQuestion             │\r\n├─────────────────────────────────────────────────────┤\r\n│  5. 
Execute Selected Actions                        │\r\n└─────────────────────────────────────────────────────┘\r\n```\r\n\r\n## Step 1: Check Git Status\r\n\r\n```bash\r\ngit status --short\r\ngit diff --stat HEAD~3 2>/dev/null || git diff --stat\r\n```\r\n\r\n## Step 2: Phase 1 - Analysis Agents (Parallel)\r\n\r\nExecute 4 agents in parallel (single message with 4 Task calls).\r\n\r\n### Session Summary (Provide to all agents)\r\n\r\n```\r\nSession Summary:\r\n- Work: [Main tasks performed in session]\r\n- Files: [Created/modified files]\r\n- Decisions: [Key decisions made]\r\n```\r\n\r\n### Parallel Execution\r\n\r\n```\r\nTask(\r\n    subagent_type=\"doc-updater\",\r\n    description=\"Document update analysis\",\r\n    prompt=\"[Session Summary]\\n\\nAnalyze if CLAUDE.md, context.md need updates.\"\r\n)\r\n\r\nTask(\r\n    subagent_type=\"automation-scout\",\r\n    description=\"Automation pattern analysis\",\r\n    prompt=\"[Session Summary]\\n\\nAnalyze repetitive patterns or automation opportunities.\"\r\n)\r\n\r\nTask(\r\n    subagent_type=\"learning-extractor\",\r\n    description=\"Learning points extraction\",\r\n    prompt=\"[Session Summary]\\n\\nExtract learnings, mistakes, and new discoveries.\"\r\n)\r\n\r\nTask(\r\n    subagent_type=\"followup-suggester\",\r\n    description=\"Follow-up task suggestions\",\r\n    prompt=\"[Session Summary]\\n\\nSuggest incomplete tasks and next session priorities.\"\r\n)\r\n```\r\n\r\n### Agent Roles\r\n\r\n| Agent | Role | Output |\r\n|-------|------|--------|\r\n| **doc-updater** | Analyze CLAUDE.md/context.md updates | Specific content to add |\r\n| **automation-scout** | Detect automation patterns | skill/command/agent suggestions |\r\n| **learning-extractor** | Extract learning points | TIL format summary |\r\n| **followup-suggester** | Suggest follow-up tasks | Prioritized task list |\r\n\r\n## Step 3: Phase 2 - Validation Agent (Sequential)\r\n\r\nRun after Phase 1 completes (dependency on Phase 1 
results).\r\n\r\n```\r\nTask(\r\n    subagent_type=\"duplicate-checker\",\r\n    description=\"Phase 1 proposal validation\",\r\n    prompt=\"\"\"\r\nValidate Phase 1 analysis results.\r\n\r\n## doc-updater proposals:\r\n[doc-updater results]\r\n\r\n## automation-scout proposals:\r\n[automation-scout results]\r\n\r\nCheck if proposals duplicate existing docs/automation:\r\n1. Complete duplicate: Recommend skip\r\n2. Partial duplicate: Suggest merge approach\r\n3. No duplicate: Approve for addition\r\n\"\"\"\r\n)\r\n```\r\n\r\n## Step 4: Integrate Results\r\n\r\n```markdown\r\n## Wrap Analysis Results\r\n\r\n### Documentation Updates\r\n[doc-updater summary]\r\n- Duplicate check: [duplicate-checker feedback]\r\n\r\n### Automation Suggestions\r\n[automation-scout summary]\r\n- Duplicate check: [duplicate-checker feedback]\r\n\r\n### Learning Points\r\n[learning-extractor summary]\r\n\r\n### Follow-up Tasks\r\n[followup-suggester summary]\r\n```\r\n\r\n## Step 5: Action Selection\r\n\r\n```\r\nAskUserQuestion(\r\n    questions=[{\r\n        \"question\": \"Which actions would you like to perform?\",\r\n        \"header\": \"Wrap Options\",\r\n        \"multiSelect\": true,\r\n        \"options\": [\r\n            {\"label\": \"Create commit (Recommended)\", \"description\": \"Commit changes\"},\r\n            {\"label\": \"Update CLAUDE.md\", \"description\": \"Document new knowledge/workflows\"},\r\n            {\"label\": \"Create automation\", \"description\": \"Generate skill/command/agent\"},\r\n            {\"label\": \"Skip\", \"description\": \"End without action\"}\r\n        ]\r\n    }]\r\n)\r\n```\r\n\r\n## Step 6: Execute Selected Actions\r\n\r\nExecute only the actions selected by user.\r\n\r\n---\r\n\r\n## Quick Reference\r\n\r\n### When to Use\r\n\r\n- End of significant work session\r\n- Before switching to different project\r\n- After completing a feature or fixing a bug\r\n\r\n### When to Skip\r\n\r\n- Very short session with trivial changes\r\n- 
Only reading/exploring code\r\n- Quick one-off question answered\r\n\r\n### Arguments\r\n\r\n- Empty: Proceed interactively (full workflow)\r\n- Message provided: Use as commit message and commit directly\r\n\r\n## Additional Resources\r\n\r\nSee `references/multi-agent-patterns.md` for detailed orchestration patterns.\r\n"
  },
  {
    "path": "plugins/session-wrap/skills/session-wrap/references/multi-agent-patterns.md",
    "content": "# Multi-Agent Orchestration Patterns\r\n\r\nDetailed patterns for designing multi-agent workflows in Claude Code.\r\n\r\n## Core Principles\r\n\r\n> **\"Agent architecture should reflect the dependency graph of the task\"**\r\n> — Anthropic Multi-Agent Research\r\n\r\nIf subtasks don't read or modify each other's state, run them **parallel**.\r\nIf previous output is next input, run them **sequential**.\r\n\r\n## Parallel vs Sequential Decision Criteria\r\n\r\n| Condition | Recommended Pattern |\r\n|-----------|-------------------|\r\n| Subtasks are independent (no shared state) | **Parallel** |\r\n| Previous step output is next step input | **Sequential** |\r\n| Diverse perspectives/expertise needed | **Parallel** (Fan-out) |\r\n| Result coherence/consistency important | **Sequential** |\r\n| Proposals need validation before action | **2-Phase** (Generate→Validate) |\r\n\r\n## Anthropic's 6 Composable Patterns\r\n\r\n| Pattern | Description | When to Use |\r\n|---------|-------------|-------------|\r\n| **Prompt Chaining** | Sequential steps, each output is next input | Data transformation pipelines |\r\n| **Routing** | Branch to specialized agents by input type | Multi-domain processing |\r\n| **Parallelization** | Independent tasks run simultaneously | Multi-angle analysis, speed optimization |\r\n| **Orchestrator-Worker** | Dynamic task assignment | Complex coding/research |\r\n| **Evaluator-Optimizer** | Generate→Evaluate iteration loop | Quality improvement needed |\r\n| **Autonomous Agent** | Minimal intervention, environment feedback | Long-running tasks |\r\n\r\n## 2-Phase Pipeline Pattern\r\n\r\nFor workflows generating proposals that need validation:\r\n\r\n```\r\nPhase 1: Analysis/Generation (Parallel)\r\n┌──────────┬──────────┬──────────┐\r\n│ Agent A  │ Agent B  │ Agent C  │  ← Independent analysis\r\n└────┬─────┴────┬─────┴────┬─────┘\r\n     │          │          │\r\n     └──────────┼──────────┘\r\n                ↓\r\nPhase 2: 
Validation (Sequential)\r\n┌─────────────────────────────────┐\r\n│         Validator Agent         │  ← Validate Phase 1 results\r\n└─────────────────────────────────┘\r\n```\r\n\r\n### Application Examples\r\n\r\n**Session wrap workflow:**\r\n- Phase 1: doc-updater, automation-scout, learning-extractor, followup-suggester (parallel)\r\n- Phase 2: duplicate-checker (sequential)\r\n\r\n**Code review workflow:**\r\n- Phase 1: security-reviewer, style-checker, performance-analyzer (parallel)\r\n- Phase 2: final-reviewer (sequential)\r\n\r\n**Research workflow:**\r\n- Phase 1: source-finder, fact-checker, perspective-gatherer (parallel)\r\n- Phase 2: synthesizer (sequential)\r\n\r\n## State Management Principles\r\n\r\n```\r\n❌ Avoid:\r\n- Mutable state shared between concurrent agents\r\n- Assuming synchronous updates across agent boundaries\r\n- Assuming independence without explicit verification\r\n\r\n✅ Recommend:\r\n- Isolate agents as much as possible\r\n- Pass state explicitly via output_key\r\n- Define conflict resolution strategy for result aggregation\r\n- Pass lightweight references (not full data)\r\n```\r\n\r\n## Anti-Patterns\r\n\r\n| Anti-Pattern | Problem | Alternative |\r\n|--------------|---------|-------------|\r\n| Adding meaningless agents | Only increases complexity | Check if single agent sufficient first |\r\n| Excessive multi-hop communication | Latency increase | Direct communication or parallelization |\r\n| Unclear task boundaries | Duplicate work, gaps | Define clear objective, output format, boundaries |\r\n| Rigid plan adherence | Can't adapt to runtime discoveries | Use adaptive orchestrator |\r\n\r\n## Model Selection for Agents\r\n\r\n| Use Case | Recommended Model |\r\n|----------|------------------|\r\n| Analysis requiring depth | `sonnet` or `opus` |\r\n| Quick validation | `haiku` |\r\n| Default/inherit from parent | `inherit` |\r\n| Creative/complex reasoning | `opus` |\r\n| Cost-sensitive batch operations | `haiku` |\r\n\r\n## 
Implementing in Claude Code\r\n\r\n### Parallel Execution\r\n\r\nSend multiple Task calls in a single message:\r\n\r\n```python\r\n# All 4 agents start simultaneously\r\nTask(subagent_type=\"agent-a\", prompt=\"...\")\r\nTask(subagent_type=\"agent-b\", prompt=\"...\")\r\nTask(subagent_type=\"agent-c\", prompt=\"...\")\r\nTask(subagent_type=\"agent-d\", prompt=\"...\")\r\n```\r\n\r\n### Sequential Execution\r\n\r\nWait for the previous result before the next call:\r\n\r\n```python\r\n# First call\r\nresult_1 = Task(subagent_type=\"agent-a\", prompt=\"...\")\r\n\r\n# Use result_1 in next call\r\nTask(subagent_type=\"agent-b\", prompt=f\"Validate: {result_1}\")\r\n```\r\n\r\n### Hybrid (2-Phase)\r\n\r\n```python\r\n# Phase 1: Parallel (capture each result for Phase 2)\r\nresult_1 = Task(subagent_type=\"analyzer-1\", prompt=\"...\")\r\nresult_2 = Task(subagent_type=\"analyzer-2\", prompt=\"...\")\r\nresult_3 = Task(subagent_type=\"analyzer-3\", prompt=\"...\")\r\n\r\n# Wait for all Phase 1 results\r\n\r\n# Phase 2: Sequential (uses Phase 1 results)\r\nTask(\r\n    subagent_type=\"validator\",\r\n    prompt=f\"\"\"\r\n    Validate these proposals:\r\n\r\n    Analyzer 1: {result_1}\r\n    Analyzer 2: {result_2}\r\n    Analyzer 3: {result_3}\r\n    \"\"\"\r\n)\r\n```\r\n\r\n## Agent Design for Multi-Agent Systems\r\n\r\n### Clear Boundaries\r\n\r\nEach agent should have:\r\n- **Single responsibility**: One clear focus area\r\n- **Defined inputs**: What it expects to receive\r\n- **Structured output**: Consistent format for downstream consumption\r\n- **No side effects**: Don't modify state other agents depend on\r\n\r\n### Communication Protocol\r\n\r\n```markdown\r\n## Agent Output Format\r\n\r\n### Summary\r\n[One-line summary]\r\n\r\n### Detailed Findings\r\n[Structured analysis]\r\n\r\n### Recommendations\r\n[Actionable items with priorities]\r\n\r\n### Confidence\r\n[Self-assessment of analysis quality]\r\n```\r\n\r\n## Scaling Considerations\r\n\r\n### When to Add More Agents\r\n\r\n✅ Add agent when:\r\n- Distinct expertise domain 
needed\r\n- Independent analysis possible\r\n- Clear boundary definable\r\n- Reduces complexity vs. single agent\r\n\r\n❌ Don't add agent when:\r\n- Same expertise as existing agent\r\n- Would create tight coupling\r\n- Simple prompt modification sufficient\r\n- Adds latency without value\r\n\r\n### Performance Optimization\r\n\r\n1. **Minimize Phase 2 agents**: Validation should be lightweight\r\n2. **Right-size Phase 1**: 3-5 parallel agents typically optimal\r\n3. **Use haiku for validation**: Fast, cheap, sufficient for checking\r\n4. **Batch where possible**: Combine related analyses in single agent\r\n\r\n## References\r\n\r\n- [Anthropic Multi-Agent Research](https://www.anthropic.com/engineering/multi-agent-research-system)\r\n- [Azure AI Agent Design Patterns](https://learn.microsoft.com/en-us/azure/architecture/ai-ml/guide/ai-agent-design-patterns)\r\n- [Building AI Agents - Evaluator-Optimizer Pattern](https://research.aimultiple.com/building-ai-agents/)\r\n"
  },
  {
    "path": "plugins/team-assemble/.claude-plugin/plugin.json",
    "content": "{\n  \"name\": \"team-assemble\",\n  \"version\": \"1.0.0\",\n  \"description\": \"Dynamically assemble expert agent teams for complex tasks using Claude Code's agent teams feature\",\n  \"author\": {\n    \"name\": \"Team Attention\",\n    \"url\": \"https://github.com/team-attention\"\n  },\n  \"repository\": \"https://github.com/team-attention/plugins-for-claude-natives\",\n  \"license\": \"MIT\",\n  \"keywords\": [\"agent-teams\", \"multi-agent\", \"team\", \"parallel\", \"orchestration\", \"task-decomposition\"]\n}\n"
  },
  {
    "path": "plugins/team-assemble/README.md",
    "content": "# team-assemble\n\nDynamically assemble expert agent teams for complex tasks using Claude Code's agent teams feature.\n\n## What It Does\n\nInstead of manually designing and coordinating multiple agents, team-assemble:\n\n1. **Analyzes** your task and identifies relevant codebase areas\n2. **Scouts** the codebase to understand what agents are needed\n3. **Designs** an optimal team with the right roles and dependencies\n4. **Executes** agents in parallel where possible\n5. **Validates** results against acceptance criteria\n6. **Cleans up** the team automatically\n\n## Prerequisites\n\nAgent teams are experimental and must be enabled first. Add to your `settings.json`:\n\n```json\n{\n  \"env\": {\n    \"CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS\": \"1\"\n  }\n}\n```\n\nSee `skills/team-assemble/references/enable-agent-teams.md` for detailed setup instructions.\n\n## Usage\n\nTell Claude what you need and ask it to assemble a team:\n\n```\nAssemble a team to refactor our authentication from session-based to JWT\n```\n\n```\nUse team-assemble to evaluate caching strategies — Redis vs Memcached vs in-memory\n```\n\n```\nAssemble a team to extract shared utilities from three microservices into a common library\n```\n\nThe skill will guide you through approval gates before execution begins.\n\n## How It Works\n\n### 6-Phase Workflow\n\n```\nPhase 1 → Phase 2 → Phase 3 → Phase 4 → Phase 5 → Phase 6\nTask       Codebase   Integrate  Execute   Validate   Complete\nAnalysis   Scouts     & Confirm                       & Cleanup\n```\n\n### Model 3-Tier Strategy\n\nAgents are assigned models based on their role:\n\n| Model | Purpose | Examples |\n|-------|---------|---------|\n| opus | Strategy & judgment | Scouts, architects |\n| sonnet | Standard execution | Implementers, QA, support |\n| haiku | Research & writing | Researchers, editors |\n\n### Key Features\n\n- **Dynamic agent design** — no fixed catalog, agents are tailored to each task\n- **Parallel 
execution** — independent agents run simultaneously\n- **Acceptance criteria** — every team has measurable validation criteria\n- **Verify/fix loop** — QA validates, support fixes (max 3 rounds)\n- **Two approval gates** — user confirms scope (Phase 1) and team composition (Phase 3)\n\n## Plugin Contents\n\n```\nteam-assemble/\n├── .claude-plugin/\n│   └── plugin.json\n├── skills/\n│   └── team-assemble/\n│       ├── SKILL.md                        # Main skill definition\n│       └── references/\n│           ├── agents.md                   # Agent example bank\n│           ├── prompt-templates.md         # Prompt templates for all phases\n│           ├── examples.md                 # Worked examples\n│           └── enable-agent-teams.md       # Setup guide\n└── README.md\n```\n"
  },
  {
    "path": "plugins/team-assemble/skills/team-assemble/SKILL.md",
    "content": "---\nname: team-assemble\ndescription: Analyze tasks and dynamically assemble expert agent teams using Claude Code's TeamCreate API. Scouts your codebase, designs optimal agents, and executes with validation.\nallowed-tools: [Agent, Bash, Read, Write, Edit, Glob, Grep, AskUserQuestion, TaskCreate, TaskUpdate, TaskList, TaskGet, TeamCreate, TeamDelete, SendMessage]\nversion: 1.0.0\n---\n\n# Team Assemble\n\nAnalyze a task, dynamically design the right expert agents, and orchestrate them as a team using Claude Code's agent teams feature.\n\n## Prerequisites\n\nAgent teams must be enabled. See `references/enable-agent-teams.md` for setup instructions.\n\n## When to Use\n\n- Complex tasks that decompose into 2+ independent subtasks\n- Work where role separation is clear (e.g., research + implementation + validation)\n- Tasks that benefit from parallel execution\n\n**Do NOT use for:** single-file edits, simple questions, purely sequential work\n\n---\n\n## Core Principles\n\n- **Model 3-tier** — choose by role purpose (details: `references/agents.md`)\n  - `opus` — strategy/judgment (scouts, complex execution)\n  - `sonnet` — standard execution/validation (worker, qa, support)\n  - `haiku` — exploration/writing (researcher, editor)\n- **No fixed catalog** — agents are designed dynamically per task\n- **Example bank** — `references/agents.md` provides reference examples (not mandatory)\n\n---\n\n## Workflow\n\n```\nPhase 1 → Phase 2 → Phase 3 → Phase 4 → Phase 5 → Phase 6\nTask       Codebase   Integrate  Execute   Validate   Complete\nAnalysis   Scouts     & Confirm                       & Cleanup\n                                           ↕ FAIL → support fix (max 3x)\n```\n\n---\n\n## Phase 1: Task Analysis\n\nAnalyze the user's request and identify relevant areas of the codebase:\n\n1. Examine project structure, CLAUDE.md files, and README files\n2. Identify which parts of the codebase are relevant to the task\n3. 
Determine if codebase scouting (Phase 2) would help, or if the task is straightforward enough to skip to Phase 3\n\n**Get user approval via AskUserQuestion:**\n\n```\nI've analyzed your task and identified the following areas of interest:\n\n- [x] src/auth/ — Authentication module (needs refactoring)\n- [x] tests/auth/ — Corresponding tests\n- [ ] src/api/ — Not directly affected\n\nI'll scout these areas to design an optimal team.\n```\n\nOptions: \"Looks good, proceed\" / \"I'd like to adjust the scope\"\n\nFor straightforward tasks that don't need codebase exploration, skip Phase 2 and go directly to Phase 3 — design the team yourself using `references/agents.md` as a guide.\n\n---\n\n## Phase 2: Codebase Scouts (Parallel)\n\nLaunch **scout agents** in parallel to explore relevant areas of the codebase.\n\n### Scout Configuration\n\n- **Model**: opus\n- **subagent_type**: `general-purpose` (constrained to read-only via prompt)\n- **Parallel**: launch multiple scouts simultaneously when exploring different areas\n\n### Scout Mission\n\nEach scout reads the relevant codebase area and proposes agents for the task:\n\n1. Read key files (CLAUDE.md, README.md, source code, configs)\n2. Analyze the intersection between the task and the codebase area\n3. Propose agents needed (name, role, tasks, reference files)\n4. Reference `references/agents.md` for examples, but freely design new agents\n\n> Prompt template: `references/prompt-templates.md` § Codebase Scout\n\n### Scout Output Format\n\n```\n## Scout Report: {area}\n\n### Current State\n- {file structure summary}\n- {relevance to the task}\n\n### Proposed Agents\n| Agent | Role | Tasks | Reference Files |\n|-------|------|-------|-----------------|\n| {name} | {role} | {task} | {files} |\n\n### Notes\n- {area-specific constraints or patterns to follow}\n```\n\n---\n\n## Phase 3: Integrate & Confirm Team\n\nMerge scout reports into a final team composition:\n\n1. 
**Deduplicate** — merge similar agent proposals from different scouts\n2. **Gap analysis** — check for missing roles\n3. **Add qa + support** — validation/fix agents are always included\n4. **Dependency graph** — design execution order between agents\n5. **Define acceptance criteria** — measurable criteria for Phase 5 validation\n\n### Team Proposal\n\n**Get final approval via AskUserQuestion:**\n\n```\nProposed team: {team-name}\n\n| # | Agent | Role | Tasks | Dependencies |\n|---|-------|------|-------|--------------|\n| 1 | architect | System design | Design new auth flow | - |\n| 2 | implementer | Code changes | Implement the design | #1 |\n| 3 | test-writer | Test coverage | Write tests for changes | #2 |\n| 4 | qa | Validation | PASS/FAIL against acceptance criteria | #2, #3 |\n\nAcceptance criteria:\n- [ ] AC-1: {measurable criterion}\n- [ ] AC-2: {measurable criterion}\n```\n\nOptions: \"Looks good, execute\" / \"I'd like to adjust roles\"\n\nIf the user selects \"adjust roles\", ask what specifically to change. After 2+ revision requests, switch to free-text input.\n\n---\n\n## Phase 4: Execution\n\n### Create Team & Distribute Tasks\n\n```\nTeamCreate(team_name: \"{keyword}-team\", description: \"Task description\")\n```\n\nteam_name convention: core keyword + `-team`\n\nCreate TaskCreate entries for each agent, then set dependencies with TaskUpdate.\n\n### Launch Teammates\n\n- **Model**: apply 3-tier based on role (`references/agents.md`)\n- **subagent_type**: `\"general-purpose\"`\n- **mode**: `\"bypassPermissions\"`\n- **Parallel**: launch agents without blockedBy dependencies in a single message\n- **Sequential**: inject preceding agent results into the next agent's prompt\n\n### Teammate Prompt Requirements\n\n1. **Context** — project background and how this task fits the whole\n2. **Specific goal** — exactly what to achieve\n3. **Reference files** — file paths identified by scouts\n4. **Constraints** — what NOT to do, scope limits\n5. 
**Output format** — expected deliverable format\n6. **Team info** — team_name, task ID\n\n> Detailed prompt structure: `references/prompt-templates.md`\n\n---\n\n## Phase 5: Validation\n\nqa (sonnet) evaluates each **acceptance criterion** from Phase 3.\n\n### Validation Process\n\n```\nAgent(name: \"qa\", model: \"sonnet\", prompt: \"\"\"\n## Acceptance Criteria\n- [ ] AC-1: {criterion}\n\n## Validation Target\n{Phase 4 execution results}\n\nEvaluate each criterion with evidence-based PASS/FAIL judgment.\nNo PASS without evidence.\n\n## Output Format\n| # | Criterion | Verdict | Evidence |\nOverall: PASS / FAIL\nInclude fix suggestions for any FAIL items.\n\"\"\")\n```\n\n### FAIL Handling\n\nsupport (sonnet) fixes only FAIL items → qa re-validates:\n\n- **Max 3 rounds** of verify/fix loop\n- If the same error repeats 3 times, stop immediately\n- After 3 rounds, halt the pipeline and report to user:\n\n```\n## Validation Failed — Manual Intervention Needed\n\n### Repeated Failures\n- AC-{N}: {criterion} — {failure reason}\n\n### Attempted Fixes\n1. {attempt 1}\n2. {attempt 2}\n3. 
{attempt 3}\n\n### Recommended Action\n{what needs to be done manually}\n```\n\n---\n\n## Phase 6: Complete & Cleanup\n\n### Result Report\n\n```\n## Team Results: {team-name}\n\n### Acceptance Criteria\n- [x] AC-1: {criterion} — PASS\n\n### Per-Agent Results\n- {agent}: {result summary}\n\n### Deliverables\n- {file paths or outputs}\n\n### Validation History\n- Validated {N} times, {M} fixes applied\n```\n\n### Team Cleanup\n\n```\nSendMessage(type: \"shutdown_request\", recipient: \"{name}\", content: \"Work complete\")\nTeamDelete()\n```\n\n---\n\n## Common Mistakes\n\n| Mistake | Correct Approach |\n|---------|------------------|\n| Creating team without user approval | Get AskUserQuestion approval in Phase 1 + Phase 3 |\n| Executing without acceptance criteria | Always define criteria in Phase 3 |\n| Running scouts for simple tasks | Skip Phase 2 for straightforward work |\n| Skipping validation | Always run Phase 5 after execution |\n| Ignoring model tiers | Use opus/sonnet/haiku based on role purpose |\n| Only picking from fixed catalog | Scouts design freely; examples are reference only |\n| Forgetting TeamDelete | Always shutdown_request → TeamDelete |\n| Infinite FAIL loop | Max 3 verify/fix rounds, then report to user |\n\n## Additional Resources\n\n- **`references/agents.md`** — Agent example bank with model tier guide\n- **`references/prompt-templates.md`** — Scout + execution + QA prompt templates\n- **`references/examples.md`** — Worked examples: feature dev, refactoring, research\n- **`references/enable-agent-teams.md`** — How to enable agent teams in Claude Code\n"
  },
  {
    "path": "plugins/team-assemble/skills/team-assemble/references/agents.md",
    "content": "# Agent Example Bank\n\nReference examples for designing agents. Scouts use these as inspiration, not as a fixed catalog.\n\n> This is an **example bank**, not a mandatory catalog. Scouts should reference these but can design entirely new agents optimized for each task. Names, roles, and tasks should all be tailored to the work at hand.\n\n## General Policy\n\n- **subagent_type**: `general-purpose`\n- **mode**: `bypassPermissions`\n\n### Model 3-Tier\n\nChoose based on role purpose:\n\n| Model | When to Use | Example Roles |\n|-------|-------------|---------------|\n| `opus` | Strategy/judgment required | scouts, complex execution agents |\n| `sonnet` | Standard execution/validation | workers, qa, support |\n| `haiku` | Information gathering/writing | researcher, editor, simple writer |\n\n**Decision rule**: \"Does it need to make new judgments?\" → opus. \"Does it execute against given criteria?\" → sonnet. \"Does it find information or write text?\" → haiku.\n\n---\n\n## Software Engineering Examples\n\n| Agent | Role | Use Case |\n|-------|------|----------|\n| architect | System design, API design, module structure | \"Design the auth refactor\" |\n| implementer | Write production code based on a design | \"Implement the new auth flow\" |\n| test-writer | Write unit/integration/e2e tests | \"Add test coverage for auth\" |\n| refactorer | Improve code structure without changing behavior | \"Clean up legacy patterns\" |\n| migrator | Database/schema/API migration scripts | \"Migrate from v1 to v2 schema\" |\n| reviewer | Code review with specific focus area | \"Review for security issues\" |\n| debugger | Investigate and fix specific bugs | \"Find root cause of timeout\" |\n| docs-writer | Write or update documentation | \"Update API docs for new endpoints\" |\n\n## Research & Analysis Examples\n\n| Agent | Role | Use Case |\n|-------|------|----------|\n| researcher | Investigate topics, gather information | \"Research auth library options\" |\n| 
analyst | Analyze data, derive insights | \"Analyze error patterns in logs\" |\n| benchmarker | Performance testing and comparison | \"Benchmark three caching strategies\" |\n| competitor-analyst | Compare competing solutions | \"Compare our API with competitors\" |\n\n## Content & Communication Examples\n\n| Agent | Role | Use Case |\n|-------|------|----------|\n| writer | Draft documents, reports, emails | \"Write the RFC for the new feature\" |\n| editor | Refine and polish written content | \"Edit the blog post for clarity\" |\n| strategist | Plan content or communication strategy | \"Plan the launch announcement\" |\n\n## Project Management Examples\n\n| Agent | Role | Use Case |\n|-------|------|----------|\n| product-manager | PRD writing, backlog management, requirements | \"Write the PRD for notifications\" |\n| project-coordinator | Timeline, milestones, stakeholder comms | \"Create the migration timeline\" |\n| ux-designer | Wireframes, user journeys, UI specs | \"Design the onboarding flow\" |\n\n---\n\n## Common Agents (Always Included)\n\nEvery team must include these roles:\n\n| Agent | Role | Notes |\n|-------|------|-------|\n| qa | Evidence-based PASS/FAIL evaluation against acceptance criteria | Phase 5 validation only |\n| support | Fix FAIL items from qa, verify/fix loop | Deployed only when Phase 5 has FAILs |\n\n### qa Behavior Rules\n\n- No PASS without evidence\n- No subjective judgment — only objective evaluation against acceptance criteria\n- No direct code modifications (validation only)\n\n### support Behavior Rules\n\n- Fix only FAIL items — no out-of-scope changes\n- Do not modify anything qa did not flag\n\n---\n\n## Agent Design Checklist\n\nWhen a scout designs a new agent:\n\n1. **Name**: kebab-case, role should be immediately obvious\n2. **Role**: one sentence defining the core responsibility\n3. **Tasks**: specific deliverables for this particular project\n4. **Reference files**: file/directory paths the agent needs to read\n5. 
**Constraints**: what the agent must NOT do (scope guard)\n6. **Dependencies**: which agent results must come first\n"
  },
  {
    "path": "plugins/team-assemble/skills/team-assemble/references/enable-agent-teams.md",
    "content": "# Enable Agent Teams in Claude Code\n\nAgent teams are experimental and disabled by default. You must enable them before using the team-assemble skill.\n\n## Quick Setup\n\nAdd this to your `settings.json`:\n\n```json\n{\n  \"env\": {\n    \"CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS\": \"1\"\n  }\n}\n```\n\n### Where is settings.json?\n\n| Scope | Path |\n|-------|------|\n| User (global) | `~/.claude/settings.json` |\n| Project | `.claude/settings.json` (in project root) |\n\nUser settings apply to all projects. Project settings apply only to that project.\n\n### Alternative: Environment Variable\n\nSet it in your shell profile (`~/.zshrc`, `~/.bashrc`, etc.):\n\n```bash\nexport CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1\n```\n\nThen restart your terminal or run `source ~/.zshrc`.\n\n## Verify It's Enabled\n\nAfter setting, start Claude Code and ask it to create a team. If agent teams are not enabled, Claude Code will not have access to TeamCreate, SendMessage, and related tools.\n\n## Display Modes\n\nAgent teams support two display modes:\n\n| Mode | Description | Setup |\n|------|-------------|-------|\n| **in-process** (default) | All teammates in one terminal. Use Shift+Down to cycle. 
| Works everywhere |\n| **split panes** | Each teammate gets its own pane | Requires tmux or iTerm2 |\n\nTo set a display mode, add to `settings.json`:\n\n```json\n{\n  \"teammateMode\": \"in-process\"\n}\n```\n\nOr pass as a flag:\n\n```bash\nclaude --teammate-mode in-process\n```\n\n### Split Pane Setup (Optional)\n\nFor split-pane mode, install one of:\n\n- **tmux**: `brew install tmux` (macOS) or your system's package manager\n- **iTerm2**: install the [`it2` CLI](https://github.com/mkusaka/it2), then enable Python API in iTerm2 → Settings → General → Magic → Enable Python API\n\n> Note: Split-pane mode is not supported in VS Code's integrated terminal, Windows Terminal, or Ghostty.\n\n## Key Controls (In-Process Mode)\n\n| Key | Action |\n|-----|--------|\n| Shift+Down | Cycle through teammates |\n| Enter | View a teammate's session |\n| Escape | Interrupt teammate's current turn |\n| Ctrl+T | Toggle task list |\n\n## Known Limitations\n\n- **No session resumption**: `/resume` and `/rewind` do not restore in-process teammates\n- **One team per session**: clean up the current team before starting a new one\n- **No nested teams**: teammates cannot spawn their own teams\n- **Permissions inherited**: all teammates start with the lead's permission mode\n- **Shutdown can be slow**: teammates finish their current operation before shutting down\n\n## Official Documentation\n\nFor the latest information: https://code.claude.com/docs/en/agent-teams\n"
  },
  {
    "path": "plugins/team-assemble/skills/team-assemble/references/examples.md",
    "content": "# Team Assemble — Worked Examples\n\nThree examples showing the full workflow for different task types.\n\n---\n\n## Example 1: Feature Development (Auth Refactor)\n\n**User input**: \"Assemble a team to refactor our authentication from session-based to JWT\"\n\n### Phase 1: Task Analysis\n\n```\nRelevant codebase areas:\n- [x] src/auth/ — Current session-based auth implementation\n- [x] src/middleware/ — Auth middleware\n- [x] tests/auth/ — Auth test suite\n- [ ] src/api/ — Uses auth but not directly modified\n```\n\n→ AskUserQuestion approval\n\n### Phase 2: Codebase Scouts\n\nauth-scout (opus) + middleware-scout (opus) launched in parallel:\n\n```\nAgent(name=\"auth-scout\", model=\"opus\", subagent_type=\"general-purpose\", prompt=\"\"\"\nYou are a codebase scout for the authentication area.\nTask: Refactor from session-based to JWT authentication\nTarget path: src/auth/\n...\n\"\"\")\nAgent(name=\"middleware-scout\", model=\"opus\", subagent_type=\"general-purpose\", prompt=\"\"\"\nYou are a codebase scout for the middleware area.\nTask: Refactor from session-based to JWT authentication\nTarget path: src/middleware/\n...\n\"\"\")\n```\n\n### Phase 3: Integrate & Confirm\n\n```\nProposed team: auth-refactor-team\n\n| # | Agent | Role | Tasks | Dependencies |\n|---|-------|------|-------|--------------|\n| 1 | architect | System design | Design JWT flow, token strategy, migration path | - |\n| 2 | implementer | Code changes | Implement JWT auth, update middleware | #1 |\n| 3 | test-writer | Test coverage | Update auth tests for JWT flow | #2 |\n| 4 | qa | Validation | PASS/FAIL against acceptance criteria | #2, #3 |\n\nAcceptance criteria:\n- [ ] AC-1: JWT token generation and validation implemented\n- [ ] AC-2: All existing auth tests pass or are updated\n- [ ] AC-3: Middleware correctly validates JWT tokens\n- [ ] AC-4: No regression in existing API endpoints\n```\n\n→ AskUserQuestion final approval\n\n### Phase 4: 
Execution\n\n```\nTeamCreate(team_name: \"auth-refactor-team\", description: \"Refactor auth from session to JWT\")\n\n# Round 1: architect (independent)\nAgent(name=\"architect\", model=\"opus\", ...)\n\n# Round 2: implementer (#1 result included)\nAgent(name=\"implementer\", model=\"sonnet\", prompt=\"Design:\\n{architect_result}\\n...\")\n\n# Round 3: test-writer (#2 result included)\nAgent(name=\"test-writer\", model=\"sonnet\", prompt=\"Implementation:\\n{implementer_result}\\n...\")\n```\n\n### Phase 5~6: Validate → Complete\n\nqa PASS → result report → TeamDelete\n\n---\n\n## Example 2: Research & Decision Making\n\n**User input**: \"Use a team to evaluate caching strategies for our API — Redis vs Memcached vs in-memory\"\n\n### Phase 1: Task Analysis\n\n```\nRelevant codebase areas:\n- [x] src/api/ — API endpoints that need caching\n- [x] config/ — Infrastructure configuration\n- [ ] tests/ — Not directly relevant yet\n\n→ General research task — skip Phase 2 (no codebase scouting needed)\n→ Design team directly using references/agents.md\n```\n\n### Phase 3: Team Design (Phase 2 skipped)\n\n```\nProposed team: caching-eval-team\n\n| # | Agent | Role | Tasks | Dependencies |\n|---|-------|------|-------|--------------|\n| 1 | redis-researcher | Redis analysis | Research Redis: features, performance, ops cost | - |\n| 2 | memcached-researcher | Memcached analysis | Research Memcached: features, performance, ops cost | - |\n| 3 | inmemory-researcher | In-memory analysis | Research in-memory caching: features, limits, trade-offs | - |\n| 4 | analyst | Comparison | Synthesize findings into comparison matrix | #1, #2, #3 |\n| 5 | qa | Validation | Verify completeness and accuracy | #4 |\n\nAcceptance criteria:\n- [ ] AC-1: All three options analyzed with feature/performance/cost dimensions\n- [ ] AC-2: Comparison matrix with clear trade-offs\n- [ ] AC-3: Recommendation with rationale based on our API's access patterns\n```\n\n### Phase 4: Execution\n\n```\n# 
Round 1: Three researchers in parallel (no dependencies)\nAgent(name=\"redis-researcher\", model=\"haiku\", ...)\nAgent(name=\"memcached-researcher\", model=\"haiku\", ...)\nAgent(name=\"inmemory-researcher\", model=\"haiku\", ...)\n\n# Round 2: analyst synthesizes (#1, #2, #3 results included)\nAgent(name=\"analyst\", model=\"sonnet\", prompt=\"Research findings:\\n{all_results}\\n...\")\n```\n\n### Phase 5~6: Validate → Complete\n\nqa validates comparison matrix completeness → PASS → deliver report → TeamDelete\n\n---\n\n## Example 3: Multi-File Refactoring\n\n**User input**: \"Assemble a team to extract shared utilities from three microservices into a common library\"\n\n### Phase 1: Task Analysis\n\n```\nRelevant codebase areas:\n- [x] services/user-service/src/utils/ — User service utilities\n- [x] services/order-service/src/utils/ — Order service utilities\n- [x] services/payment-service/src/utils/ — Payment service utilities\n- [x] packages/ — Target for common library\n```\n\n→ AskUserQuestion approval\n\n### Phase 2: Codebase Scouts\n\nThree scouts launched in parallel (one per service):\n\n```\nAgent(name=\"user-svc-scout\", model=\"opus\", ...)\nAgent(name=\"order-svc-scout\", model=\"opus\", ...)\nAgent(name=\"payment-svc-scout\", model=\"opus\", ...)\n```\n\nEach scout identifies shared patterns and proposes extraction candidates.\n\n### Phase 3: Integrate & Confirm\n\n```\nProposed team: common-lib-team\n\n| # | Agent | Role | Tasks | Dependencies |\n|---|-------|------|-------|--------------|\n| 1 | lib-architect | Library design | Define common lib API, module structure | - |\n| 2 | lib-implementer | Create library | Build the common library package | #1 |\n| 3 | user-svc-migrator | Service update | Update user-service to use common lib | #2 |\n| 4 | order-svc-migrator | Service update | Update order-service to use common lib | #2 |\n| 5 | payment-svc-migrator | Service update | Update payment-service to use common lib | #2 |\n| 6 | qa | Validation 
| Verify all services work with common lib | #3, #4, #5 |\n\nAcceptance criteria:\n- [ ] AC-1: Common library created with shared utilities\n- [ ] AC-2: All three services updated to import from common lib\n- [ ] AC-3: No duplicate utility code remains in services\n- [ ] AC-4: All existing tests pass\n```\n\n### Phase 4: Execution\n\n```\n# Round 1: lib-architect (independent)\nAgent(name=\"lib-architect\", model=\"opus\", ...)\n\n# Round 2: lib-implementer (#1 result)\nAgent(name=\"lib-implementer\", model=\"sonnet\", ...)\n\n# Round 3: Three migrators in parallel (#2 result, each owns different service)\nAgent(name=\"user-svc-migrator\", model=\"sonnet\", ...)\nAgent(name=\"order-svc-migrator\", model=\"sonnet\", ...)\nAgent(name=\"payment-svc-migrator\", model=\"sonnet\", ...)\n```\n\n→ Round 3 agents run in parallel because they own non-overlapping files\n\n### Phase 5~6: Validate → Complete\n\nqa checks AC-1~4 → PASS → result report → TeamDelete\n"
  },
  {
    "path": "plugins/team-assemble/skills/team-assemble/references/prompt-templates.md",
    "content": "# Prompt Templates\n\n## 1. Codebase Scout Prompt\n\nTemplate for scout agents in Phase 2.\n\n```\n## Mission\n\nYou are a codebase scout for the {area} area. Design expert agents to handle the task described below.\n\n## Task\n{user_task_description}\n\n## Target Path\n{target_path}\n\n## Instructions\n\n1. Read key files in this area to understand the current state:\n   - CLAUDE.md, README.md (if present)\n   - Directory structure (use Glob to explore)\n   - Existing files/data related to the task\n\n2. Propose agents needed for this task:\n   - Use `references/agents.md` examples as inspiration (not mandatory)\n   - Design entirely new agents when the task demands it\n   - For each agent: name, role, specific tasks, reference files\n\n3. Report any area-specific constraints or patterns to follow.\n\n## Constraints\n- Do NOT modify any files — exploration and analysis only\n- Agent names must be kebab-case\n\n## Output Format\n\n### Current State\n- {file structure and key findings summary}\n\n### Proposed Agents\n| Agent | Role | Tasks | Reference Files |\n|-------|------|-------|-----------------|\n| {name} | {role} | {specific task} | {file paths} |\n\n### Notes\n- {area-specific constraints, existing patterns, easy-to-miss details}\n```\n\n### Usage Example\n\n```python\n# Scout for the auth module\nAgent(\n    name=\"auth-scout\",\n    model=\"opus\",\n    subagent_type=\"general-purpose\",\n    prompt=scout_template.format(\n        area=\"authentication\",\n        target_path=\"src/auth/\",\n        user_task_description=\"Refactor authentication to support OAuth2\"\n    )\n)\n```\n\nWhen multiple areas are relevant, **launch scouts in parallel in a single message**:\n\n```python\n# Simultaneous execution\nAgent(name=\"auth-scout\", ...)\nAgent(name=\"api-scout\", ...)\n```\n\n---\n\n## 2. 
Execution Agent Prompt\n\nGeneral prompt structure for execution agents in Phase 4.\n\n### Structure (5 Sections)\n\n```\n## Context\n{Project background and where this task fits in the overall work}\n{Include preceding agent results if this task has dependencies}\n\n## Goal\n{Exactly what to achieve — clear and measurable}\n\n## Reference Files\n{List of relevant file paths identified by scouts}\n- {path/to/file1} — {purpose}\n- {path/to/file2} — {purpose}\n\n## Constraints\n- {What NOT to do}\n- {Files/scope that must not be changed}\n- {Rules to follow}\n\n## Output Format\n{Specific shape of the deliverable — markdown file, table, checklist, code, etc.}\n\n## Team Info\n- team_name: {team-name}\n- task_id: {task-id}\n- On completion: TaskUpdate(taskId: \"{task-id}\", status: \"completed\")\n```\n\n### Including Preceding Results\n\nFor agents with dependencies, inject the preceding agent's output directly:\n\n```\n## Context\nYou are part of the auth-refactor team.\n\n### Preceding Results\n{architect_result}\n\nBased on the design above, implement the new authentication flow.\n```\n\n---\n\n## 3. 
QA Prompt\n\nUsed for Phase 5 validation.\n\n```\n## Context\n{Summary of overall team work}\n\n## Acceptance Criteria\n- [ ] AC-1: {criterion 1}\n- [ ] AC-2: {criterion 2}\n- [ ] AC-3: {criterion 3}\n\n## Validation Target\n{All Phase 4 execution results}\n\n## Goal\nEvaluate each acceptance criterion with evidence-based PASS/FAIL judgment.\n- No PASS without evidence\n- No subjective judgment — only objective evaluation against criteria\n- No direct modifications (validation only)\n\n## Output Format\n| # | Criterion | Verdict | Evidence |\n|---|-----------|---------|----------|\n| 1 | {AC-1} | PASS/FAIL | {specific evidence} |\n\nOverall: PASS / FAIL\nIf FAIL:\n- Failure reason: {specific problem}\n- Fix suggestion: {how to fix it}\n\n## Team Info\n- team_name: {team-name}\n- task_id: {task-id}\n- On completion: TaskUpdate(taskId: \"{task-id}\", status: \"completed\")\n```\n\n---\n\n## 4. Support Prompt\n\nUsed when Phase 5 produces FAILs.\n\n```\n## Context\n{Summary of team work}\n\n## Failed Items\n{qa's FAIL verdicts and fix suggestions}\n\n## Goal\nFix only the FAIL items. 
Do NOT change anything beyond FAIL scope.\n\n## Constraints\n- Do NOT modify anything qa did not flag\n- Do NOT break existing PASS items\n\n## Output Format\n| # | Failed Criterion | Fix Applied | Files Modified |\n|---|-----------------|-------------|----------------|\nState what needs to be re-validated.\n\n## Team Info\n- team_name: {team-name}\n- task_id: {task-id}\n- On completion: TaskUpdate(taskId: \"{task-id}\", status: \"completed\")\n```\n\n---\n\n## Prompt Writing Tips\n\n- **Be specific**: not \"refactor the code\" but \"extract the validation logic from UserController into a ValidationService class\"\n- **Limit scope**: explicitly list which files/directories can be modified\n- **Pin down the output format**: require structured output (tables, checklists), not free-form text\n- **Include preceding results**: for dependent tasks, paste the previous agent's output into the prompt body\n- **List reference files**: include relevant file paths from scout findings in the prompt\n"
  },
  {
    "path": "plugins/youtube-digest/.claude-plugin/plugin.json",
    "content": "{\n  \"name\": \"youtube-digest\",\n  \"version\": \"0.2.0\",\n  \"description\": \"Summarize YouTube videos with transcript, insights, Korean translation, and quizzes\",\n  \"author\": {\n    \"name\": \"Team Attention\",\n    \"url\": \"https://github.com/team-attention\"\n  }\n}\n"
  },
  {
    "path": "plugins/youtube-digest/README.md",
    "content": "# YouTube Digest\n\nA plugin that analyzes YouTube videos to generate a summary, insights, and a Korean translation, then tests comprehension with a quiz.\n\n## Features\n\n- **Transcript extraction**: extract subtitles with yt-dlp (Korean/English, manual/auto)\n- **Proper-noun correction**: fix auto-caption misrecognitions via web search\n- **Document generation**: summary + insights + full translation\n- **Learning quiz**: 3 levels × 3 questions = 9-question comprehension test\n- **Deep Research**: optional in-depth web research\n\n## Prerequisites\n\n- `yt-dlp` must be installed\n\n```bash\nbrew install yt-dlp\n```\n\n## Usage\n\n```\n유튜브 정리해줘 https://www.youtube.com/watch?v=xxxxx\n```\n\nOr:\n- \"영상 요약해줘\"\n- \"transcript 번역해줘\"\n- \"YouTube digest\"\n\n## Output\n\nSaved as `research/readings/youtube/{YYYY-MM-DD}-{title}.md`:\n- Summary and key points\n- Insights and applications\n- Full transcript (Korean translation)\n- Quiz results and review notes\n"
  },
  {
    "path": "plugins/youtube-digest/skills/youtube-digest/SKILL.md",
    "content": "---\nname: youtube-digest\ndescription: This skill should be used when the user asks to \"유튜브 정리\", \"영상 요약\", \"transcript 번역\", \"YouTube digest\", \"영상 퀴즈\", or provides a YouTube URL for analysis. Extracts transcript, generates summary/insights/Korean translation, and tests comprehension with 9 quiz questions across 3 difficulty levels. Optional Deep Research for web-based follow-up.\n---\n\n# YouTube Digest\n\nAnalyze a YouTube video → generate a summary/insights/translation document → test comprehension with a quiz.\n\n## Workflow\n\n### 1. Collect metadata\n\n```bash\nscripts/extract_metadata.sh \"<URL>\"\n```\n\nExtracts: title, description, channel, upload_date, duration, tags\n\n### 2. Extract the transcript\n\n```bash\nscripts/extract_transcript.sh \"<URL>\" [output_dir]\n```\n\nPriority: manual subtitles (ko→en) > auto-generated subtitles (ko→en)\n\n### 3. Gather context (WebSearch)\n\nCollect the correct spellings of proper nouns via web search:\n- `\"{video title}\" {channel name} summary`\n- `\"{speaker name}\" {topic keywords}`\n\n### 4. Correct the transcript\n\nReplace proper-noun misrecognitions in the auto captions using the web search results:\n- Kora → Cora, cloud code → Claude Code, every → Every.to\n\n### 5. Generate the document\n\n```markdown\n---\ntitle: {video title}\nurl: {YouTube URL}\nchannel: {channel name}\ndate: {upload date}\nduration: {video length}\nprocessed_at: {processing timestamp}\n---\n\n# {video title}\n\n## Summary\n{3-5 sentence summary + 3 key points}\n\n## Insights\n### Core Ideas\n### Applicable Takeaways\n\n## Full Transcript (Korean Translation)\n[00:00] ...\n```\n\n### 6. Save the file\n\nLocation: `research/readings/youtube/{YYYY-MM-DD}-{sanitized-title}.md`\n\n### 7. Learning quiz\n\n3 levels × 3 questions = 9 questions total. Present each level's 3 questions at once via AskUserQuestion.\n\n| Level | Difficulty | Question Focus |\n|-------|-----------|----------------|\n| 1 | Basic | Core insights, key concepts |\n| 2 | Intermediate | Connecting insights to details |\n| 3 | Advanced | Details, application/analysis |\n\nQuestion type details: `references/quiz-patterns.md`\n\n#### Handling results\n\nAfter giving the correct answer and an explanation for each missed question, append the quiz results to the end of the document:\n\n```markdown\n## Quiz Results\n\nScore: 7/9 (78%) | Level 1 3/3 ✅ | Level 2 2/3 | Level 3 2/3\n\n### Review Notes\n\n**Q5**: {question}\n- Chose: B → Correct: C\n- {1-2 sentence explanation}\n```\n\n### 8. Follow-up choice\n\nAfter the quiz, AskUserQuestion:\n- **Quiz again**: retest with different questions\n- **Deep Research**: in-depth web research (see `references/deep-research.md`)\n- **Done**: wrap up\n\n## Notes\n\n### Subtitle language priority\n1. Korean manual → 2. English manual → 3. Korean auto → 4. English auto\n\n### Handling incomplete subtitles\n- Proper-noun misrecognition: batch-replace in step 4\n- Unintelligible passages: mark with `[불명확]`\n\n### yt-dlp options\n- `--list-subs`: list available subtitles\n- `--cookies-from-browser chrome`: when login is required\n\n## Resources\n\n- `scripts/extract_metadata.sh` - metadata extraction\n- `scripts/extract_transcript.sh` - subtitle extraction\n- `references/quiz-patterns.md` - quiz question type details\n- `references/deep-research.md` - Deep Research workflow\n"
  },
  {
    "path": "plugins/youtube-digest/skills/youtube-digest/references/deep-research.md",
    "content": "# Deep Research Workflow\n\nIn-depth web research performed when the user chooses \"Deep Research\" after the quiz.\n\n## Workflow\n\n### 1. Parallel web searches (WebSearch x 3-5)\n\nExample search queries:\n- `\"{topic}\" in-depth analysis`\n- `\"{core concept}\" case study`\n- `\"{speaker/channel}\" related material`\n- `\"{technique/methodology}\" best practices`\n- `\"{topic}\" research paper`\n\n### 2. Parallel page collection (WebFetch)\n\nFetch the 3-5 most relevant pages from the search results in parallel.\n\n### 3. Extend the existing document\n\nAppend a `## Deep Research` section to the original YouTube Digest file:\n\n```markdown\n## Deep Research\n\n> Generated: {date}\n> Search queries: {list of queries used}\n\n### Additional Context\n\n{background information found via web search, expansion of related concepts}\n\n### Related Sources\n\n| Source | Summary | URL |\n|--------|---------|-----|\n| {source 1} | {one-line summary} | {URL} |\n| {source 2} | {one-line summary} | {URL} |\n\n### Deeper Insights\n\n{additional insights synthesizing the video content and web findings}\n\n### Actionable Next Steps\n\n- {concrete action item 1}\n- {concrete action item 2}\n```\n"
  },
  {
    "path": "plugins/youtube-digest/skills/youtube-digest/references/quiz-patterns.md",
    "content": "# Quiz Patterns\n\nGuide for writing the 3-level quiz.\n\n## Question Types by Difficulty\n\n### Level 1 (Basic) - Insight comprehension\n\nCheck the core message and key concepts:\n- \"What is the core message of this video?\"\n- \"What is the most important principle the speaker emphasized?\"\n- \"What main problem does the video present?\"\n\n### Level 2 (Intermediate) - Connecting insights to details\n\nCheck relationships between concepts and their supporting evidence:\n- \"What evidence did the speaker give for why concept X matters?\"\n- \"Which of these was described as a difference between A and B?\"\n- \"Which advantage of this strategy was mentioned?\"\n\n### Level 3 (Advanced) - Details and application\n\nCase analysis and concrete data:\n- \"What was identified as the success factor in the case discussed?\"\n- \"What caveat was mentioned for applying this approach?\"\n- \"What specific figures/data did the speaker present?\"\n\n## AskUserQuestion Format\n\n```\nAskUserQuestion:\nquestions:\n  - question: \"[Level 1 - Basic] Question 1...\"\n    header: \"Q1\"\n    options:\n      - label: \"A\"\n        description: \"description of choice A\"\n      - label: \"B\"\n        description: \"description of choice B\"\n      - label: \"C\"\n        description: \"description of choice C\"\n      - label: \"D\"\n        description: \"description of choice D\"\n    multiSelect: false\n  - question: \"[Level 1 - Basic] Question 2...\"\n    header: \"Q2\"\n    ...\n  - question: \"[Level 1 - Basic] Question 3...\"\n    header: \"Q3\"\n    ...\n```\n\n## Handling Results\n\nAfter each level:\n1. Show correct/incorrect immediately\n2. For each missed question, provide a detailed explanation:\n   - What the correct answer is\n   - Why it is correct (with reference to the video content)\n   - The related timestamp (if available)\n"
  },
  {
    "path": "plugins/youtube-digest/skills/youtube-digest/scripts/extract_metadata.sh",
    "content": "#!/bin/bash\n# Extract YouTube metadata\n# Usage: ./extract_metadata.sh <URL>\n\nURL=\"$1\"\n\nif [ -z \"$URL\" ]; then\n  echo \"Usage: $0 <YouTube URL>\"\n  exit 1\nfi\n\nyt-dlp --dump-json --no-download \"$URL\"\n"
  },
  {
    "path": "plugins/youtube-digest/skills/youtube-digest/scripts/extract_transcript.sh",
    "content": "#!/bin/bash\n# Extract YouTube subtitles\n# Usage: ./extract_transcript.sh <URL> [output_dir]\n\nURL=\"$1\"\nOUTPUT_DIR=\"${2:-.}\"\n\nif [ -z \"$URL\" ]; then\n  echo \"Usage: $0 <YouTube URL> [output_dir]\"\n  exit 1\nfi\n\n# Extract subtitles in JSON3 format (manual preferred over auto; Korean > English).\n# json3 must be requested via --sub-format; --convert-subs only supports srt/vtt/ass/lrc.\nyt-dlp --write-subs --write-auto-subs --sub-langs \"ko,en\" --sub-format json3 \\\n  --skip-download -o \"$OUTPUT_DIR/%(title)s.%(ext)s\" \"$URL\"\n"
  }
]